Based on the theory of meadows, an equational axiomatisation is given for probability functions on finite event spaces. Completeness of the axioms is stated, with some pointers to how it is shown. Then a simplified model of courtroom subjective probabilistic reasoning is provided in terms of a protocol with two participants: the trier of fact (TOF, the judge) and the moderator of evidence (MOE, the scientific witness). The idea is then outlined of performing a step of Bayesian reasoning by applying a transformation to the subjective probability function of TOF on the basis of different pieces of information obtained from MOE. The central role of the so-called Adams transformation is outlined. A simple protocol is considered in which MOE first transfers to TOF a likelihood ratio for a hypothesis H and a potential piece of evidence E, and thereupon the additional assertion that E holds true. As an alternative, a second protocol is considered in which MOE transfers two successive likelihoods (whose quotient is the mentioned ratio), followed by the factuality of E. It is outlined how the Adams transformation describes the information processing on the TOF side in both protocols, and that the resulting probability distribution is the same in both cases. Finally, it is indicated how the Adams transformation also allows the required update of subjective probability on the MOE side, so that both sides of the protocol may be assumed to comply with the demands of subjective probability.
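To make the two transfer protocols concrete, here is a minimal Python sketch of the numerical update they induce. It illustrates ordinary Bayesian conditioning with a likelihood ratio rather than the meadow-based axiomatisation itself, and the prior and likelihood values are illustrative assumptions.

```python
from fractions import Fraction

def update_with_likelihood_ratio(prior_h, lr):
    """Bayes update of P(H) once E is asserted true, given the
    likelihood ratio lr = P(E|H) / P(E|not-H):
    posterior odds = prior odds * lr."""
    prior_odds = Fraction(prior_h) / (1 - Fraction(prior_h))
    post_odds = prior_odds * Fraction(lr)
    return post_odds / (1 + post_odds)

def update_with_two_likelihoods(prior_h, p_e_given_h, p_e_given_not_h):
    """The second protocol: the two likelihoods are transferred
    separately, but only their quotient (the ratio) enters the result."""
    return update_with_likelihood_ratio(
        prior_h, Fraction(p_e_given_h) / Fraction(p_e_given_not_h))

# Both protocols yield the same posterior, as the abstract states.
p1 = update_with_likelihood_ratio(Fraction(1, 10), Fraction(8))
p2 = update_with_two_likelihoods(Fraction(1, 10), Fraction(4, 5), Fraction(1, 10))
assert p1 == p2  # both equal 8/17
```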
ABC with data cloning for MLE in state space models (Umberto Picchini)
An application of the "data cloning" method for parameter estimation via MLE aided by Approximate Bayesian Computation. The relevant paper is http://arxiv.org/abs/1505.06318
Inference for stochastic differential equations via approximate Bayesian computation (Umberto Picchini)
Despite the title, the methods are appropriate for more general dynamical models (including state-space models). Presentation given at Nordstat 2012, Umeå. Relevant research paper at http://arxiv.org/abs/1204.5459 and software at https://sourceforge.net/projects/abc-sde/
A 3hrs intro lecture to Approximate Bayesian Computation (ABC), given as part of a PhD course at Lund University, February 2016. For sample codes see http://www.maths.lu.se/kurshemsida/phd-course-fms020f-nams002-statistical-inference-for-partially-observed-stochastic-processes/
My data are incomplete and noisy: Information-reduction statistical methods f... (Umberto Picchini)
We review parameter inference for stochastic models in complex scenarios, such as poor parameter initialisation and near-chaotic dynamics. We show how state-of-the-art methods for state-space models can fail while, in some situations, reducing the data to summary statistics (information reduction) enables robust estimation. Wood's synthetic likelihood method is reviewed, and the lecture closes with an example of approximate Bayesian computation methodology.
Accompanying code is available at https://github.com/umbertopicchini/pomp-ricker and https://github.com/umbertopicchini/abc_g-and-k
Readership lecture given at Lund University on 7 June 2016. The lecture is of a popular-science nature, hence mathematical detail is kept to a minimum; however, numerous links and references are offered for further reading.
ABC stands for approximate Bayesian computation. It is a method for performing Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC produces samples from an approximate posterior distribution by simulating parameter values together with the corresponding summary statistics, and keeping those whose simulated summaries match the observed summary statistics within a tolerance level. The choice of summary statistics is important but difficult, as there is typically no sufficient statistic. Several strategies have been developed for selecting good summary statistics, including using random forests or the Lasso to evaluate and select from a large set of potential summaries.
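As a concrete illustration of the accept/reject scheme just described, here is a minimal ABC rejection sampler in Python. The toy model (a Normal mean, with the sample mean as summary statistic), the uniform prior, and the tolerance are illustrative assumptions, not anything prescribed by the documents summarised here.

```python
import random
import statistics

def abc_rejection(y_obs, n_samples=500, tol=0.1):
    """Toy ABC rejection sampler: infer theta in y ~ Normal(theta, 1),
    using the sample mean as the summary statistic."""
    s_obs = statistics.mean(y_obs)
    accepted = []
    while len(accepted) < n_samples:
        theta = random.uniform(-5, 5)                  # 1. draw from the prior
        sim = [random.gauss(theta, 1) for _ in y_obs]  # 2. simulate a dataset
        if abs(statistics.mean(sim) - s_obs) < tol:    # 3. compare summaries
            accepted.append(theta)                     # 4. keep if close enough
    return accepted

y_obs = [random.gauss(2.0, 1) for _ in range(50)]
print(statistics.mean(abc_rejection(y_obs)))  # roughly 2.0
```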
Convex Analysis and Duality (based on "Functional Analysis and Optimization" by Kazufumi Ito) (Katsuya Ito)
In this presentation, we explain the monograph "Functional Analysis and Optimization" by Kazufumi Ito.
https://kito.wordpress.ncsu.edu/files/2018/04/funa3.pdf
Our goal in this presentation is to:
- understand the basic notions of functional analysis: lower semicontinuity, the subdifferential, and the conjugate functional;
- understand the formulation of the duality problem: the primal (P), perturbed (Py), and dual (P∗) problems;
- understand the primal-dual relationships: weak duality sup(P∗) ≤ inf(P), strong duality sup(P∗) = inf(P), and the minimax inequality sup inf L ≤ inf sup L (a one-line derivation is sketched below).
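Since the minimax inequality is the engine behind weak duality, the standard one-line argument is worth recording (textbook material, not specific to the cited monograph):

```latex
% For any Lagrangian L(x, \lambda): every dual value lower-bounds
% every primal value, because for all x_0 and \lambda_0
%   \inf_x L(x, \lambda_0) \le L(x_0, \lambda_0) \le \sup_\lambda L(x_0, \lambda),
% and taking \sup over \lambda_0, then \inf over x_0, preserves the bound:
\[
  \sup_{\lambda} \inf_{x} L(x, \lambda)
  \;\le\;
  \inf_{x} \sup_{\lambda} L(x, \lambda).
\]
```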
The document discusses Approximate Bayesian Computation (ABC), a simulation-based method for conducting Bayesian inference when the likelihood function is intractable or unavailable. ABC works by simulating data from the model, accepting simulations that are close to the observed data based on a distance measure and tolerance level. This provides samples from an approximation of the posterior distribution. The document provides examples that motivate ABC and outlines the basic ABC algorithm. It also discusses extensions and improvements to the standard ABC method.
The document discusses quantiles and quantile regression. It begins by defining quantiles as the inverse of a cumulative distribution function. Quantile regression models the relationship between covariates and conditional quantiles, similar to how ordinary least squares regression models the conditional mean. The document also discusses median regression, which estimates relationships using the 1-norm rather than the 2-norm used in OLS. Median regression provides consistent estimates when the error term has a symmetric distribution.
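The norm/summary correspondence noted here (2-norm for the mean, 1-norm for the median) extends to any quantile through the pinball (check) loss; the brute-force Python sketch below is an illustrative demonstration, not code from the document.

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss; its minimiser is the tau-quantile."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def empirical_quantile(y, tau, grid_size=1000):
    """Tau-quantile of y found by minimising the average pinball loss
    over a grid of candidate values."""
    grid = np.linspace(y.min(), y.max(), grid_size)
    losses = [pinball_loss(y - q, tau).mean() for q in grid]
    return grid[int(np.argmin(losses))]

y = np.random.default_rng(0).normal(10, 2, size=5000)
print(empirical_quantile(y, 0.5))  # close to the median, ~10
print(empirical_quantile(y, 0.9))  # close to the 90% quantile, ~12.6
```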
This document provides an overview of advanced econometrics techniques including simulations, bootstrap methods, and penalization. It discusses how computers allow for numerical standard errors and testing procedures through simulations and resampling rather than relying on asymptotic formulas. Specific techniques covered include the linear regression model, nonlinear transformations, asymptotics versus finite samples using bootstrap, and moving from least squares to other regressions like quantile regression. Historical references for techniques like permutation methods, the jackknife, and bootstrapping are also provided.
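In the spirit of the numerical-standard-errors theme above, a minimal bootstrap sketch in Python; the statistic, sample size, and replicate count are arbitrary choices for illustration.

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=1):
    """Numerical standard error of a statistic via resampling with
    replacement, instead of an asymptotic formula."""
    rng = random.Random(seed)
    replicates = [stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot)]
    return statistics.stdev(replicates)

rng0 = random.Random(0)
data = [rng0.gauss(0, 1) for _ in range(100)]
print(bootstrap_se(data))  # close to 1/sqrt(100) = 0.1
```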
On the vexing dilemma of hypothesis testing and the predicted demise of the Bayes factor (Christian Robert)
The document discusses hypothesis testing from both frequentist and Bayesian perspectives. It introduces the concept of statistical tests as functions that output accept or reject decisions for hypotheses. P-values are presented as a way to quantify uncertainty in these decisions. Bayes' original 1763 paper on Bayesian statistics is summarized, introducing the concept of the posterior distribution. Bayesian hypothesis testing is then discussed, including the optimal Bayes test and the use of Bayes factors to compare hypotheses without requiring prior probabilities on the hypotheses.
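For the Bayes factor mentioned above, a toy calculation may help: with two simple (point) hypotheses the Bayes factor reduces to a likelihood ratio. The binomial numbers below are illustrative assumptions, not taken from the slides.

```python
from math import comb

def binomial_likelihood(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Bayes factor for H0: p = 0.5 against H1: p = 0.7,
# after observing k = 13 heads in n = 20 tosses.
k, n = 13, 20
bf_01 = binomial_likelihood(k, n, 0.5) / binomial_likelihood(k, n, 0.7)
print(bf_01)  # ~0.45: the data mildly favour H1 over H0
```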
Predictive Modeling in Insurance in the context of (possibly) big data (Arthur Charpentier)
This document discusses predictive modeling in insurance in the context of big data. It begins with an introduction to the speaker and outlines some key concepts in actuarial science from both American and European perspectives. It then provides examples of common actuarial problems involving ratemaking, pricing, and claims reserving. The document reviews the history of actuarial models and discusses issues around statistical learning, machine learning, and their relationship to statistics. It also covers model evaluation and various loss functions used in modeling.
The document describes a new method called component-wise approximate Bayesian computation (ABC) that combines ABC with Gibbs sampling. It aims to improve ABC's ability to efficiently explore parameter spaces when the number of parameters is large. The method works by alternating sampling from each parameter's ABC posterior conditional distribution given current values of other parameters and the observed data. The method is proven to converge to a stationary distribution under certain assumptions, especially for hierarchical models where conditional distributions are often simplified. Numerical experiments on toy examples demonstrate the method can provide a better approximation of the true posterior than vanilla ABC.
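A minimal sketch of the component-wise idea, on an assumed toy two-parameter Normal model with one summary per component; it only mimics the alternating structure of the method described above, whose actual form and convergence guarantees are in the paper.

```python
import random
import statistics

def abc_gibbs(y_obs, n_iter=100, tol=0.05, seed=2):
    """Toy component-wise ABC for y ~ Normal(mu, sigma): alternate ABC
    updates of mu (summary: sample mean, conditioning on sigma) and of
    sigma (summary: sample std dev, conditioning on mu)."""
    rng = random.Random(seed)
    m_obs, s_obs = statistics.mean(y_obs), statistics.stdev(y_obs)
    mu, sigma, chain = 0.0, 1.0, []
    for _ in range(n_iter):
        while True:  # ABC step for mu | sigma
            cand = rng.uniform(-5, 5)
            sim = [rng.gauss(cand, sigma) for _ in y_obs]
            if abs(statistics.mean(sim) - m_obs) < tol:
                mu = cand
                break
        while True:  # ABC step for sigma | mu
            cand = rng.uniform(0.1, 5)
            sim = [rng.gauss(mu, cand) for _ in y_obs]
            if abs(statistics.stdev(sim) - s_obs) < tol:
                sigma = cand
                break
        chain.append((mu, sigma))
    return chain

rng0 = random.Random(0)
y_obs = [rng0.gauss(2.0, 1.0) for _ in range(50)]
chain = abc_gibbs(y_obs)
print(statistics.mean(m for m, s in chain))  # roughly 2.0
```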
The document describes Approximate Bayesian Computation (ABC), a technique for performing Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC works by simulating data under different parameter values, and accepting simulations that are close to the observed data according to a distance measure and tolerance level. ABC provides an approximation to the posterior distribution that improves as the tolerance level decreases and more informative summary statistics are used. The document discusses the ABC algorithm, properties of the exact ABC posterior distribution, and challenges in selecting appropriate summary statistics.
random forests for ABC model choice and parameter estimation (Christian Robert)
The document discusses Approximate Bayesian Computation (ABC). It begins by introducing ABC as a likelihood-free method for Bayesian inference when the likelihood function is unavailable or computationally intractable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data based on a distance measure.
The document then discusses advances in ABC, including modifying the proposal distribution to increase efficiency, viewing it as a conditional density estimation problem, and including measurement error in the framework. It also discusses the consistency of ABC as the number of simulations increases and sample size grows large. Finally, it discusses applications of ABC to model selection by treating the model index as an additional parameter.
This document discusses the use of machine learning techniques in actuarial science and insurance. It begins with an overview of predictive modeling applications in insurance such as fraud detection, premium computation, and claims reserving. It then covers traditional econometric techniques like Poisson and gamma regression models and how machine learning is emerging as an alternative. The document emphasizes evaluating model goodness of fit and uncertainty, and addresses issues like price discrimination and fairness.
The document discusses various techniques for classifying pictures using neural networks, including convolutional neural networks. It describes how convolutional neural networks can be used to classify images by breaking them into overlapping tiles, applying small neural networks to each tile, and pooling the results. The document also discusses using recurrent neural networks to classify videos by treating them as higher-dimensional tensors.
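The tile-and-pool mechanism described above can be sketched in a few lines of numpy; here a single fixed filter stands in for the "small neural network" applied to each overlapping tile, a simplification for illustration.

```python
import numpy as np

def conv_then_pool(image, kernel, tile=3, pool=2):
    """Slide a small filter over overlapping tiles (a convolution),
    then max-pool the responses, as in one CNN layer."""
    h, w = image.shape
    out = np.empty((h - tile + 1, w - tile + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + tile, j:j + tile] * kernel)
    ph, pw = out.shape[0] // pool, out.shape[1] // pool  # non-overlapping pooling
    return out[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))

img = np.random.default_rng(0).random((8, 8))
edge = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])  # vertical-edge filter
print(conv_then_pool(img, edge).shape)  # (3, 3)
```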
This document discusses model and variable selection in advanced econometrics. It covers topics like numerical optimization techniques, convex problems, Lagrangian functions, and the Karush–Kuhn–Tucker conditions for solving constrained optimization problems. It also references Bayesian and frequentist approaches to statistical inference and the importance of avoiding overfitting models to ensure good generalization to new data.
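For reference, the Karush-Kuhn-Tucker conditions mentioned above, in their standard form (textbook material, stated in LaTeX):

```latex
% KKT conditions for  min_x f(x)  s.t.  g_i(x) <= 0,  h_j(x) = 0:
\begin{align*}
  \nabla f(x^\star) + \sum_i \lambda_i^\star \nabla g_i(x^\star)
    + \sum_j \mu_j^\star \nabla h_j(x^\star) &= 0
    && \text{(stationarity)} \\
  g_i(x^\star) \le 0, \qquad h_j(x^\star) &= 0
    && \text{(primal feasibility)} \\
  \lambda_i^\star &\ge 0
    && \text{(dual feasibility)} \\
  \lambda_i^\star \, g_i(x^\star) &= 0
    && \text{(complementary slackness)}
\end{align*}
```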
Approximate Bayesian model choice via random forests (Christian Robert)
The document describes approximate Bayesian computation (ABC) methods for model choice when likelihoods are intractable. ABC generates parameter-dataset pairs from the prior and retains those where the simulated and observed datasets are similar according to a distance measure on summary statistics. For model choice, ABC approximates posterior model probabilities by the proportion of simulations from each model that are retained. Machine learning techniques can also be used to infer the most likely model directly from the simulated summary statistics.
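A minimal sketch of the model-choice recipe just described, where the proportion of retained simulations from each model approximates the posterior model probabilities; the two Normal candidate models, the summary, and the tolerance are illustrative assumptions.

```python
import random
import statistics

def abc_model_choice(y_obs, n_sim=20000, tol=0.2, seed=3):
    """Toy ABC model choice between M0: Normal(0,1) and M1: Normal(1,1),
    with a uniform prior on the model index and the sample mean as summary."""
    rng = random.Random(seed)
    s_obs = statistics.mean(y_obs)
    kept = []
    for _ in range(n_sim):
        m = rng.randint(0, 1)                    # model index drawn from its prior
        sim = [rng.gauss(float(m), 1) for _ in y_obs]
        if abs(statistics.mean(sim) - s_obs) < tol:
            kept.append(m)
    return kept.count(1) / len(kept)             # approximates P(M1 | y_obs)

rng0 = random.Random(0)
y_obs = [rng0.gauss(1.0, 1) for _ in range(30)]
print(abc_model_choice(y_obs))  # close to 1: the data favour M1
```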
This document discusses polynomial interpolation and outlines the key goals and topics that will be covered in Chapter 10. The goals are to motivate the need for interpolation of both data and functions, derive three methods for computing a polynomial interpolant suitable for different circumstances, derive error expressions, discuss Chebyshev interpolation, and consider interpolating derivative values. The outline lists the topics as monomial basis, Lagrange basis, Newton basis and divided differences, interpolation error, Chebyshev interpolation, and interpolating derivative values. Motivation is provided for interpolating both discrete data samples and continuous functions, with a wish list of properties for a reasonable interpolant. Polynomial interpolation is discussed as a basic and important form of interpolation.
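Of the three interpolation methods the outline lists, the Newton basis with divided differences is the easiest to sketch compactly; the following generic implementation (assuming distinct nodes) is an illustration, not code from the chapter.

```python
def newton_interpolant(xs, ys):
    """Newton-form interpolating polynomial via divided differences;
    returns a callable that evaluates it with nested multiplications."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):                 # build the divided-difference table in place
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        acc = coef[-1]                    # Horner-like nested evaluation
        for k in range(n - 2, -1, -1):
            acc = acc * (x - xs[k]) + coef[k]
        return acc
    return p

p = newton_interpolant([0, 1, 2, 3], [1, 2, 5, 10])  # data from y = x^2 + 1
print(p(1.5))  # 3.25
```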
This document provides a probability cheatsheet compiled by William Chen and Joe Blitzstein with contributions from others. It is licensed under CC BY-NC-SA 4.0 and contains information on topics like counting rules, probability definitions, random variables, moments, and more. The cheatsheet is regularly updated with comments and suggestions submitted through a GitHub repository.
This document provides a probability cheatsheet compiled by William Chen and Joe Blitzstein with contributions from others. It is licensed under CC BY-NC-SA 4.0 and contains information on topics like counting rules, probability definitions, random variables, expectations, independence, and more. The cheatsheet is designed to summarize essential concepts in probability.
Slides: Hypothesis testing, information divergence and computational geometry (Frank Nielsen)
Bayesian multiple hypothesis testing can be viewed from the perspective of computational geometry. The probability of error can be upper bounded by divergences such as the total variation and Chernoff distance. When the hypotheses are distributions from an exponential family, the optimal MAP Bayesian rule is a nearest neighbor classifier on an additive Bregman Voronoi diagram. For binary hypotheses, the best error exponent is the Chernoff information, which is a Bregman divergence on the exponential family manifold. This viewpoint generalizes to multiple hypotheses, where the best error exponent comes from the closest Bregman pair of distributions.
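For reference, the standard definition of the Chernoff information invoked above (the Bregman-divergence reduction on exponential families is as stated in the summary):

```latex
% Chernoff information between densities p and q: the best error
% exponent in Bayesian binary hypothesis testing.
\[
  C(p, q) \;=\; -\min_{0 \le \alpha \le 1}
    \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx .
\]
```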
This document provides a concise probability cheatsheet compiled by William Chen and others. It covers key probability concepts like counting rules, sampling tables, definitions of probability, independence, unions and intersections, joint/marginal/conditional probabilities, Bayes' rule, random variables and their distributions, expected value, variance, indicators, moment generating functions, and independence of random variables. The cheatsheet is licensed under CC BY-NC-SA 4.0 and was last updated on March 20, 2015.
Maximum likelihood estimation of regularisation parameters in inverse problems (Valentin De Bortoli)
This document discusses an empirical Bayesian approach for estimating regularization parameters in inverse problems using maximum likelihood estimation. It proposes the Stochastic Optimization with Unadjusted Langevin (SOUL) algorithm, which uses Markov chain sampling to approximate gradients in a stochastic projected gradient descent scheme for optimizing the regularization parameter. The algorithm is shown to converge to the maximum likelihood estimate under certain conditions on the log-likelihood and prior distributions.
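A schematic of the SOUL-style alternation (an unadjusted Langevin move in the latent variable, then a projected stochastic gradient step in the parameter), on an assumed toy denoising model with a Gaussian prior of unknown precision; the model, step sizes, and iteration count are illustrative assumptions rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: y = x + noise, prior x_i ~ Normal(0, 1/theta).
sigma2 = 0.25
x_true = rng.normal(0.0, 1.0, size=200)              # true theta = 1
y = x_true + rng.normal(0.0, np.sqrt(sigma2), size=200)

def grad_logpost_x(x, theta):
    """d/dx of log p(y|x) + log p(x|theta)."""
    return -(x - y) / sigma2 - theta * x

def grad_logprior_theta(x, theta):
    """d/dtheta of log p(x|theta); by Fisher's identity its posterior
    expectation is the gradient of the marginal likelihood in theta."""
    return x.size / (2 * theta) - np.dot(x, x) / 2

theta, x = 0.1, np.zeros_like(y)
gamma, delta = 1e-3, 1e-4
for _ in range(5000):
    # Unadjusted Langevin step in x, targeting p(x | y, theta)
    x = x + gamma * grad_logpost_x(x, theta) \
          + np.sqrt(2 * gamma) * rng.standard_normal(x.size)
    # Stochastic gradient step in theta, projected to stay positive
    theta = max(1e-6, theta + delta * grad_logprior_theta(x, theta))
print(theta)  # drifts toward the marginal maximum-likelihood value (roughly 1)
```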
This document summarizes Arthur Charpentier's presentation on econometrics and statistical learning techniques. It discusses different perspectives on modeling data, including the causal story, conditional distribution story, and explanatory data story. It also covers topics like high dimensional data, computational econometrics, generalized linear models, goodness of fit, stepwise procedures, and testing in high dimensions. The presentation provides an overview of various statistical and econometric modeling techniques.
A Fixed Point Theorem Using Common Property (E. A.) In PM Spaces (inventionjournals)
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
The document discusses key concepts in probability theory and statistical decision making under uncertainty. It covers topics like data generation processes being modelled as random variables, Bayes' rule for calculating conditional probabilities, discriminant functions for classification, and utility theory for making rational decisions. Bayesian networks and influence diagrams are introduced as graphical models for representing conditional independence between variables and making decisions. Finally, the document notes that future chapters will focus on estimating probabilities from data using parametric, semiparametric, and nonparametric approaches.
1) The document reviews concepts from probability and statistics including discrete and continuous random variables, their distributions (e.g. binomial, Poisson, normal), and multivariate distributions.
2) It then discusses key properties of multivariate normal distributions, including their probability density function and how marginal and conditional distributions can be derived from the joint distribution (the standard conditioning formulas are sketched after this list).
3) Concepts like independence, mean vectors, covariance matrices, and their implications are also covered as they relate to multivariate normal distributions.
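The conditional-distribution property referred to in item 2) is captured by the following standard formulas (textbook material, in LaTeX):

```latex
% If (x_1, x_2) is jointly Gaussian with mean (mu_1, mu_2) and covariance
% blocks Sigma_{11}, Sigma_{12}, Sigma_{21}, Sigma_{22}, then x_1 | x_2
% is again Gaussian with
\[
  \mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),
  \qquad
  \Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}.
\]
```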
In this talk we present a framework for splitting data assimilation problems based upon the model dynamics. This is motivated by assimilation in the unstable subspace (AUS) and center manifold and inertial manifold techniques in dynamical systems. Recent efforts based upon the development of particle filters projected into the unstable subspace will be highlighted.
This document summarizes generative models like VAEs and GANs. It begins with an introduction to information theory, defining key concepts like entropy and maximum likelihood estimation. It then explains generative models as estimating the joint distribution P(X,Y) compared to discriminative models estimating P(Y|X). VAEs are discussed as maximizing the evidence lower bound (ELBO) to estimate the latent variable distribution P(Z|X), allowing generation of new X values. GANs are also covered, defining their minimax game between a generator G and discriminator D, with G learning to generate samples resembling the real data distribution Pemp.
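For reference, the two objectives named in this summary, in their standard forms (using the P_emp notation from the summary):

```latex
% The evidence lower bound (ELBO) maximised by a VAE:
\[
  \log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right).
\]
% The GAN minimax game between generator G and discriminator D:
\[
  \min_G \max_D \;
  \mathbb{E}_{x \sim P_{\mathrm{emp}}}[\log D(x)]
  + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))].
\]
```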
1. The document discusses practical representations of imprecise probabilities, which represent uncertainty as a set of probabilities rather than a single probability.
2. It provides an overview of several practical representations, including possibility distributions, P-boxes, probability intervals, and elementary comparative probabilities.
3. The representations aim to be computationally tractable by having a reasonable number of extreme points and satisfying properties like n-monotonicity.
Tutorial on Belief Propagation in Bayesian Networks (Anmol Dwivedi)
The goal of this mini-project is to implement belief propagation algorithms for posterior probability inference and most probable explanation (MPE) inference for the Bayesian Network with binary values in which the Conditional Probability Table for each random-variable/node is given.
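On the smallest possible example, posterior inference by message passing looks as follows; the two-node network and its conditional probability table are illustrative assumptions, not the mini-project's assigned network.

```python
import numpy as np

# Sum-product belief propagation on a tiny chain A -> B of binary nodes.
p_a = np.array([0.6, 0.4])            # P(A)
p_b_given_a = np.array([[0.9, 0.1],   # P(B | A=0)
                        [0.2, 0.8]])  # P(B | A=1)

# Observe B = 1; the message from B up to A is the likelihood P(B=1 | A).
msg_b_to_a = p_b_given_a[:, 1]

# Posterior belief at A: prior times incoming message, normalised.
belief_a = p_a * msg_b_to_a
belief_a /= belief_a.sum()
print(belief_a)  # P(A | B=1) = [0.1579..., 0.8421...]
```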
This document provides an overview and agenda for a master's level course on probability and statistics. It covers key topics like statistical models, probability distributions, conditional distributions, convergence theorems, sampling, confidence intervals, decision theory, and testing procedures. Examples of common probability distributions and functions are also presented, including the cumulative distribution function, probability density function, independence, and conditional independence. Additional references for further reading are included.
A unique common fixed point theorem under psi-varphi contractive condition (Alexander Decker)
This document presents a unique common fixed point theorem for two self maps satisfying a generalized contraction condition in partial metric spaces using rational expressions. It begins by introducing basic definitions and lemmas related to partial metric spaces. It then presents the main theorem, which states that if two self maps T and f satisfy certain contractive and completeness conditions, including being weakly compatible, then they have a unique common fixed point. The proof considers two cases - when the sequences constructed from the maps are eventually equal, and when they are not eventually equal but form a Cauchy sequence. It is shown in both cases that the maps must have a unique common fixed point.
Uncertainty & Probability
Bayes' rule
Choosing hypotheses - Maximum a posteriori
Maximum likelihood - Bayes concept learning
Maximum likelihood of a real-valued function
Bayes optimal classifier
Joint distributions
Naive Bayes classifier (a minimal sketch follows below)
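As a companion to the last topic, a self-contained multinomial naive Bayes sketch with Laplace smoothing; the tiny spam/ham corpus is an illustrative assumption.

```python
from collections import Counter, defaultdict
import math

def train_naive_bayes(docs, labels):
    """Class priors and per-class word counts, assuming word
    independence given the class (the naive Bayes assumption)."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, y in zip(docs, labels):
        word_counts[y].update(doc)
        vocab.update(doc)
    return class_counts, word_counts, vocab

def predict(doc, class_counts, word_counts, vocab):
    """MAP class: argmax_y  log P(y) + sum_w log P(w | y)."""
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y, cnt in class_counts.items():
        lp = math.log(cnt / n)
        total = sum(word_counts[y].values())
        for w in doc:  # Laplace-smoothed word likelihoods
            lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

docs = [["free", "prize"], ["meeting", "agenda"], ["free", "meeting"]]
model = train_naive_bayes(docs, ["spam", "ham", "ham"])
print(predict(["free", "prize"], *model))  # spam
```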
Talk on the design of non-negative unbiased estimators, useful for performing exact inference for intractable target distributions.
Corresponds to the article http://arxiv.org/abs/1309.6473
Similar to Equational axioms for probability calculus and modelling of Likelihood ratio transfer mediated reasoning
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbital motion of Moon and artificial satellites (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
2024.03.22 - Mike Heddes - Introduction to Hyperdimensional Computing.pdf (Advanced-Concepts-Team)
Presentation in Science Coffee of the Advanced Concepts Team of the European Space Agency.
Date: 22.03.2024
Speaker: Mike Heddes (University of California, Irvine)
Topic: Introduction to Hyperdimensional Computing
Abstract:
Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is a computing framework capable of forming compositional distributed representations. HD/VSA forms a "concept space" by exploiting the geometry and algebra of high-dimensional spaces. The central idea is to represent information with randomly generated vectors, called hypervectors. Together with a set of operations on these hypervectors, HD/VSA can represent compositional structures, which, in turn, enables features such as reasoning by analogy and cognitive computing. In this introductory talk, I will introduce the high-dimensional spaces and the fundamental operations on hypervectors. I will then cover applications of HD/VSA such as reasoning by analogy and graph classification.
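A minimal sketch of the hypervector operations described above, assuming a bipolar multiply-add style VSA (one common choice among several; the abstract does not fix a particular architecture, so the binding and bundling operators shown here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                            # hypervector dimensionality

def hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):                       # binding: elementwise multiply
    return a * b

def bundle(*vs):                      # bundling: elementwise majority vote
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):                        # normalised dot-product similarity
    return a @ b / D

# Encode the record {colour: red, shape: ball} as one hypervector.
colour, shape, red, ball = hv(), hv(), hv(), hv()
record = bundle(bind(colour, red), bind(shape, ball))

# Query: what is bound to 'colour'? Unbinding recovers something near 'red'.
probe = bind(record, colour)
print(sim(probe, red), sim(probe, ball))  # ~0.5 vs ~0.0
```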
Isabelle Diacaire - From Ariadnas to Industry R&D in optics and photonics (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency.
Date: 28.02.2024
Speaker: Isabelle Dicaire (CCTT Optech)
Topic: From Ariadnas to Industry R&D in optics and photonics
The ExoGRAVITY project - observations of exoplanets from the ground with optical interferometry (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 09.02.2024.
Speaker: Sylvestre Lacour (Paris Observatory/LESIA)
Title: The ExoGRAVITY project - observations of exoplanets from the ground with optical interferometry
Abstract: I will talk about the latest observations and results with the GRAVITY instrument installed at the VLTI, Paranal observatory.
Presentation in the Science Coffee hosted by the Advanced Concepts Team of the European Space Agency on the 12.01.2024.
Speaker: Benoit Famaey (CNRS - Observatoire astronomique de Strasbourg)
Title: Modified Newtonian Dynamics
Abstract: Presentation around the topic of MOND / tests of MOND
Presentation in Science Coffee of ESA’s Advanced Concepts Team on the 24.11.2023 by Pablo Gomet (ESA/ESAC)
Abstract:
Current and upcoming space science missions will produce petascale data in the coming years. This requires a rethinking of data distribution and processing practices. For example, the Euclid mission will be sending more than 100GB of compressed data to Earth every day. Analysis and processing of data on this scale requires specialized infrastructure and toolchains. Further, providing users with this data locally is not practical due to bandwidth and storage constraints. Thus, a paradigm shift of bringing users' code to the data and providing a computational infrastructure and toolchain around the data is required. The ESA Datalabs platform is specifically focused on fulfilling this need. It provides a centralized platform with access to data from various missions including the James Webb Space Telescope, Gaia, and others. Pre-setup environments with the necessary toolchains and standard software tools such as JupyterLab are provided and enable data access with minimal overhead. And, with the built-in Science Application Store, a streamlined environment is given that allows rapid deployment of desired processing or science exploitation pipelines. In this manner, ESA Datalabs provides an accessible and potent framework for high-performance computing and machine learning applications. While users may upload data, there is no need to download data, thus mitigating the bandwidth burden. As the computational load is handled within the computational infrastructure of ESA Datalabs, high scalability is achieved, and resources can be requisitioned as needed. Finally, the platform-centric approach facilitates direct collaboration on code and data. Currently, the platform is already available to several hundred users, is regularly showcased in dedicated workshops, and interested users may request access online.
Jonathan Sauder - Miniaturizing Mechanical Systems for CubeSats: Design Princ... (Advanced-Concepts-Team)
ESA/ACT Science Coffee presentation of Nov 3, 2023 by Jonathan Sauder (NASA/JPL/CalTech)
Abstract:
In the past decade CubeSats have evolved from small university educational opportunities to industry and governments using them to make new discoveries and monetize space. While originally most missions were restricted to Low Earth Orbit (LEO), CubeSats have begun to increase their reach across the solar system with the advent of Mars Cube One (MarCO) in 2018. However, with the small, constrained CubeSat form factor there is often a need to expand the CubeSat through deployable mechanical systems once the satellite is in space. In reviewing many CubeSat missions, it has been found that over 90% have deployable structures actuated by a mechanical system. These include antennas, solar panels, and instrument booms.
There is a key challenge in CubeSat mechanism design, as one cannot just shrink larger spacecraft mechanisms down to the CubeSat form factor. Rather, these mechanisms must be designed in a way to reduce complexity, which means good mechanical design principles are paramount. From experience designing the deployment mechanisms for the MarCO and RainCube missions, working on deployable antenna technology, and reviewing deployables used on hundreds of other CubeSats, several key principles have been identified for developing miniaturized mechanical systems. These principles will be discussed in the presentation, and examples will be provided. Small satellite missions can be made more robust by incorporating good design principles into future miniaturized mechanical systems, which in turn will result in greater reliability of small satellites. This is especially important given that many small satellites have mission-critical deployables, and the ever-increasing number of interplanetary small satellite missions and opportunities.
Artificial intelligence (AI) is a potentially disruptive tool for physics and science in general. One crucial question is how this technology can contribute at a conceptual level to help acquire new scientific understanding or inspire new surprising ideas. I will talk about how AI can be used as an artificial muse in quantum physics, which suggests surprising and unconventional ideas and techniques that the human scientist can interpret, understand and generalize to its fullest potential.
EDEN ISS is a European project focused on advancing bio-regenerative life support systems, in particular plant cultivation in space. A mobile test facility was designed and built between March 2015 and October 2017. The facility incorporates a Service Section which houses several subsystems necessary for plant cultivation and the Future Exploration Greenhouse. The latter is built similarly to a future space greenhouse and provides a fully controlled environment for plant cultivation. The facility was set up in Antarctica in close vicinity to the German Neumayer Station III in January 2018 and successfully operated between February and November of the same year. During that nine-month period, around 270 kg of food was produced by the crops cultivated in the greenhouse. Besides the mere production of food for the overwintering crew (10 people) of the Neumayer Station III, a large number of experiments were conducted. These experiments delivered valuable data for engineering of space greenhouses, horticultural sciences, microbiology, food quality and safety, psychology and operation of a food production facility in a remote environment. Component and subsystem validation was conducted to better understand engineering issues when building a space greenhouse. Fresh edible and inedible biomass was measured upon every harvest, dry weight ratios were determined and crop life cycle data was collected. More than 400 plant and microbiological samples were taken for the microbiology, and food quality and safety scientists working on the project. Some samples were composed of freeze-dried plant tissue, but most samples were frozen at -40°C and shipped to Europe for analysis in specialized laboratories. A survey with the overwintering crew was executed to get information about the impact of the greenhouse on the crew during the nine-month-long winter season. Operation procedures for horticultural tasks, but also for system maintenance, were developed and tested. The required crew time, energy and resource demands were measured. This presentation shows an overview of the research results of the EDEN ISS research campaign in Antarctica close to the Neumayer Station III.
The quest to create artificial general intelligence has largely followed a “brain in a vat” approach, aiming to build a disembodied mind that can carry out the kinds of logical reasoning and inference that humans are capable of, usually demonstrated through language. This approach may some day pay off, but it’s not how nature did it. Intelligence did not evolve to solve abstract problems – it evolved to adaptively control behaviour in the real world. Living organisms are agents that can act, for their own reasons, in pursuit of their own goals – most fundamentally, to persist as a self through time. By charting the evolution of agency, we can see the origins of action and the concomitant emergence of behavioural control systems; the transition from pragmatic perception-action couplings to more and more internalised semantic representations; and, on our lineage, a trajectory of increasing cognitive depth and ever more sophisticated mapping and modelling of the world and the self. The resultant accumulation of causal knowledge grants the ability to simulate more complex scenarios, to predict and plan over longer timeframes, to optimise over more competing goals at once, and ultimately to exercise conscious rational control over behaviour. In this way, intelligent entities – agents – evolved, with greater and greater autonomy, flexibility, and causal power in the world. To realise intelligence in artificial systems, it may similarly be necessary to develop embodied, situated agents, with meaning and understanding grounded in relation to real-world goals, actions, and consequences.
Brains rely on spiking neural networks for ultra-low-power information processing. Building artificial intelligence with similar efficiency requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. Toward this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks and showcase their robustness and self-calibration capabilities on analog neuromorphic hardware. Drawing further inspiration from biology, I will discuss the impact of homeostatic plasticity and network initialization in the excitatory-inhibitory balanced regime on deep spiking neural network training. Finally, I will show how approximations relate surrogate gradients to biologically plausible online learning rules with a minor impact on their effectiveness.
The promise of computer aided manufacturing is to make materializable structures that could not be fabricated using traditional methods. An example is 3D printed lattices, where variation in the lattice geometry and print media can define a vast spectrum of resulting material behaviour, ranging from fully flexible forms to completely stiff examples with high strength. While these “architected materials” offer huge promise for industrial applications, in practice they are difficult to generate and explore digitally, and even harder to simulate for mechanical testing. In this talk I will outline a range of approaches to the study of architected materials using machine learning. I will describe several projects using graph neural networks (GNNs) to model lattice geometry, and report on a few recent works that construct inverse models. These approaches are progress toward better methods for approximation of the material behaviour of the space of all lattice geometries, offering potential for real-time material feedback at the design stage, and a streamlined selection process for architected materials.
Electromagnetically Actuated Systems for Modular, Self-Assembling and Self-Re... (Advanced-Concepts-Team)
This talk will cover two research projects within the MIT Space Exploration Initiative’s microgravity self-assembly portfolio. While the sizes and geometries of today’s space structures are limited by launch mass and volume, modular reconfigurability may support tightly packing structure modules over multiple launches and provide for adaptation to unforeseen circumstances once deployed. Self-assembly methods also promise to reduce crew EVA construction time on-orbit, when leveraged for large-scale habitat structures. We will report on a quasi-stochastic self-assembly hardware platform, and accompanying robotics simulation, for hollow buckyball shells in orbit. This talk will also introduce a reconfigurable space structure based on electromagnetically pivoting cubes that originated in the ACT. Both projects will show recent hardware for fully untethered modules, results from physical experiments on parabolic flights and a 30-day ISS mission, and simulation approaches for planning and characterizing self-assembly and reconfigurability.
HORUS (Hyper-effective nOise Removal U-net Software) is a cutting-edge AI tool designed to enhance Lunar Reconnaissance Orbiter (LRO) optical low-light imagery of the Moon's shadowed regions by removing most of the CCD-related and photon noise. For the first time, HORUS enables scientists and engineers to identify intra-shadow geologic features (craters, boulders, etc.) as small as 3 meters across, making this tool uniquely useful for applications such as geologic mapping, landing site selection, hazard recognition, and mission planning, directly supporting the robotic and crewed exploration of the Moon's south pole.
META-SPACE: Psycho-physiologically Adaptive and Personalized Virtual Reality ... (Advanced-Concepts-Team)
This document proposes developing an adaptive virtual reality system called "meta-space" to promote well-being for astronauts and others in isolated environments. It would collect physiological and behavioral data to detect psychological states and adapt VR content accordingly, such as virtual escapes of Earth or interactive games. A proposed development plan includes exploring signals, combining them into an adaptive layer, generating the virtual world, and optimizing the headset through testing.
The Large Interferometer For Exoplanets (LIFE) II: Key Methods and Technologies (Advanced-Concepts-Team)
The LIFE initiative has the goal to develop the science, the technology and a roadmap for an aspiring space mission that will allow humankind to detect and characterize, via nulling interferometry, the atmospheres of hundreds of nearby extrasolar planets including dozens that may be similar to Earth. This follow-up talk will tackle more of the techniques and technologies that will enable such an ambitious undertaking. I will outline the underlying measuring principle, and provide some overview over essential technologies, their current status and necessary developments.
Black holes have evolved from theoretical prediction to accepted hypothesis, due to the wealth of new discoveries in the last decades. In this talk I will discuss the observational evidence for the existence of black holes of different sizes and what we know about their evolution based on observations and theory. I will also describe what Quasars and Active Galactic Nuclei are, and how these extremely luminous objects can be used to study black holes at the early ages of the Universe.
In vitro simulation of spaceflight environment to elucidate combined effect o... (Advanced-Concepts-Team)
Long-term exposure to microgravity, ionizing radiation and increased levels of psychological stress can cause changes in the astronauts’ skin, resulting in skin rashes, itches and delayed wound healing during space missions. There is still a lack of understanding how the complex spaceflight environment induces these defects. This PhD project aims to investigate how exposure to a combination of spaceflight stressors can affect the structure and function of the skin, and how they can hamper wound healing. For this we have developed in vitro simulation models and are exposing primary human dermal fibroblasts to hydrocortisone, ionizing radiation and simulated microgravity. Results indicate a significant negative effect of hydrocortisone as well as simulated microgravity on wound healing capability of dermal fibroblasts. Furthermore, a project has been initiated with the support of the European Space Agency Academy “Spin Your Thesis!” Campaign, aiming to investigate the effects of an increased gravitational force on fibroblast function related to wound healing. Altogether the results of this PhD project will give more insights into the effects of combined spaceflight stressors on dermal skin cells, and improve risk assessment for human deep space exploration.
The Large Interferometer For Exoplanets (LIFE): the science of characterising... (Advanced-Concepts-Team)
Studying the atmospheres of a statistically significant number of rocky, terrestrial exoplanets - including the search for habitable and potentially inhabited planets - is one of the major goals of exoplanetary science and possibly the most challenging question in 21st century astrophysics. However, despite being at the top of the agenda of all major space agencies and ground-based observatories, none of the currently planned projects or missions worldwide has the technical capabilities to achieve this goal. In this talk we present new results from the LIFE Mission initiative, which addresses this issue by investigating the scientific potential of a mid infrared nulling interferometer observatory. Here we will focus on the mission's yield estimates, our simulator software as well as various exemplary science cases such as observing Earth- and Venus-twins or searching for phosphine in exoplanetary atmospheres.
Vernal pools are ephemeral wetland ecosystems that provide habitat for specialized plants and animals. They form "archipelagos" distributed across the landscape. Microbial communities in vernal pool soil and water show environmental filtering between habitats. Next-generation sequencing of soil samples revealed differences in microbial composition between soil, wet soil, and water. Species diversity and community composition changes with increasing spatial distance between pools, following a distance-decay pattern. Vernal pools may provide insights into the origins and mechanisms of biodiversity as well as how biodiversity responds to environmental changes. As a new frontier for science, further study of vernal pool ecosystems can help us understand the role of symbiosis and adaptation in life.
Authoring a personal GPT for your research and practice: How we created the QUAL-E Immersive Learning Thematic Analysis Helper (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants who have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and a slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
The technology uses reclaimed CO₂ as the dyeing medium in a closed-loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state, CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomatic index... (AbdullaAlAsif1)
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study adds to our understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Equational axioms for probability calculus and modelling of Likelihood ratio transfer mediated reasoning
1. Equational axioms for probability calculus and modelling of Likelihood ratio transfer mediated reasoning
Jan Bergstra
Informatics Institute, Faculty of Science
University of Amsterdam
j.a.bergstra@uva.nl
ESTEC March 8, 2019
2. Commutative rings
(x + y) + z = x + (y + z) (1)
x + y = y + x (2)
x + 0 = x (3)
x + (−x) = 0 (4)
(x · y) · z = x · (y · z) (5)
x · y = y · x (6)
1 · x = x (7)
x · (y + z) = x · y + x · z (8)
3. Division by zero
Add a function symbol for inverse (x^{-1}) and have division (x/y or x ÷ y) as a derived operator.
Now what about 0^{-1}? A survey of the 8 known options:
0^{-1} = 0, material inverse (material division): meadows,
0^{-1} = 1 (inverse not involutive),
0^{-1} = 17 (an ad hoc value),
0^{-1} = ⊥ (error value): common inverse,
0^{-1} = ∞ (unsigned infinite) with 0 · 0^{-1} = ⊥, natural inverse: wheels,
0^{-1} = +∞ (positive signed infinite) with ∞ + (−∞) = ⊥: transrational numbers, transreal numbers,
0^{-1} ↑ undefined (divergence): partial inverse,
0 · 0^{-1} = 1: formal multiplicative inverse.
4. Meadows: Md = CR + (9) + (10)
(x^{-1})^{-1} = x (9)
x · (x · x^{-1}) = x (10)
We find Md ⊢ 0^{-1} = 0: instantiating (10) with x = 0^{-1} gives 0^{-1} = 0^{-1} · (0^{-1} · (0^{-1})^{-1}), which by (9) equals 0^{-1} · (0^{-1} · 0) = 0.
x/y = x ÷ y = x · y^{-1} (11)
Defining equations for the different operator symbols for division.
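As an aside (not on the original slides), the meadow axioms are easy to animate in Python with exact rational arithmetic; a minimal sketch, with minv and mdiv as illustrative names of my own:

    from fractions import Fraction

    def minv(x):
        # Material (meadow) inverse: total, with 0^{-1} = 0.
        return Fraction(0) if x == 0 else 1 / Fraction(x)

    def mdiv(x, y):
        # Division as a derived operator: x / y = x * y^{-1}.
        return Fraction(x) * minv(y)

    # Spot-check axioms (9) and (10) on sample rationals, including 0.
    for x in [Fraction(0), Fraction(1), Fraction(-3), Fraction(2, 7)]:
        assert minv(minv(x)) == x          # (9): (x^{-1})^{-1} = x
        assert x * (x * minv(x)) == x      # (10): x * (x * x^{-1}) = x

    assert minv(0) == 0                    # Md |- 0^{-1} = 0
    assert mdiv(1, 0) == 0                 # hence also 1/0 = 0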
5. Signed meadows
Idea for the sign function: s(0) = 0, x > 0 → s(x) = 1, x < 0 → s(x) = −1.
Axioms (without ordering):
s(x · x^{-1}) = x · x^{-1} (12)
s(1 − x · x^{-1}) = 1 − x · x^{-1} (13)
s(−1) = −1 (14)
s(x^{-1}) = s(x) (15)
s(x · y) = s(x) · s(y) (16)
0^{s(x)−s(y)} · (s(x + y) − s(x)) = 0 (17)
|x| = s(x) · x
Sign: axioms for the sign operator & absolute value. Completeness result:
R_0(s) ⊨ t = r ⇐⇒ Md + Sign ⊢ t = r
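Again as an illustration only, the Sign axioms can be spot-checked numerically. The sketch below is mine; it reads 0^u in (17) as the definable meadow zero-test 1 − u · u^{-1} (equal to 1 iff u = 0), which is an assumption about the notation:

    from fractions import Fraction

    def s(x):
        # Sign: s(0) = 0, s(x) = 1 for x > 0, s(x) = -1 for x < 0.
        return Fraction((x > 0) - (x < 0))

    def minv(x):
        # Meadow inverse with 0^{-1} = 0.
        return Fraction(0) if x == 0 else 1 / Fraction(x)

    xs = [Fraction(0), Fraction(1), Fraction(-2), Fraction(3, 5), Fraction(-7, 2)]
    assert s(Fraction(-1)) == -1                          # (14)
    for x in xs:
        assert s(x * minv(x)) == x * minv(x)              # (12)
        assert s(1 - x * minv(x)) == 1 - x * minv(x)      # (13)
        assert s(minv(x)) == s(x)                         # (15)
        assert s(x) * x == abs(x)                         # |x| = s(x) * x
        for y in xs:
            assert s(x * y) == s(x) * s(y)                # (16)
            d = s(x) - s(y)
            zerotest = 1 - d * minv(d)                    # 0^{s(x)-s(y)}
            assert zerotest * (s(x + y) - s(x)) == 0      # (17)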
6. Event space
Events viewed as propositions about samples.
(x ∨ y) ∧ y = y (18)
(x ∧ y) ∨ y = y (19)
x ∧ (y ∨ z) = (y ∧ x) ∨ (z ∧ x) (20)
x ∨ (y ∧ z) = (y ∨ x) ∧ (z ∨ x) (21)
x ∧ ¬x = ⊥ (22)
x ∨ ¬x = ⊤ (23)
BA: a self-dual equational basis for Boolean algebras (Padmanabhan 1983)
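Because BA is an equational basis, (18)-(23) can be verified exhaustively over the two-element Boolean algebra; a small Python check of my own:

    from itertools import product

    TOP, BOT = True, False
    for x, y, z in product([False, True], repeat=3):
        assert ((x or y) and y) == y                          # (18)
        assert ((x and y) or y) == y                          # (19)
        assert (x and (y or z)) == ((y and x) or (z and x))   # (20)
        assert (x or (y and z)) == ((y or x) and (z or x))    # (21)
        assert (x and (not x)) == BOT                         # (22)
        assert (x or (not x)) == TOP                          # (23)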
7. Probability functions
P(⊤) = 1 (24)
P(⊥) = 0 (25)
P(x) = |P(x)| (26)
P(x ∨ y) = P(x) + P(y) − P(x ∧ y) (27)
P(x ∧ y) · P(y) · P(y)^{-1} = P(x ∧ y) (28)
P(x | y) = P(x ∧ y) · P(y)^{-1} (definition of conditional probability)
PFP: a version of Kolmogorov’s axioms for a probability function.
Completeness: Md + Sign + BA + PFP proves all equations t = r which hold in any structure made from a Boolean algebra E and a probability function P : E → R_0.
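For a concrete finite model, the PFP axioms can be checked mechanically. The sketch below (mine, not from the slides) takes the Boolean algebra of subsets of a three-element sample space and builds P from positive atom weights; the meadow convention 0^{-1} = 0 makes conditional probability total:

    from fractions import Fraction
    from itertools import combinations

    samples = ['a', 'b', 'c']
    w = {'a': Fraction(1, 2), 'b': Fraction(1, 3), 'c': Fraction(1, 6)}

    def P(x):
        return sum((w[s] for s in x), Fraction(0))

    def minv(v):
        # Meadow inverse: 0^{-1} = 0, making cond below total.
        return Fraction(0) if v == 0 else 1 / v

    def cond(x, y):
        # P(x | y) = P(x ∧ y) * P(y)^{-1}
        return P(x & y) * minv(P(y))

    events = [frozenset(c) for n in range(4) for c in combinations(samples, n)]
    TOP, BOT = frozenset(samples), frozenset()

    assert P(TOP) == 1 and P(BOT) == 0                       # (24), (25)
    for x in events:
        assert P(x) == abs(P(x))                             # (26)
        assert cond(x, BOT) == 0                             # conditioning on ⊥ stays total
        for y in events:
            # Here | and & are set union and intersection (∨ and ∧).
            assert P(x | y) == P(x) + P(y) - P(x & y)        # (27)
            assert P(x & y) * P(y) * minv(P(y)) == P(x & y)  # (28)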
8. Formalizing Kolmogorov’s axioms & Bayes’ rule
Original presentation of Kolmogorov’s axioms: use set theory and real
numbers and define which P’s are probability functions.
Given this definition Md + Sign + BA + PFP is a formalisation of that
definition. In the completeness statement the definition is used and its
correspondence with the formalisation is stated.
Bayes’ rule is derivable from Md + Sign + BA + PFP (without using P(x ∨ y) = P(x) + P(y) − P(x ∧ y)):
P(x | y) = (P(y | x) · P(x)) / P(y)
(with inverse instead of division: P(x | y) = P(y | x) · P(x) · P(y)^{-1}).
9. Proof of Bayes’ rule (BR) from Md + Sign + BA + PFP
P(x | y) = P(x ∧ y) / P(y)
         = P(y ∧ x) / P(y)
         = (P(y ∧ x) · P(x) · P(x)^{-1}) / P(y)
         = ((P(y ∧ x) / P(x)) · P(x)) / P(y)
         = (P(y | x) · P(x)) / P(y)
In the presence of Md + BA + the definition of conditional probability, BR follows from equation (28) (P(x ∧ y) · P(y) · P(y)^{-1} = P(x ∧ y)). In fact this works both ways: BR implies equation (28).
10. Second form of Bayes’ rule
BR2, a second form of Bayes’ rule
P(x | y) = (P(y | x) · P(x)) / (P(y | z) · P(z) + P(y | ¬z) · P(¬z))
BR2 is derivable from Md + Sign + BA + PFP and is equivalent to P(x ∨ y) = P(x) + P(y) − P(x ∧ y).
BR2 is stronger than BR.
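BR and BR2 can be sanity-checked as total identities on a finite model (a numeric check, not a proof of the derivability claims above). In this sketch of mine all atom weights are strictly positive and z ranges over all events, including ⊤ and ⊥:

    from fractions import Fraction
    from itertools import combinations

    samples = ['a', 'b', 'c', 'd']
    w = dict(zip(samples, [Fraction(1, 3), Fraction(1, 6), Fraction(1, 4), Fraction(1, 4)]))

    def P(x):
        return sum((w[s] for s in x), Fraction(0))

    def minv(v):
        return Fraction(0) if v == 0 else 1 / v

    def cond(x, y):
        return P(x & y) * minv(P(y))

    events = [frozenset(c) for n in range(5) for c in combinations(samples, n)]
    TOP = frozenset(samples)

    for x in events:
        for y in events:
            # BR: P(x|y) = P(y|x) * P(x) * P(y)^{-1}
            assert cond(x, y) == cond(y, x) * P(x) * minv(P(y))
            for z in events:
                # BR2: the denominator P(y|z)P(z) + P(y|¬z)P(¬z) replaces P(y).
                denom = cond(y, z) * P(z) + cond(y, TOP - z) * P(TOP - z)
                assert cond(x, y) == cond(y, x) * P(x) * minv(denom)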
11. PFP: Alternative axioms for a probability function
P(⊤) = 1
P(⊥) = 0
P(x) = |P(x)|
P(x | y) = P(x ∧ y) / P(y)
P(x | y) = (P(y | x) · P(x)) / (P(y | z) · P(z) + P(y | ¬z) · P(¬z))
12. Why make the inverse total? Four arguments!
1. Raising a run-time exception at division by 0 may create a system risk (if other exceptions are also raised in a real-time context). Proving software correctness (in advance) over a meadow prevents such exceptions from being raised.
2. Several software verification tools use a total version of division, because the (any) logic of partial functions is significantly more complicated than the logic of total functions.
3. Limitation to classical two-valued logic: see the next slide.
4. Simplification of theoretical work: fewer cases to be distinguished, and fewer (negative) conditions occur.
13. Why make the inverse total? Limitation to classical two-valued logic
It is a common idea that the following assertion Φ is valid:
Φ(x) ≡ x ≠ 0 → x/x = 1
The idea is that the condition prevents one from having to divide by zero, and that one is comfortable with ∀x.Φ(x). But how can that be? Substitution of 0 for x must be allowed and must turn Φ(x) into a valid assertion, so that Φ(0) also holds, i.e.
0 ≠ 0 → 0/0 = 1
or in other words: 0 = 0 ∨ 0/0 = 1. But for the latter to hold (in classical two-valued logic) both parts of the disjunction must have a truth value. Thus we must know either 0/0 = 1 or ¬(0/0 = 1). However, when viewing 0/0 as undefined (or, even worse, as incorrectly typed) neither of these assertions is plausible.
14. Bayesian reasoning
CHALLENGE: understand courtroom Bayesian reasoning from first principles (that is, principles which are found in basic papers).
Conclusion: not at all easy. It is an oversimplification to say that judges should acquire the theoretical background, which consists of a few formulae and their application. The whole subject is deeply puzzling.
Principal agents:
TOF (trier of fact, the judge or a jury),
MOE (moderator of evidence, the expert witness),
a defendant, a prosecutor, several lawyers.
Here focus on TOF and MOE.
15. Subjective probability: a crash course
Exam question to person X: what is the probability P_bom that there are birds on the moon (not in a spacecraft)? Survey of answers by X with a corresponding assessment (VERY LOW < LOW < DEFECTIVE < ADEQUATE) of the “probability theory competence (ptc)” of X:
1. X replies that (s)he must visit the moon before answering the question (ptc VERY LOW, because X does not understand the concept of prior odds).
2. P_bom = 0: valid answer (ptc ADEQUATE).
3. P_bom = 10^{-5}: valid answer (ptc ADEQUATE).
4. P_bom > 0: X has not understood how to work with (subjective) probabilities, as these must be precise! (ptc DEFECTIVE).
5. 10^{-20} ≤ P_bom ≤ 10^{-10}: X has not understood how to work with (subjective) probabilities, no intervals! (ptc DEFECTIVE, though NFI experts may produce such intervals for likelihood ratios).
6. “I don’t know”: X has not understood the concept of probability at all, because precisely by assigning a value to P_bom, X may express his/her lack of knowledge (ptc LOW).
16. Credal state (partial belief state, most beliefs missed)
H (hypothesis: e.g. the defendant is guilty of criminal action C).
E: some assertion about evidence of relevance for H.
H and E are propositions, also called events.
All agents at each moment maintain a proposition space with a probability function (credal state, subjective probability):
for TOF: a proposition space (= event space) E_TOF with probability function P_TOF on E_TOF;
for MOE: E_MOE with probability function P_MOE on E_MOE.
Proposition kinetics: the event space changes (for instance E is added to E_TOF, or is removed from E_TOF).
Conditioning: modification (update) of the probability function on the basis of newly acquired information:
Bayes conditioning (for processing the information that “L is true”),
Jeffrey conditioning (for processing the information that “P(L) = p”),
single likelihood Adams conditioning (for processing a new value for a conditional probability, i.e. a likelihood),
double likelihood Adams conditioning (for processing a new value for a likelihood ratio).
17. Probability function transformations: a survey
Bayes conditioning (without proposition kinetics). Suppose S_A = S_A(L, M, N) and P^0_A(M) = p > 0. Then P_A is obtained by Bayes conditioning if it satisfies the following equation:
P_A = P^0_A(• | M)
Jeffrey notation: for all X ∈ S_A, P_A(X) = P^0_A(X | M).
Bayes conditioning with proposition kinetics. Now the resulting credal state is (S_A(L, N), P_A): M has been removed from the proposition space.
Bayes conditioning on a non-primitive proposition. S_A = S_A(L, M, N); Φ is a closed propositional sentence making use of the primitives L, M, and N; P^0_A(Φ) = p > 0. Then P_A is obtained by Bayes conditioning on Φ if it satisfies:
P_A = P^0_A(• | Φ)
The proposition space is not modified.
18. Jeffrey conditioning. Let for example S_A = S_A(L, M, N). Suppose P^0_A(M) > 0. Then P_A is obtained by Jeffrey conditioning if for some p ∈ [0, 1] it satisfies the following equation:
P_A = p · P^0_A(• | M) + (1 − p) · P^0_A(• | ¬M)
Jeffrey conditioning involves no proposition kinetics.
Proposition space reduction. Consider S_A = S_A(L, M, N); one may wish to forget about, say, M. Proposition kinetics now leads to a reduced proposition space S_A(L, N) in which only the propositions generated by L and N are left.
Parametrized proposition space expansion. Let S_A = S_A(H). One may wish to expand S_A by introducing M to it, in such a manner that a subsequent reduct brings one back to S_A. P_A(H) is left unchanged, and P_A(H ∧ M) and P_A(¬H ∧ M) are fixed with definite values serving as parameters for the transformation.
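A minimal Python rendering of Bayes and Jeffrey conditioning over the atoms of S_A(L, M, N); the helper names prob, bayes, and jeffrey are mine, and the example assumes 0 < P^0_A(M) < 1:

    from fractions import Fraction
    from itertools import product

    # Atoms of S_A(L, M, N): valuations of the three generators, carrying
    # the prior P^0_A as weights.
    atoms = list(product([False, True], repeat=3))   # (L, M, N)
    prior = {a: Fraction(1, 8) for a in atoms}       # uniform, for illustration

    def prob(P, phi):
        return sum((P[a] for a in P if phi(a)), Fraction(0))

    M = lambda a: a[1]

    def bayes(P, phi):
        # Bayes conditioning: P_A = P^0_A(. | phi), assuming prob(P, phi) > 0.
        m = prob(P, phi)
        return {a: (P[a] / m if phi(a) else Fraction(0)) for a in P}

    def jeffrey(P, phi, p):
        # Jeffrey conditioning: P_A = p * P^0_A(.|phi) + (1-p) * P^0_A(.|not phi),
        # assuming 0 < prob(P, phi) < 1.
        m = prob(P, phi)
        return {a: (p * P[a] / m if phi(a) else (1 - p) * P[a] / (1 - m))
                for a in P}

    post = jeffrey(prior, M, Fraction(3, 4))
    assert prob(post, M) == Fraction(3, 4)                    # Jeffrey sets P_A(M) = p
    assert jeffrey(prior, M, Fraction(1)) == bayes(prior, M)  # p = 1 recovers Bayes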
19. Single likelihood Adams conditioning. Let 0 < l ≤ 1 be a rational number. Assume that H and E are among the generators of S_A. Single likelihood Adams conditioning leaves the proposition space unchanged and transforms the probability function P^0_A to Q_l:
Q_l = (l / P^0_A(E | H)) · P^0_A(H ∧ E ∧ •) + ((1 − l) / P^0_A(¬E | H)) · P^0_A(H ∧ ¬E ∧ •) + P^0_A(¬H ∧ •)
Double likelihood Adams conditioning. Let 0 < l, l′ ≤ 1 be two rational numbers, and let H and E be among the generators of S_A. Double likelihood Adams conditioning leaves the proposition space S_A of A unchanged and transforms P^0_A to Q_{l,l′}:
Q_{l,l′} = (l / P^0_A(E | H)) · P^0_A(H ∧ E ∧ •) + ((1 − l) / P^0_A(¬E | H)) · P^0_A(H ∧ ¬E ∧ •) + (l′ / P^0_A(E | ¬H)) · P^0_A(¬H ∧ E ∧ •) + ((1 − l′) / P^0_A(¬E | ¬H)) · P^0_A(¬H ∧ ¬E ∧ •)
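Both Adams transformations can be written out over the four atoms generated by H and E. This is my rendering of the formulas above (rescale each of the four quadrants so that the likelihoods take their new values, leaving P(H) fixed), with illustrative numbers:

    from fractions import Fraction

    # Atoms (h, e) with prior weights; here P(H) = 3/10, P(E|H) = 1/3, P(E|¬H) = 3/7.
    prior = {(True, True): Fraction(1, 10), (True, False): Fraction(2, 10),
             (False, True): Fraction(3, 10), (False, False): Fraction(4, 10)}

    def prob(P, phi):
        return sum((P[a] for a in P if phi(a)), Fraction(0))

    H = lambda a: a[0]
    E = lambda a: a[1]

    def adams_single(P, l):
        # Q_l: set P(E|H) = l; the ¬H part of the distribution is untouched.
        pH = prob(P, H)
        e_h = prob(P, lambda a: H(a) and E(a)) / pH
        scale = {(True, True): l / e_h, (True, False): (1 - l) / (1 - e_h)}
        return {a: scale.get(a, Fraction(1)) * P[a] for a in P}

    def adams_double(P, l, lp):
        # Q_{l,l'}: set P(E|H) = l and P(E|¬H) = l', keeping P(H) fixed.
        pH = prob(P, H)
        e_h = prob(P, lambda a: H(a) and E(a)) / pH
        e_nh = prob(P, lambda a: not H(a) and E(a)) / (1 - pH)
        scale = {(True, True): l / e_h,
                 (True, False): (1 - l) / (1 - e_h),
                 (False, True): lp / e_nh,
                 (False, False): (1 - lp) / (1 - e_nh)}
        return {a: scale[a] * P[a] for a in P}

    Q = adams_double(prior, Fraction(3, 4), Fraction(1, 4))
    assert prob(Q, H) == prob(prior, H)                            # P(H) preserved
    assert prob(Q, lambda a: H(a) and E(a)) / prob(Q, H) == Fraction(3, 4)
    assert prob(Q, lambda a: not H(a) and E(a)) / (1 - prob(Q, H)) == Fraction(1, 4)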
20. LRTMR protocol (likelihood ratio transfer mediated reasoning)
Often the term likelihood is used to denote a certain conditional probability. We write L_α for a likelihood and LR_α for a particular ratio of likelihoods, commonly referred to as a likelihood ratio:
L_α(X, Y) = P_α(X | Y) and LR_α(X, Y, ¬Y) = L_α(X, Y) / L_α(X, ¬Y)
It is now assumed that both E and H are among the generators of both proposition spaces S_TOF and S_MOE. Further, TOF and MOE have prior credal states (S_TOF, P_TOF) and (S_MOE, P_MOE). The reasoning protocol LRTMR involves the following steps:
It is checked by MOE that 0 < P_MOE(H) < 1 and 0 < P_MOE(E) < 1; otherwise MOE raises an exception and the protocol aborts.
MOE determines the value r of the likelihood ratio LR_MOE(E, H, ¬H) = L_MOE(E, H) / L_MOE(E, ¬H) = P_MOE(E | H) / P_MOE(E | ¬H) with respect to its probability function P_MOE.
21. MOE communicates to TOF the value r and a description of LR_MOE(E, H, ¬H), that is, a description of what propositions r is a likelihood ratio of.
MOE communicates its newly acquired information to TOF: it now considers P_MOE(E) = 1, i.e. E being true, to be an adequate representation of the state of affairs. (Thus MOE has updated its probability function.)
TOF trusts MOE to the extent that TOF prefers those of MOE’s quantitative values that MOE communicates over its own values for the same probabilities, likelihoods, and likelihood ratios.
TOF takes all information into account and applies various conditioning operators to end up with its new (updated, posterior) belief function P_TOF.
TOF becomes aware of having updated its beliefs, with the effect that (writing P^0_TOF for the prior)
P_TOF(H) = (r · P^0_TOF(H)) / (1 + (r − 1) · P^0_TOF(H)).
TOF checks whether a threshold is exceeded so that a sound judgement on H can be made.
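To close the loop, the whole exchange can be replayed numerically; the sketch below is mine, with illustrative priors. It checks that any pair (l, l′) with l/l′ = r, followed by Bayes conditioning on E, yields the posterior given by the closed form above, and, anticipating the further remarks below, that two successive single likelihood updates followed by Bayes conditioning give the same result:

    from fractions import Fraction

    # Atoms (h, e), encoded 1/0; TOF's prior with P(H) = 1/4, P(E|H) = P(E|¬H) = 1/2.
    prior = {(1, 1): Fraction(1, 8), (1, 0): Fraction(1, 8),
             (0, 1): Fraction(3, 8), (0, 0): Fraction(3, 8)}

    def adams_double(P, l, lp):
        # Double likelihood Adams conditioning: P(E|H) := l, P(E|¬H) := lp.
        pH = P[(1, 1)] + P[(1, 0)]
        scale = {(1, 1): l / (P[(1, 1)] / pH),
                 (1, 0): (1 - l) / (P[(1, 0)] / pH),
                 (0, 1): lp / (P[(0, 1)] / (1 - pH)),
                 (0, 0): (1 - lp) / (P[(0, 0)] / (1 - pH))}
        return {a: scale[a] * P[a] for a in P}

    def adams_single(P, part, l):
        # Single likelihood update on the H part (part = 1) or the ¬H part (part = 0).
        m = P[(part, 1)] + P[(part, 0)]
        scale = {(part, 1): l / (P[(part, 1)] / m),
                 (part, 0): (1 - l) / (P[(part, 0)] / m)}
        return {a: scale.get(a, Fraction(1)) * P[a] for a in P}

    def bayes_on_E(P):
        # Bayes conditioning on the factuality of E.
        pE = P[(1, 1)] + P[(0, 1)]
        return {a: (P[a] / pE if a[1] else Fraction(0)) for a in P}

    p = prior[(1, 1)] + prior[(1, 0)]        # prior P_TOF(H) = 1/4
    r = Fraction(6)                          # likelihood ratio reported by MOE

    for l, lp in [(Fraction(3, 5), Fraction(1, 10)),
                  (Fraction(3, 10), Fraction(1, 20)),
                  (Fraction(9, 10), Fraction(3, 20))]:
        assert l / lp == r
        post = bayes_on_E(adams_double(prior, l, lp))
        # Closed form from this slide: P_TOF(H) = r p / (1 + (r - 1) p), here 2/3.
        assert post[(1, 1)] + post[(1, 0)] == r * p / (1 + (r - 1) * p)
        # Two successive single likelihood updates (the two-message variant) agree.
        twostep = adams_single(adams_single(prior, 1, l), 0, lp)
        assert bayes_on_E(twostep) == post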
22. Some conclusions
1. Upon receiving the value of the likelihood ratio r from MOE, TOF can (and must) update its probability function by means of double likelihood Adams conditioning. (DOGMA: agents must always and immediately take all new information into account by updating their credal states.)
2. Upon subsequently receiving the information that E is true (according to MOE), TOF applies Bayes conditioning (after the Adams conditioning).
3. MOE must first transfer the likelihood ratio; only thereafter may MOE contemplate the truth of E. (If MOE first settles the truth of E, then the likelihood ratio equals 1 and the protocol collapses, or MOE fails to communicate its proper beliefs.)
4. MOE communicates the truth of E in a separate (second) message, after having updated its own probability function.
5. After the first message of MOE, TOF must apply Adams conditioning. This is missed by all accounts that I have read.
23. Further remarks
MOE may transfer both likelihoods in separate successive messages. Then TOF can apply single likelihood Adams conditioning after each of the two messages, with the same effect as in the protocol.
In principle TOF may receive likelihood ratios regarding different pieces of evidence E, E′, E″, etc. But then E, E′, E″ must be independent (this requires highly non-trivial bookkeeping by TOF).
For TOF there is no way around subjective probability.
It is not clear from the forensic science literature whether MOE is supposed to think in terms of subjective probability as well. (Not a necessity, as TOF may freely turn MOE’s “objective” probabilities into its own subjective probabilities, but opinions diverge.)
If MOE must adhere to subjective probability, then (i) single-message reporting is not an option, and (ii) TOF must apply at least two successive updates of its probability function (even in the simplest case).