Summary: The document discusses practical representations of imprecise probabilities, which represent uncertainty as a set of probabilities rather than a single probability. It provides an overview of several practical representations, including possibility distributions, P-boxes, probability intervals, and elementary comparative probabilities. These representations aim to be computationally tractable, having a reasonable number of extreme points and satisfying properties such as n-monotonicity.
1. Practical representations of probability sets: a guided tour with applications
Sébastien Destercke, in collaboration with E. Miranda, I. Montes, M. Troffaes, D. Dubois, O. Strauss, C. Baudrit, and P.-H. Wuillemin.
CNRS researcher, Laboratoire Heudiasyc, Compiègne
Madrid Seminar
2. Plan
- Introduction
- Basics of imprecise probabilities
- A tour of practical representations
- Illustrative applications
4. Heudiasyc and LABEX MS2T activities
Heudiasyc
- 140 members
- 6M budget
- 4 teams:
  - Uncertainty and machine learning
  - Automation and robotics
  - Artificial intelligence
  - Operational research and networks
LABEX MS2T
- Topic: systems of systems
- 3 laboratories:
  - Heudiasyc
  - BMBI: biomechanics
  - Roberval: mechanics
If interested in collaborations, let me know.
5. Talk in a nutshell
What is this talk about?
1. (very) Basics of imprecise probability
2. A review of practical representations
3. Some applications
What is this talk not about?
- Deep mathematics of imprecise probabilities (you can ask Nacho or Quique)
- Imprecise parametric models
6. Plan
- Introduction
- Basics of imprecise probabilities
- A tour of practical representations
- Illustrative applications
7. Imprecise probabilities
What?
Representing uncertainty as a convex set P of probabilities rather than a single one.
Why?
- precise probabilities are inadequate to model lack of information;
- it generalizes both set-based and probabilistic uncertainty;
- it can model situations where probabilistic information is partial;
- it axiomatically allows alternatives to remain incomparable.
8. Probabilities
A probability mass function on a finite space X = {x1,...,xn} is equivalent to an n-dimensional vector
p := (p(x1),...,p(xn)),
restricted to the set P_X of all probabilities, i.e.,
p(x) ≥ 0 for all x ∈ X, and ∑_{x∈X} p(x) = 1.
The set P_X is the (n−1)-unit simplex.
9. Point in unit simplex
Example: p(x1) = 0.2, p(x2) = 0.5, p(x3) = 0.3.
[Figure: this point shown both in the cube with axes p(x1), p(x2), p(x3) and in its barycentric (triangle) representation.]
10. Imprecise probability
A set P can be defined through n constraints of the form
E̲(fi) ≤ ∑_{x∈X} fi(x)p(x) ≤ E̅(fi),
where the fi : X → R are bounded functions.
Example
2p(x2) − p(x3) ≥ 0, i.e., f(x1) = 0, f(x2) = 2, f(x3) = −1, E̲(f) = 0.
Lower/upper probabilities
Bounds P̲(A), P̅(A) on an event A are equivalent to
P̲(A) ≤ ∑_{x∈A} p(x) ≤ P̅(A).
11. Set P example
Constraint: 2p(x2) − p(x3) ≥ 0.
[Figure: the simplex cut by the half-space induced by this constraint.]
12. Credal set example
Constraints: 2p(x2) − p(x3) ≥ 0 and 2p(x1) − p(x2) − p(x3) ≥ 0.
[Figure: the credal set P obtained by intersecting the two half-spaces with the simplex.]
13. Natural extension
From an initial set P defined by constraints, we can compute:
- the lower expectation E̲(g) of any function g, as E̲(g) = inf_{p∈P} E_p(g);
- the lower probability P̲(A) of any event A, as P̲(A) = inf_{p∈P} p(A).
14. Some usual problems
- Computing E̲(g) = inf_{p∈P} E_p(g) for a new function g
- Updating: P̲(θ|x) ∝ L(x|θ)P̲(θ)
- Computing the conditional E̲(f|A)
- Simulating/sampling from P
- Building a joint model over variables X1,...,Xn
These can be difficult to perform in general → practical representations reduce the computational cost.
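For small spaces, natural extension is just a linear program over the simplex. The following is a minimal sketch (not part of the original talk) using scipy; the function name lower_expectation and the constraint encoding are ours, with the credal set of slide 12 as an example.

```python
# A minimal sketch: natural extension E_lower(g) = inf_{p in P} E_p(g)
# by linear programming, for a credal set given by linear constraints.
import numpy as np
from scipy.optimize import linprog

def lower_expectation(g, A_ub=None, b_ub=None):
    """Minimise sum_i g[i] * p[i] over the probability simplex,
    subject to optional extra constraints A_ub @ p <= b_ub."""
    n = len(g)
    res = linprog(
        c=np.asarray(g, dtype=float),          # objective: E_p(g)
        A_ub=A_ub, b_ub=b_ub,                  # extra linear constraints
        A_eq=np.ones((1, n)), b_eq=[1.0],      # sum_i p(x_i) = 1
        bounds=[(0.0, 1.0)] * n,               # p(x_i) >= 0
    )
    return res.fun

# Credal set of slide 12: 2p(x2) - p(x3) >= 0 and 2p(x1) - p(x2) - p(x3) >= 0,
# rewritten as "<=" constraints.
A = np.array([[0.0, -2.0, 1.0],
              [-2.0, 1.0, 1.0]])
b = np.zeros(2)
print(lower_expectation([1.0, 0.0, 2.0], A, b))  # E_lower(g) for g = (1, 0, 2)
```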
15. What makes a representation "practical"?
- A reasonable, algorithmically enumerable number of extreme points.
Reminder: p ∈ P is extreme iff p = λp1 + (1−λ)p2 with λ ∈ (0,1) implies p1 = p2 = p. We denote by E(P) the set of extreme points of P.
- An n-monotonicity property of P̲:
2-monotonicity (convexity/supermodularity):
P̲(A∪B) + P̲(A∩B) ≥ P̲(A) + P̲(B) for all A,B ⊆ X
∞-monotonicity:
P̲(∪_{i=1}^n Ai) ≥ ∑_{∅≠A⊆{A1,...,An}} (−1)^{|A|+1} P̲(∩_{Ai∈A} Ai) for all A1,...,An ⊆ X and n > 0
16. Extreme points: illustration
- p(x1) = 1, p(x2) = 0, p(x3) = 0
- p(x1) = 0, p(x2) = 1, p(x3) = 0
- p(x1) = 0.25, p(x2) = 0.25, p(x3) = 0.5
[Figure: these three extreme points of a credal set drawn in the simplex.]
17. Extreme points: utility
- Computing E̲(g) → take the minimal E_p(g) over the extreme points
- Updating → update the extreme points, then take the convex hull
- Conditional E̲(f|A) → take the minimal E_p(f|A) over the extreme points
- Simulating P → take convex mixtures of extreme points
- Joint over variables X1,...,Xn → convex hull of the joints of extreme points
Again, this is practical only if the number of extreme points is limited, or if an inner approximation (by sampling) is acceptable.
18. 2-monotonicity
Computing E̲(g): Choquet integral
E̲(g) = inf g + ∫_{inf g}^{sup g} P̲({g ≥ t}) dt
In finite spaces → sort the n values of g and compute P̲(A) for n events.
Conditioning
P̲(A|B) = P̲(A∩B) / (P̲(A∩B) + P̅(A^c∩B))
and P̲(·|B) remains 2-monotone (can be used to get E̲(f|B)).
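In a finite space the Choquet integral thus reduces to a sort. A minimal sketch (not from the talk), assuming the lower probability is available as a function on subsets of indices:

```python
# A minimal sketch of the discrete Choquet integral for a lower probability.
import numpy as np

def choquet_lower_expectation(g, lower_prob):
    """g: array of values g(x_1),...,g(x_n);
    lower_prob: function mapping a frozenset of indices to P_lower(A)."""
    g = np.asarray(g, dtype=float)
    order = np.argsort(g)                 # sort states by increasing g
    gs = g[order]
    total = gs[0]                         # inf g
    for k in range(1, len(gs)):
        A = frozenset(order[k:])          # level set {g >= gs[k]}
        total += (gs[k] - gs[k - 1]) * lower_prob(A)
    return total

# Example with the vacuous model on 3 states: P_lower(A) = 1 iff A = X.
vac = lambda A: 1.0 if len(A) == 3 else 0.0
print(choquet_lower_expectation([1.0, 0.0, 2.0], vac))  # -> 0.0 = min g
```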
19. ∞-monotonicity
If P̲ is ∞-monotone, its Möbius inverse m : 2^X → R,
m(A) = ∑_{B⊆A} (−1)^{|A\B|} P̲(B),
is non-negative and sums up to one; P̲ is then often called a belief function.
Simulating P
Sample a set A with probability m(A) and work with the sampled set.
Joint model of X1,...,XN
If m1, m2 correspond to the inverses on X1, X2, consider the joint m12 such that m12(A×B) = m1(A)·m2(B):
- still ∞-monotone
- outer-approximates other definitions of independence between P1, P2
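For small spaces, the Möbius inverse can be computed by direct subset enumeration. A minimal sketch (not from the talk); the helper names are ours:

```python
# A minimal sketch: Möbius inverse of a lower probability on a small space,
# m(A) = sum over B subseteq A of (-1)^{|A \ B|} * P_lower(B).
from itertools import chain, combinations

def subsets(A):
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

def moebius_inverse(lower_prob, n):
    """Return {A: m(A)} for all subsets A of {0,...,n-1}."""
    m = {}
    for A in map(frozenset, subsets(range(n))):
        m[A] = sum((-1) ** (len(A) - len(B)) * lower_prob(frozenset(B))
                   for B in subsets(A))
    return m

# Vacuous model on 3 states: all mass should land on the full set X.
vac = lambda A: 1.0 if len(A) == 3 else 0.0
m = moebius_inverse(vac, 3)
print(m[frozenset({0, 1, 2})])  # -> 1.0; every other subset gets m = 0
```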
20. 2-monotonicity and extreme points [3]
Generating extreme points when P̲ is 2-monotone:
1. Pick a permutation σ : {1,...,n} → {1,...,n} of X
2. Consider the sets A^σ_i = {x_σ(1),...,x_σ(i)}
3. Define P^σ({x_σ(i)}) = P̲(A^σ_i) − P̲(A^σ_{i−1}) for i = 1,...,n (with A^σ_0 = ∅)
4. Then P^σ ∈ E(P)
Some comments
- The maximal value of |E(P)| is n!
- We can have P^σ1 = P^σ2 with σ1 ≠ σ2 → |E(P)| is often less than n!
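The construction is straightforward to implement. A minimal sketch (not from the talk) that enumerates all permutations and deduplicates the resulting points; for the vacuous model it recovers the n Dirac distributions:

```python
# A minimal sketch of the permutation-based construction: each permutation
# of the states yields an extreme point of a 2-monotone lower probability.
from itertools import permutations

def extreme_points(lower_prob, n):
    """Return the distinct extreme points of the credal set of a 2-monotone
    lower probability, given as a function frozenset-of-indices -> value."""
    points = set()
    for sigma in permutations(range(n)):
        p = [0.0] * n
        for i in range(n):
            A_i = frozenset(sigma[: i + 1])
            A_prev = frozenset(sigma[:i])
            p[sigma[i]] = lower_prob(A_i) - lower_prob(A_prev)
        points.add(tuple(round(v, 12) for v in p))  # deduplicate
    return points

# Vacuous model on 3 states: the extreme points are the 3 Dirac distributions.
vac = lambda A: 1.0 if len(A) == 3 else 0.0
print(sorted(extreme_points(vac, 3)))
```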
21. Example
- X = {x1,x2,x3}
- σ(1) = 2, σ(2) = 3, σ(3) = 1
- A^σ_0 = ∅, A^σ_1 = {x2}, A^σ_2 = {x2,x3}, A^σ_3 = X
- P^σ({x_σ(1)}) = P^σ({x2}) = P̲({x2}) − P̲(∅) = P̲({x2})
- P^σ({x_σ(2)}) = P^σ({x3}) = P̲({x2,x3}) − P̲({x2})
- P^σ({x_σ(3)}) = P^σ({x1}) = P̲(X) − P̲({x2,x3}) = 1 − P̲({x2,x3})
22. Plan
- Introduction
- Basics of imprecise probabilities
- A tour of practical representations
  - Basics
  - Possibility distributions
  - P-boxes
  - Probability intervals
  - Elementary comparative probabilities
- Illustrative applications
23. Two very basic models
Probability
- P̲({xi}) = P̅({xi}) = p(xi)
- ∞-monotone, n constraints, |E(P)| = 1
Vacuous model P_X
Only the support X of the probability is known:
- P̲(X) = 1
- ∞-monotone, 1 constraint, |E(P)| = n (the Dirac distributions)
Easily extends to a vacuous model on a set A (useful in robust optimisation, decision under risk, interval analysis).
24. A concise graph
[Figure: graph relating the models (Proba, Vacuous, Linear-vacuous, Pari-mutuel, Possibilities, P-boxes, Prob. int., Compa.); an arrow from model A to model B means A is a special case of B; the ∞-monotone and 2-monotone families are outlined.]
25. Neighbourhood models
Build a neighbourhood around a given probability P0.
Linear-vacuous / ε-contamination
- P̲(A) = (1−ε)P0(A) + ε P̲_X(A)
- ∞-monotone, n+1 constraints, |E(P)| = n
- ε ∈ [0,1]: unreliability of the information P0
Pari-mutuel [16]
- P̲(A) = max{(1+ε)P0(A) − ε, 0}
- 2-monotone, n+1 constraints, |E(P)| = ? (n?)
- ε ∈ [0,1]: unreliability of the information P0
Other models exist, such as odds-ratio or distance-based ones (all q s.t. d(p,q) ≤ δ)
→ often not attractive in terms of |E(P)| or monotonicity, but they may have nice properties
(odds-ratio: updating; squared/log distances: convex continuous neighbourhoods).
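Both neighbourhood models have closed-form lower probabilities, so they are cheap to evaluate. A minimal sketch (not from the talk); the function names are ours:

```python
# A minimal sketch of the two neighbourhood models around a precise p0,
# as lower probabilities of an event A (a set of 0-based indices).
import numpy as np

def linear_vacuous_lower(p0, A, eps, n):
    """P_lower(A) = (1 - eps) * P0(A), except P_lower(X) = 1."""
    if len(A) == n:
        return 1.0
    return (1.0 - eps) * sum(p0[i] for i in A)

def pari_mutuel_lower(p0, A, eps):
    """P_lower(A) = max{(1 + eps) * P0(A) - eps, 0}."""
    return max((1.0 + eps) * sum(p0[i] for i in A) - eps, 0.0)

p0 = np.array([0.2, 0.5, 0.3])
A = {1, 2}   # event {x2, x3}
print(linear_vacuous_lower(p0, A, eps=0.1, n=3))  # 0.9 * 0.8 = 0.72
print(pari_mutuel_lower(p0, A, eps=0.1))          # 1.1 * 0.8 - 0.1 = 0.78
```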
27. A concise graph
[Figure: the model hierarchy graph of slide 24, now situating the neighbourhood models just introduced.]
28. Possibility distributions [10]
Definition
A distribution π : X → [0,1] with π(x) = 1 for at least one x. P is given by
P̲(A) = min_{x∈A^c} (1 − π(x)),
which is a necessity measure.
[Figure: an example possibility distribution π.]
Characteristics of P
- Requires at most n values
- P̲ is an ∞-monotone measure
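Computing the induced necessity measure from π is a one-liner. A minimal sketch (not from the talk):

```python
# A minimal sketch: the necessity measure induced by a possibility
# distribution pi on a finite space.
def necessity(pi, A):
    """P_lower(A) = min over x outside A of (1 - pi(x)); 1 if A is all of X."""
    outside = [v for i, v in enumerate(pi) if i not in A]
    return 1.0 if not outside else min(1.0 - v for v in outside)

pi = [1.0, 0.6, 0.2]          # pi(x1), pi(x2), pi(x3); max is 1 as required
print(necessity(pi, {0}))      # 1 - max(0.6, 0.2) = 0.4
print(necessity(pi, {0, 1}))   # 1 - 0.2 = 0.8
```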
29. Possibility distributions
Alternative definition
Provide nested events A1 ⊆ ... ⊆ An and give lower confidence bounds
P̲(Ai) = αi, with αi+1 ≥ αi.
[Figure: nested sets Ai with their confidence levels αi.]
Extreme points [19]
- The maximum number is 2^(n−1)
- An algorithm exploits the nested structure of the sets Ai
30. A basic distribution: simple support
- A set E of most plausible values
- A confidence degree α = P̲(E)
Extends to multiple sets E1,...,Ep → confidence degrees over nested sets [18]
Example: pH value ∈ [4.5,5.5] with α = 0.8 (∼ "quite probable")
[Figure: the corresponding possibility distribution π, equal to 1 on [4.5,5.5] and to 1−α = 0.2 elsewhere on the scale from 3 to 7.]
31. Partially specified probabilities [1] [8]
The triangular distribution with mode M and support [a,b] encompasses all probabilities with
- mode/reference value M
- support domain [a,b]
Getting back to the pH example: M = 5, [a,b] = [3,7].
[Figure: triangular possibility distribution π peaking at 5, with support [3,7].]
32. Normalized likelihood as possibilities [9] [2]
π(θ) = L(θ|x) / max_{θ*∈Θ} L(θ*|x)
Binomial situation:
- θ = success probability
- x = number of observed successes
- x = 4 successes out of 11 trials
- x = 20 successes out of 55 trials
[Figure: the two normalized likelihood curves, both peaking at θ = 4/11, the one with the larger sample being narrower.]
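A minimal sketch (not from the talk) of this construction for the binomial case; both data sets below give the same mode 4/11, with the larger sample producing a narrower, more informative distribution:

```python
# A minimal sketch: possibility distribution from a normalized binomial
# likelihood, pi(theta) = L(theta|x) / max_theta L(theta|x).
import numpy as np

def binomial_possibility(k, n, thetas):
    """Normalized likelihood theta^k * (1-theta)^(n-k), scaled to max 1."""
    lik = thetas**k * (1.0 - thetas) ** (n - k)
    return lik / lik.max()

thetas = np.linspace(0.0, 1.0, 1001)
pi_small = binomial_possibility(4, 11, thetas)   # 4 successes out of 11
pi_large = binomial_possibility(20, 55, thetas)  # 20 out of 55: same mode 4/11
print(thetas[pi_small.argmax()], thetas[pi_large.argmax()])  # both ~ 0.364
```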
33. Other examples
- Statistical inequalities (e.g., Chebyshev's inequality) [8]
- Linguistic information (fuzzy sets) [5]
- Approaches based on nested models
34. A concise graph
[Figure: the model hierarchy graph of slide 24, now situating possibility distributions.]
35. P-boxes [6]
Definition
When X is ordered, bounds on events of the kind Ai = {x1,...,xi}, each bounded by
F̲(xi) ≤ P(Ai) ≤ F̅(xi)
[Figure: a discrete p-box, i.e., staircase lower and upper cumulative distributions over x1,...,x7.]
Characteristics of P
- Requires at most 2n values
- P̲ is an ∞-monotone measure
36. In general
Definition
A set of nested events A1 ⊆ ... ⊆ An, each bounded by
αi ≤ P(Ai) ≤ βi
[Figure: generalized p-box as bounds over nested events.]
Extreme points [15]
- At most the Pell number K_n, with K_n = 2K_{n−1} + K_{n−2}
- An algorithm based on a tree-structure construction
37. P-box on reals [11]
A pair [F̲, F̅] of cumulative distributions, i.e., bounds over events (−∞, x]:
- percentiles given by experts;
- Kolmogorov-Smirnov bounds.
Can be extended to any pre-ordered space [6], [21] ⇒ multivariate spaces!
Example: an expert providing percentiles
0 ≤ P((−∞,12]) ≤ 0.2
0.2 ≤ P((−∞,24]) ≤ 0.4
0.6 ≤ P((−∞,36]) ≤ 0.8
[Figure: the staircase p-box induced by these bounds, over the regions E1,...,E5 delimited by the values 12, 24, 36.]
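A minimal sketch (not from the talk) of how such percentile assessments yield bounds on P((−∞, x]) for arbitrary x, using only the monotonicity of cumulative distributions; the staircase encoding is ours:

```python
# A minimal sketch: encoding the expert's percentile bounds as a discrete
# p-box and reading off bounds on P((-inf, x]).
import bisect

# Breakpoints and the cumulative bounds asserted at each of them.
xs    = [12, 24, 36]
F_low = [0.0, 0.2, 0.6]   # lower cumulative bound at each breakpoint
F_up  = [0.2, 0.4, 0.8]   # upper cumulative bound at each breakpoint

def cdf_bounds(x):
    """Bounds on P((-inf, x]) implied by the assessments (staircase envelope)."""
    i = bisect.bisect_right(xs, x)          # breakpoints <= x
    j = bisect.bisect_left(xs, x)           # breakpoints < x
    lo = F_low[i - 1] if i > 0 else 0.0     # last lower bound already reached
    hi = F_up[j] if j < len(xs) else 1.0    # next upper bound not yet passed
    return lo, hi

print(cdf_bounds(18))   # (0.0, 0.4): after 12, before 24
print(cdf_bounds(40))   # (0.6, 1.0): beyond the last assessment
```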
38. A concise graph
[Figure: the model hierarchy graph of slide 24, now situating p-boxes.]
39. Probability intervals [4]
Definition
On elements {x1,...,xn}, each probability is bounded:
p(xi) ∈ [p̲(xi), p̅(xi)]
[Figure: interval bounds on p(x1),...,p(x6).]
Characteristics of P
- Requires at most 2n values
- P̲ is a 2-monotone measure
Extreme points [4]
- A specific extraction algorithm exists
- If n is even, the number of extreme points is at most ((n+1)/2)·C(n, n/2)
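For reachable probability intervals, lower and upper probabilities of events have the closed form P̲(A) = max(∑_{i∈A} p̲(xi), 1 − ∑_{i∉A} p̅(xi)), and dually for P̅(A) [4]. A minimal sketch (not from the talk):

```python
# A minimal sketch: event bounds induced by reachable probability
# intervals [l_i, u_i], following the closed form of de Campos et al. [4].
def interval_event_bounds(l, u, A):
    n = len(l)
    in_l = sum(l[i] for i in A)
    in_u = sum(u[i] for i in A)
    out_l = sum(l[i] for i in range(n) if i not in A)
    out_u = sum(u[i] for i in range(n) if i not in A)
    return max(in_l, 1.0 - out_u), min(in_u, 1.0 - out_l)

l = [0.1, 0.2, 0.3]
u = [0.4, 0.5, 0.6]
print(interval_event_bounds(l, u, {0, 1}))  # (0.4, 0.7): bounds on P({x1, x2})
```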
40. Probability intervals: example
Linguistic assessments and their numerical translations:
- x is very probable → p(x) ≥ 0.75
- x has a good chance → 0.4 ≤ p(x) ≤ 0.85
- x is very unlikely → p(x) ≤ 0.25
- x's probability is about α → α − 0.1 ≤ p(x) ≤ α + 0.1
41. A concise graph
[Figure: the model hierarchy graph of slide 24, now situating probability intervals.]
42. Comparative probabilities
Definition
Comparative probabilities on X: assessments of the form
P(A) ≥ P(B),
i.e., event A is "at least as probable as" event B.
Some comments
- mostly studied from the axiomatic point of view [13, 20]
- few studies on their numerical aspects [17]
- interesting for qualitative uncertainty modelling/representation, expert elicitation, ...
43. A specific case: elementary comparisons [14]
Elementary comparisons
Comparative probability orderings of the states X = {x1,...,xn}, given as a subset L of {1,...,n}×{1,...,n}. The set of probability measures compatible with this information is
P(L) = {p ∈ P_X | ∀(i,j) ∈ L, p(xi) ≥ p(xj)}.
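Since P(L) is again a polytope described by linear constraints, lower probabilities can be obtained by linear programming. A minimal sketch (not from the talk), reusing scipy and the example graph of the next slide (with 0-based indices):

```python
# A minimal sketch: lower probability of an event under elementary
# comparisons, by linear programming over P(L).
import numpy as np
from scipy.optimize import linprog

def comparative_lower_prob(n, L, A):
    """Minimise P(A) subject to p(x_i) >= p(x_j) for all (i, j) in L."""
    c = np.array([1.0 if i in A else 0.0 for i in range(n)])
    A_ub = np.zeros((len(L), n))
    for r, (i, j) in enumerate(L):       # p_j - p_i <= 0
        A_ub[r, j], A_ub[r, i] = 1.0, -1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(L)),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
    return res.fun

# Slide 44's assessments, 0-based: x1>=x3, x1>=x4, x2>=x5, x4>=x5.
L = [(0, 2), (0, 3), (1, 4), (3, 4)]
print(comparative_lower_prob(5, L, {0, 1}))  # lower bound on P({x1, x2}) = 1/3
```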
44. Why focus on this case?
Practical interest
- multinomial models (e.g., imprecise priors for the Dirichlet model), modal value elicitation
- direct extension to define imprecise belief functions
Easy to represent/manipulate
- through a graph G = (X, L) with states as nodes and the relation L as edges
- Example: given X = {x1,...,x5} and L = {(1,3),(1,4),(2,5),(4,5)}, the associated graph G has edges x1→x3, x1→x4, x2→x5, x4→x5.
45. Some properties
Characteristics of P
- Requires at most n² values
- No guarantee that P̲ is a 2-monotone measure
Extreme points [14]
- An algorithm identifying subsets of disconnected nodes
- The maximal number is 2^(n−1)
46. A concise list (according to my knowledge)

Name         | Monot. | Max. #constraints | Max. #extreme points  | Algo. to get E(P)
Proba        | ∞      | n                 | 1                     | Yes
Vacuous      | ∞      | 1                 | n                     | Yes
2-mon        | 2      | 2^n               | n!                    | Yes [3]
∞-mon        | ∞      | 2^n               | n!                    | No
Lin-vac.     | ∞      | n+1               | n                     | Yes
Pari-mutuel  | 2      | n+1               | ? (n)                 | No
Possibility  | ∞      | n                 | 2^(n−1)               | Yes [19]
P-box (gen.) | ∞      | 2n                | K_n (Pell)            | Yes [15]
Prob. int.   | 2      | 2n                | ≤ ((n+1)/2)·C(n,n/2)  | Yes [4]
Elem. Compa. | ×      | n²                | 2^(n−1)               | Yes [14]

(K_n = 2K_{n−1} + K_{n−2}; × = no monotonicity guarantee.)
47. A concise final graph
[Figure: the complete model hierarchy graph (Proba, Vacuous, Linear-vacuous, Pari-mutuel, Possibilities, P-boxes, Prob. int., Compa.); an arrow from model A to model B means A is a special case of B; the ∞-monotone and 2-monotone families are outlined.]
48. Some open questions
- Study the numerical aspects of comparative probabilities with numbers/general events.
- Study the potential link between possibilities and elementary comparative probabilities (they share the same maximal number of extreme points, and both induce an ordering between states).
- Study restricted bounds/information over specific families of events other than nested/elementary ones (e.g., events of at most k states).
- Look at probability sets induced by bounding specific distances to p0, in particular L1, L2, L∞ norms.
49. Plan
- Introduction
- Basics of imprecise probabilities
- A tour of practical representations
- Illustrative applications
  - Numerical signal processing [7]
  - Camembert ripening [12]
50. Signal processing: introduction
- A filter is characterised by its impulse response µ.
- Filtering: convolving the kernel µ with the observed signal f(x).
[Figure: block diagram of a filter with impulse response µ.]
51. Link with probability
- If µ is positive and ∫µ(x)dx = 1, then µ is equivalent to a probability density.
- Convolution then amounts to computing a mathematical expectation E_µ(f).
- Numerical filtering: discretise (sample) µ and f, with µ(xi) > 0 and ∑_{xi} µ(xi) = 1.
[Figure: a signal f and a kernel µ, in continuous and discretised form.]
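A minimal sketch (not from the talk) illustrating this correspondence: filtering with a positive, normalized kernel is exactly a windowed expectation of the signal; the names and data below are ours.

```python
# A minimal sketch: a discrete filter with a normalized positive kernel
# computes an expectation of the signal under that kernel.
import numpy as np

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.normal(size=200)

mu = np.ones(5) / 5.0                  # positive kernel summing to 1
smoothed = np.convolve(f, mu, mode="same")

# At each position, the output is E_mu(f) over the window: a weighted average.
k = 100
window = f[k - 2 : k + 3]
print(np.allclose(smoothed[k], window @ mu[::-1]))  # True
```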
52. Which bandwidth?
Which width ∆ should the kernel µ have?
→ use imprecise probabilistic models to represent sets of bandwidths
→ possibilities/p-boxes with sets centred around x
53. Example on a simulated signal
[Figure: original signal (amplitude vs. time in msec) together with the maxitive upper/lower envelopes and the cloudy upper/lower envelopes.]
55. Results
[Figure: zoom on two parts of the signal, comparing the results of CWMF, ROAD, and our method.]
56. Motivations
A complex system: the Camembert-type cheese ripening process.
[Timeline: day 1 to day 14 — cheese making, then cheese ripening (∼13°C, ∼95% humidity), then warehouse (∼4°C); of interest is P(state at day 14 | state at day 0).]
- Multi-scale modelling, from microbial activities to sensory properties
- Dynamic probabilistic model
- Knowledge is fragmented, heterogeneous and incomplete
- Difficult to learn precise model parameters
→ Use of ε-contamination for a robustness analysis of the model.
57. Experiments
The network: in each time slice t, the temperature T(t) influences the variables Km(t) and lo(t), whose values at slice t+1 depend on slice t. The network is unrolled over 14 time steps (days), matching the ripening timeline (cheese making, ripening at ∼13°C and ∼95% humidity, warehouse at ∼4°C).
[Figure: the two-slice network and its unrolled version T(1), Km(1), lo(1), T(2), Km(2), lo(2), ..., Km(14), lo(14).]
58. Propagation results
Forward propagation, with T(t) = 12°C for all t ∈ {1,...,τ} (average ripening room temperature), of the extreme points
Ext{Km(t) | Km(1), lo(1), T(1),...,T(τ)}
Ext{lo(t) | Km(1), lo(1), T(1),...,T(τ)}
[Figure: propagated bounds, without physical constraints (left) and with added physical constraints (right).]
59. Conclusions
Use of practical representations:
- (+) "easy" robustness analysis of precise methods, or approximation of imprecise ones
- (+) allow experts to express imprecision or partial information
- (+) often easier to explain/represent than general models
- (−) usually focus on specific events
- (−) their form may not be preserved by information processing
60. References
[1] C. Baudrit and D. Dubois. Practical representations of incomplete probabilistic knowledge. Computational Statistics and Data Analysis, 51(1):86–108, 2006.
[2] M. Cattaneo. Likelihood-based statistical decisions. In Proc. 4th International Symposium on Imprecise Probabilities and Their Applications, pages 107–116, 2005.
[3] A. Chateauneuf and J.-Y. Jaffray. Some characterizations of lower probabilities and other monotone capacities through the use of Möbius inversion. Mathematical Social Sciences, 17(3):263–283, 1989.
[4] L. de Campos, J. Huete, and S. Moral. Probability intervals: a tool for uncertain reasoning. Int. J. of Uncertainty, Fuzziness and Knowledge-Based Systems, 2:167–196, 1994.
[5] G. de Cooman and P. Walley. A possibilistic hierarchical model for behaviour under uncertainty. Theory and Decision, 52:327–374, 2002.
[6] S. Destercke, D. Dubois, and E. Chojnacki. Unifying practical uncertainty representations: I. Generalized p-boxes. Int. J. of Approximate Reasoning, 49:649–663, 2008.
[7] S. Destercke and O. Strauss. Filtering with clouds. Soft Computing, 16(5):821–831, 2012.
[8] D. Dubois, L. Foulloy, G. Mauris, and H. Prade. Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliable Computing, 10:273–297, 2004.
[9] D. Dubois, S. Moral, and H. Prade. A semantics for possibility theory based on likelihoods. Journal of Mathematical Analysis and Applications, 205(2):359–380, 1997.
[10] D. Dubois and H. Prade. Practical methods for constructing possibility distributions. International Journal of Intelligent Systems, 31(3):215–239, 2016.
[11] S. Ferson, L. Ginzburg, V. Kreinovich, D. Myers, and K. Sentz. Constructing probability boxes and Dempster-Shafer structures. Technical report, Sandia National Laboratories, 2003.
[12] M. Hourbracq, C. Baudrit, P.-H. Wuillemin, and S. Destercke. Dynamic credal networks: introduction and use in robustness analysis. In Proc. 8th International Symposium on Imprecise Probability: Theories and Applications, pages 159–169, 2013.
[13] B. O. Koopman. The axioms and algebra of intuitive probability. Annals of Mathematics, pages 269–292, 1940.
[14] E. Miranda and S. Destercke. Extreme points of the credal sets generated by comparative probabilities. Journal of Mathematical Psychology, 64:44–57, 2015.
[15] I. Montes and S. Destercke. On extreme points of p-boxes and belief functions. In Int. Conf. on Soft Methods in Probability and Statistics (SMPS), 2016.
[16] R. Pelessoni, P. Vicig, and M. Zaffalon. Inference and risk measurement with the pari-mutuel model. International Journal of Approximate Reasoning, 51(9):1145–1158, 2010.
[17] G. Regoli. Comparative probability orderings. Technical report, Society for Imprecise Probabilities: Theories and Applications, 1999.
[18] S. Sandri, D. Dubois, and H. Kalfsbeek. Elicitation, assessment and pooling of expert judgments using possibility theory. IEEE Trans. on Fuzzy Systems, 3(3):313–335, 1995.
[19] G. Schollmeyer. On the number and characterization of the extreme points of the core of necessity measures on finite spaces. In Proc. ISIPTA, 2016.
[20] P. Suppes, G. Wright, and P. Ayton. Qualitative theory of subjective probability. Subjective Probability, pages 17–38, 1994.
[21] M. C. M. Troffaes and S. Destercke. Probability boxes on totally preordered spaces for multivariate modelling. Int. J. of Approximate Reasoning, 52(6):767–791, 2011.