This document presents a computational framework based on transported meshfree methods. It discusses using Monte Carlo integration with kernels to estimate integration errors. Two types of kernels are introduced: lattice-based kernels suited to Lebesgue measures, and transported kernels, where a transport map is applied. An example shows that optimal discrepancy errors can be achieved for Monte Carlo integration with a Matérn kernel in various dimensions. The framework is applied to machine learning problems, showing how kernels can be used for interpolation and extrapolation of observations with error bounds.
Integration with kernel methods, Transported meshfree methods, by Jean-Marc Mercier
I made public for discussion a first version (subject to changes) of a talk that will be given at the Particles 2019 conference.
The main points of this presentation are the following:
1) We present sharp estimates for Monte-Carlo-type methods that are, to my knowledge, new.
2) These estimates can be used in a wide variety of contexts to perform a sharp error analysis.
3) We present a class of numerical methods that we refer to as Transported Meshfree Methods. This class of methods can be used for a wide variety of problems based on Partial Differential Equations, among which are Artificial Intelligence problems.
4) Thanks to the error analysis, we can guarantee a worst-case error when computing with transported meshfree methods. We can also check that this error matches the optimal convergence rate.
Information-theoretic clustering with applications, by Frank Nielsen
Abstract: Clustering is a fundamental and key primitive to discover structural groups of homogeneous data in data sets, called clusters. The most famous clustering technique is the celebrated k-means clustering that seeks to minimize the sum of intra-cluster variances. k-Means is NP-hard as soon as the dimension and the number of clusters are both greater than 1. In the first part of the talk, we first present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means but also other kinds of clustering algorithms like the k-medoids, the k-medians, the k-centers, etc.
We extend the method to incorporate cluster size constraints and show how to choose the appropriate number of clusters using model selection. We then illustrate and refine the method on two case studies: 1D Bregman clustering and univariate statistical mixture learning maximizing the complete likelihood. In the second part of the talk, we introduce a generalization of k-means to cluster sets of histograms, which has become an important ingredient of modern information processing due to the success of the bag-of-words modelling paradigm.
Clustering histograms can be performed using the celebrated k-means centroid-based algorithm. We consider the Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and investigate the computation of Jeffreys centroids. We prove that the Jeffreys centroid can be expressed analytically using the Lambert W function for positive histograms. We then show how to obtain a fast guaranteed approximation when dealing with frequency histograms and conclude with some remarks on the k-means histogram clustering.
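The interval dynamic program from the first part of the talk is easy to sketch. The following is a minimal illustration (our own code, not the authors' implementation): after sorting, the optimal k-means cost over n scalars splits into k contiguous intervals, and prefix sums give each interval's sum of squared errors in O(1).

```python
# Minimal sketch of optimal 1D k-means via dynamic programming, O(k * n^2).
import numpy as np

def interval_kmeans(x, k):
    """Return the optimal k-means cost for scalar data x with k intervals."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p1 = np.concatenate(([0.0], np.cumsum(x)))      # prefix sums of x
    p2 = np.concatenate(([0.0], np.cumsum(x * x)))  # prefix sums of x^2

    def sse(i, j):
        # sum of squared deviations of x[i..j] (inclusive) from its mean
        s, s2, m = p1[j + 1] - p1[i], p2[j + 1] - p2[i], j - i + 1
        return s2 - s * s / m

    D = np.full((k + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):        # first j points split into c intervals
            for i in range(c - 1, j):    # last interval is x[i..j-1]
                D[c, j] = min(D[c, j], D[c - 1, i] + sse(i, j - 1))
    return D[k, n]
```

Swapping `sse` for another interval cost (a Bregman divergence, a median-based cost, a radius) yields the k-medians, k-medoids, or k-centers variants mentioned above.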
References:
- Optimal interval clustering: Application to Bregman clustering and statistical mixture learning. IEEE ISIT 2014 (recent results poster). http://arxiv.org/abs/1403.2485
- Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms. IEEE Signal Process. Lett. 20(7): 657-660 (2013). http://arxiv.org/abs/1303.7286
http://www.i.kyoto-u.ac.jp/informatics-seminar/
Optimal interval clustering: Application to Bregman clustering and statistical mixture learning, by Frank Nielsen
We present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means, k-medoids, k-medians, k-centers, etc. We extend the method to incorporate cluster size constraints and show how to choose the appropriate k by model selection. Finally, we illustrate and refine the method on two case studies: Bregman clustering and statistical mixture learning maximizing the complete likelihood.
http://arxiv.org/abs/1403.2485
Presentation of my NSERC-USRA funded summer research project given at the Canadian Undergraduate Mathematics Conference (CUMC) 2014.
Please refer to the project site: http://jessebett.com/Radial-Basis-Function-USRA/
One of the central tasks in computational mathematics and statistics is to accurately approximate unknown target functions. This is typically done with the help of data — samples of the unknown functions. The emergence of Big Data presents both opportunities and challenges. On one hand, big data introduces more information about the unknowns and, in principle, allows us to create more accurate models. On the other hand, data storage and processing become highly challenging. In this talk, we present a set of sequential algorithms for function approximation in high dimensions with large data sets. The algorithms are of iterative nature and involve only vector operations. They use one data sample at each step and can handle dynamic/stream data. We present both the numerical algorithms, which are easy to implement, as well as rigorous analysis for their theoretical foundation.
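A minimal sketch of what such a sequential, one-sample-per-step scheme can look like. The Kaczmarz-style projection update, the polynomial feature map, and the target function below are illustrative assumptions of ours, not the algorithms from the talk; the point is only that each step touches a single streamed sample and uses vector operations alone.

```python
# Sequential least-squares approximation: one streamed sample per step,
# vector operations only (a Kaczmarz-style projection update; illustrative).
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: x**3 - x                        # "unknown" function, sampled one point at a time
basis = lambda x: np.array([1.0, x, x**2, x**3])   # illustrative feature map

c = np.zeros(4)                                    # coefficient vector, updated in place
for _ in range(5000):                              # stream of single samples
    x = rng.uniform(-1.0, 1.0)
    phi = basis(x)
    r = target(x) - c @ phi                        # residual at the new sample
    c += r * phi / (phi @ phi)                     # project onto the new constraint

xs = np.linspace(-1.0, 1.0, 101)
err = max(abs(target(x) - c @ basis(x)) for x in xs)
```

Because each update costs O(number of features) and discards the sample afterwards, the scheme handles dynamic/stream data without storing the data set.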
Polynomial matrices can help to elegantly formulate many broadband multi-sensor / multi-channel processing problems, and represent a direct extension of well-established narrowband techniques which typically involve eigen- (EVD) and singular value decompositions (SVD) for optimisation. Polynomial matrix decompositions extend the utility of the EVD to polynomial parahermitian matrices, and this talk presents a brief overview of such polynomial matrices, characteristics of the polynomial EVD (PEVD) and iterative algorithms for its solution. The presentation concludes with some surprising results when applying the PEVD to subband coding and broadband beamforming.
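One quick way to get a feel for these objects (a hedged numerical sketch of ours, not one of the iterative PEVD algorithms discussed in the talk) is to build a parahermitian matrix R(z) = A(z)A~(z) and take an ordinary EVD in each DFT frequency bin. Bin-wise EVDs ignore the spectral smoothness and paraunitarity that true PEVD algorithms enforce, but they expose the per-frequency eigenstructure the PEVD generalizes.

```python
# Bin-wise eigenvalues of a parahermitian polynomial matrix R(z) = A(z) A~(z).
import numpy as np

rng = np.random.default_rng(1)
L, M, K = 3, 2, 64                     # polynomial order, matrix size, DFT length
A = rng.standard_normal((L, M, M))     # A(z) = sum_l A[l] z^{-l}
Af = np.fft.fft(A, n=K, axis=0)        # A evaluated in K frequency bins on |z| = 1
Rf = Af @ np.conj(np.transpose(Af, (0, 2, 1)))  # R(e^{jw}) = A A^H: Hermitian, PSD
eigs = np.linalg.eigvalsh(Rf)          # ordinary EVD in every bin
```

Each of the K bins yields M real, non-negative eigenvalues; a PEVD algorithm would instead seek polynomial (FIR) paraunitary factors that diagonalize R(z) for all z at once.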
A review of two methods for identifying time-varying systems, with simulation and report, by Prjmarket
A review of two time-varying system identification methods, with detailed, simple, and clear explanations, a complete account of the equations and notation, together with examples, MATLAB simulations, and reference files and papers.
Method 1: identification of a system with time-varying parameters using Least Squares
Method 2: identification of a dynamical system using artificial neural networks
Suitable for submission as a report or project for a System Identification course
Includes a MATLAB simulation and a 20-page report
https://www.prjmarket.com/product/%d8%a8%d8%b1%d8%b1%d8%b3%db%8c-%d8%af%d9%88-%d8%b1%d9%88%d8%b4-%d8%b4%d9%86%d8%a7%d8%b3%d8%a7%db%8c%db%8c-%d8%b3%db%8c%d8%b3%d8%aa%d9%85-%d9%87%d8%a7%db%8c-%d9%85%d8%aa%d8%ba%db%8c%d8%b1-%d8%a8%d8%a7/
Covariance matrices are central to many adaptive filtering and optimisation problems. In practice, they have to be estimated from a finite number of samples; on this, I will review some known results from spectrum estimation and multiple-input multiple-output communications systems, and how properties that are assumed to be inherent in covariance and power spectral densities can easily be lost in the estimation process. I will discuss new results on space-time covariance estimation, and how the estimation from finite sample sets will impact on factorisations such as the eigenvalue decomposition, which is often key to solving the introductory optimisation problems. The purpose of the presentation is to give you some insight into estimating statistics as well as to provide a glimpse on classical signal processing challenges such as the separation of sources from a mixture of signals.
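How assumed properties get lost in estimation can be seen in a few lines. The following is a generic illustration of ours, not material from the talk: with the true covariance equal to the identity, the eigenvalues of a finite-sample estimate spread well away from 1, so any factorisation built on the estimate inherits that distortion.

```python
# Finite-sample covariance estimation distorts the eigenvalue spectrum:
# true covariance is I (all eigenvalues 1), the estimate's eigenvalues spread out.
import numpy as np

rng = np.random.default_rng(1)
D, N = 20, 30                       # dimension and (small) number of snapshots
X = rng.standard_normal((N, D))     # i.i.d. zero-mean samples with true covariance I
R_hat = X.T @ X / N                 # sample covariance estimate
eigs = np.linalg.eigvalsh(R_hat)
spread = eigs.max() - eigs.min()    # 0 for the true covariance, clearly > 0 here
```

With N only modestly larger than D, the spread is of order 1 or more (consistent with Marchenko-Pastur behaviour), even though every true eigenvalue equals 1.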
C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012.
Robust Image Denoising in RKHS via Orthogonal Matching Pursuit, by Pantelis Bouboulis
We present a robust method for the image denoising task based on kernel ridge regression and sparse modeling. The added noise is assumed to consist of two parts: impulse noise assumed to be sparse (outliers), and bounded noise. The noisy image is divided into small regions of interest, whose pixels are regarded as points of a two-dimensional surface. A kernel-based ridge regression method, whose parameters are selected adaptively, is employed to fit the data, whereas the outliers are detected via the increasingly popular orthogonal matching pursuit (OMP) algorithm. To this end, a new variant of the OMP rationale is employed, which has the additional advantage of terminating automatically once all outliers have been selected.
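A simplified sketch of this rationale on 1D data (greedy residual-based outlier selection with automatic termination; the RBF kernel, the parameters, and the stopping bound below are illustrative choices of ours, not the paper's exact OMP variant):

```python
# Robust kernel ridge regression sketch: fit, greedily flag the sample with the
# largest residual as an outlier, refit without it, and stop automatically once
# every residual is below the assumed bound on the non-impulsive noise.
import numpy as np

def rbf(a, b, gamma=10.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def robust_krr(x, y, lam=1e-2, bound=0.5):
    keep = np.arange(len(x))            # indices currently treated as inliers
    outliers = []
    while True:
        K = rbf(x[keep], x[keep])
        alpha = np.linalg.solve(K + lam * np.eye(len(keep)), y[keep])
        resid = np.abs(rbf(x, x[keep]) @ alpha - y)
        resid[outliers] = 0.0           # already flagged, ignore
        worst = int(np.argmax(resid))
        if resid[worst] <= bound:       # automatic termination: only bounded noise left
            return alpha, keep, sorted(outliers)
        outliers.append(worst)
        keep = keep[keep != worst]

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)
y[5] += 3.0                             # two sparse impulse-noise outliers
y[20] -= 3.0
_, _, detected = robust_krr(x, y)
```

The smooth kernel cannot interpolate an isolated spike, so the spike's residual dominates until it is removed, after which residuals on the clean fit fall below the bound and the loop terminates on its own.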
Regularized Compression of a Noisy Blurred Image, by ijcsa
Both regularization and compression are important issues in image processing and have been widely approached in the literature. The usual procedure to obtain the compression of an image given through a noisy blur requires two steps: first a deblurring step, and then a factorization step of the regularized image to get an approximation in terms of low-rank nonnegative factors. We examine here the possibility of swapping the two steps by deblurring directly the noisy factors or partially denoised factors. The experiments show that in this way images with comparable regularized compression can be obtained at a lower computational cost.
Slides for the presentation at ENBIS 2018 of "Deep k-Means: Jointly Clustering with k-Means and Learning Representations" by Thibaut Thonet. Joint work with Maziar Moradi Fard and Eric Gaussier.
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply chain/social/multi-agents networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the Monte Carlo (MC) estimator efficiency. The key challenge in the IS framework is to choose an appropriate change of probability measure to achieve substantial variance reduction, which often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method, based on mapping the problem to a significantly lower dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a low dimensional SRN (potentially one dimension) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression. 
By solving a resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we get projected IS parameters, which are then mapped back to the original full-dimensional SRN system, resulting in an efficient IS-MC estimator of the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS approaches (learning-based and MP-HJB-IS) substantially reduce the MC estimator's variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators. [1] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction networks. Statistics and Computing 33, no. 3 (2023): 58. [2] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
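The variance-reduction mechanism behind IS can be illustrated on a deliberately simple example (a Gaussian mean shift for a tail probability; this is generic importance sampling of ours, not the stochastic-optimal-control change of measure developed in [1,2]):

```python
# Importance sampling for the rare event P(X > 4), X ~ N(0,1): sample from the
# shifted proposal N(4,1) and reweight by the likelihood ratio. Standard MC
# almost never hits the event, while IS concentrates samples on it.
import numpy as np

rng = np.random.default_rng(3)
N, a = 100_000, 4.0

x = rng.standard_normal(N)
mc = np.mean(x > a)                  # crude MC: mean hit count is only ~3 in 1e5

y = rng.standard_normal(N) + a       # proposal N(a, 1)
w = np.exp(-a * y + a * a / 2)       # likelihood ratio dN(0,1)/dN(a,1) at y
is_est = np.mean((y > a) * w)        # unbiased IS estimator of P(X > a)
```

The true value is about 3.167e-5; the IS estimator reaches percent-level relative accuracy with these N samples, whereas the crude estimator's relative error is of order 1. Choosing the measure change well is exactly the difficulty the stochastic-optimal-control formulation above automates.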
International Conference on Monte Carlo techniques
Closing conference of thematic cycle
Paris July 5-8th 2016
Campus les cordeliers
Jere Koskela's slides
Image sciences, image processing, image restoration, photo manipulation. Image and video representation. Digital versus analog imagery. Quantization and sampling. Sources and models of noise in digital CCD imagery: photon, thermal, and readout noise. Sources and models of blur. Convolutions and point spread functions. Overview of other standard models, problems, and tasks: salt-and-pepper and impulse noise, halftoning, inpainting, super-resolution, compressed sensing, high-dynamic-range imagery, demosaicing. Short introduction to other types of imagery: SAR, sonar, ultrasound, CT, and MRI. Linear and ill-posed restoration problems.
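The degradation models listed above can be simulated in a few lines; the parameter values below (PSF size, readout standard deviation, impulse probability) are illustrative choices of ours.

```python
# Standard degradation models: blur via a point spread function, Poisson photon
# (shot) noise, additive Gaussian readout noise, and salt-and-pepper impulse noise.
import numpy as np

rng = np.random.default_rng(4)
x = np.full((64, 64), 100.0)                    # clean image: 100 expected photons/pixel

psf = np.ones((3, 3)) / 9.0                     # 3x3 box PSF, periodic boundary
blurred = sum(np.roll(x, (i - 1, j - 1), axis=(0, 1)) * psf[i, j]
              for i in range(3) for j in range(3))

photon = rng.poisson(blurred).astype(float)     # photon noise: Poisson counts
readout = photon + rng.normal(0.0, 2.0, x.shape)  # Gaussian readout noise

sp = readout.copy()                             # salt-and-pepper impulse noise
mask = rng.random(x.shape)
sp[mask < 0.02] = 0.0                           # pepper: dead pixels
sp[mask > 0.98] = 255.0                         # salt: saturated pixels
```

Restoring `x` from `sp` is the kind of linear, ill-posed problem (deconvolution plus denoising) that the rest of the course addresses.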
Essentials of Automations: Optimizing FME Workflows with Parameters, by Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Key Trends Shaping the Future of Infrastructure.pdf, by Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You Need? LLM & Knowledge Graph, by Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview, by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024), by Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -..., by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot), by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
1. A computational framework based over Transported
Meshfree methods.
P.G. LeFloch 1, J.M. Mercier 2
1CNRS, 2MPG-Partners
16 01 2020
P.G. LeFloch 1
, J.M. Mercier 2
(1
CNRS, 2
MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 1 / 12
4. Foundations : local integration with Monte-Carlo methods

Monte-Carlo estimations - consider the following family of worst-case error estimates (µ a probability measure, Y = (y_1, ..., y_N) ∈ R^{N×D}) :

    | ∫_{R^D} φ(x) dµ − (1/N) Σ_{n=1}^{N} φ(y_n) | ≤ E(Y, H_µ) ‖φ‖_{H_µ},

where H_µ is a µ-weighted Hilbert (or Banach) functional space.

1. Classical example 1 : Y i.i.d. → E(Y, H_µ) ∼ 1/√N and H_µ ∼ L²(R^D, |x|² dµ) (law of large numbers) : the most used convergence rate in the finance industry.
2. Classical example 2 : Y a Sobol sequence, µ = dx_Ω, Ω = [0, 1]^D. Then H_K = BV(Ω) (bounded variation) and E(Y, H_K) ≥ ln(N)^{D−1}/N (Koksma-Hlawka sharp-estimate conjecture).
3. Other examples : quantizers, wavelets, deep feed-forward neural networks, ...
8. A general approach using kernel methods

1. You have a problem involving a probability measure µ and you guess that the solution belongs to a weighted functional space H_µ.
2. Identify the admissible kernel K(x, y) generating it (RKHS theory) : H_µ ≡ H_K. Examples of classically used kernels : ReLU, convolutional kernels, Wendland functions, ...
3. Pick (i.i.d.) samples y_1, ..., y_N. Then you can measure your integration error using

    | ∫_{R^D} φ(x) dµ − (1/N) Σ_{n=1}^{N} φ(y_n) | ≤ E(Y, H_K) ‖φ‖_{H_K},

where

    E²(Y, H_K) = ∫_{R^{2D}} K(x, y) dµ(x) dµ(y) + (1/N²) Σ_{n,m=1}^{N} K(y_n, y_m) − (2/N) Σ_{n=1}^{N} ∫_{R^D} K(x, y_n) dµ(x).

4. You can optimize your error by computing sharp discrepancy sequences and the optimal discrepancy error as

    Ȳ = arg inf_{Y ∈ R^{N×D}} E(Y, H_K),   E_{H_K}(N, D) = E(Ȳ, H_K).
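As a concrete illustration, the discrepancy formula above can be evaluated numerically. The sketch below is an assumption-laden stand-in, not the talk's implementation: it fixes a Gaussian kernel (the slide leaves K generic), takes µ to be the Lebesgue measure on [0,1]^D, and approximates the two µ-integrals with independent uniform Monte-Carlo samples.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # K(x, y) = exp(-|x - y|^2 / (2 sigma^2)), evaluated pairwise
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def discrepancy(Y, n_mc=2000, sigma=0.5, seed=0):
    """Estimate E(Y, H_K) for mu = Lebesgue measure on [0,1]^D:
    E^2 = int int K dmu dmu + (1/N^2) sum_{n,m} K(y_n, y_m)
          - (2/N) sum_n int K(x, y_n) dmu(x),
    with the two mu-integrals approximated by independent uniform samples."""
    rng = np.random.default_rng(seed)
    _, D = Y.shape
    X1, X2 = rng.random((n_mc, D)), rng.random((n_mc, D))
    e2 = (gaussian_kernel(X1, X2, sigma).mean()          # double mu-integral
          + gaussian_kernel(Y, Y, sigma).mean()          # (1/N^2) double sum
          - 2.0 * gaussian_kernel(X1, Y, sigma).mean())  # cross term
    return np.sqrt(max(e2, 0.0))
```

For i.i.d. uniform Y this estimate decays like 1/√N, matching classical example 1 above.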
10. Our local kernels : lattice-based and transported kernels

For our purposes, we crafted two kinds of kernels :

1. Lattice-based kernels (suited to Lebesgue measures of type µ = dx_Ω). Let L be a lattice and L* its dual lattice. Consider any discrete function satisfying φ(α*) ∈ ℓ¹(L*), φ(α*) ≥ 0, φ(0) = 1, and define

    K_per(x, y) = (1/|L|) Σ_{α* ∈ L*} φ(α*) exp(2iπ⟨x − y, α*⟩).

[Figure : surface plots of the lattice-based Matern, multiquadric, Gaussian and truncated kernels.]

2. Transported kernels : for S : Ω → R^D a transport map, K_tra(x, y) = K(S(x), S(y)).

[Figure : surface plots of the transported Matern, Gaussian, multiquadric and truncated kernels.]
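A minimal numerical sketch of both constructions follows. Everything specific in it is an assumption: the lattice is taken as L = Z^D (so |L| = 1), the dual-lattice sum is truncated at a finite frequency, and the spectral weights φ are the Matern-type product weights that appear later in the talk (their φ(0) = 1 normalization is not enforced here).

```python
import itertools
import numpy as np

def k_per(x, y, tau=1.0, max_freq=15):
    """Lattice-based periodic kernel for L = Z^D:
    K_per(x, y) = sum_{alpha in L*} phi(alpha) exp(2 i pi <x - y, alpha>),
    truncated to |alpha_d| <= max_freq, with Matern-type weights
    phi(alpha) = prod_d 2 / (1 + 4 pi^2 alpha_d^2 / tau^2)  (phi >= 0, phi in l^1)."""
    x = np.atleast_1d(np.asarray(x, float))
    y = np.atleast_1d(np.asarray(y, float))
    total = 0.0
    for alpha in itertools.product(range(-max_freq, max_freq + 1), repeat=x.size):
        a = np.asarray(alpha, float)
        phi = np.prod(2.0 / (1.0 + 4.0 * np.pi**2 * a**2 / tau**2))
        # imaginary parts cancel in the sum because phi(-alpha) = phi(alpha)
        total += phi * np.cos(2.0 * np.pi * np.dot(x - y, a))
    return total

def transported(kernel, S):
    """Transported kernel K_tra(x, y) = K(S(x), S(y)) for a transport map S."""
    return lambda x, y: kernel(S(x), S(y))
```

By construction K_per is symmetric, maximal on the diagonal, and 1-periodic in each coordinate; composing with any map S preserves symmetry and positive definiteness.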
13. Example I : Monte-Carlo integration with the Matern kernel

1. Kernel, random and computed sequences Y, with N = 256, D = 2.

[Figure : the Matern kernel surface, N = 256 random points in [0, 1]², and the computed points for the lattice Matern kernel.]

2. Optimal discrepancy error → a Koksma-Hlawka-type estimate :

    E_{H_K}(N, D) ∼ Σ_{n>N} φ(α*_n)/N ∼ ln(N)^{D−1}/N,   φ(α) = Π_{d=1}^{D} 2/(1 + 4π²α_d²/τ_D²).

3. E(Y, H_K) for random points, vs E(Y, H_K) for computed points, vs theoretical E_{H_K}(N, D) :

    Random points :
             D=1     D=16    D=128
    N=16     0.228   0.304   0.319
    N=128    0.117   0.111   0.115
    N=512    0.035   0.054   0.059

    Computed points :
             D=1     D=16    D=128
    N=16     0.062   0.211   0.223
    N=128    0.008   0.069   0.077
    N=512    0.002   0.034   0.049

    Theoretical E_{H_K}(N, D) :
             D=1     D=16    D=128
    N=16     0.062   0.288   0.323
    N=128    0.008   0.077   0.105
    N=512    0.002   0.034   0.043
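The computed points above come from minimizing E(Y, H_K) over Y. A projected-gradient-descent sketch of that optimization is shown below; as assumptions, it uses a Gaussian kernel in place of the lattice Matern kernel and replaces the µ-integral in the gradient by a fixed uniform Monte-Carlo sample.

```python
import numpy as np

def optimize_points(N=64, D=2, steps=200, lr=0.05, sigma=0.3, n_mc=2000, seed=0):
    """Minimize E^2(Y, H_K) over Y in [0,1]^D by projected gradient descent
    (the constant double integral of K drops out of the gradient)."""
    rng = np.random.default_rng(seed)
    Y = rng.random((N, D))          # random initial points
    X = rng.random((n_mc, D))       # fixed sample approximating mu = dx on [0,1]^D
    for _ in range(steps):
        dYY = Y[:, None, :] - Y[None, :, :]
        KYY = np.exp(-(dYY ** 2).sum(-1) / (2 * sigma**2))
        dXY = Y[:, None, :] - X[None, :, :]
        KXY = np.exp(-(dXY ** 2).sum(-1) / (2 * sigma**2))
        # gradient of E^2 w.r.t. each y_n: repulsion between the y_n,
        # attraction toward the mass of mu
        grad = (-2.0 / (N**2 * sigma**2)) * (KYY[:, :, None] * dYY).sum(1) \
             + (2.0 / (N * n_mc * sigma**2)) * (KXY[:, :, None] * dXY).sum(1)
        Y = np.clip(Y - lr * grad, 0.0, 1.0)
    return Y
```

The repulsion/attraction structure explains the regularly spread "computed points" in the figure.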
17. Application : Machine Learning

1. Setting : consider a set of observations

    (y_1, P_1), ..., (y_N, P_N) ∈ (R^D × R^M)^N.

2. Interpolation : pick a kernel K(x, y), denote by H_K its native space, and consider a continuous function P(y) such that

    ⟨P, δ_{y_n}⟩ = P(y_n) ∼ P_n.

One can further optimize by computing Y (∼ learning).

3. Extrapolation : one can then extrapolate with the error bound

    | ∫_{R^D} P(x) dµ − (1/N) Σ_{n=1}^{N} P(y_n) | ≤ E_{H_K}(N, D) ‖P‖_{H_K},   i.e. µ ∼ (1/N) Σ_{n=1}^{N} δ_{y_n}.

4. Here are two very similar applications :
    1. (y_1, P_1), ..., (y_N, P_N) are prices and implied volatilities (e.g. call options under the SABR model) : pricing.
    2. (y_1, P_1), ..., (y_N, P_N) are pictures of dogs and cats : a classifier.
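The interpolation step can be sketched as classical kernel (RKHS) interpolation: solve K(Y, Y) c = P and evaluate P(x) = Σ_n c_n K(x, y_n). The function names, the Gaussian kernel, and the σ and regularization values below are all illustrative assumptions, not the talk's choices.

```python
import numpy as np

def fit_interpolant(Y, P, sigma=0.15, reg=1e-8):
    """Solve K(Y, Y) c = P so the interpolant matches the observations P_n.
    A small Tikhonov term 'reg' stabilizes the (often ill-conditioned) solve."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2)) + reg * np.eye(len(Y))
    return np.linalg.solve(K, P)

def evaluate(X, Y, c, sigma=0.15):
    """Evaluate P(x) = sum_n c_n K(x, y_n) at new points X (extrapolation step)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ c
```

In the pricing application, Y would hold market parameters and P the observed prices or implied volatilities; evaluation at new parameters is the extrapolation controlled by the error bound above.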
19. Application to time-dependent PDEs

(Video : Navier-Stokes simulation.)

1. Consider a time-dependent probability measure µ(t, x) and a kernel K_t(x, y). We can define sharp discrepancy sequences t → y_1(t), ..., y_N(t).
2. For PDEs, we can try to compute these sequences. For instance, consider the Navier-Stokes equations (hyperbolic equations)

    ∂_t µ = ∇·(vµ),   ∂_t(µv) + ∇·(µv ⊗ v) = −∇p + ∇·(µΣ),   ∇·v = 0 (or energy conservation for non-Newtonian fluids),

together with Dirichlet / Neumann boundary conditions. We obtain a numerical scheme sharing some similarities with SPH (smoothed-particle hydrodynamics) : these are LAGRANGIAN MESHFREE METHODS.
22. Application to industrial finance

Consider µ(t, x) a solution to the non-linear hyperbolic-parabolic Fokker-Planck equation

    ∂_t µ − Lµ = 0,   Lµ = ∇·(bµ) + ∇²·(Aµ),   A := (1/2) σσᵀ.

1. FORWARD : compute µ(t) ∼ (1/N) (δ_{y_1(t)} + ... + δ_{y_N(t)}) as sharp discrepancy sequences :

    | ∫_{R^D} φ(x) dµ(t, x) − (1/N) Σ_{n=1}^{N} φ(y_n(t)) | ≤ E(Y(t), H_{K_t}) ‖φ‖_{H_{K_t}}.

2. CHECK the optimal rate : E(Y(t), H_{K_t}) ∼ E_{H_{K_t}}(N, D).
3. BACKWARD : interpret t → y_n(t), n = 1, ..., N as a moving, transported PDE grid (a tree). Solve the Kolmogorov equation on it. ERROR ESTIMATION :

    | ∫_{R^D} P(t, ·) dµ(t, ·) − (1/N) Σ_{n=1}^{N} P(t, y_n(t)) | ≤ E_{H_{K_t}}(N, D) ‖P(t, ·)‖_{H_{K_t}}.
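The FORWARD step can be illustrated with the simplest particle approximation of the Fokker-Planck measure: evolve N particles y_n(t) by Euler-Maruyama for the underlying SDE dX = b(X) dt + σ(X) dW, so that µ(t) ∼ (1/N) Σ_n δ_{y_n(t)}. This is a stand-in for the talk's method, which moves sharp discrepancy sequences rather than random particles.

```python
import numpy as np

def forward_particles(y0, b, sigma, T, n_steps, seed=0):
    """Euler-Maruyama evolution of N particles for dX = b(X) dt + sigma(X) dW.
    The empirical measure of the returned particles approximates mu(T)."""
    rng = np.random.default_rng(seed)
    Y = np.array(y0, float).copy()
    dt = T / n_steps
    for _ in range(n_steps):
        Y += b(Y) * dt + sigma(Y) * np.sqrt(dt) * rng.standard_normal(Y.shape)
    return Y

# Ornstein-Uhlenbeck sanity check: b(x) = -x, sigma = 1,
# whose law converges to N(0, 1/2) as t grows.
```

Replacing the random increments by a discrepancy-minimizing update of Y(t) would recover the transported-grid construction of the slide.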
23. Illustration : the 2D SABR process, widely used in finance

The SABR process is

    d(F_t, α_t)ᵀ = ρ diag(α_t F_t^β, ν α_t) (dW¹_t, dW²_t)ᵀ,   with 0 ≤ β ≤ 1, ν ≥ 0, ρ ∈ R^{2×2}.

The Fokker-Planck equation associated with SABR is

    ∂_t µ + L*µ = 0,   L*µ = ∇²·( (1/2) ρ diag(x_2² x_1^{2β}, ν² x_2²) ρᵀ µ ).

(Video : simulation of the SABR density, 200 points.)
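A plain Monte-Carlo simulation of the SABR dynamics above is sketched below, against which the transported-mesh density can be compared. The scheme details (log-Euler for the volatility, absorption of F at 0, and the parameter values) are assumptions for illustration.

```python
import numpy as np

def sabr_particles(f0=1.0, a0=0.3, beta=0.7, nu=0.4, rho=-0.5,
                   T=1.0, n_steps=200, n_paths=20000, seed=0):
    """Euler scheme for the 2D SABR process:
    dF = a F^beta dW1, da = nu a dW2, corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F = np.full(n_paths, f0)
    a = np.full(n_paths, a0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        F = np.maximum(F + a * F ** beta * np.sqrt(dt) * z1, 0.0)  # absorb at 0
        a = a * np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu**2 * dt)   # exact log-Euler
    return F, a
```

The terminal cloud (F, a) plays the role of the 200-point density shown in the talk's video, with N random paths instead of a sharp discrepancy sequence.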
27. The curse of dimensionality

The CURSE of dimensionality in finance : price and manage a complex option written on several underlyings.

1. Step 1 : compute a measure solution µ(t, x) to a Fokker-Planck equation in large dimension, and calibrate it.
2. Step 2 : solve a Kolmogorov equation backward in large dimension ; denote the solution P(t, x).
3. Step 3 : compute various metrics on the solution, for instance VaR or XVA for regulatory purposes, or future deltas / gammas / implied vols for hedging purposes.
4. Result ? We can compute the solution P(t, x) to any order of accuracy :

    | ∫_{R^D} P(t, ·) dµ(t, ·) − (1/N) Σ_{n=1}^{N} P(t, y_n(t)) | ≤ ‖P(t, ·)‖_{H_K} / N^α,

where α ≥ 1/2 is any number : choose it according to your desired electricity bill ! But beware of smoothing effects in high dimensions : H_K contains less information as the dimension rises. Some problems, for instance optimal stopping problems, are intrinsically cursed.
30. Academic tests, business cases

1. Academic works : finance, non-linear hyperbolic systems
    1. Revisiting the method of characteristics via a convex hull algorithm : explicit solutions to high-dimensional conservation laws with non-convex fluxes.
    2. Numerical results using CoDeFi : benchmarks of TMM methods for classical pricing problems.
2. Business cases - done
    1. Hedging strategies for Net Interest Income and Economic Values of Equity (http://dx.doi.org/10.2139/ssrn.3454813, with S. Miryusupov).
    2. Computing metrics for a big portfolio of autocalls depending on several underlyings (unpublished).
3. Under work
    1. McKean-Vlasov equations (stochastic volatility modeling).
    2. ISDA Standard Initial Margin : XVA computations based on sensitivities (delta / vega / gamma).
    3. Transition from IBOR to RFR rates à la Lyashenko-Mercurio.
    4. Strategies for liquidity risk : Hamilton-Jacobi-Bellman equations in high dimensions.
36. Summary and Conclusions

We presented in this talk :

1. New, sharp estimations for Monte-Carlo methods...
2. ...that can be used in a wide variety of contexts to perform a sharp error analysis.
3. A new method for the numerical simulation of PDEs : Transported Meshfree Methods...
4. ...that can be used in a wide variety of applications (hyperbolic / parabolic equations, artificial intelligence, etc.)...
5. ...for which the error analysis applies : we can guarantee a worst-case error estimation, and we can check that this error matches an optimal convergence rate.
6. ...Thus we can argue that our numerical methods reach nearly optimal algorithmic complexity.