Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
2. Data-process divide in dynamic systems
Data: structural aspects, infinite quantification domain
Process: actions and updates, system dynamics
3. Data-process divide in dynamic systems
Data: structural aspects, infinite quantification domain
Process: actions and updates, system dynamics
Explosive mix: undecidability of basic tasks, a-priori propositionalisation
4. The case of Business Process Management
Data-process divide
U  Cer. Exp. (date)  Length (m)  Draft (m)  Capacity (TEU)  Cargo (mg/cm2)  Enter {y, n}
1  ≤ today           -           -          -               -               n
2  > today           < 260       < 10       < 1000          -               y
3  > today           < 260       < 10       ≥ 1000          -               n
4  > today           < 260       [10,12]    < 4000          ≤ 0.75          y
5  > today           < 260       [10,12]    < 4000          > 0.75          n
6  > today           [260,320)   (10,13]    < 6000          ≤ 0.5           y
7  > today           [260,320)   (10,13]    < 6000          > 0.5           n
8  > today           [320,400)   ≤ 13       > 4000          ≤ 0.25          y
9  > today           [320,400)   ≤ 13       > 4000          > 0.25          n
(hit policy U; every numeric input carries the facet "≥ 0")
Table 1: DMN representation of the ship clearance decision of Figure 1b
U  Enter {y, n}  Length (m)  Cargo (mg/cm2)  Refuel Area {none, indoor, outdoor}
1  n             -           -               none
2  y             ≤ 350       -               indoor
3  y             > 350       ≤ 0.3           indoor
4  y             > 350       > 0.3           outdoor
(hit policy U; numeric inputs carry the facet "≥ 0"; none is the default output)
Table 2: DMN representation of the refuel area determination decision of Figure 1b
…and their corresponding datatypes. In Table 1, the input attributes are: (i) the certificate expiration date, (ii) the length, (iii) the draft, (iv) the capacity, and (v) the amount of cargo residuals of the ship. Such attributes are nonnegative real numbers; this is captured by typing them as reals, adding restriction "≥ 0" as facet. The rightmost, red cell represents the output attribute. In both cases, there is only one output attribute, of type string. The cell below enumerates the possible output values produced by the decision table, in descending priority order. If a default output is defined, it is underlined. This is the case for the none string in Table 2.
Every other row models a rule. The intuitive interpretation of such rules relies on the usual "if ... then ..." pattern. For example, the first rule of Table 1 states that, if the certificate of the ship is expired, then the ship cannot enter the port, that is, the enter output attribute is set to n (regardless of the other input attributes). The second rule, instead, states that, if the ship has a valid certificate, a length shorter than 260 m, a draft smaller than 10 m, and a capacity smaller than 1000 TEU, then the ship is allowed to enter the port (regardless of the cargo residuals it carries). Other rules are interpreted similarly.
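To make the rule semantics concrete, here is a minimal Python sketch (the encoding and the names REFUEL_RULES and decide are ours, not part of any DMN tool) of how a table like Table 2 can be evaluated: rules are tried in descending priority, the first rule whose tests all hold fires, and the underlined default applies when no rule matches.

```python
from typing import Callable, Dict, List, Tuple

# One rule = per-attribute tests (a missing attribute means "any value",
# i.e. an empty cell) plus the output the rule produces.
Rule = Tuple[Dict[str, Callable], str]

# Hypothetical encoding of Table 2 (refuel area determination).
REFUEL_RULES: List[Rule] = [
    ({'enter': lambda v: v == 'n'}, 'none'),
    ({'enter': lambda v: v == 'y', 'length': lambda v: v <= 350}, 'indoor'),
    ({'enter': lambda v: v == 'y', 'length': lambda v: v > 350,
      'cargo': lambda v: v <= 0.3}, 'indoor'),
    ({'enter': lambda v: v == 'y', 'length': lambda v: v > 350,
      'cargo': lambda v: v > 0.3}, 'outdoor'),
]

def decide(rules: List[Rule], inputs: Dict, default: str) -> str:
    # Rules are listed in descending priority: the first match wins,
    # and the default output applies when no rule matches.
    for tests, output in rules:
        if all(test(inputs[attr]) for attr, test in tests.items()):
            return output
    return default

print(decide(REFUEL_RULES, {'enter': 'y', 'length': 400, 'cargo': 0.5}, 'none'))  # outdoor
```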
[Figure 1a: UML data model with classes Ship (id-code, name), Certificate (exp-date), Harbor (location) and Attempt (when, outcome); a Ship owns Certificates (multiplicities 1 and 0..1) and is related to Harbors via "tried entering into" Attempts (multiplicities * and *).]
[Figure 1b: BPMN process for ship entrance: receive entrance request; record ship info (ship id); inspect ship; acquire certificate; record cargo residuals and record exp. date (data: cargo residuals, certificate exp. date); decide clearance (data: enter, refuel area); then, on enter? = N, send refusal; on enter? = Y, send fuel area info and open dock. Process data object: ship type (short name).]
5. The case of Business Process Management
Data-process divide
Tasks read and write data. Some choices depend on data. Other choices are
resolved by agents. Agents are not always cooperative.
6. Three main questions
• Which model for data-aware dynamic systems?
• How to verify properties of data-aware dynamic systems?
• How to account for multiple agents and reason strategically?
10. Data-Aware Dynamic Systems
Agents: control of actions, choices, variables
[Diagram: DDS with control states 0-4, variables num and val, and actions choose (guard [num^w > 0]), guess (guard [val^w ≥ val^r]), repeat, wait, cheat and win (with further guards [val^r ≥ num^r] and [num^w ≥ val^r]).]
11. Data-Aware Dynamic Systems
Agents: control of actions, choices, variables
[Same DDS diagram as slide 10.]
12. Why this model? (Interlude)
Simple and useful:
• Corresponds to a model of data-aware Petri nets studied in the literature (bounded, with interleaving semantics). [Mannhardt, PhD Thesis 2018]
• Captures BPMN with case data + DMN: two OMG standards for process and decision modelling. [_, ER2018]
• A fragment can be discovered from event logs using existing process discovery techniques. [Mannhardt et al., CAiSE2016]
13. Executing a DDS
Configuration: state + variable assignment
[Same DDS diagram as slide 10, with a token on a control state and a variable assignment alongside.]
14. Executing a DDS
Run: a finite trace with legal assignments
[Same DDS diagram as slide 10, with a legal configuration ⟨s1, {num = 50, val = 0}⟩ marked ✓.]
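The following minimal Python sketch (our own encoding; it deliberately ignores frame conditions and includes only the two guards fully legible in the diagram, [num^w > 0] on choose and [val^w ≥ val^r] on guess) makes configurations and the legality check for runs concrete.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """A DDS configuration: control state + variable assignment."""
    state: int
    assign: tuple  # sorted (variable, value) pairs, hashable

def conf(state, **vals):
    return Config(state, tuple(sorted(vals.items())))

# A transition carries a guard over the read (current, suffix r) and
# written (next, suffix w) values of the variables.
TRANSITIONS = [
    (0, 'choose', lambda r, w: w['num'] > 0,         1),
    (1, 'guess',  lambda r, w: w['val'] >= r['val'], 2),
]

def legal_step(c, action, c2):
    r, w = dict(c.assign), dict(c2.assign)
    return any(s == c.state and a == action and t == c2.state and guard(r, w)
               for (s, a, guard, t) in TRANSITIONS)

def legal_run(configs, actions):
    """A run is a finite trace of configurations linked by legal steps."""
    return all(legal_step(c, a, c2)
               for c, a, c2 in zip(configs, actions, configs[1:]))

run = [conf(0, num=0, val=0), conf(1, num=50, val=0), conf(2, num=50, val=7)]
print(legal_run(run, ['choose', 'guess']))  # True: both guards hold
```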
15. Why finite traces? (Interlude)
Conceptual reason (from BPM): each process execution is expected to eventually terminate.
Technical reason (from KR): moving from infinite to finite traces usually does not impact worst-case complexity…
… but has a huge impact in terms of practical algorithms!
• Direct application of finite-state automata, without the need of detouring to automata over infinite structures.
16. Execution semantics
Reachability graph (infinite in two dimensions)
[Diagram: fragment 0 -choose-> 1 -guess-> 2 with guards [num^w > 0] and [val^w ≥ val^r]. From ⟨0, {num = 0, val = 0}⟩ there are infinitely many choose-successors, e.g. {num = 0.5, val = 0}, {num = 1, val = 0}, {num = 5, val = 0}; each in turn has infinitely many guess-successors, e.g. {num = 1, val = 0.75}, {num = 1, val = 3.4}, {num = 1, val = 0}.]
17. Specification language
Linear temporal properties over the DDS finite traces
• Atoms: check control state, check constraints.
• Standard temporal operators: labelled next, eventually, globally.
• Interpreted over finite traces.
…ment α such that for each (v ⊙ k) ∈ C we have α(v) ⊙ k and, for each (v1 ⊙ v2) ∈ C, we have α(v1) ⊙ α(v2).

4 Specification language
Given a DDS B, let L_B be the language with grammar:

    φ ::= true | C | b | ¬φ | φ1 ∧ φ2 | φ1 ∨ φ2 | ⟨a⟩φ | ◇φ | □φ

where a ∈ A, C is a constraint set over the variables in B, and b ∈ B is a system state of B. We now give the semantics on finite runs of RG_B, for expressing properties on these runs. For brevity, in what follows it is often convenient to represent a constraint variable assignment α as a constraint set. Hence we define C_α ≐ ⋃_{v ∈ V} {(v = α(v))}.
Intuitively, a formula φ = C is true when C is satisfiable together with the current constraint variable assignment α in the run of RG_B, i.e., the constraint variable assignment is a solution of C (C ∪ C_α is satisfiable). Similarly, an atomic formula b requires the current system state to be b. ⟨a⟩φ requires that φ is true in the run after executing action a (in the next configuration, which must exist). □ and ◇ are read as 'for each step in the run' and 'eventually in the run'.
[Figure: the DDS of slide 10 next to the DFA D_φ for φ = ◇((num < 3) ∧ ⟨win⟩(val = num)), requiring the chosen real to be smaller than 3 and the winning guess to be exact. DFA edges are labelled by sets such as {(num ≥ 3), win, (num = val)} and {(num < 3), win, (num = val)}; product states accumulate constraint sets such as {num > 0, val = 0, val < num}, {num < 3, num ≠ val} and {num > 0, val > 0, val ≥ num}, along moves init, choose, guess, wait, cheat.]
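As a companion to the semantics above, here is a minimal Python sketch (our own encoding) of evaluating L_B formulas on a finite run. Since the run below carries concrete assignments, "C ∪ C_α is satisfiable" reduces to checking that the current assignment satisfies every constraint in C.

```python
import operator
OPS = {'<': operator.lt, '<=': operator.le, '=': operator.eq,
       '!=': operator.ne, '>=': operator.ge, '>': operator.gt}

def holds(phi, run, acts, i=0):
    """run: list of (state, assignment) pairs; acts[i] labels the step
    from run[i] to run[i+1]."""
    tag = phi[0]
    if tag == 'true':
        return True
    if tag == 'state':                 # atomic formula b: check control state
        return run[i][0] == phi[1]
    if tag == 'C':                     # atomic constraint set under alpha
        alpha = run[i][1]
        value = lambda t: alpha[t] if isinstance(t, str) else t
        return all(OPS[op](value(l), value(r)) for (l, op, r) in phi[1])
    if tag == 'not':
        return not holds(phi[1], run, acts, i)
    if tag == 'and':
        return holds(phi[1], run, acts, i) and holds(phi[2], run, acts, i)
    if tag == 'or':
        return holds(phi[1], run, acts, i) or holds(phi[2], run, acts, i)
    if tag == 'next':                  # <a>phi: the next configuration must exist
        return (i + 1 < len(run) and acts[i] == phi[1]
                and holds(phi[2], run, acts, i + 1))
    if tag == 'ev':                    # eventually, over the remaining finite suffix
        return any(holds(phi[1], run, acts, j) for j in range(i, len(run)))
    if tag == 'glob':                  # globally, over the remaining finite suffix
        return all(holds(phi[1], run, acts, j) for j in range(i, len(run)))
    raise ValueError(tag)

# phi = <>((num < 3) /\ <win>(val = num)), the example formula of the talk
phi = ('ev', ('and', ('C', [('num', '<', 3)]),
                     ('next', 'win', ('C', [('val', '=', 'num')]))))
run = [(1, {'num': 2, 'val': 0}), (4, {'num': 2, 'val': 2})]
print(holds(phi, run, ['win']))  # True: num < 3 now, and win leads to val = num
```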
18. Reasoning tasks
Given a DDS B and a formula φ:
1. Verification: check whether there exists a witness for φ in the reachability graph of B.
2. Strategy synthesis: given an agent a, compute a strategy for a so that, no matter how the other agents behave, the execution of the strategy in the reachability graph of B yields φ.
19. Verification
Example
[Figure: the DDS of slide 10 and a run of D_φ for φ = ◇((num < 3) ∧ ⟨win⟩(val = num)) through the constraint graph, with edge labels such as {(num ≥ 3), ¬win, (num = val)} and {(num ≥ 3), ¬win, (num ≠ val)}, and accumulated constraint sets such as {num > 0, val = 0, val < num} and {num > 0, val > 0, val ≥ num}. Per the paper's Figure 4 caption, each state is associated to two constraint sets, that of D_B and the set of assumptions A, and a winning run is highlighted.]
20. Strategy synthesis
Example
[Figure: the DDS of slide 10 and the synthesis game for φ = ◇((num < 3) ∧ ⟨win⟩(val = num)), requiring the chosen real to be smaller than 3. Game states pair DFA states (a, a0, a1, b1, b2, c, …) with accumulated constraint sets such as {num > 0, val = 0, val < num}, {num > 0, val > 0, val < num} and {num > 0, val > 0, val ≥ num}, together with tests such as {num < 3, num ≠ val} and {num < 3, num = val}; adversarial moves wait and cheat branch after each guess.]
21. Reasoning tasks
Observations
Verification reduces to strategy synthesis with a single agent controlling everything.
To solve strategy synthesis we take inspiration from classical approaches [Pnueli and Rosner 1989]. However:
• The reachability graph is infinite.
• We have to handle constraints: a "data-aware" alphabet.
22. Interval abstraction
From reachability graph to (finite) constraint graph
Symbolically group variable assignments using constraint sets. Constraints defined over variables and constants used in the DDS.
[Diagram: the reachability-graph fragment of slide 16, with concrete choose-successors {num = 0.5, val = 0}, {num = 1, val = 0}, {num = 5, val = 0} and their guess-successors.]
23. Interval abstraction
From reachability graph to (finite) constraint graph
Symbolically group variable assignments using constraint sets. Constraints defined over variables and constants used in the DDS.
[Diagram: as in slide 22, with assignments grouped into constraint sets {num > 0, val = 0}, {num > 0, val = 0, val < num}, {num > 0, val > 0, val < num} and {num > 0, val > 0, val ≥ num}.]
24. Interval abstraction
From reachability graph to (finite) constraint graph
Symbolically group variable assignments using constraint sets. Constraints defined over variables and constants used in the DDS.
[Diagram: the resulting constraint graph: ⟨0, {num = 0, val = 0}⟩ -choose-> ⟨1, {num > 0, val = 0}⟩ -guess-> the three classes ⟨2, {num > 0, val = 0, val < num}⟩, ⟨2, {num > 0, val > 0, val < num}⟩ and ⟨2, {num > 0, val > 0, val ≥ num}⟩.]
The abstraction:
• is finite-state;
• preserves witnesses.
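A minimal Python sketch of the grouping idea (assuming, purely for illustration, that only the constants 0 and 3 from the guards and the formula matter): each concrete assignment is mapped to the finite set of constraints it satisfies, so only finitely many abstract classes can arise.

```python
from itertools import combinations

VARS = ['num', 'val']
CONSTS = [0, 3]   # constants appearing in the DDS guards and in the formula

def sign(x, y):
    return '<' if x < y else '=' if x == y else '>'

def abstract(alpha):
    """Map a concrete assignment to the set of constraints it satisfies:
    each variable vs. each constant, plus variable vs. variable. Finitely
    many such sets exist, so the infinite reachability graph collapses
    into a finite constraint graph."""
    cs = {(v, sign(alpha[v], c), c) for v in VARS for c in CONSTS}
    cs |= {(v1, sign(alpha[v1], alpha[v2]), v2)
           for v1, v2 in combinations(VARS, 2)}
    return frozenset(cs)

# The choose-successors num = 0.5 and num = 1 of slide 16 land in the same
# abstract class; num = 5 differs on its position w.r.t. the constant 3.
print(abstract({'num': 0.5, 'val': 0}) == abstract({'num': 1, 'val': 0}))  # True
print(abstract({'num': 1, 'val': 0}) == abstract({'num': 5, 'val': 0}))    # False
```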
25. Computing strategies
Given a DDS B and a formula φ
1. Formula to DFA
• φ seen as an LTLf formula.
• First: constraints + tasks as syntactic alphabet.
• Then: semantic curation to retain only consistent transitions.
[Figure: DFA D_φ for φ = ◇((num < 3) ∧ ⟨win⟩(val = num)) over the syntactic alphabet, with edge labels such as {(num < 3), win, (num = val)}, {(num < 3), ¬win, (num ≠ val)}, {(num ≥ 3), ¬win, (num = val)} and {(num ≥ 3), ¬win, (num ≠ val)}.]
26. Computing strategies
Given a DDS B and a formula φ
1. Formula to DFA
• φ seen as an LTLf formula.
• First: constraints + tasks as syntactic alphabet.
• Then: semantic curation to retain only consistent transitions.
2. Constraint graph to DFA
• Variable assignments attached to transitions to distinguish nondeterminism on tasks.
27. Computing strategies
Given a DDS B and a formula φ
1. Formula to DFA
• φ seen as an LTLf formula.
• First: constraints + tasks as syntactic alphabet.
• Then: semantic curation to retain only consistent transitions.
2. Constraint graph to DFA
• Variable assignments attached to transitions to distinguish nondeterminism on tasks.
3. "Data-aware" cross-product
• Semantic curation when combining the two DFAs, retaining only consistent combined transitions (see the sketch below).
• The cross-product suitably "remembers" the accumulated constraints.
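A minimal Python sketch of step 3 (a generic rendering of our own, not the paper's exact construction): a standard DFA product in which a combined transition survives only if the union of its two constraint labels is satisfiable, as decided by a constraint-solver callback.

```python
from itertools import product

def data_aware_product(dfa1, dfa2, consistent):
    """dfa1/dfa2: dict state -> {label: successor}, labels being frozensets
    of constraints; 'consistent' is a callback to a constraint solver that
    decides satisfiability of a merged label (the semantic curation)."""
    prod = {}
    for (s1, out1), (s2, out2) in product(dfa1.items(), dfa2.items()):
        for (l1, t1), (l2, t2) in product(out1.items(), out2.items()):
            merged = l1 | l2
            if consistent(merged):        # drop inconsistent combinations,
                prod.setdefault((s1, s2), {})[merged] = (t1, t2)
    # e.g. a label containing both (num < 3) and (num >= 3) is dropped;
    # a real implementation would also restrict to reachable state pairs.
    return prod
```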
28. Computing strategies
Given a DDS B and a formula φ
1. Formula to DFA
• φ seen as an LTLf formula.
• First: constraints + tasks as syntactic alphabet.
• Then: semantic curation to retain only consistent transitions.
2. Constraint graph to DFA
• Variable assignments attached to transitions to distinguish nondeterminism on tasks.
3. "Data-aware" cross-product
• Semantic curation when combining the two DFAs, retaining only consistent combined transitions.
• The cross-product suitably "remembers" the accumulated constraints.
4. Strategy extraction
• Classical backward computation of winning sets (see the sketch below).
• Gives an abstract strategy that can be concretised step-wise.
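A minimal Python sketch of step 4 (a generic reachability game of our own, not the paper's exact construction): winning sets are computed backwards, with existential choice at the agent's states and universal choice at the other agents' states.

```python
def winning_states(states, accepting, turn, moves):
    """Classical backward fixpoint. turn[s] says who moves at s; the agent
    needs SOME successor in the winning set, while the other agents can
    force ALL successors, so every successor must be winning."""
    win = set(accepting)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win or not moves.get(s):
                continue
            succ = moves[s]
            good = (any(t in win for t in succ) if turn[s] == 'agent'
                    else all(t in win for t in succ))
            if good:
                win.add(s)
                changed = True
    # An abstract strategy: at each winning agent state, pick any move
    # that stays inside 'win'; it is then concretised step-wise.
    return win

states = ['s0', 's1', 's2', 'bad']
moves = {'s0': ['s1', 'bad'], 's1': ['s2'], 'bad': ['bad']}
turn = {'s0': 'agent', 's1': 'env', 'bad': 'env'}
print(winning_states(states, {'s2'}, turn, moves))  # {'s0', 's1', 's2'}
```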
29. Complexity
Lower bound: 2-EXPTIME, from the classical propositional setting.
Upper bound: doubly exponential in the formula; exponential in the compact DDS, specifically in #variables, #used constants, #constraints.
[Constructions need to call a constraint solver]
30. Conclusion
Simple, relevant model for data-aware dynamic systems.
Readily implementable, direct approach to strategy synthesis, combining classical strategy synthesis with data abstraction techniques.
Application to BPM, also to repair process models.