Seminar on "Verification of Relational Data-Centric Systems with External Services" at the KRDB Research Centre for Knowledge and Data, Faculty of Computer Science, Free University of Bozen-Bolzano (Italy), 03/05/2012.
The document contains solutions to several algorithm problems. For problem 1, it provides two solutions: 1) A divide-and-conquer algorithm that finds the rank of points in O(n log n) time by sorting and merging. 2) A sweep line algorithm using a binary indexed tree that also runs in O(n log n) time. For problem 2, it describes a selection algorithm solution that finds the median in O(n) time and partitions the items around the median, recursively solving in halves. For problem 3, it uses a pigeonhole principle argument to identify good chips in O(n) tests by randomly pairing chips and eliminating inconsistent results.
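The sweep-line variant of the first solution can be sketched in a few lines. This is an illustrative Python sketch of the general technique (sweep in x order, binary indexed tree over y-ranks), not the document's own code:

```python
def count_dominated(points):
    """For each point, count the previously swept points with y' <= y,
    i.e. points dominated in both coordinates, via a sweep over x with
    a binary indexed tree (Fenwick tree) on y-ranks."""
    ys = sorted(set(y for _, y in points))
    rank = {y: i + 1 for i, y in enumerate(ys)}   # 1-based ranks for the BIT
    tree = [0] * (len(ys) + 1)

    def update(i):
        while i < len(tree):
            tree[i] += 1
            i += i & -i

    def query(i):                                 # prefix sum of counts
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    result = {}
    for x, y in sorted(points):                   # sweep in x order
        result[(x, y)] = query(rank[y])           # already-seen points with y' <= y
        update(rank[y])
    return result
```

Each point does one query and one update on the tree, so the whole sweep runs in O(n log n) after the initial sort.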
1) The document discusses problems related to complexity classes P and NP. It shows that several problems are NP-complete, including the Hamiltonian cycle problem, subgraph isomorphism problem, 0-1 integer programming problem, and Hamiltonian path problem.
2) It provides algorithms and reductions to prove several problems are NP-complete, such as reducing Hamiltonian cycle to the subgraph isomorphism problem and reducing 3-SAT to the 0-1 integer programming problem.
3) It also discusses properties of complexity classes P and NP, such as showing P is closed under certain operations and contained within NP intersect co-NP.
Greedy abstract feature generation (GreedyAFG) is an approach for automatically generating feature abstractions to be used in Abstract Zobrist Hashing (AZHDA*) for parallel best-first search. However, GreedyAFG results in high communication overhead because any change in the value of an abstract feature from a parent node to a child node will change the hash value and require communication. To reduce this overhead, a new approach is needed that groups features so that changes within a group do not change the hash value.
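The mechanism can be illustrated with a toy Zobrist hash in Python. The table sizes and the abstraction function below are invented for illustration; the point is that two states whose features fall in the same abstract groups hash identically, so moves inside a group cost no communication:

```python
import random

random.seed(0)  # deterministic table for the example

def make_zobrist_table(n_features, n_abstract_values):
    """One random 64-bit string per (feature, abstract value) pair."""
    return [[random.getrandbits(64) for _ in range(n_abstract_values)]
            for _ in range(n_features)]

def zobrist_hash(state, table, abstraction):
    """XOR the bitstrings of the *abstract* feature values. Moves that
    stay inside one abstract group do not change the hash."""
    h = 0
    for feature, value in enumerate(state):
        h ^= table[feature][abstraction(value)]
    return h
```

With `abstraction = lambda v: v // 2` (grouping values {0,1} and {2,3}), the states `(0, 2)` and `(1, 3)` hash to the same value, while `(2, 0)` does not.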
The document describes using Bayesian analysis and Markov chain Monte Carlo (MCMC) methods for parameter selection, model calibration, and uncertainty quantification of an SIR disease transmission model. Specifically, it employs the Delayed Rejection Adaptive Metropolis (DRAM) MCMC algorithm to fit the SIR model parameters to data and perform sensitivity analysis. Exercises are proposed for students to modify the 4-parameter SIR code to a 3-parameter version, and to investigate local and global sensitivity of the model response to parameters using one-at-a-time and Sobol sampling approaches.
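For readers unfamiliar with the model being calibrated, here is a minimal forward-Euler sketch of the SIR equations in Python. The parameter values are illustrative only; this is not the workshop's code, which fits the parameters with DRAM MCMC:

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR equations:
       dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r                      # total population, conserved
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        ds = -beta * s * i / n
        di = beta * s * i / n - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        trajectory.append((s, i, r))
    return trajectory
```

Calibration then amounts to choosing `beta` and `gamma` (and initial conditions) so the simulated trajectory matches the observed case data.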
The document summarizes computational aspects of vehicle routing problems. It discusses time complexity and space complexity, and how they are measured as functions of problem size. It provides examples of calculating complexity for different algorithms. It also discusses common data structures for representing routes, including array lists, doubly linked lists, and their pros and cons for different operations. The document outlines Java code examples for comparing route representation using these data structures.
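The document's route-representation examples are in Java; here is a compact Python analogue of the doubly linked variant, showing the O(1) insert-after operation that an array list cannot match (an array list must shift O(n) elements to insert mid-route):

```python
class Stop:
    """Node of a doubly linked route."""
    def __init__(self, name):
        self.name, self.prev, self.next = name, None, None

class Route:
    def __init__(self):
        self.head = self.tail = None

    def append(self, name):
        node = Stop(name)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next, node.prev = node, self.tail
            self.tail = node
        return node

    def insert_after(self, node, name):
        """O(1) given a reference to the predecessor node."""
        new = Stop(name)
        new.prev, new.next = node, node.next
        if node.next is not None:
            node.next.prev = new
        else:
            self.tail = new
        node.next = new
        return new

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.name)
            cur = cur.next
        return out
```

The trade-off runs the other way for random access: the array list indexes in O(1) while the linked route must walk from the head.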
This document discusses speaker diarization, which is the process of segmenting an audio stream into homogeneous segments according to speaker identity. It covers feature extraction methods like MFCCs, segmentation using Bayesian Information Criteria to compare Gaussian mixture models, and clustering algorithms like k-means and hierarchical agglomerative clustering. Dendrogram visualizations are used to identify natural speaker clusters. The overall goal is to partition audio recordings of discussions or debates into homogeneous segments to attribute speech segments to individual speakers.
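The BIC-based segmentation step can be illustrated on 1-D data. A toy Python sketch (real diarization compares multivariate Gaussians over MFCC vectors, not scalars):

```python
import math

def gaussian_bic(x):
    """BIC of a single 1-D Gaussian fit (two free parameters: mean, variance)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    log_lik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return -2 * log_lik + 2 * math.log(n)

def delta_bic(x, split):
    """Two-model BIC minus one-model BIC. A negative value favors
    modelling the halves separately, i.e. a change point at `split`."""
    return gaussian_bic(x[:split]) + gaussian_bic(x[split:]) - gaussian_bic(x)
```

Sliding `split` across a window and looking for strongly negative `delta_bic` values is the essence of BIC speaker-change detection.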
(chapter 8) A Concise and Practical Introduction to Programming Algorithms in..., by Frank Nielsen
These are the slides accompanying the textbook:
A Concise and Practical Introduction to Programming Algorithms in Java
by Frank Nielsen
Published by Springer-Verlag (2009), Undergraduate textbook in computer science (UTiCS series)
ISBN: 978-1-84882-338-9
http://www.lix.polytechnique.fr/~nielsen/JavaProgramming/
http://link.springer.com/book/10.1007%2F978-1-84882-339-6
A Fast Near Optimal Vertex Cover Algorithm (NOVCA), by Waqas Tariq
This paper describes an extremely fast polynomial-time algorithm, the Near Optimal Vertex Cover Algorithm (NOVCA), that produces an optimal or near-optimal vertex cover for any given undirected graph G(V, E). NOVCA is based on the idea of (i) including the vertex having maximum degree in the vertex cover and (ii) rendering the degree of a vertex zero by including all its adjacent vertices. Two versions of the algorithm, NOVCA-I and NOVCA-II, have been developed. Results identifying bounds on the size of the minimum vertex cover, as well as the polynomial complexity of the algorithm, are given with experimental verification. Future research efforts will be directed at tuning the algorithm and proving a better approximation ratio for NOVCA compared to other available vertex cover algorithms.
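The max-degree idea in (i) can be sketched as a plain greedy heuristic in Python. This illustrates the general principle only; it is not the paper's NOVCA-I/II code:

```python
def greedy_vertex_cover(edges):
    """Repeatedly add the vertex of maximum remaining degree to the
    cover and delete its incident edges, until no edges remain."""
    remaining = set(frozenset(e) for e in edges)
    cover = set()
    while remaining:
        degree = {}
        for e in remaining:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)        # highest-degree vertex
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover
```

Every edge loses at least one endpoint when its covering vertex is chosen, so the result is always a valid cover, though not necessarily a minimum one.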
Discusses the seven common growth functions and the analysis of algorithms: experimental studies, primitive operations, and asymptotic notation (Big-Oh, Big-Omega, Big-Theta).
(Download is recommended to make the animations work)
EuroPython 2017 - PyData - Deep Learning your Broadband Network @ HOME, by Hongjoo Lee
A 45-minute talk about collecting home network performance measures, analyzing and forecasting time series data, and building an anomaly detection system.
In this talk, we go through the whole process of data mining and knowledge discovery. First we write a script that runs a speed test periodically and logs the metric. Then we parse the log data, convert them into a time series, and visualize the data for a certain period.
Next we conduct some data analysis: finding trends, forecasting, and detecting anomalous data. Several statistical and deep-learning techniques are used for the analysis, including ARIMA (Autoregressive Integrated Moving Average) and LSTM (Long Short-Term Memory).
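As a much simpler baseline than the ARIMA or LSTM models mentioned above, anomalies in such a series can be flagged against a rolling mean and standard deviation. A Python sketch; the window size and threshold are illustrative:

```python
import statistics

def rolling_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies
```

On a steady broadband-speed series, a sudden drop stands out immediately; the statistical models in the talk refine this idea by modelling trend and seasonality first.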
Slides cover understanding heap, heap properties, representation of heap, up-heap and down heap bubbling followed by Adaptable priority queues, list and heap implementation of priority queues.
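The up-heap and down-heap bubbling operations can be sketched as follows. A minimal Python min-heap for illustration, not the slides' own code:

```python
def up_heap(heap, i):
    """Bubble index i up until the min-heap property holds."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

def down_heap(heap, i=0):
    """Bubble index i down, always swapping with the smaller child."""
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def push(heap, x):
    heap.append(x)                 # insert at the last position...
    up_heap(heap, len(heap) - 1)   # ...then bubble up

def pop_min(heap):
    heap[0], heap[-1] = heap[-1], heap[0]   # move last element to the root
    x = heap.pop()
    if heap:
        down_heap(heap, 0)                  # restore the heap property
    return x
```

Both bubbling operations traverse at most one root-to-leaf path, so insertion and removal cost O(log n).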
The document discusses advanced topics in sorting algorithms, including selection, handling duplicate keys, comparators, and applications. It describes quick-select, an efficient linear-time algorithm for selection, and notes that mergesort handles duplicate keys better than quicksort, which can have quadratic time when there are many duplicate keys if partitioning does not stop on equal keys.
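Quick-select with a three-way partition, which also handles duplicate keys gracefully (the same remedy the summary mentions for quicksort), can be sketched in Python:

```python
import random

def quick_select(items, k):
    """Return the k-th smallest element (0-based) in expected O(n) time.
    The three-way partition groups keys equal to the pivot together,
    so runs of duplicates are resolved in a single step."""
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if k < len(less):
        return quick_select(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quick_select(greater, k - len(less) - len(equal))
```

Only one side of the partition is recursed into, which is why the expected cost is linear rather than the O(n log n) of a full sort.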
This document presents the πRT-calculus, a calculus for modeling mobile real-time processes. It extends the π-calculus with a timeout operator to model real-time aspects. The document covers the syntax and semantics of the π-calculus and πRT-calculus. It also discusses design choices like having a global clock and discrete time. An example of a mobile video streaming system is used to illustrate the πRT-calculus. The document concludes by discussing future work, like developing timed bisimulation and extending to continuous time.
Let's Practice What We Preach: Likelihood Methods for Monte Carlo Data, by Christian Robert
This document discusses methods for Monte Carlo data, including importance sampling and bridge sampling. It notes that for importance sampling, maximizing the likelihood does not result in an estimator for the estimand of interest, as the likelihood is independent of the estimand. Bridge sampling provides an estimating equation approach where the data from both sampling distributions are relevant to inferring the ratio of normalizing constants.
The document discusses vectorization in the ATLAS experiment. It begins by providing background on ATLAS and describing how exploiting SIMD (single instruction, multiple data) parallelism can improve performance. It then examines four approaches to vectorization: 1) hand-tuning numerical hotspots, 2) using vectorized linear algebra libraries, 3) relying on compiler auto-vectorization, and 4) using language extensions. For each approach, it provides examples and discusses advantages and limitations. The overall summary is that vectorization can provide significant performance gains but requires careful consideration of algorithms and data structures.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affects the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems, normally seen as unrelated, have challenging connections, since the priors proposed in the literature are specifically designed to have posterior modes on the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies on synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
The document discusses various data structures and algorithms for priority queues and sorting, including priority queues, heaps, heapsort, radix sort, and symbol tables. It provides definitions and implementations of priority queues using arrays and heaps, describes algorithms for heap operations like insertion and deletion, and explains how to implement sorting algorithms like heapsort and radix sort using priority queues. It also defines common operations for symbol tables and provides examples of implementations using arrays and binary search trees.
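The radix sort mentioned above can be sketched as stable least-significant-digit bucket passes. An illustrative Python version, not the document's own code:

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers: repeated stable
    bucket passes from least to most significant digit."""
    if not nums:
        return []
    # Number of digit passes needed for the largest key.
    digits, largest = 1, max(nums)
    while largest >= base:
        largest //= base
        digits += 1
    place = 1
    for _ in range(digits):
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // place) % base].append(x)   # stable within a bucket
        nums = [x for bucket in buckets for x in bucket]
        place *= base
    return nums
```

Because each pass is stable, earlier (less significant) orderings survive later passes, which is what makes the digit-by-digit approach correct.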
Big Data Day LA 2016/ Hadoop/ Spark/ Kafka track - Iterative Spark Developmen..., by Data Con LA
This presentation will explore how Bloomberg uses Spark, with its formidable computational model for distributed, high-performance analytics, to take this process to the next level, and look into one of the innovative practices the team is currently developing to increase efficiency: the introduction of a logical signature for datasets.
This document provides instructions for an assignment on Java programming II. It includes 4 questions: the first asks for code performing a linear search on a sample array and for the index position of a target number; the second asks for the time complexity of linear search and merge sort; the third asks to convert an iterative factorial method into a recursive one; the fourth provides an unsorted array and asks to sort it using selection sort and display the output.
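The assignment itself is in Java; as a hedged illustration, here are Python sketches of the three coding questions (1, 3, and 4), with invented sample data:

```python
def linear_search(arr, target):
    """Question 1: return the index of target, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def factorial(n):
    """Question 3: recursive version of the iterative factorial."""
    return 1 if n <= 1 else n * factorial(n - 1)

def selection_sort(arr):
    """Question 4: in-place selection sort."""
    for i in range(len(arr)):
        # Index of the smallest remaining element.
        smallest = min(range(i, len(arr)), key=arr.__getitem__)
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr
```

For question 2, linear search is O(n) in the worst case while merge sort is O(n log n) regardless of input order.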
Implementation of Efficiency CORDIC Algorithm for Sine & Cosine Generation, by IOSR Journals
Abstract: This paper presents an area-time efficient coordinate rotation digital computer (CORDIC) algorithm that completely eliminates the scale factor. A generalized micro-rotation selection technique based on high-speed most-significant-1 detection obviates the complex search algorithms for identifying the micro-rotations. The elementary angles are redefined to reduce the number of CORDIC iterations. Compared to existing recursive architectures, the proposed one has a 17% lower slice-delay product on a Xilinx Spartan XC2S200E device. The CORDIC processor provides the flexibility to manipulate the number of iterations depending on the accuracy, area, and latency requirements. Index Terms: coordinate rotation digital computer (CORDIC), cosine/sine, field-programmable gate array (FPGA), most-significant-1, recursive architecture.
This document surveys CORDIC algorithms for implementing trigonometric and other functions on FPGAs. It begins with an abstract describing the CORDIC algorithm and its use of shift-add operations to compute functions like trigonometric, hyperbolic, linear and logarithmic functions. It then provides more details on CORDIC theory and how it can be used for vector rotation, as well as discussing implementations for FPGAs. The goal is to survey commonly used CORDIC functions and how they can be implemented efficiently in reconfigurable logic.
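The core CORDIC rotation surveyed here can be sketched in Python. This is an illustrative floating-point version; hardware implementations use fixed-point arithmetic where each step is a genuine shift-add:

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC for |theta| <= pi/2 (radians).
    The vector starts pre-scaled by the CORDIC gain K, so no final
    scale-factor correction is needed."""
    k = 1.0
    for i in range(iterations):                 # K = prod 1/sqrt(1 + 2^-2i)
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0             # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)           # atan table is precomputed in HW
    return y, x                                 # (sin(theta), cos(theta))
```

Each micro-rotation multiplies by a power of two, which in hardware is a wire shift, so the whole evaluation needs no multiplier.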
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Opti..., by Fabian Pedregosa
The document proposes a new parallel method called Proximal Asynchronous Stochastic Gradient Average (ProxASAGA) for solving composite optimization problems. ProxASAGA extends SAGA to handle nonsmooth objectives using proximal operators, and runs asynchronously in parallel without locks. It is shown to converge at the same linear rate as the sequential algorithm theoretically, and achieves speedups of 6-12x on a 20-core machine in practice on large datasets, with greater speedups on sparser problems as predicted by theory.
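The proximal operator that lets such methods handle a nonsmooth L1 term is coordinate-wise soft-thresholding. A short Python sketch of this standard building block (not the paper's code):

```python
def prox_l1(v, step):
    """Proximal operator of step * ||.||_1: shrink each coordinate
    toward zero by `step`, clamping small coordinates to exactly zero."""
    return [max(abs(x) - step, 0.0) * (1.0 if x > 0 else -1.0) for x in v]
```

Because the operator acts on each coordinate independently, it is cheap to apply inside an asynchronous update on only the coordinates a sparse gradient touches.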
Bubble sort is a simple sorting algorithm that works by repeatedly swapping adjacent elements that are in the wrong order. It has a worst-case time complexity of Θ(n²) but runs in linear time in the best case, on nearly sorted lists. The algorithm makes multiple passes through an array, swapping adjacent elements that are out of order until the list is fully sorted.
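A Python sketch with the early-exit optimization that gives the linear best case on nearly sorted input:

```python
def bubble_sort(arr):
    """Repeatedly swap out-of-order neighbours; stop as soon as a full
    pass makes no swap, so an already-sorted list costs one O(n) pass."""
    for end in range(len(arr) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:
            break          # no swaps: the list is already sorted
    return arr
```

Each pass bubbles the largest remaining element to the end, so the inner loop can shrink by one position per pass.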
This document presents methods for computing information flow and quantifying information leakage in non-probabilistic programs using symbolic model checking. It discusses using binary decision diagrams (BDDs) and algebraic decision diagrams (ADDs) to represent program states and calculate fixed points. Algorithms are provided for symbolically computing min-entropy and Shannon entropy leakage by constructing ADDs representing the program summary and sets of possible outputs. The methods were implemented in a tool called Moped-QLeak and evaluated on example programs. Future work includes supporting recursive programs and using other symbolic verification approaches.
This document describes research on identifying extract method refactoring opportunities through program analysis and tool support. It discusses limitations of early work, such as not supporting extraction of objects or composite variables. It presents techniques to address these limitations, including handling behavior preservation issues and ensuring the complete computation of extracted variables. Evaluation results of tools are provided, showing improvements in precision and recall over time. The document also discusses how the work influenced the researcher's career and other researchers.
This presentation provides an overview of decision-making in organisations and introduces a new language called ESL that uses actors to create an executable model that can be analysed. A number of small examples of ESL are shown. The presentation concludes with a larger case study that addresses the recent demonetisation event in India.
Presentation of the paper "Conformance Checking of Executed Clinical Guidelines in presence of Basic Medical Knowledge" at the 4th International Workshop on Process-Oriented Information Systems in Healthcare (ProHealth’11).
Presentation of the paper "State-Boundedness in Data-Aware Dynamic Systems" at the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR 2014)
Invited talk on "Temporal logics over finite traces for declarative BPM" at the Workshop on Formal Methods for Artificial Intelligence (FMAI 2017), 22-24/02/2017, Naples, Italy
Invited Presentation on "DB-Nets: On The Marriage of Colored Petri Nets and Relational Databases" at the SOAMED 2016 Winter Retreat, 28-29/11/2016, Zeuthen (Berlin), Germany
- The document discusses verification of Data-Centric Dynamic Systems (DCDSs), which combine data and processes. DCDSs have a data layer consisting of a relational database and constraints, and a process layer consisting of condition-action rules that can call external services.
- The verification problem involves checking if a DCDS satisfies a temporal/dynamic property, given as a formula in first-order μ-calculus. However, unrestricted first-order quantification and even simple temporal properties can make verification undecidable.
- The talk proposes restricting quantification and properties to obtain decidable verification, by constructing a finite-state abstraction of the DCDS transition system that soundly and completely represents its behavior.
The document discusses modeling nonlinear digital integrated circuits (ICs) using system identification techniques. It explores using parametric models with different representations including local linear state-space models. Models are estimated from input-output port measurements and validated. Local linear state-space models provided the best results with good accuracy, a unique solution, and verified local stability, while also allowing efficient simulation. The models were successfully applied to simulate a mobile data link system.
This document presents an efficient convex hull algorithm for finding the convex hull of a planar set of points. The algorithm partitions the set of points into four regions based on the minimum and maximum x and y coordinates. It then finds the convex hull parts belonging to each region in parallel. Those parts are merged to derive the full convex hull. For each region, the algorithm uses a modified gradient concept to efficiently process the points and find the boundary points of the convex hull part. The algorithm achieves parallelism, data reduction, and has lower computational cost compared to traditional interior points algorithms. However, its drawbacks include difficulty extending it to higher dimensions and its static nature which requires the entire dataset from the beginning.
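The partition-and-reduce idea described above can be sketched in a few lines. This is an illustrative Python sketch of the data-reduction step only (the function names and the exact region test are ours, not the paper's): points inside the quadrilateral spanned by the four extreme points can never lie on the hull and are discarded, while the survivors are grouped into four corner regions that can be processed independently.

```python
def cross(o, a, b):
    """Cross product of OA x OB; negative means b lies strictly right of o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def partition_into_regions(points):
    """Split points into four hull-candidate regions; interior points are dropped."""
    xmin = min(points, key=lambda p: p[0])
    xmax = max(points, key=lambda p: p[0])
    ymin = min(points, key=lambda p: p[1])
    ymax = max(points, key=lambda p: p[1])
    # The four extreme points form a quadrilateral in counter-clockwise order;
    # they belong to the hull by construction.
    quad = [xmin, ymin, xmax, ymax]
    regions = {i: [] for i in range(4)}
    for p in points:
        for i in range(4):
            # Strictly right of edge i => outside the quadrilateral => hull
            # candidate for region i.
            if cross(quad[i], quad[(i + 1) % 4], p) < 0:
                regions[i].append(p)
                break
        # points inside (or on) the quadrilateral fall through and are
        # discarded: this is the data-reduction step
    return quad, regions
```

Each region can then be handed to a separate worker, and the per-region hull parts merged at the end, which is where the parallelism of the algorithm comes from.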
This document provides an introduction to VHDL (Very High Speed Integrated Circuits Hardware Description Language). VHDL is used to describe digital circuits and systems at different levels of abstraction. It allows modeling and verifying designs, and synthesizing circuits. The document describes VHDL entities and architectures, basic concepts like signals and variables, and provides an example of describing and simulating a simple combinational logic circuit in VHDL.
We introduce in this paper a new flow-driven and constraint-based approach for error localization. The input is a faulty program for which a counter-example and a postcondition are provided. To identify helpful information for error location, we generate a constraint system for the paths of the control flow graph in which at most k conditional statements may be erroneous. Then, we calculate Minimal Correction Sets (MCS) of bounded size for each of these paths. The removal of one of these sets of constraints yields a maximal satisfiable subset, in other words, a maximal subset of constraints satisfying the postcondition. To compute the MCS, we extend the algorithm proposed by Liffiton and Sakallah in order to handle programs with numerical statements more efficiently. The main advantage of this flow-driven approach is that the computed sets of suspicious instructions are small, each of them being associated with an identified path. Moreover, the constraint-programming based framework allows mixing Boolean and numerical constraints in an efficient and straightforward way. Preliminary experiments are quite encouraging.
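To convey what a Minimal Correction Set is, here is a naive bounded-size MCS enumeration over toy numerical constraints. This brute force is only meant to illustrate the notion; the paper extends Liffiton and Sakallah's far more efficient algorithm, and the constraints and domain below are our own example.

```python
from itertools import combinations, product

def satisfiable(constraints, domain):
    """Brute-force check: does some (x, y) in domain^2 satisfy all constraints?"""
    return any(all(c(*v) for c in constraints) for v in product(domain, repeat=2))

def bounded_mcs(constraints, domain, k):
    """Enumerate Minimal Correction Sets of size <= k, as tuples of indices.

    A set M is an MCS if removing M makes the remaining constraints
    satisfiable and no proper subset of M already does.
    """
    found = []
    for size in range(1, k + 1):
        for idx in combinations(range(len(constraints)), size):
            # Skip strict supersets of an already-found MCS: not minimal.
            if any(set(m) < set(idx) for m in found):
                continue
            rest = [c for i, c in enumerate(constraints) if i not in idx]
            if satisfiable(rest, domain):
                found.append(idx)
    return found

# Toy path constraints: x == 1, x == 2, x + y == 3 over x, y in 0..4.
# Removing either of the two conflicting assignments makes the rest satisfiable.
cs = [lambda x, y: x == 1, lambda x, y: x == 2, lambda x, y: x + y == 3]
mcs = bounded_mcs(cs, range(0, 5), k=1)  # -> [(0,), (1,)]
```

Each returned index set points at a small group of suspicious statements on one path, which is exactly the kind of localized diagnosis the approach aims for.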
Distributed Resilient Interval Observers for Bounded-Error LTI Systems Subjec..., by Mohammad Khajenejad
These are my presentation slides from the American Control Conference (ACC), held in May 2023 in San Diego. This was one of the talks in an invited session on "Resiliency and Privacy Throughout Networked Cyber-Physical Systems", which we co-organized and co-chaired during ACC 2023. In this invited session, great scholars presented their work at the intersection of resilient estimation, control, and learning, as well as on designing privacy-preserving mechanisms in multi-agent cyber-physical systems.
In my presentation, I discussed our progress on designing distributed interval-valued input and state observers for multi-agent LTI systems that are subject to bounded noise, as well as adversarial unknown inputs on both sensors and actuators. We introduced the notion of min-max consensus as a counterpart to average consensus in bounded-error settings. Moreover, we developed structural conditions for the stability of the proposed observers for different classes of dynamics and networks.
This document outlines the schedule and content for an advanced econometrics and Stata training course taking place from October 17-26, 2019 in Beijing, China. The course will cover topics including single and multi-regression, hypothesis testing, panel data models, time series models, stochastic frontier analysis, data envelopment analysis, and difference-in-differences. Data envelopment analysis will be the focus of sessions 13 and 14, covering concepts such as efficiency measurement, variable returns to scale, and incorporating environmental variables.
Recent years have seen the emergence of several static analysis techniques for reasoning about programs. This talk presents several major classes of techniques and tools that implement these techniques. Part of the presentation will be a demonstration of the tools.
Dr. Subash Shankar is an Associate Professor in the Computer Science department at Hunter College, CUNY. Prior to joining CUNY, he received a PhD from the University of Minnesota and was a postdoctoral fellow in the model checking group at Carnegie Mellon University. Dr. Shankar also has over 10 years of industrial experience, mostly in the areas of formal methods and tools for analyzing hardware and software systems.
Uniformity in the mechanical properties of the slab affects the quality of the subsequent rolling process. One of the most important factors determining slab quality is the fluctuation of the molten steel level in the mould. That is, smooth pouring without fluctuation of the mould level improves slab quality, prevents break-out problems, and allows a high-speed casting process. If the molten steel surface fluctuates severely, the oscillation marks forming on the slab are unstable, solidification of the molten steel is not uniform, and mould powder becomes entrapped in the solidified cast strand. This degrades slab quality and generates defects on the slab.
Software Testing: A Research Travelogue (2000–2014), by Alex Orso
The document discusses the major developments and contributions in the field of software testing research over the past 14 years from 2000 to 2014. It identifies the most significant research contributions as automated test input generation techniques like symbolic execution, search-based testing, and random testing. It also discusses important testing strategies, empirical studies and infrastructure developments, and practical contributions to software testing. Open challenges and opportunities discussed include testing real-world systems, handling oracles, testing non-functional properties, and leveraging cloud and crowd-based approaches.
Presentation of the paper "Specification and Verification of Commitment-Regulated Data-Aware Multiagent Systems" at the 29th Italian Conference on Computational Logic (CILC 2014)
The document discusses using algorithmic test generation to improve functional coverage in existing verification environments. It describes limitations of current constrained random stimuli generation techniques for complex designs. Algorithmic test generation uses rule graphs and action functions to efficiently target coverage goals without requiring extensive changes to verification environments. A case study shows algorithmic test generation achieved coverage goals over 600x faster than constrained random for an AXI bus bridge design while requiring minimal changes to the testbench.
Analysis of privacy-preserving and content-protecting location-based queries, by kavidhapr
This document proposes a two-stage solution for secure location-based queries that improves performance. The first stage uses oblivious transfer to privately determine the user's location within a public grid. The second stage uses private information retrieval for the user to efficiently retrieve an appropriate data block from the private grid. The solution introduces a formal security model and analyzes the security of the novel protocol. It aims to achieve privacy protection for both the user and server in location-based services.
Rodrigo Campos presented on Linux systems capacity planning. He discussed performance monitoring tools like Sysstat and common metrics like CPU usage. He explained concepts from queueing theory like utilization, Little's Law, and using modeling tools like PDQ to create what-if scenarios of system performance. Campos provided an example of modeling a web application using a customer behavior model to understand and optimize performance bottlenecks.
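The queueing concepts mentioned above can be illustrated with a toy what-if calculation. The single-server (M/M/1-style) formulas and the numbers below are our own example, not taken from the slides; PDQ automates this kind of modeling for more realistic systems.

```python
# Standard M/M/1-style quantities:
#   utilization    rho = arrival_rate * service_time
#   residence time R   = service_time / (1 - rho)   (service + queueing delay)
#   Little's Law   N   = arrival_rate * R           (avg. requests in system)

def what_if(arrival_rate, service_time):
    """Return (utilization, residence time, avg. requests in system)."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError("server saturated: utilization >= 1")
    residence = service_time / (1.0 - rho)
    in_system = arrival_rate * residence  # Little's Law
    return rho, residence, in_system

# What-if scenario: 50 req/s, 10 ms average service time -> rho = 0.5.
rho, R, N = what_if(50.0, 0.010)
```

Doubling the arrival rate in a scenario like this pushes rho toward 1 and makes the residence time blow up, which is precisely the kind of bottleneck behavior such what-if models are built to expose.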
Object detection is a central problem in computer vision and underpins many applications from medical image analysis to autonomous driving. In this talk, we will review the basics of object detection from fundamental concepts to practical techniques. Then, we will dive into cutting-edge methods that use transformers to drastically simplify the object detection pipeline while maintaining predictive performance. Finally, we will show how to train these models at scale using Determined’s integrated deep learning platform and then serve the models using MLflow.
What you will learn:
Basics of object detection including main concepts and techniques
Main ideas from the DETR and Deformable DETR approaches to object detection
Overview of the core capabilities of Determined’s deep learning platform, with a focus on its support for effortless distributed training
How to serve models trained in Determined using MLflow
Advanced course in logic and computation at ESSLLI 2017, by Calvanese and Montali, summarizing the main technical results obtained in our 6-year research on the verification of data-aware processes. Part 2/6: Data-Centric Dynamic Systems (DCDSs).
Privacy Preserving State Transitions on Ethereum, by Clearmatics
This document provides a summary of a presentation about implementing Zerocash-style private transactions on Ethereum called ZETH. It discusses using zero-knowledge proofs (zkSNARKs) to prove that transactions follow rules without revealing details of payments. ZETH would use a "mixer" smart contract to obscure transaction data and leverage Ethereum events for encrypted broadcasts. Further work is needed on MPC for the common reference string and optimizations before ZETH can move beyond proof-of-concept status. The presentation references several academic papers and code repositories relevant to implementing private transactions with zero-knowledge proofs on Ethereum.
Understanding How is that Adaptive Cursor Sharing (ACS) produces multiple Opt..., by Carlos Sierra
Adaptive Cursor Sharing (ACS) is a feature available since 11g. It is enabled by default. ACS can help to generate multiple non-persistent Optimal Execution Plans for a given SQL. But it requires a sequence of events for it to get truly activated. This presentation describes what is ACS, when it is used and when it is not. Then it demonstrates ACS capabilities and limitations with a live demo.
This session is about: how Adaptive Cursor Sharing (ACS) actually works; how a bind-sensitive cursor becomes bind-aware; what those "ACS buckets" are; how the "Selectivity Profile" works; why your SQL sometimes becomes bind-aware and sometimes does not; and how ACS interacts with SQL Plan Management (SPM). These and other questions about ACS are answered in detail.
Some live demonstrations are used to illustrate the ramp-up process of ACS and how some child cursors are created and then flagged as non-shareable. You will also "see" how the ACS Selectivity Profile adapts as new executions use predicates with new selectivities. ACS promotes plan flexibility while SPM promotes plan stability. Understanding how this duo interacts becomes of great value when some gentle intervention is needed to restore this delicate balance.
This session is for those developers and DBAs who "need" to understand how things work. ACS can be seen as a black box, or you can "look" inside and understand how it actually works. If you are curious about the ACS functionality, then this session sheds some light. Consider this session only if you are already familiar with cursor sharing, binds, plan stability, and plan flexibility.
D-Wave Quantum Computing: Access & applications via cloud deployment, by Rakuten Group, Inc.
D-Wave Systems provides quantum computing via cloud deployment. Their mission is to solve hard problems in AI/ML using their annealing-based quantum computers. They have developed hybrid quantum-classical algorithms and architectures. Their current product is the D-Wave 2000Q quantum processor with 2000 qubits. It uses superconducting qubits in a CMOS fabrication process. D-Wave has established the quantum effects of superposition, entanglement and tunneling on their processors. They are working to develop applications in finance, healthcare and machine learning using their quantum-classical hybrid approach.
Automated Discovery of Structured Process Models: Discover Structured vs Disc..., by Marlon Dumas
Research paper presentation at the 35th International Conference on Conceptual Modeling (ER'2016), Gifu, Japan, 15 Nov. 2016
Presentation delivered by Raffaele Conforti.
Paper available at: http://goo.gl/5EN3l2
Similar to Seminar@KRDB 2012 - Montali - Verification of Relational Data-Centric Systems with External Services (20)
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes, where objects correlate with and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process that involves interacting candidate, application, job offer, and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The Presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, conformance checking and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, but relying on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare that allows constraints to have uncertain probabilities.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities.
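A minimal worked example of this reasoning step, with constraint probabilities and the inconsistent scenario chosen by us purely for illustration: with two probabilistic constraints c1 (probability p1) and c2 (probability p2), there are four scenarios depending on which constraints hold, and if LTLf reasoning rules out the scenario where both hold, the system of equations pins down the full distribution.

```python
# Scenarios over {c1, c2}: S11 (both hold), S10 (only c1), S01 (only c2),
# S00 (neither), with unknown probabilities x11, x10, x01, x00 subject to:
#   x11 = 0                              (scenario logically inconsistent)
#   x10 + x11 = p1,  x01 + x11 = p2      (marginals match constraint probs)
#   x11 + x10 + x01 + x00 = 1            (distribution sums to 1)

def scenario_distribution(p1, p2):
    """Solve the (here exactly determined) system for the scenario probabilities."""
    x11 = 0.0                 # ruled out by the LTLf consistency check
    x10 = p1 - x11
    x01 = p2 - x11
    x00 = 1.0 - x11 - x10 - x01
    assert x00 >= 0, "constraint probabilities are jointly unsatisfiable"
    return {"both": x11, "only_c1": x10, "only_c2": x01, "neither": x00}

dist = scenario_distribution(0.8, 0.1)
```

In general, with more probabilistic constraints the system is a set of inequalities over exponentially many scenario variables and may admit many solutions rather than a unique one; this tiny case is just the shape of the computation.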
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires handling two possibly contrasting forces: the cost of the alignment on the one hand, and the likelihood of the model trace with respect to which the alignment is computed on the other.
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This covers the common situation where event data sit inside the data layer of an ERP or CRM system, usually handled with manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graphs). The approach is code-less and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest; (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm; (3) the annotation of the data model indicating which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data, which makes it extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs Semantic Web standard languages for the various steps, and the XES standard as the target format for the event logs.
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes (DEC2H 2019), in conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
Presentation at EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
More from Faculty of Computer Science - Free University of Bozen-Bolzano (20)
This presentation by Professor Giuseppe Colangelo, Jean Monnet Professor of European Innovation Policy, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by Katharine Kemp, Associate Professor at the Faculty of Law & Justice at UNSW Sydney, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
Gamify it until you make it: Improving Agile Development and Operations with ..., by Ben Linders
So many challenges, so little time. While we’re busy developing software and keeping it operational, we also need to sharpen the saw, but how? Gamification can be a way to look at how you’re doing and find out where to improve. It’s a great way to have everyone involved and get the best out of people.
In this presentation, Ben Linders will show how playing games with the DevOps coaching cards can help to explore your current development and deployment (DevOps) practices and decide as a team what to improve or experiment with.
The games that we play are based on an engagement model. Instead of imposing change, the games enable people to pull in ideas for change and apply those in a way that best suits their collective needs.
By playing games, you can learn from each other. Teams can use games, exercises, and coaching cards to discuss values, principles, and practices, and share their experiences and learnings.
Different game formats can be used to share experiences on DevOps principles and practices and explore how they can be applied effectively. This presentation provides an overview of playing formats and will inspire you to come up with your own formats.
• For a full set of 530+ questions, go to
https://skillcertpro.com/product/servicenow-cis-itsm-exam-questions/
• SkillCertPro offers detailed explanations for each question, which helps you understand the concepts better.
• It is recommended to score above 85% on SkillCertPro exams before attempting the real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro assures a 100% pass guarantee on the first attempt.
11 June 2024. An online pre-engagement session was organized on Tuesday, June 11, to introduce the Science Policy Lab approach and the main components of the conceptual framework.
About 40 experts from around the globe gathered online for a pre-engagement session, paving the way for the first SASi-SPi Science Policy Lab event scheduled for June 18-19, 2024 in Malmö. The session presented the objectives for the upcoming Science Policy Lab (S-PoL), which featured a role-playing game designed to simulate stakeholder interactions and policy interventions for food systems transitions. Participants called for the sharing of meeting materials and continued collaboration, reflecting a strong commitment to advancing towards sustainable agrifood systems.
1.) Introduction
Our Movement is not new; it is the same as it was for Freedom, Justice, and Equality since we were labeled as slaves. However, this movement at its core must entail economics.
2.) Historical Context
This is the same movement because none of the previous movements, such as boycotts, were ever completed. For some, maybe, but for the most part, it’s just a place to keep your stable until you’re ready to assimilate them into your system. The rest of the crabs are left in the world’s worst parts, begging for scraps.
3.) Economic Empowerment
Our Movement aims to show that it is indeed possible for the less fortunate to establish their own economic system. Everyone else – Caucasians, Asians, Mexicans, Israelis, Jews, etc. – has their own systems, and they all set up and usurp money from the less fortunate. So, the less fortunate buy from every one of them, yet none of them buy from the less fortunate. Moreover, the less fortunate really don’t have anything to sell.
4.) Collaboration with Organizations
Our Movement will demonstrate how organizations such as the National Association for the Advancement of Colored People, National Urban League, Black Lives Matter, and others can assist in creating a much more indestructible Black Wall Street.
5.) Vision for the Future
Our Movement will not settle for less than those who came before us and stopped before the rights were equal. The economy, jobs, healthcare, education, housing, incarceration – everything is unfair, and what isn’t is rigged for the less fortunate to fail, as evidenced in society.
6.) Call to Action
Our Movement has started and implemented everything needed for the advancement of the economic system. There are positions only for those who understand the importance of this movement, as failure to address it will continue the degradation of the people deemed less fortunate.
No, this isn’t Noah’s Ark, nor am I a Prophet. I’m just a man who wrote a couple of books, created a magnificent website: http://www.thearkproject.llc, and who truly hopes to initiate a sustainable economic system for deprived people. We may not all have the same beliefs, but if our methods are tried, tested, and proven, we can come together and help others. My website: http://www.thearkproject.llc is very informative and considerably controversial. Please check it out, and if you are afraid, leave immediately; it’s no place for cowards. The last Prophet said: “Whoever among you sees an evil action, then let him change it with his hand [by taking action]; if he cannot, then with his tongue [by speaking out]; and if he cannot, then with his heart – and that is the weakest of faith.” [Sahih Muslim] If we all, or even some of us, did this, there would be significant change. We are able to witness it on small and grand scales, for example, from climate control to business partnerships. I encourage, invite, and challenge you all to support me by visiting my website.
This presentation by OECD, OECD Secretariat, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by Tim Capel, Director of the UK Information Commissioner’s Office Legal Service, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
Seminar@KRDB 2012 - Montali - Verification of Relational Data-Centric Systems with External Services
1. Verification of Relational Data-Centric Dynamic Systems with External Services
Presented by Marco Montali
Joint work with B. Bagheri Hariri, D. Calvanese, G. De Giacomo, A. Deutsch
KRDB Research Center
KRDB Lunch Seminar, March 5, 2012
ACSI Project 257593
4. Artifact-Centric Process Verification in ACSI
• The Data is put on stage, and the Process considers it!
• And we get beautiful research results!
Verification of DCDSs ACSI Project 257593 M. Montali - KRDB 1/28
5. Data-Centric Dynamic Systems
• DCDS: system where the process controlling the dynamics and the
manipulated data are equally central
general, abstract framework
artifact-centric systems are a specialization of DCDSs
• Data Layer: holds the relevant information to be manipulated by the system
• Process Layer: is formed by the invokable (atomic) actions and a process
based on them
Actions evolve the data of the data layer and update the state of the process
Interaction with external services to acquire new information from the
environment (other systems, user choices, . . . )
[Figure: a DCDS, whose Process Layer invokes external services and operates over the Data Layer, interacting with the Environment]
6. Data Layer
• Represents the information of interest in our application.
• We focus on relational data.
• Is a tuple D = ⟨C, R, E, I0⟩ where:
C is a countably infinite set of constants/values.
R = {R1, . . . , Rn} is a database schema (a finite set of relation schemas).
E is a finite set of equality constraints Qi → ⋀_{j=1,...,k} zij = yij, where:
Qi is a domain-independent FO query over R using constants from the active
domain adom(I0) and whose free variables are x;
zij and yij are either variables in x or constants in adom(I0).
N.B.: they can be generalized to denials and arbitrary constraints!
I0 is a database instance that represents the initial state of the data layer:
it conforms to the schema R;
it satisfies the constraints E: for each constraint Qi → ⋀_{j=1,...,k} zij = yij and
for each tuple θ ∈ ans(Qi, I0), zijθ = yijθ.
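As an illustration of how such constraints are checked against an instance, here is a minimal sketch; the relation name and the functional constraint ("each hotel has a single currency") are hypothetical, chosen to echo the hotel example later in the talk, and the set-of-facts encoding is an assumption, not the seminar's formalism:

```python
# Instance encoded as a set of (relation, tuple) facts -- an assumed encoding.
# This instance violates the hypothetical single-currency constraint.
I0 = {("CurHotel", ("h1", "USD")), ("CurHotel", ("h1", "EUR"))}

def satisfies_single_currency(instance):
    # Checks the constraint CurHotel(h, c1) ∧ CurHotel(h, c2) → c1 = c2
    # by a single naive pass over the facts.
    currencies = {}
    for rel, (hotel, cur) in instance:
        if rel != "CurHotel":
            continue
        if hotel in currencies and currencies[hotel] != cur:
            return False  # two distinct currencies for the same hotel
        currencies.setdefault(hotel, cur)
    return True

print(satisfies_single_currency(I0))  # False: h1 is paired with USD and EUR
```

In the formalism, such a check is what "I0 satisfies the constraints E" amounts to on the initial instance.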
7. Process Layer
• Constitutes the progression mechanism for the DCDS.
• High-level: rule-based approach that can accommodate any process with a
finite state control flow.
• Non-determinism represented by interleaving.
• A process layer P over a data layer D is a tuple P = ⟨F, A, ϱ⟩, where:
F is a finite set of functions representing external service interfaces, whose
behavior is unknown to the DCDS;
A is a finite set of actions;
ϱ is a finite set of condition-action rules that forms the specification of the
overall process.
9. Actions
An action ρ is constituted by:
• an action signature, i.e., a name, and a list of individual input parameters.
Parameters are substituted by individuals/constants for the execution of the
action.
• an effect specification {e1, . . . , en}, where each ei is an effect.
Effects are assumed to take place simultaneously.
An effect ei has the form qi⁺ ∧ Qi⁻ ⇝ Ei where:
• qi⁺ ∧ Qi⁻ is a query over R and constants of adom(I0):
it may include some of the input parameters as terms of the query;
qi⁺ is a UCQ over R;
Qi⁻ is a FOL query that acts as a filter;
the free variables of Qi⁻ are included in those of qi⁺.
• Ei is a set of facts over R, which include as terms: constants in adom(I0),
parameters, free variables of qi⁺, and function calls that formalize calls to
(atomic) external services. These calls introduce new values!
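A rough sketch of how a single effect can be applied, assuming a naive set-of-facts database representation; the effect P(x) ⇝ {P(x), Q(f(x))} and the string-tagged stand-in for the service call f are illustrative assumptions:

```python
def apply_effect(db, match, make_facts):
    # match: db -> iterable of bindings θ for the effect's query part;
    # make_facts: binding θ -> grounded facts E·θ to assert in the next state.
    new = set()
    for theta in match(db):
        new |= make_facts(theta)
    return new

db = {("P", "a"), ("P", "b")}
# Effect P(x) ⇝ {P(x), Q(f(x))}; the call f(x) is mocked as the string "f(x)".
facts = apply_effect(
    db,
    lambda d: [{"x": v} for (r, v) in d if r == "P"],
    lambda th: {("P", th["x"]), ("Q", "f(" + th["x"] + ")")},
)
print(sorted(facts))
```

Note how the service-call terms in the generated facts are exactly where new values enter the system.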
10. Process and Action Execution
• Process: finite set of condition-action rules of the form Q → α, where
α is an action in A;
Q is a FO query over R whose free variables are exactly the parameters of α,
and whose other terms can be either quantified variables or constants in
adom(I0).
• To execute an action α in state s according to Q → α:
1. evaluate Q over db(s), choosing a suitable parameter assignment σ for α;
2. if such an assignment exists, then α is executable and is instantiated with σ;
3. ασ is executed: for each effect qi⁺ ∧ Qi⁻ ⇝ Ei
3.1 (qi⁺ ∧ Qi⁻)σ is evaluated over db(s), yielding variable assignments θ1, . . . , θn;
3.2 for each θi, the grounded facts Eiθi are obtained;
3.3 all service calls contained in Eiθi are issued;
3.4 the next state is obtained by asserting each Eiθi after the inclusion of all service
call results.
• We assume that services are deterministic:
if f(a) returns b, then from now on f(a) will always return b.
11. DCDS Example
Data Layer
Information about hotels and their prices:
• Cur(Currency)
• CurHotel(Hotel, Currency)
• PEntry(Hotel, Price, Date)
Process Layer
Possibility of converting the price list of a hotel from US dollars to another
currency.
• Service call for price conversion: conv_usd(p, currency, date)
• Process: Cur(c) ∧ CurHotel(h, USD) −→ Conv(h, c)
• Conv(h, c) :
PEntry(h, p, d) ∧ Cur(c) ⇝ PEntry(h, conv_usd(p, c, d), d)
CurHotel(h, cold) ∧ Cur(c) ⇝ CurHotel(h, c)
(the rest of the database is copied)
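The Conv action can be simulated with a small sketch; conv_usd is mocked here with an assumed fixed rate, precisely because the real service is external and its behavior is unknown to the DCDS:

```python
def conv_usd(p, c, d):
    # Stand-in for the external conversion service (assumed mock rate 0.9).
    return round(p * 0.9, 2)

def conv(pentries, cur_hotel, c):
    # Effect 1: every PEntry price is replaced by the service-call result.
    new_pentries = {(h, conv_usd(p, c, d), d) for (h, p, d) in pentries}
    # Effect 2: the hotel's currency is updated to the target currency c.
    return new_pentries, (cur_hotel[0], c)

pentries = {("h1", 100.0, "d1"), ("h1", 200.0, "d2")}
new_pentries, new_cur = conv(pentries, ("h1", "USD"), "EUR")
print(new_cur)  # ('h1', 'EUR')
```

Both effects fire simultaneously on the current state, matching the semantics of effect application above.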
15. Transition Systems
• Semantics of a DCDS in terms of a transition system Υ = ⟨∆, R, Σ, s0, db, ⇒⟩:
∆ is a countably infinite set of values;
R is a database schema;
Σ is a set of states;
s0 ∈ Σ is the initial state;
db is a function that, given a state s ∈ Σ, returns the database associated to
s, which is made up of values in ∆ and conforms to R;
⇒ ⊆ Σ × Σ is a transition relation between pairs of states.
[Figure: a transition system branching out of the initial state s0]
Note: Υ is in general infinite-state.
16. Transition System with Deterministic Services
• Semantics in terms of a concrete transition system ΥS = ⟨C, R, Σ, s0, db, ⇒⟩.
• Each state s ∈ Σ remembers all previous service call results: s = ⟨I, M⟩,
where I is a database and M is a map from service calls to results in C.
• db(⟨I, M⟩) = I.
• Action execution:
before issuing a service call, it is checked whether the (deterministic) result is
already contained in M;
if it is, then the result is already known;
if not, the call is issued and the obtained result is stored in M.
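The memoization of deterministic service calls through the map M can be sketched as follows; the fresh-value counter is a stand-in for whatever the environment would answer:

```python
import itertools

fresh = itertools.count(100)  # stand-in for values chosen by the environment

def call_service(M, f, args):
    key = (f, args)
    if key not in M:          # first time this call is issued:
        M[key] = next(fresh)  # store the (nondeterministically chosen) result
    return M[key]             # later calls reuse the remembered result

M = {}
r1 = call_service(M, "f", ("a",))
r2 = call_service(M, "f", ("a",))
print(r1 == r2)  # True: f(a) always returns the same value
```

This is exactly what makes the services deterministic: once f(a) has returned a value, every later occurrence of f(a) is resolved from M rather than re-issued.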
18. Construction of ΥS
• start from s0 = ⟨I0, ∅⟩ ∈ Σ;
• repeat forever:
pick a state s ∈ Σ;
for every action executable in s, every “legal” parameter assignment, and every
possible configuration of service call results. . .
generate the successor state s′ and put it in Σ;
insert ⟨s, s′⟩ in ⇒.
Note: three sources of non-determinism in each step:
• action non-determinism;
• parameter non-determinism;
• service call result non-determinism: leads to potentially infinite branching.
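The construction loop can be sketched as a breadth-first exploration; since ΥS is in general infinite, the sketch cuts off at an arbitrary state bound, and the toy successor relation merely mimics the nondeterministic branching (it is not a real DCDS semantics):

```python
from collections import deque

def build_ts(s0, successors, max_states=50):
    # Explore states breadth-first; successors(s) enumerates all choices of
    # (action, parameter assignment, service-call results) in one stand-in.
    states, edges = {s0}, set()
    frontier = deque([s0])
    while frontier and len(states) < max_states:
        s = frontier.popleft()
        for s2 in successors(s):
            edges.add((s, s2))
            if s2 not in states:
                states.add(s2)
                frontier.append(s2)
    return states, edges

# Toy branching: each state n has successors n+1 and n*2.
states, edges = build_ts(1, lambda n: [n + 1, n * 2], max_states=10)
print(len(states))
```

Without the `max_states` cut-off the loop would run forever, which is the point of the slide: the concrete transition system cannot, in general, be built exhaustively.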
19. Example
Consider a DCDS S = ⟨D, P⟩ with:
• Data layer D = ⟨C, R, E, I0⟩, where I0 = {P(a), Q(a, a)}
• Process layer P = ⟨F, A, ϱ⟩, where
F = {f/1, g/1}, A = {α}, ϱ = {true → α}
α : { Q(a, a) ∧ P(x) ⇝ {R(x)},
P(x) ⇝ {P(x), Q(f(x), g(x))} }
[Figure: the fragment of the transition system rooted in {P(a), Q(a, a)}; each successor corresponds to one choice of service call results, e.g., f(a)→b, g(a)→b yields {P(a), R(a), Q(b, b)}, f(a)→a, g(a)→a yields {P(a), R(a), Q(a, a)}, f(a)→c, g(a)→b yields {P(a), R(a), Q(c, b)}, and so on, with infinitely many possible successors]
20. Verification
• We are interested in the verification of DCDSs
• Given a DCDS S and a formula Φ (in some FO temporal logic)
Construct the transition system ΥS of S
Check whether ΥS |= Φ
• Problem: ΥS is in general infinite state
• Idea:
Devise a finite-state transition system ΘS that is a faithful abstraction of ΥS ,
Reduce the verification problem ΥS |= Φ to the verification of ΘS |= Φ
• Procedure:
1. Do a syntactic check over S testing whether ΥS admits a finite-state
abstraction ΘS
2. If so, construct ΘS
3. Model check Φ over ΘS with standard model checking techniques
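Step 3 relies on standard model checking. As a toy illustration of the underlying machinery (not the actual tooling), a propositional least fixpoint µZ.(p ∨ ⟨−⟩Z) — "p is reachable" — can be evaluated over a finite transition system by Kleene iteration:

```python
def diamond(edges, S):
    # ⟨−⟩S: states with at least one successor in S.
    return {u for (u, v) in edges if v in S}

def eventually(edges, p_states):
    # Least fixpoint of Z ↦ p ∨ ⟨−⟩Z, computed by iteration from ∅.
    Z = set()
    while True:
        new = p_states | diamond(edges, Z)
        if new == Z:
            return Z
        Z = new

edges = {(0, 1), (1, 2), (3, 3)}
print(eventually(edges, {2}))  # states from which 2 is reachable
```

On a finite abstraction ΘS, every µ-calculus fixpoint stabilizes after finitely many such iterations, which is what makes the reduction in step 3 effective.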
24. Verification Formalism
• We adopt FO variants of the µ-calculus.
µL is a very expressive logic!
• µLFO formulas over a DCDS S have the form:
Φ ::= Q | ¬Φ | Φ1 ∧ Φ2 | ∃x.Φ | ⟨−⟩Φ | Z | µZ.Φ
where Q is an FO query over the database schema
R of S, and Z is a predicate variable.
Example
An example of µLFO formula is:
∃x1, . . . , xn. ⋀_{i≠j} xi ≠ xj ∧ ⋀_{i∈{1,...,n}} µZ.(Stud(xi) ∨ ⟨−⟩Z)
This defeats any kind of finite-state abstraction.
[Figure: expressiveness hierarchy relating HML, LTL, PDL, CTL, µL, µLA, and µLFO]
25. History Preserving Mu-Calculus (µLA)
• Restricts quantification over individuals to those present in the current
database (denoted by live(x)).
• Syntax:
Φ ::= Q | ¬Φ | Φ1 ∧ Φ2 | ∃x.live(x) ∧ Φ | ⟨−⟩Φ | Z | µZ.Φ
• We abbreviate ¬(∃x.live(x) ∧ ¬Φ) as ∀x.live(x) → Φ.
Example
νX.(∀x.live(x) ∧ Stud(x) →
µY.(∃y.live(y) ∧ Grad(x, y) ∨ ⟨−⟩Y ) ∧ [−]X)
Along every path, it is always true, for each student x, that there exists an
evolution that eventually leads to a graduation of the student (with some final
mark y).
27. History Preserving Bisimulation for µLA
• Consider two transition systems Υ1 = ⟨∆1, R, Σ1, s01, db1, ⇒1⟩ and
Υ2 = ⟨∆2, R, Σ2, s02, db2, ⇒2⟩.
• Problem: Υ1 and Υ2 are over different data domains ∆1 and ∆2, and a
correspondence between elements must be preserved over time.
• A history preserving bisimulation is a relation B ⊆ Σ1 × H × Σ2 such that
⟨s1, h, s2⟩ ∈ B implies that:
1. h is a partial bijection between ∆1 and ∆2 that induces an isomorphism
between db1(s1) and db2(s2);
2. for each s′1, if s1 ⇒1 s′1 then there is an s′2 with s2 ⇒2 s′2 and a bijection h′
that extends h, such that ⟨s′1, h′, s′2⟩ ∈ B;
3. for each s′2, if s2 ⇒2 s′2 then there is an s′1 with s1 ⇒1 s′1 and a bijection h′
that extends h, such that ⟨s′1, h′, s′2⟩ ∈ B.
• We have Υ1 ≈ Υ2 if there exists a partial bijection h0 and a history
preserving bisimulation B between Υ1 and Υ2 such that ⟨s01, h0, s02⟩ ∈ B.
Theorem
If Υ1 ≈ Υ2, then for every µLA closed formula Φ, we have:
Υ1 |= Φ if and only if Υ2 |= Φ.
30. (Un)decidability Results
Theorem
There exists a DCDS S with deterministic services, and a propositional LTL safety
property Φ, such that checking ΥS |= Φ is undecidable.
To gain decidability, we need restrictions on the DCDS: run-boundedness
• For every run τ in ΥS we have |⋃_{s state of τ} adom(db(s))| < b.
• I.e., there exists a bound b such that every run in ΥS encounters at most b
different values.
• A (data) unbounded run represents an execution in which infinitely many
different service calls are issued.
Theorem
Verification of µLA properties on run-bounded DCDSs with deterministic services
is decidable.
1. We devise a finite-state abstraction ΘS for a run-bounded DCDS S.
2. We prove that ΘS ≈ ΥS, hence they satisfy the same µLA formulae.
33. Ensuring Run-Boundedness
Theorem
Checking run-boundedness of DCDSs with deterministic services is undecidable.
We have devised a sufficient syntactic condition that guarantees run-boundedness:
weak acyclicity
• Depends only on action specifications, and not on data.
• Is polynomially checkable.
Theorem
Verification of µLA properties for weakly acyclic DCDSs with deterministic
services is decidable, and can be reduced to model checking of propositional
µ-calculus over a finite transition system.
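The flavor of such a syntactic condition can be conveyed with a simplified stand-in (this is not the precise weak-acyclicity definition, which works on relation positions): build a dependency graph whose "special" edges mark places fed by fresh service-call values, and reject any cycle traversing a special edge:

```python
def has_special_cycle(nodes, edges):
    # edges: set of (u, v, special) triples; special=True means the edge
    # introduces fresh values via a service call rather than copying them.
    adj = {n: [] for n in nodes}
    for u, v, _ in edges:
        adj[u].append(v)

    def reaches(a, b):
        # Depth-first search: is b reachable from a?
        seen, stack = set(), [a]
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(adj[n])
        return False

    # A special edge u→v lying on a cycle means fresh values can feed back.
    return any(sp and reaches(v, u) for u, v, sp in edges)

# R(x) ⇝ Q(f(x)) gives a special edge R→Q; Q(x) ⇝ Q(x) an ordinary self-loop.
print(has_special_cycle({"R", "Q"}, {("R", "Q", True), ("Q", "Q", False)}))
```

In this simplified reading, a special edge inside a cycle corresponds to an action that can keep pumping fresh service-call values back into its own input, which is what run-boundedness rules out.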
36. DCDS Example
Data Layer
[Figure: schema diagram with Customer, In-Debt Customer, Gold Customer, and Loan, connected by the relations closed, owes, and peer]
Instance
Cust(ann)
peer(mark, john)
Gold(john)
owes(mark, @25)
Process Layer
Conditions
peer(x, y) ∧ Gold(y)
−→ GetLoan(x)
Service Calls
UInput(x)
Actions
GetLoan(x) :
∃y.peer(x, y) {owes(x, UInput(x))},
Cust(z) {Cust(z)},
Loan(z) {Loan(z)},
InDebt(z) {InDebt(z)},
Gold(z) {Gold(z)}
40. Non-Deterministic Services
• The same service call issued later may give a different result.
• Can be used to model:
user input;
unpredictable external environment.
Theorem
There exists a DCDS S with nondeterministic services, and a propositional LTL
safety property Φ, such that checking ΥS |= Φ is undecidable.
To gain decidability, we need again restrictions on the DCDS:
• Run-boundedness is too strong: it bounds the overall number of service calls.
• We consider instead state boundedness: there is a finite bound b such that
for each state I of ΥS, |adom(I)| < b.
However . . .
Theorem
Verification of µLA properties on state-bounded DCDSs with nondeterministic
services is undecidable.
42. Persistence Preserving Mu-Calculus (µLP)
• Restricts quantification to individuals that continuously persist along the
system evolution, i.e., that continue to be live(x).
• Syntax:
Φ ::= Q | ¬Φ | Φ1 ∧ Φ2 | ∃x.live(x) ∧ Φ |
⟨−⟩(live(x) ∧ Φ) | [−](live(x) ∧ Φ) | Z | µZ.Φ
where in ⟨−⟩(live(x) ∧ Φ) and [−](live(x) ∧ Φ), the free variables of Φ are x.
• We abbreviate ¬[−](live(x) ∧ ¬Φ) as ⟨−⟩(live(x) → Φ)
and ¬⟨−⟩(live(x) ∧ ¬Φ) as [−](live(x) → Φ).
Example
νX.(∀x.live(x) ∧ Stud(x) →
µY.(∃y.live(y) ∧ Grad(x, y) ∨ ⟨−⟩(live(x) ∧ Y )) ∧ [−]X)
Along every path, it is always true, for each student x, that there exists an
evolution in which x persists in the database until she eventually graduates.
44. Persistence Preserving Mu-Calculus (µLP)
Example
νX.(∀x.live(x) ∧ Stud(x) →
µY.(∃y.live(y) ∧ Grad(x, y) ∨ ⟨−⟩(live(x) → Y )) ∧ [−]X)
Along every path, it is always true, for each student x, that there exists an
evolution in which either x does not persist, or she eventually graduates.
46. Persistence Preserving Bisimulation for µLP
• Consider two transition systems Υ1 = ⟨∆1, R, Σ1, s01, db1, ⇒1⟩ and
Υ2 = ⟨∆2, R, Σ2, s02, db2, ⇒2⟩.
• W.r.t. µLA, it is now sufficient to preserve the correspondence between
elements of ∆1 and ∆2 that persist over time.
• A persistence preserving bisimulation is a relation B ⊆ Σ1 × H × Σ2 such
that ⟨s1, h, s2⟩ ∈ B implies that:
1. h is an isomorphism between db1(s1) and db2(s2);
2. for each s′1, if s1 ⇒1 s′1 then there exists an s′2 with s2 ⇒2 s′2 and a bijection
h′ that extends h|_{adom(db1(s1)) ∩ adom(db1(s′1))}, such that ⟨s′1, h′, s′2⟩ ∈ B;
3. for each s′2, if s2 ⇒2 s′2 then there exists an s′1 with s1 ⇒1 s′1 and a bijection
h′ that extends h|_{adom(db1(s1)) ∩ adom(db1(s′1))}, such that ⟨s′1, h′, s′2⟩ ∈ B.
• We have Υ1 ∼ Υ2 if there exists a partial bijection h0 and a persistence
preserving bisimulation B between Υ1 and Υ2 such that ⟨s01, h0, s02⟩ ∈ B.
Theorem
If Υ1 ∼ Υ2, then for every µLP closed formula Φ, we have:
Υ1 |= Φ if and only if Υ2 |= Φ.
48. Verification of State-Bounded Systems
Theorem
Verification of µLP properties on state-bounded DCDSs with non-deterministic
services is decidable.
1. We devise a finite-state abstraction ΘS for a state-bounded DCDS S.
2. We prove that ΘS ∼ ΥS, hence they satisfy the same µLP formulae.
However . . .
Theorem
Checking state-boundedness of DCDSs with non-deterministic services is
undecidable.
50. Ensuring State-Boundedness
We have devised a sufficient syntactic condition that guarantees
state-boundedness: generate-recall acyclicity
• Depends only on action specifications, and not on data.
• Is polynomially checkable.
Theorem
Verification of µLP properties for generate-recall acyclic DCDSs with
non-deterministic services is decidable, and can be reduced to model checking of
propositional µ-calculus over a finite transition system.
51. Generate-Recall Acyclicity
Example
Consider I0 = {R(a)}, process {true −→ α}, and action
α = { R(x) ⇝ R(x),
R(x) ⇝ Q(f(x)) }
• The resulting process layer is generate-recall acyclic.
• Service call f(a) is continuously issued, possibly generating infinitely
many distinct values, but such values are not recalled.
• Since µLP formulae only focus on persisting values, the possibly infinitely
many distinct results obtained by issuing f(a) are irrelevant.
52. Generate-Recall Acyclicity
Example
Consider I0 = {R(a)}, process {true −→ α}, and action
α = { R(x) ⇝ R(x),
R(x) ⇝ Q(f(x)),
Q(x) ⇝ Q(x) }
• The resulting process layer is not generate-recall acyclic.
• Service call f(a) is continuously issued, possibly generating infinitely
many distinct values, which are recalled in Q.
• It is not possible to find a faithful finite abstraction, because there exist runs
accumulating unboundedly many distinct values inside their states.
53. Generate-Recall Acyclicity
Example
Consider I0 = {R(a)}, process {true −→ α, true −→ β}, and actions
α = { R(x) ⇝ R(x),
R(x) ⇝ Q(f(x)) }
and β = { Q(x) ⇝ Q(x) }
• The resulting process layer is generate-recall acyclic.
• It resembles the previous case, but now the effects belong to two distinct actions.
• While α is getting a new value for f(a), the Q tuples are lost.
• While β is copying the Q tuples, no new value is inserted.
54. Summary of Results on Verification of DCDSs
deterministic services:
               µLFO   µLA   µLP
unrestricted    U      U     U
run-bounded     ?      D     D

nondeterministic services:
               µLFO   µLA   µLP
unrestricted    U      U     U
state-bounded   U      U     D

D: verification is decidable    U: verification is undecidable