Invited seminar on "Online Monitoring of Business Constraints and Metaconstraints using LTL and LDL over Finite Traces" given at the University of Luxembourg on January 16, 2015.
Status, Potential and Constraints in e-business application in Bangladesh (Manas Saha)
This document discusses the status, potential, and constraints of e-business applications in Bangladesh. It provides information on the present status of e-business in Bangladesh, including the types of e-marketing suitable in Bangladesh and names of e-marketing companies operating in the country. It also discusses the SWOT analysis, approaches, and marketing processes of e-business in Bangladesh. Some of the key potential benefits identified include improved infrastructure, increased internet and mobile users, and expanded marketplace. However, challenges include limited access to technology, high costs, security issues, and lack of modern financial systems and regulations. Suggestions are provided to increase training, awareness, and support for e-business development.
The document discusses monitoring business constraints over finite traces using the logic LDLf. It begins by introducing the goals of monitoring constraints online and detecting deviations promptly. It then discusses two problems: 1) how to refine the semantics for proactive monitoring, and 2) how to capture contextual constraints. The document proposes LDLf, a logic that combines LTL with regular expressions, to address these problems, and explains how LDLf monitors can check partial traces against constraints and assign monitoring values like "temporarily true" by translating the problem into standard LDLf semantics.
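To make the monitoring idea concrete, here is a minimal sketch in the spirit of the approach: a constraint compiled into a small automaton, with a four-valued verdict computed from the current state and what is still reachable. The example constraint ("every 'a' is eventually followed by 'b'"), the state names, and the verdict labels are illustrative assumptions, not the paper's actual encoding.

```python
# Illustrative monitor: DFA for "every 'a' is eventually followed by 'b'".
ACCEPTING = {"s0"}          # s0: no pending obligation
DELTA = {
    ("s0", "a"): "s1", ("s0", "b"): "s0",
    ("s1", "a"): "s1", ("s1", "b"): "s0",
}
ALPHABET = {"a", "b"}

def reachable(start):
    """All automaton states reachable from `start` under any future events."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for ev in ALPHABET:
            t = DELTA[(s, ev)]
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def monitor(trace, state="s0"):
    """Run the partial trace, then classify the verdict by reachability."""
    for ev in trace:
        state = DELTA[(state, ev)]
    reach = reachable(state)
    ok_now = state in ACCEPTING
    # Can a future continuation flip the current verdict?
    can_change = (reach - ACCEPTING) if ok_now else (reach & ACCEPTING)
    if ok_now:
        return "temporarily satisfied" if can_change else "permanently satisfied"
    return "temporarily violated" if can_change else "permanently violated"

print(monitor(["a", "b"]))   # obligation discharged, but a new 'a' could arrive
print(monitor(["a"]))        # pending 'b': violated for now, still repairable
```

The four verdicts mirror the monitoring values mentioned above: a verdict is "permanent" exactly when no continuation of the partial trace can change it.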
Introductory course on concepts used in predictive control. For more files and MATLAB supporting information go to:
http://controleducation.group.shef.ac.uk/OER_index.htm
The document discusses concurrent data structures and non-blocking synchronization. It introduces lock-free and wait-free synchronization as alternatives to lock-based synchronization. It also describes the NOBLE interface, which aims to make non-blocking synchronization more accessible to parallel programmers by providing efficient and portable non-blocking implementations of common data structures. Experimental results show that replacing locks with non-blocking synchronization can improve performance of parallel scientific applications.
This document discusses non-blocking synchronization as an alternative to lock-based synchronization for parallel applications. It begins by asking whether non-blocking synchronization can provide performance benefits over lock-based approaches for scientific applications. It then describes NOBLE, a non-blocking synchronization interface designed to make non-blocking techniques more accessible to parallel programmers. The document concludes by discussing evaluations of applications modified to use non-blocking synchronization and the performance improvements observed.
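The core pattern behind lock-free structures like those in NOBLE is the compare-and-swap (CAS) retry loop: read shared state, compute a new value, attempt an atomic swap, and retry on conflict. A hedged sketch follows; since Python exposes no hardware CAS, `compare_and_swap` is simulated with a short internal lock — the point is the usage pattern, not the primitive, and none of this is NOBLE's actual API.

```python
import threading

class AtomicRef:
    """Simulated atomic reference; the lock stands in for hardware CAS."""
    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value is expected:
                self._value = new
                return True
            return False

class LockFreeStack:
    """Treiber-style stack: nodes are (item, next) pairs hung off one ref."""
    def __init__(self):
        self._top = AtomicRef(None)

    def push(self, item):
        while True:                      # retry until our CAS wins
            old = self._top.load()
            if self._top.compare_and_swap(old, (item, old)):
                return

    def pop(self):
        while True:
            old = self._top.load()
            if old is None:
                return None              # empty stack
            item, rest = old
            if self._top.compare_and_swap(old, rest):
                return item

s = LockFreeStack()
s.push(1)
s.push(2)
print(s.pop())   # 2 (LIFO order)
```

No thread ever holds a lock across the whole operation: a stalled thread cannot block others, which is the progress property that distinguishes non-blocking from lock-based synchronization.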
PhD Thesis Defense Presentation: Robust Low-rank and Sparse Decomposition for... (ActiveEon)
Thesis submitted by Andrews Cordolino Sobral at Université de La Rochelle to fulfill the degree of Doctor of Philosophy.
Robust Low-rank and Sparse Decomposition for Moving Object Detection - From Matrices to Tensors
Towards agile formal methods
The main goal of this work is to overcome the aforementioned limitations by enabling automated decision gates in performance testing of microservices that allow requirements traceability. We seek to achieve this goal by endowing common agile practices used in microservice performance testing with the ability to automatically learn and then formally verify a performance model of the System Under Test (SUT), achieving strong assurances of quality. Even though the separation between agile and formal methods has increased over the years, we support the claim that formal methods are at a stage where they can be effectively incorporated into agile methods to give them rigorous engineering foundations and make them systematic and effective with strong guarantees.
This document outlines the requirements for the capstone term project in ME 3012 Systems Analysis & Control. Students will work individually or in pairs to design a feedback controller to improve the performance of a real-world system. They must choose a system, perform open-loop modeling and analysis, design at least three feedback controllers, compare closed-loop performance, and present results. An interim status report is due on February 23rd and the final written report and oral presentation are due on April 14th. The project aims to provide hands-on experience with feedback controller design for real-world applications.
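The kind of closed-loop comparison the project asks for can be sketched in a few lines: a plant model, a feedback law, and a simulation loop. The first-order plant dx/dt = -x + u, the Euler discretization, and the gains below are invented for illustration, not taken from the course.

```python
def simulate(kp, setpoint=1.0, dt=0.01, steps=1000):
    """Proportional control of the plant dx/dt = -x + u via Euler steps."""
    x = 0.0
    for _ in range(steps):
        u = kp * (setpoint - x)          # feedback law: u = Kp * error
        x += dt * (-x + u)               # one Euler step of the plant
    return x

# A pure P controller leaves a steady-state error of 1/(1 + Kp):
print(round(simulate(kp=1.0), 3))   # ~0.5
print(round(simulate(kp=9.0), 3))   # ~0.9
```

Comparing such closed-loop responses across controller designs (P vs. PI vs. lead-lag, say) is exactly the "design at least three controllers and compare performance" step of the project.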
This document is a master's thesis submitted by Jérémy Pouech at KTH Royal Institute of Technology in Stockholm, Sweden in 2015. The thesis proposes an algorithm to make failure detection and classification for industrial robots more generic and semi-automatic. The algorithm uses machine learning to analyze sensory data recorded during robot operations and learn to differentiate between success and failure scenarios. It clusters the training data using OPTICS clustering and extracts labels to learn a classification function to detect failures in new data. The goal is to allow operators without programming skills to teach failure detection to robots. The thesis applies this framework to detect failures in an assembly task performed by an ABB YuMi robot.
This document discusses Critical Path Method (CPM) network analysis and problems. It provides an introduction to CPM, describing how it was developed and the key differences between CPM and PERT. The document then presents benefits and applications of CPM, limitations, basic steps, how to represent a network diagram, and defines key terms. It concludes with an example problem demonstrating how to identify the critical path through a network.
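The critical-path computation the summary describes is a forward pass (earliest start times), a backward pass (latest start times), and a zero-slack test. A small worked example, with an activity network made up for illustration:

```python
DURATION = {"A": 3, "B": 2, "C": 4, "D": 2}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def critical_path(duration, pred):
    # Topological order by repeated selection (fine for tiny networks).
    order, remaining = [], dict(pred)
    while remaining:
        for act, ps in list(remaining.items()):
            if all(p in order for p in ps):
                order.append(act)
                del remaining[act]
    # Forward pass: earliest start = max over predecessors' finishes.
    early = {}
    for act in order:
        early[act] = max((early[p] + duration[p] for p in pred[act]), default=0)
    finish = max(early[a] + duration[a] for a in order)
    # Backward pass: latest start = min over successors' latest starts - own duration.
    succ = {a: [b for b in order if a in pred[b]] for a in order}
    late = {}
    for act in reversed(order):
        late[act] = min((late[s] - duration[act] for s in succ[act]),
                        default=finish - duration[act])
    # Critical activities have zero slack (earliest start == latest start).
    return [a for a in order if early[a] == late[a]], finish

path, makespan = critical_path(DURATION, PRED)
print(path, makespan)    # ['A', 'C', 'D'] 9
```

Here B has slack (it can slip 2 time units without delaying D), so the critical path runs A → C → D with a 9-unit makespan.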
To hit Ruby3x3, we must first figure out **what** we're going to measure and **how** we're going to measure it, in order to get what we actually want. I'll cover some standard definitions of benchmarking in dynamic languages, as well as the tradeoffs that must be made when benchmarking. I'll look at some of the possible benchmarks that could be considered for Ruby 3x3, and evaluate them for what they're good for measuring, and what they're less good for measuring, in order to help the Ruby community decide what the 3x goal is going to be measured against.
Crash course on data streaming (with examples using Apache Flink) (Vincenzo Gulisano)
These are the slides I used for a crash course (4 hours) on data streaming. They cover both theory and research aspects as well as examples based on Apache Flink (DataStream API).
Software Testing: Test Design and the Project Life Cycle (Derek Callaway)
The software development process consists of an indeterminate number of fundamental steps that together comprise the project life cycle. All of these steps carry out software testing in one form or another. Some organizations have an entire team delegated exclusively to software testing. (Royer 16-17,20) As a result, a substantial amount of a software development project’s budget is allocated solely toward testing. This establishes the need to utilize formal techniques in order to trim cost. (Amman and Black, Coverage 20) Such techniques are the subject of an ample amount of scholarly investigation and are generally classified into two complementary integration approaches (top-down and bottom-up) and fall into one of a pair of distinct methods (black-box and white-box). In this report, the distinguishing characteristics and merits of each are presented, as well as their relative disadvantages and ways to mitigate their limitations.
Keynote: Building and Operating A Serverless Streaming Runtime for Apache Bea... (Flink Forward)
Apache Beam is Flink's sibling in the Apache family of stream processing frameworks. The Beam and Flink teams work closely together on advancing what is possible in stream processing, including Streaming SQL extensions and code interoperability on both platforms.
Beam was originally developed at Google as the amalgamation of its internal batch and streaming frameworks to power exabyte-scale data processing for Gmail, YouTube, and Ads. It now powers Google Cloud Dataflow, a fully managed, serverless service, and is also available to run in other public clouds and on-premises when deployed in portability mode on Apache Flink, Spark, Samza, and other runners. Users regularly run distributed data processing jobs on Beam spanning tens of thousands of CPU cores and processing millions of events per second.
In this session, Sergei Sokolenko, Cloud Dataflow product manager, and Reuven Lax, the founding member of the Dataflow and Beam team, will share Google’s learnings from building and operating a global streaming processing infrastructure shared by thousands of customers, including:
safe deployment to dozens of geographic locations,
resource autoscaling to minimize processing costs,
separating compute and state storage for better scaling behavior,
dynamic work rebalancing of work items away from overutilized worker nodes,
offering a throughput-optimized batch processing capability with the same API as streaming,
grouping and joining of hundreds of terabytes in a hybrid in-memory/on-disk file system,
integrating with the Google Cloud security ecosystem, and other lessons.
Customers benefit from these advances through faster execution of jobs, resource savings, and a fully managed data processing environment that runs in the Cloud and removes the need to manage infrastructure.
The document discusses MTBF (mean time between failures), including how to calculate, predict, and test it. It addresses common misconceptions about MTBF and describes a two-day training plan that covers the basics of MTBF as well as how to analyze MTBF reports and predictions. The training provides answers to questions and considers reliability modeling techniques to estimate component and system-level MTBF.
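The basic arithmetic behind the training is short enough to show directly. The figures below are made up for illustration; the second step uses the standard exponential failure model to underline a misconception the training addresses — MTBF is not a guaranteed lifetime.

```python
import math

# Made-up fleet data: 10 units, 500 operating hours each, 4 failures total.
total_operating_hours = 10 * 500
failures = 4
mtbf = total_operating_hours / failures
print(mtbf)                              # 1250.0 hours between failures

# Under an exponential failure model, reliability over mission time t is
# R(t) = exp(-t / MTBF) -- so even a 100 h mission is not certain to succeed.
print(round(math.exp(-100 / mtbf), 3))   # ~0.923 chance of surviving 100 h
```

Note that at t = MTBF the survival probability is only exp(-1) ≈ 37%, which is the misconception-buster in a nutshell.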
This document proposes a method to improve the reuse of workflow fragments by mining workflow repositories. It evaluates different graph representations of workflows and uses the SUBDUE algorithm to identify recurrent fragments. An experiment compares representations on precision, recall, memory usage, and time. Representation D1, which labels edges and nodes, performed best. A second experiment assesses how filtering workflows by keywords impacts finding relevant fragments for a user query. The method aims to incorporate workflow fragment search capabilities into the design lifecycle to promote reuse.
Need for Async: Hot pursuit for scalable applications (Konrad Malawski)
This document discusses asynchronous processing and how it relates to scalability and performance. It begins with an introduction on why asynchronous processing is important for highly parallel systems. It then covers topics like asynchronous I/O, scheduling, latency measurement, concurrent data structures, and techniques for distributed systems like backup requests and combined requests. The overall message is that asynchronous programming allows more efficient use of resources through approaches like non-blocking I/O, and that understanding these principles is key to building scalable applications.
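The resource-efficiency argument — non-blocking I/O letting one thread overlap many waits — is easy to demonstrate. In this sketch the "work" is just a sleep standing in for network I/O; the numbers are illustrative, not from the talk.

```python
import asyncio
import time

async def fetch(i):
    """Stand-in for a network call: a non-blocking 100 ms wait."""
    await asyncio.sleep(0.1)    # the event loop runs other tasks meanwhile
    return i

async def main():
    # Launch 50 'requests' concurrently on a single thread.
    return await asyncio.gather(*(fetch(i) for i in range(50)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), elapsed < 1.0)   # 50 tasks in ~0.1 s, not 50 * 0.1 s
```

Run sequentially with blocking sleeps, the same 50 calls would take ~5 seconds; overlapped on one event loop they finish in roughly the duration of a single call.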
A quick introduction to the Multi-Armed Bandits through the lens of Bandit Convex Optimization, and a quick glance at the Stochastic Multi-Armed Bandits.
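For the stochastic bandit setting mentioned above, the simplest baseline policy is epsilon-greedy (the bandit-convex-optimization view needs more machinery, so this sketch does not attempt it). Arm means, epsilon, and round counts are made up for illustration.

```python
import random

def epsilon_greedy(true_means, rounds=20000, eps=0.1, seed=0):
    """Explore with prob. eps, otherwise pull the empirically best arm."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)      # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))                 # explore
        else:
            arm = max(range(len(true_means)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli arm
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]      # incremental mean
    return counts

counts = epsilon_greedy([0.2, 0.5, 0.8])
print(counts.index(max(counts)))   # the best arm (index 2) dominates the pulls
```

The exploration/exploitation trade-off is visible in the pull counts: roughly eps/k of the rounds go to each suboptimal arm, and nearly all the rest to the best one.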
Stock Decomposition Heuristic for Scheduling: A Priority Dispatch Rule Approach (Alkis Vazacopoulos)
Highlighted in this article is a closed-shop scheduling heuristic that makes use of the traditional priority dispatch rule approach found in open-shop scheduling such as job-shop scheduling. Instead of prioritizing and scheduling one job or project (or stock-order) at a time, we schedule one stock or stock-group at a time, where a stock-group is a collection of individual stocks and their one or more stock-orders. These stocks can be feed-stocks, intermediate-stocks, or product-stocks; we focus on product-stocks, given that most production is demand-driven. A key feature of this heuristic is our ability to compress the production network or superstructure so that only those unit-operations necessary to produce the stocks in question are included in the model, reducing the size of the problem considerably at each iteration of the heuristic. The stock-specific network compression technique uses what we call a unit-capacity transshipment linear program to successively determine which unit-operations are redundant when making a particular stock. This heuristic is also particularly useful for process industries that can potentially produce many product-stocks but only a fraction of them within the scheduling horizon: the model is significantly reduced at solve time to include only the stocks that are demanded, with redundant unit-operations removed. An illustrative example is provided with recycle loops (i.e., stock flow-reversals) and shared units or equipment (i.e., unit flow-reversals) that demonstrates the effectiveness and efficiency of the technique.
This document provides an overview of model predictive control (MPC) and demonstrates its implementation in MATLAB using a continuous stirred tank heater (CSTH) model. It discusses key MPC concepts like observability, controllability, designing an observer, and formulating the optimization problem. MATLAB files and GUIs are also described that allow simulation and analysis of MPC behavior compared to traditional PID control.
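The receding-horizon idea at the heart of MPC can be sketched without any toolbox: at each step, search candidate input sequences over a short horizon, apply only the first input, then re-optimize. The scalar linear plant, discrete input set, and cost weights below are invented stand-ins for the CSTH model, and brute-force enumeration replaces the proper optimization formulation the slides discuss.

```python
import itertools

def step(x, u):
    """A simple stable linear plant standing in for the CSTH model."""
    return 0.9 * x + 0.5 * u

def mpc_control(x, setpoint, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Brute-force search over input sequences; return only the first input."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        cost, xk = 0.0, x
        for u in seq:
            xk = step(xk, u)
            cost += (xk - setpoint) ** 2 + 0.01 * u ** 2   # tracking + effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]   # receding horizon: keep first input
    return best_u

x = 0.0
for _ in range(40):
    x = step(x, mpc_control(x, setpoint=2.0))
print(round(x, 2))   # settles near the setpoint, within the input quantization
```

Real MPC replaces the enumeration with a quadratic program and adds the observer and constraint handling the slides cover, but the plan-apply-replan loop is the same.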
UNIT-2 Quantitaitive Anlaysis for Mgt Decisions.pptx (MinilikDerseh1)
This document provides an overview of linear programming problems (LPP). It discusses the key components of linear programming models including objectives, decision variables, constraints, and parameters. It also covers formulation of LPP, graphical and simplex solution methods, duality, and post-optimality analysis. Various applications of linear programming in areas like production, marketing, finance, and personnel management are also highlighted. An example problem on determining optimal product mix given resource constraints is presented to illustrate linear programming formulation.
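A worked instance of the product-mix formulation mentioned above: maximize profit 3x + 5y subject to capacity constraints. For two variables the optimum lies at a vertex of the feasible region, so enumerating constraint-line intersections suffices (the simplex method generalizes this vertex-walking idea). The numbers are a classic textbook example, not from this document.

```python
from itertools import combinations

# Each constraint is a*x + b*y <= c; nonnegativity is written the same way.
CONSTRAINTS = [
    (1, 0, 4),     # x <= 4        (capacity of plant 1)
    (0, 2, 12),    # 2y <= 12      (capacity of plant 2)
    (3, 2, 18),    # 3x + 2y <= 18 (capacity of plant 3)
    (-1, 0, 0),    # -x <= 0, i.e. x >= 0
    (0, -1, 0),    # -y <= 0, i.e. y >= 0
]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in CONSTRAINTS)

def solve_lp(objective):
    """Evaluate the objective at every feasible vertex; keep the best."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(CONSTRAINTS, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel lines, no vertex
        x = (c1 * b2 - c2 * b1) / det      # Cramer's rule for the intersection
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            value = objective(x, y)
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

print(solve_lp(lambda x, y: 3 * x + 5 * y))   # optimum 36 at x=2, y=6
```

This is exactly the graphical method the unit covers, done numerically: the binding constraints at the optimum (plants 2 and 3) are the ones whose lines intersect at (2, 6).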
This presentation discusses graceful degradation and fault tolerance in computing systems. It covers topics like degradation allowance, diagnosis and isolation of faulty components, checkpointing and rollback for recovery, and optimal checkpoint insertion. Checkpointing involves periodically saving the state of a computation to allow restarting from the last checkpoint if a fault occurs. The optimal number of checkpoints balances the overhead of checkpointing against the work lost due to rollbacks. Various data distribution techniques like replication and dispersion are also presented to improve the reliability of data storage.
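The checkpoint-frequency trade-off described above has a well-known first-order answer, Young's approximation t_opt ≈ sqrt(2 · C · MTBF), where C is the cost of writing one checkpoint. This formula is a standard result in the fault-tolerance literature, not necessarily the one derived in the presentation; the numbers below are made up.

```python
import math

def optimal_interval(checkpoint_cost, mtbf):
    """Young's approximation: balance checkpoint overhead vs. rollback loss."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

# Example: 30 s to write a checkpoint, one failure every 24 h on average.
t_opt = optimal_interval(checkpoint_cost=30, mtbf=24 * 3600)
print(round(t_opt))          # ~2277 s, i.e. checkpoint roughly every 38 minutes
```

Checkpointing more often than this wastes time writing state; less often, the expected work lost per rollback dominates — the balance the slides describe.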
LBDP: Localized Boundary Detection and Parametrization for 3D Sensor Networks (Nexgen Technology)
Ecruitment Solutions (ECS) is a leading Delhi-based software development and HR consulting firm, assessed at the level of the ISO 9001:2008 standard. ECS offers project- and product-based solutions to many customers around the globe.
In addition, ECS has widened its scope by delivering academic projects, especially for final-year professional degree students in India. ECS has a technical team that has implemented many IEEE papers and delivered world-class solutions.
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes where objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process that involves interacting candidate, application, job offer, and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Similar to UNI.LU Seminar Jan2015 - Montali - Online Monitoring of Business Constraints and Metaconstraints
Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The Presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, conformance checking and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, but relying on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare that allows constraints to have uncertain probabilities.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities.
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires to handle the two possibly contrasting forces of the cost of the alignment on the one hand and the likelihood of the model trace with respect to which the alignment is computed on the other.
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This case covers the common situation where event data are stored inside the data layer of an ERP or CRM system. This is usually handled using manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graph). The approach is code-less, and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm (3) the annotation of the data model with indications on which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data. This makes extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs semantic web standard languages for the various steps, and the XES standard as the target format for the event logs.
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes ( DEC2H 2019) In conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and re
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
Presentation ad EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
More from Faculty of Computer Science - Free University of Bozen-Bolzano (20)
This presentation by Nathaniel Lane, Associate Professor in Economics at Oxford University, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation was uploaded with the author’s consent.
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie WellsRosie Wells
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
Carrer goals.pptx and their importance in real lifeartemacademy2
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
This presentation by OECD, OECD Secretariat, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
This presentation by Tim Capel, Director of the UK Information Commissioner’s Office Legal Service, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
This presentation by OECD, OECD Secretariat, was made during the discussion “Pro-competitive Industrial Policy” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/pcip.
This presentation was uploaded with the author’s consent.
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam University, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
The importance of sustainable and efficient computational practices in artificial intelligence (AI) and deep learning has become increasingly critical. This webinar focuses on the intersection of sustainability and AI, highlighting the significance of energy-efficient deep learning, innovative randomization techniques in neural networks, the potential of reservoir computing, and the cutting-edge realm of neuromorphic computing. This webinar aims to connect theoretical knowledge with practical applications and provide insights into how these innovative approaches can lead to more robust, efficient, and environmentally conscious AI systems.
Webinar Speaker: Prof. Claudio Gallicchio, Assistant Professor, University of Pisa
Claudio Gallicchio is an Assistant Professor at the Department of Computer Science of the University of Pisa, Italy. His research involves merging concepts from Deep Learning, Dynamical Systems, and Randomized Neural Systems, and he has co-authored over 100 scientific publications on the subject. He is the founder of the IEEE CIS Task Force on Reservoir Computing, and the co-founder and chair of the IEEE Task Force on Randomization-based Neural Networks and Learning Systems. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
This presentation by Professor Giuseppe Colangelo, Jean Monnet Professor of European Innovation Policy, was made during the discussion “The Intersection between Competition and Data Privacy” held at the 143rd meeting of the OECD Competition Committee on 13 June 2024. More papers and presentations on the topic can be found at oe.cd/ibcdp.
This presentation was uploaded with the author’s consent.
This presentation by Yong Lim, Professor of Economic Law at Seoul National University School of Law, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend...Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
Why Psychological Safety Matters for Software Teams - ACE 2024 - Ben Linders.pdfBen Linders
Psychological safety in teams is important; team members must feel safe and able to communicate and collaborate effectively to deliver value. It’s also necessary to build long-lasting teams since things will happen and relationships will be strained.
But, how safe is a team? How can we determine if there are any factors that make the team unsafe or have an impact on the team’s culture?
In this mini-workshop, we’ll play games for psychological safety and team culture utilizing a deck of coaching cards, The Psychological Safety Cards. We will learn how to use gamification to gain a better understanding of what’s going on in teams. Individuals share what they have learned from working in teams, what has impacted the team’s safety and culture, and what has led to positive change.
Different game formats will be played in groups in parallel. Examples are an ice-breaker to get people talking about psychological safety, a constellation where people take positions about aspects of psychological safety in their team or organization, and collaborative card games where people work together to create an environment that fosters psychological safety.
This presentation by Juraj Čorba, Chair of OECD Working Party on Artificial Intelligence Governance (AIGO), was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
XP 2024 presentation: A New Look to Leadershipsamililja
Presentation slides from XP2024 conference, Bolzano IT. The slides describe a new view to leadership and combines it with anthro-complexity (aka cynefin).
UNI.LU Seminar Jan2015 - Montali - Online Monitoring of Business Constraints and Metaconstraints
1. Online Monitoring of Business Constraints and Metaconstraints Using LTL and LDL over Finite Traces
Based on our BPM 2014 paper.
Marco Montali
KRDB Research Centre for Knowledge and Data
Free University of Bozen-Bolzano
Joint work with: G. De Giacomo, R. De Masellis, M. Grasso, F.M. Maggi
Marco Montali (unibz) Monitoring Business Constraints UNI.LU 1 / 42
2. Data-Awareness in Dynamic Systems
Traditional approach to modeling dynamic systems: divide et impera between
• static, data-related aspects, and
• dynamic, process/interaction-related aspects.
Notable examples:
• In BPM: data and business processes are conceptually isolated from each other, and only integrated at the implementation level.
• In MAS: a plethora of logics for reasoning about the behavior of agents and their interaction, while completely neglecting the exchanged information.
⇒ No coherent understanding of what the system is doing.
Our ultimate goals:
1. Provide formal models that simultaneously account for the system dynamics and the manipulation of data (databases/ontologies).
2. Devise logic-based techniques for the verification and monitoring of such integrated systems.
This combination poses major challenges to verification.
3. Compliance Checking
[Diagram: behaviors and the regulatory model are fed into a compliance checker, which produces feedback.]
Our setting:
• Behaviors are BP execution traces: linear sequences of atomic tasks.
• Regulatory model: a set of business constraints on the accepted dynamics.
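To make the setting concrete, here is a minimal sketch (not from the paper; the constraint names and predicates are illustrative): a compliance checker takes a finite execution trace and a regulatory model given as named business constraints over the whole trace, and returns feedback listing the violated ones.

```python
def check_compliance(trace, regulatory_model):
    """Return the names of the constraints in the regulatory model violated by the trace."""
    return [name for name, constraint in regulatory_model.items()
            if not constraint(trace)]

# A toy regulatory model: each business constraint is a predicate over the full trace.
regulatory_model = {
    "order must be closed": lambda t: "close order" in t,
    "pay before delivery": lambda t: "deliver" not in t
                                     or ("pay" in t and t.index("pay") < t.index("deliver")),
}

trace = ["choose item", "pay", "deliver"]  # a finite sequence of atomic tasks
print(check_compliance(trace, regulatory_model))  # ['order must be closed']
```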
4. Key Dimensions
How to model traces?
When is compliance checked?
How to capture the regulatory model?
5. How to Model Traces?
Two choices: finite vs infinite.
• A business process instance is expected to end.
• Hence, traces are finite sequences of tasks.
6. When is compliance checked?
Design-time.
• Traces generated from a process model (cf. temporal model checking).
• Too coarse-grained when the process model is only partially compliant.
A-posteriori.
• Traces extracted from an event log.
• Can only detect issues: no live support to people.
Runtime.
• Partial, evolving traces.
• Constant feedback to people.
• Fine-grained notion of compliance (things may change).
• A whole space of possibilities on the power of monitors.
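At runtime, a monitor consumes the partial, evolving trace one event at a time and keeps its verdict up to date. A minimal sketch for the Declare response constraint, using the four RV-style monitoring states; this is illustrative code, not the paper's automata-based construction:

```python
class ResponseMonitor:
    """Monitors response(a, b): every occurrence of a must eventually be followed by b."""

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.pending = False  # an a has been seen with no later b yet

    def step(self, event):
        if event == self.a:
            self.pending = True
        elif event == self.b:
            self.pending = False
        # On a partial trace the verdict may still change, hence it is only "temporary".
        return "temporarily violated" if self.pending else "temporarily satisfied"

    def complete(self):
        # Once the instance is declared complete, the verdict becomes permanent.
        return "permanently violated" if self.pending else "permanently satisfied"

mon = ResponseMonitor("pay", "deliver receipt")
print(mon.step("choose item"))      # temporarily satisfied
print(mon.step("pay"))              # temporarily violated (a receipt is still owed)
print(mon.step("deliver receipt"))  # temporarily satisfied
print(mon.complete())               # permanently satisfied
```

Note that response can never become permanently violated on a partial trace, since a future b can always repair it; permanence only arises once the trace is known to be complete, which is exactly where the finite-trace semantics matters.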
10. How to Capture the Regulatory Model?
We want precision and understanding: logics!
We want to predicate about the (un)desired BP dynamics.
Hence: linear, temporal/dynamic logics over finite traces.
12. Declarative Approach to Business Process Modeling: Declare
Basic idea: drop the explicit representation of the process, and use LTLf formulas to specify the allowed (finite) traces. [VanDerAalstPesic06, ACM-TWEB10]
[Figure (Fig. 6.3, from M. Pesic): Declarative vs. procedural workflows. (a) forbidden, optional, and allowed behavior in business processes; (b) procedural workflow (control-flow); (c) declarative workflow (constraints).]
13. Declare at a Glance
[Figure (Fig. 3.2, from [LNBIP-56]): complete ConDec model capturing the order management choreography, with the tasks choose item, pay, refuse shipment, accept shipment, deliver receipt, accept order, close order, cancel order, and refuse order, connected by constraints.]
Table 3.7 (excerpt): some cognitive dimensions.
• closeness of mapping: closeness of representation to domain
• abstraction: types and availability of abstraction mechanisms
14. Declare Patterns
Declare promotes the use of a controlled set of notable LTLf formulas for process specification. [VanDerAalstPesic06, ACM-TWEB10]
Example (some Declare patterns):
• Existence, 1..∗ a: ◇a. Task a must be executed at least once.
• Responded Existence, a •−−−− b: ◇a → ◇b. If a is executed, then b must be executed as well.
• Response, a •−−− b: □(a → ◇b). Every time a is executed, b must be executed afterwards.
• Precedence, a −−−• b: ¬b W a. Task b can be executed only if a has been executed before.
• Alternate Response, a •=== b: □(a → ◦(¬a U b)). Every a must be followed by b, without any other a in between.
• Chain Response, a •=−=−=− b: □(a → ◦b). If a is executed, then b must be executed next.
• Chain Precedence, a =−=−=−• b: □(◦b → a). Task b can be executed only immediately after a.
• Not Coexistence, a •−−−• b: ¬(◇a ∧ ◇b). Only one among tasks a and b can be executed.
• Negation Succession, a •−−• b: □(a → ¬◇b). Task a cannot be followed by b, and b cannot be preceded by a.
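On a completed finite trace, these patterns can be checked directly with the recursive LTLf semantics. A small illustrative evaluator (the tuple encoding and helper names are ours, not a standard API):

```python
def holds(phi, trace, i=0):
    """Evaluate an LTLf formula phi on a finite trace (a list of task names) at position i."""
    op = phi[0]
    if op == "atom":        # a single task name
        return i < len(trace) and trace[i] == phi[1]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "implies":
        return (not holds(phi[1], trace, i)) or holds(phi[2], trace, i)
    if op == "next":        # strong next: false at the last position
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "eventually":  # the diamond operator
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "always":      # the box operator
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    raise ValueError("unknown operator: " + op)

def response(a, b):
    # The Response pattern: box(a -> eventually b)
    return ("always", ("implies", ("atom", a), ("eventually", ("atom", b))))

trace = ["choose item", "pay", "deliver receipt"]
print(holds(response("pay", "deliver receipt"), trace))  # True
print(holds(response("pay", "refuse shipment"), trace))  # False
```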
19. Parenthesis: the Subtlety of Finite Traces
Linear Temporal Logic (ltl) ubiquitous in AI and CS
However:
• Sometimes we interpret temporal logic on infinite traces as originally
proposed [Pnueli1977].
• Sometimes we interpret temporal logic on finite traces
[DeGiacomoVardi13].
• Often, we blur the distinction interpreting ltl on infinite or on finite
traces.
• Temporally extended goals: infinite/finite
• Temporal constraints on trajectories: finite
• Declarative and procedural control knowledge on trajectories: finite
• Temporal specification in planning domains: infinite
• Planning via model checking: infinite
• Declarative business process specification (Declare): finite
20. Blurring of ltl and ltlf
From [Edelkamp2006]
We can cast the Büchi automaton as an NFA, which accepts a word if it
terminates in a final state.
From [GereviniEtAl2009]
Since PDDL 3.0 constraints are normally evaluated over finite trajectories, the
Büchi acceptance condition “an accepting state is visited infinitely often”, reduces
to the standard acceptance condition that the automaton is in an accepting state
at the end of the trajectory.
From [VanDerAalstPesic06]
We use the original (ltl) algorithm, but we specify that each execution
eventually ends.
• We introduce an “invisible” activity end, the ending activity in the model.
• We use this activity to specify that the service will end:
◊end ∧ □(end → ◯end)
24. Blurring of ltl and ltlf
Are these appealing intuitions correct? NO!
But why do they seem reasonable and were adopted for several years?
See [AAAI14]
In summary:
• For many widely used ltlf patterns, the intuition in
[VanDerAalstPesic06] is indeed correct.
• But it cannot be generalized to the entire logic.
30. ltl over finite traces
Assuming finite or infinite traces has big impact.
Example
Consider the following formula:
□(A → ◊B) ∧ □(B → ◊A)
• On infinite traces: . . . A B A B . . . A B . . .
• On finite traces: [trace diagrams omitted]
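Concretely, under finite-trace semantics the formula forces the last occurrence of A and the last occurrence of B to coincide. A small sketch over propositional traces (lists of sets of atoms; an illustration of the point, not part of the talk's tooling):

```python
def sat(trace):
    """Finite-trace check of the formula above: every position where A holds
    must see B at the same or a later position, and vice versa. Hence, if
    both occur, their last occurrences must be at the same instant."""
    last_a = max((i for i, s in enumerate(trace) if "A" in s), default=None)
    last_b = max((i for i, s in enumerate(trace) if "B" in s), default=None)
    ok_a = last_a is None or (last_b is not None and last_b >= last_a)
    ok_b = last_b is None or (last_a is not None and last_a >= last_b)
    return ok_a and ok_b
```

On infinite traces the same formula is satisfied by any trace alternating A and B forever, which is exactly why blurring the two semantics is dangerous.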
31. ltl over finite traces
Assuming finite or infinite traces has big impact.
Example
Consider again the formula: □(A → ◊B) ∧ □(B → ◊A)
• Büchi automaton accepting its infinite traces: [automaton omitted]
• NFA accepting its finite traces: [automaton omitted]
34. Operational Decision Support
Extension of classical process mining to current, live data.
Refined process mining framework
[Figure: refined process mining framework. The "world" (people, machines, organizations, business processes, documents) feeds the information system(s); event logs split into current ("pre mortem") and historic ("post mortem") data; de jure and de facto models cover control-flow, data/rules, and resources/organization; activities are grouped into cartography (discover, enhance, diagnose), auditing (detect, check, compare, promote), and navigation (explore, predict, recommend).]
36. Detecting Deviations
Auditing: find deviations between observed and expected behaviors.
[Figure: auditing portion of the framework. Current ("pre mortem") and historic ("post mortem") event logs are related to de jure and de facto models (control-flow, data/rules, resources/organization) through detect, check, compare, and promote.]
Our setting:
Model
Declarative business constraints.
• E.g., Declare.
Monitoring
• Online, evolving observations.
• Prompt deviation detection.
37. On Promptness
Flight routes (thanks Claudio!)
• When the airplane takes off, it must eventually reach the destination airport.
• When the airplane is re-routed, it cannot reach the destination airport anymore.
• If a dangerous situation is detected at the destination, the airplane must be re-routed.
[Declare model over tasks take-off, reach, re-route, danger]
Question
Consider trace:
take-off danger
Is there any deviation?
38. Boring Answer: Apparently Not
Reactive Monitor
• Checks the partial trace
observed so far.
• Suspends the judgment if no
conclusive answer can be given.
[Declare model over tasks take-off, reach, re-route, danger]
Observed trace: take-off, danger
43. Prophetic Answer: YES
Proactive Monitor
• Checks the partial trace observed so far.
• Looks into the future(s).
□(take-off → ◊reach) ∧ ¬(◊reach ∧ ◊re-route) ∧ □(danger → ◊re-route)
Observed trace: take-off, danger
45. Logics on Finite Traces
Goal
Reasoning on finite partial traces and their finite suffixes.
Typical Solution: ltlf
Adopt LTL on finite traces and corresponding techniques based on
finite-state automata (not Büchi automata!).
Remember the difference, often neglected, between LTL on finite and
infinite traces!
See [AAAI2014]
47. Problem #1: Monitoring
Proactive monitoring requires to refine the standard ltlf semantics.
RV-LTL
Given an LTL formula ϕ:
• [ϕ]RV = true ⇝ OK;
• [ϕ]RV = false ⇝ BAD;
• [ϕ]RV = temp_true ⇝ OK now, could become BAD in the future;
• [ϕ]RV = temp_false ⇝ BAD now, could become OK in the future.
However. . .
• Typically studied on infinite traces: detour to Büchi automata.
• Only ad-hoc techniques on finite traces [BPM2011].
49. Problem #2: Contextual Business Constraints
Need for monitoring constraints only when specific circumstances hold.
• Compensation constraints.
• Contrary-to-duty “expectations” (yes, I’m scared about obligations).
• . . .
However. . .
• Cannot be systematically captured at the level of constraint
specification.
50. Suitability of the Constraint Specification Language
ltlf
FOL over
finite traces
Star-free
regular
expressions
Marco Montali (unibz) Monitoring Business Constraints UNI.LU 26 / 42
51. Suitability of the Constraint Specification Language
ltlf
MSOL over
finite traces
FOL over
finite traces
Regular
expressions
Star-free
regular
expressions
Marco Montali (unibz) Monitoring Business Constraints UNI.LU 26 / 42
52. Suitability of the Constraint Specification Language
ltlf
MSOL over
finite traces
FOL over
finite traces
Regular
expressions
Star-free
regular
expressions
pspace
complexity
Nondet.
finite-state
automata
(nfa)
• ltlf : declarative, but lacking expressiveness.
• Regular expressions: rich formalism, but low-level.
(t)ake-off (r)each ((r|other)∗
(t(t|other)∗
r)(r|other)∗
)∗
Marco Montali (unibz) Monitoring Business Constraints UNI.LU 26 / 42
53. Suitability of the Constraint Specification Language
[Expressiveness diagram: ldlf (Linear Dynamic Logic over finite traces) and MSOL over finite traces correspond to full regular expressions and nondeterministic finite-state automata (nfa), with pspace complexity; ltlf and FOL over finite traces correspond to star-free regular expressions.]
• ltlf : declarative, but lacking expressiveness.
• Regular expressions: rich formalism, but low-level.
E.g., with (t)ake-off and (r)each: ((r|other)∗(t(t|other)∗r)(r|other)∗)∗
• ldlf : combines the best of the two!
54. The Logic ldlf [De Giacomo&Vardi,IJCAI13]
Merges ltlf with regular expressions, through the syntax of Propositional
Dynamic Logic (PDL):
ϕ ::= φ | tt | ff | ¬ϕ | ϕ1 ∧ ϕ2 | ⟨ρ⟩ϕ | [ρ]ϕ
ρ ::= φ | ϕ? | ρ1 + ρ2 | ρ1; ρ2 | ρ∗
ϕ: ltlf part; ρ: regular expression part.
They mutually refer to each other:
• ⟨ρ⟩ϕ states that, from the current step in the trace, there is an
execution satisfying ρ such that its last step satisfies ϕ.
• [ρ]ϕ states that, from the current step in the trace, all executions
satisfying ρ are such that their last step satisfies ϕ.
• ϕ? checks whether ϕ is true in the current step and, if so, continues
to evaluate the remaining execution.
Of special interest is end = [true]ff , to check whether the trace has been
completed (the remaining trace is the empty one).
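The syntax above admits a direct recursive evaluator over finite traces. A compact sketch (formulas as nested tuples, traces as lists of sets of atoms; this illustrates the semantics only, not the automata-based technique used later):

```python
def holds(phi, trace, i=0):
    """Does LDLf formula phi hold in trace at position i?"""
    op = phi[0]
    if op == "tt":
        return True
    if op == "ff":
        return False
    if op == "atom":   # propositional atom, true at the current position
        return i < len(trace) and phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "diamond":  # <rho> phi: some execution of rho ends where phi holds
        return any(holds(phi[2], trace, j) for j in ends(phi[1], trace, i))
    if op == "box":      # [rho] phi: all executions of rho end where phi holds
        return all(holds(phi[2], trace, j) for j in ends(phi[1], trace, i))
    raise ValueError(op)

def ends(rho, trace, i):
    """Positions reachable from i by one execution of path expression rho."""
    op = rho[0]
    if op == "prop":   # a propositional step consumes one position
        if i < len(trace) and (rho[1] == "true" or rho[1] in trace[i]):
            return {i + 1}
        return set()
    if op == "test":   # phi? does not advance in the trace
        return {i} if holds(rho[1], trace, i) else set()
    if op == "union":
        return ends(rho[1], trace, i) | ends(rho[2], trace, i)
    if op == "seq":
        return {k for j in ends(rho[1], trace, i) for k in ends(rho[2], trace, j)}
    if op == "star":   # reflexive-transitive closure via a fixpoint
        reached, frontier = {i}, {i}
        while frontier:
            frontier = {k for j in frontier
                        for k in ends(rho[1], trace, j)} - reached
            reached |= frontier
        return reached
    raise ValueError(op)

# end = [true]ff holds exactly when the remaining trace is empty.
END = ("box", ("prop", "true"), ("ff",))
```

For instance, ⟨true∗⟩b ("eventually b") is `("diamond", ("star", ("prop", "true")), ("atom", "b"))`; the fixpoint for ∗ terminates because the set of reachable positions is bounded by the trace length.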
56. Runtime ldlf Monitors
Check partial trace π = e1, . . . , en against formula ϕ.
From ad-hoc techniques . . .
e1 . . . en |= ϕ, with RV verdict among temp_true, temp_false, true, false
. . . to standard techniques
e1 . . . en |= ϕtemp_true, or ϕtemp_false, or ϕtrue, or ϕfalse
59. How is This Magic Possible?
Starting point: ltlf /ldlf formula ϕ and its RV semantics.
1. Good and bad prefixes
• Lposs_good(ϕ) = {π | ∃π′. ππ′ ∈ L(ϕ)}
• Lnec_good(ϕ) = {π | ∀π′. ππ′ ∈ L(ϕ)}
• Lnec_bad(ϕ) = Lnec_good(¬ϕ) = {π | ∀π′. ππ′ ∉ L(ϕ)}
2. RV-LTL values as prefixes
• π |= [ϕ]RV = true iff π ∈ Lnec_good(ϕ);
• π |= [ϕ]RV = false iff π ∈ Lnec_bad(ϕ);
• π |= [ϕ]RV = temp_true iff π ∈ L(ϕ) \ Lnec_good(ϕ);
• π |= [ϕ]RV = temp_false iff π ∈ L(¬ϕ) \ Lnec_bad(ϕ).
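Once L(ϕ) is given as a DFA over completed traces, the prefix sets become simple reachability checks from the state reached by the prefix. A sketch (the Response(a, b) DFA below is my own encoding, with o standing for any other activity; not the authors' implementation):

```python
# Example DFA for Response(a, b) over alphabet {a, b, o}:
# state 1 = no pending a (accepting for completed traces),
# state 2 = an a is still awaiting its b.
ALPHABET = ["a", "b", "o"]
DELTA = {(1, "a"): 2, (1, "b"): 1, (1, "o"): 1,
         (2, "a"): 2, (2, "b"): 1, (2, "o"): 2}
ACCEPTING = {1}
INITIAL = 1

def reachable(state):
    """All states reachable from `state` by reading any further events."""
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(DELTA[(s, c)] for c in ALPHABET)
    return seen

def rv_value(prefix):
    s = INITIAL
    for c in prefix:
        s = DELTA[(s, c)]
    reach = reachable(s)
    can_good = bool(reach & ACCEPTING)   # prefix in L_poss_good(ϕ)
    can_bad = bool(reach - ACCEPTING)    # some continuation ends violating ϕ
    if not can_bad:
        return "true"                    # L_nec_good(ϕ)
    if not can_good:
        return "false"                   # L_nec_bad(ϕ)
    return "temp_true" if s in ACCEPTING else "temp_false"
```

For Response this yields only temp_true and temp_false, since any prefix can still be extended both to a satisfying and to a violating completed trace.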
62. How is This Black Magic Possible?
3. Prefixes as regular expressions
Every nfa can be expressed as a regular expression.
⇝ We can build a regular expression prefϕ s.t. L(prefϕ) = Lposs_good(ϕ).
4. Regular expressions can be immersed into ldlf
Hence: π ∈ Lposs_good(ϕ) iff π |= ⟨prefϕ⟩end
π ∈ Lnec_good(ϕ) iff π |= ⟨prefϕ⟩end ∧ ¬⟨pref¬ϕ⟩end
5. RV-LTL can be immersed into ldlf !
• π |= [ϕ]RV = true iff π |= ⟨prefϕ⟩end ∧ ¬⟨pref¬ϕ⟩end;
• π |= [ϕ]RV = false iff π |= ⟨pref¬ϕ⟩end ∧ ¬⟨prefϕ⟩end;
• π |= [ϕ]RV = temp_true iff π |= ϕ ∧ ⟨pref¬ϕ⟩end;
• π |= [ϕ]RV = temp_false iff π |= ¬ϕ ∧ ⟨prefϕ⟩end.
Ending point: 4 ldlf monitor formulae under standard semantics.
64. Monitoring declare with ldlf
Step 1. Good prefixes of declare patterns.
name | notation | pref | possible RV states
existence
Existence | 1..∗ a | (a + o)∗ | temp_false, true
Absence 2 | 0..1 a | o∗ + (o∗; a; o∗) | temp_true, false
choice
Choice | a −−♦−− b | (a + b + o)∗ | temp_false, true
Exclusive Choice | a −− −− b | (a + o)∗ + (b + o)∗ | temp_false, temp_true, false
relation
Resp. existence | a •−−−− b | (a + b + o)∗ | temp_true, temp_false, true
Response | a •−−− b | (a + b + o)∗ | temp_true, temp_false
Precedence | a −−− • b | o∗; (a; (a + b + o)∗)∗ | temp_true, true, false
negation
Not Coexistence | a •−−−• b | (a + o)∗ + (b + o)∗ | temp_true, false
Neg. Succession | a •−− • b | (b + o)∗; (a + o)∗ | temp_true, false
Step 2. Generate ldlf monitors.
• Local monitors: 1 formula for possible RV constraint state.
• Global monitors: 4 RV formulae for the conjunction of all constraints.
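The pref expressions in the table can be used directly with an ordinary regex engine. A sketch for Precedence(a, b), with o standing for any other activity (the regexes are my simplified but equivalent forms of the table entries, not the ProM implementation):

```python
import re

# Completed traces satisfying Precedence(a, b): b occurs only after some a.
LANG = re.compile(r"[ao]*(a[abo]*)?")
# Possibly-good prefixes (pref): no b before the first a.
PREF_POS = re.compile(r"o*(a[abo]*)?")
# Prefixes extendable to a violating trace: no a before the first b.
PREF_NEG = re.compile(r"o*(b[abo]*)?")

def rv_state(prefix: str) -> str:
    sat = LANG.fullmatch(prefix) is not None
    can_good = PREF_POS.fullmatch(prefix) is not None
    can_bad = PREF_NEG.fullmatch(prefix) is not None
    if can_good and not can_bad:
        return "true"     # once a has occurred first, no violation is possible
    if can_bad and not can_good:
        return "false"    # b occurred before any a: permanently violated
    return "temp_true" if sat else "temp_false"
```

This reproduces the three RV states listed for Precedence: temp_true while only other activities occur, true as soon as a occurs first, false as soon as b occurs first.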
65. One Step Beyond: Metaconstraints
ldlf monitors are simply ldlf formulae.
They can be combined into more complex ldlf formulae!
• E.g., expressing conditional/contextual monitoring.
Business metaconstraint
An ldlf formula of the form Φpre → Ψexp
• Φpre combines assertions about the RV truth values of business constraints.
• Ψexp combines business constraints to be checked when Φpre holds.
ldlf metaconstraint monitors
• Φpre → Ψexp is a standard ldlf formula.
• Hence, just reapply our technique and get the 4 ldlf monitors.
67. Compensation Constraints
• An order cannot be canceled anymore if it is closed.
ϕcanc = close order •−− • cancel order
• If this happens, then the customer has to pay a supplement:
ϕdopay = 1..∗ pay supplement (Existence pattern)
• Formally: {[ϕcanc]RV = false} → ϕdopay
• In ldlf : (⟨pref¬ϕcanc⟩end ∧ ¬⟨prefϕcanc⟩end) → ϕdopay
Observation
When the violation occurs, the compensation is monitored from the
beginning of the trace: OK to “compensate in advance”.
• Trace close order → pay supplement → cancel order is OK.
69. Metaconstraints Revisited
Business metaconstraint with temporal consequence
1. Take Φpre and Ψexp as before.
2. Compute ρ: regular expression denoting those paths that satisfy Φpre
3. Make ρ part of the compensation:
Φpre → [ρ]Ψexp
Ψexp now has to be enforced after Φpre becomes true.
71. Compensation Revisited
• An order cannot be canceled anymore if it is closed.
ϕcanc = close order •−− • cancel order
• After this happens, the customer has to pay a supplement:
ϕdopay = 1..∗ pay supplement (Existence pattern)
• Formally:
{[ϕcanc]RV = false} → [re{[ϕcanc]RV =false}] ϕdopay
where re{[ϕcanc]RV =false} denotes all traces violating ϕcanc.
72. From ldlf to nfa
Direct calculation of nfa corresponding to ldlf formula ϕ
Algorithm
1: algorithm ldlf2nfa
2: input ldlf formula ϕ
3: output nfa Aϕ = (2P, S, {s0}, ϱ, {sf })
4: s0 ← {"ϕ"} (single initial state)
5: sf ← ∅ (single final state)
6: S ← {s0, sf }, ϱ ← ∅
7: while (S or ϱ change) do
8: if (q′ ∈ S and q′ |= ⋀("ψ"∈q) δ("ψ", Θ))
9: S ← S ∪ {q′} (update set of states)
10: ϱ ← ϱ ∪ {(q, Θ, q′)} (update transition relation)
Note
• Standard nfa.
• No detour to Büchi automata.
• Easy to code.
• Implemented!
Auxiliary rules
δ("tt", Π) = true
δ("ff ", Π) = false
δ("φ", Π) = true if Π |= φ; false if Π ̸|= φ (φ propositional)
δ("ϕ1 ∧ ϕ2", Π) = δ("ϕ1", Π) ∧ δ("ϕ2", Π)
δ("ϕ1 ∨ ϕ2", Π) = δ("ϕ1", Π) ∨ δ("ϕ2", Π)
δ("⟨φ⟩ϕ", Π) = "ϕ" if last ∉ Π and Π |= φ (φ propositional); δ("ϕ", ε) if last ∈ Π and Π |= φ; false if Π ̸|= φ
δ("⟨ψ?⟩ϕ", Π) = δ("ψ", Π) ∧ δ("ϕ", Π)
δ("⟨ρ1 + ρ2⟩ϕ", Π) = δ("⟨ρ1⟩ϕ", Π) ∨ δ("⟨ρ2⟩ϕ", Π)
δ("⟨ρ1; ρ2⟩ϕ", Π) = δ("⟨ρ1⟩⟨ρ2⟩ϕ", Π)
δ("⟨ρ∗⟩ϕ", Π) = δ("ϕ", Π) if ρ is test-only; δ("ϕ", Π) ∨ δ("⟨ρ⟩⟨ρ∗⟩ϕ", Π) otherwise
δ("[φ]ϕ", Π) = "ϕ" if last ∉ Π and Π |= φ (φ propositional); δ("ϕ", ε) if last ∈ Π and Π |= φ; true if Π ̸|= φ
δ("[ψ?]ϕ", Π) = δ("nnf (¬ψ)", Π) ∨ δ("ϕ", Π)
δ("[ρ1 + ρ2]ϕ", Π) = δ("[ρ1]ϕ", Π) ∧ δ("[ρ2]ϕ", Π)
δ("[ρ1; ρ2]ϕ", Π) = δ("[ρ1][ρ2]ϕ", Π)
δ("[ρ∗]ϕ", Π) = δ("ϕ", Π) if ρ is test-only; δ("ϕ", Π) ∧ δ("[ρ][ρ∗]ϕ", Π) otherwise
73. Implemented in ProM!
Approach
1. Input ltlf /ldlf constraints and metaconstraints.
2. Produce the corresponding RV ldlf monitoring formulae.
3. Apply the direct algorithm and get the corresponding nfas.
4. (Incrementally) run the nfas on the monitored trace.
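Step 4 amounts to the standard incremental run of an nfa: keep the set of states reachable on the events seen so far and update it per event. A generic sketch (not the ProM plug-in):

```python
class NfaMonitor:
    def __init__(self, transitions, initial, accepting):
        # transitions: dict mapping (state, event) -> set of successor states
        self.transitions = transitions
        self.current = {initial}        # states reachable on the trace so far
        self.accepting = set(accepting)

    def step(self, event):
        """Consume one monitored event, updating the reachable-state set."""
        self.current = {t for s in self.current
                        for t in self.transitions.get((s, event), set())}

    def verdict(self):
        """Would the trace observed so far be accepted if it ended now?"""
        return bool(self.current & self.accepting)
```

For example, with an nfa for "eventually b" (states 0 and 1, accepting {1}), the verdict flips to accepted as soon as b is observed; running one such monitor per RV formula yields the four-valued verdict.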
74. Colored Automata [BPM2011]
Ad-hoc technique for monitoring ltlf formulae according to RV-LTL.
1. Build a local automaton for each of the business constraints.
2. Color states of each local automaton according to the 4 RV-LTL truth
values.
3. Combine colored automata into a global colored automaton.
75. Colored Automata
[Figure: reference Declare model (Fig. 1) over activities Low_Risk, High_Yield, Money, Stocks, and Bonds, connected by alternate response, response, precedence, and not co-existence constraints, together with the resulting global colored automaton.]
76. Correctness of Colored Automata
Why is coloring correct?
1. Take the ltlf formula ϕ of each business constraint.
2. Produce the 4 corresponding ldlf monitoring formulae.
3. Generate the 4 corresponding nfas.
4. Determinize them ⇝ they are identical but with different acceptance states!
5. Hence they can be combined into a unique colored local dfa.
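The determinization in step 4 is the textbook subset construction; since the four monitor nfas share their transition structure, the resulting dfa states can then simply be colored by which acceptance condition each satisfies. A generic sketch (illustrative, not the implementation behind [BPM2011]):

```python
def determinize(transitions, initial, alphabet):
    """Subset construction: transitions maps (state, symbol) -> set of states.
    Returns the dfa transition dict (over frozensets) and its state set."""
    start = frozenset([initial])
    dfa, seen, todo = {}, {start}, [start]
    while todo:
        state = todo.pop()
        for sym in alphabet:
            # The dfa successor is the union of all nfa successors.
            succ = frozenset(t for s in state
                             for t in transitions.get((s, sym), set()))
            dfa[(state, sym)] = succ
            if succ not in seen:
                seen.add(succ)
                todo.append(succ)
    return dfa, seen
```

Running this once and marking each subset-state against each of the four acceptance sets gives exactly the coloring of the local automaton.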
77. Conclusion
• Focus on finite traces.
• Avoid unneeded detour to infinite traces.
• ldlf : essentially the maximally expressive logic over finite traces with
good computational properties (expressively equivalent to MSO).
• Monitoring is a key problem.
• ldlf goes far beyond declare.
• ldlf captures monitors directly as formulae.
Clean.
Metaconstraints.
• Implemented in ProM!
Future work: declarative, data-aware processes.