Presentation of the paper "Declarative Process Modeling in BPMN" at the 27th International Conference on Advanced Information Systems Engineering (CAiSE 2015).
Intermediate representations are used in compilers to represent programs between the source and target languages. Control flow graphs (CFGs) are a common intermediate representation that models a program as a directed graph with basic blocks as nodes and edges showing possible control transfers. CFGs are useful for optimization and for detecting unreachable code. They are built by dividing a program into basic blocks - straight-line sections entered only at the top and left only at the bottom - and connecting them according to the control flow.
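As a concrete illustration of that construction, the sketch below (in Python, with an invented (op, label) instruction encoding rather than any particular compiler's IR) partitions an instruction list into basic blocks using the classic leader rule and connects them with fall-through and jump edges.

def basic_blocks(instrs):
    # Leader rule: instruction 0, every jump target, and every
    # instruction right after a jump/branch starts a new block.
    label_pos = {arg: i for i, (op, arg) in enumerate(instrs) if op == "label"}
    leaders = {0}
    for i, (op, arg) in enumerate(instrs):
        if op in ("jump", "branch"):
            leaders.add(label_pos[arg])
            if i + 1 < len(instrs):
                leaders.add(i + 1)
    starts = sorted(leaders)
    blocks = [instrs[s:e] for s, e in zip(starts, starts[1:] + [len(instrs)])]
    # Edges: explicit jump targets, plus fall-through when the block
    # does not end in an unconditional jump.
    block_of = {s: i for i, s in enumerate(starts)}
    edges = set()
    for i, block in enumerate(blocks):
        op, arg = block[-1]
        if op in ("jump", "branch"):
            edges.add((i, block_of[label_pos[arg]]))
        if op != "jump" and i + 1 < len(blocks):
            edges.add((i, i + 1))
    return blocks, edges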
The document provides an overview of code level optimizations that can be performed by a compiler, including common subexpression elimination, copy propagation, dead code elimination, peephole optimization, and loop optimizations. It then gives examples of applying these optimizations to intermediate three address code to reduce computations and improve efficiency. Specific optimizations demonstrated include common subexpression elimination, copy propagation, dead code elimination, and identifying induction variables within loops. The overall goal of these optimizations is to reduce both the time and space requirements of the generated code.
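A minimal sketch of the common-subexpression idea on three-address code follows; the quadruple format (dest, op, arg1, arg2) is an assumption for illustration, not any specific compiler's IR.

def cse(block):
    # Local common subexpression elimination via value numbering (sketch).
    available = {}          # (op, a, b) -> variable currently holding it
    out = []
    for dest, op, a, b in block:
        key = (op, a, b)
        reused = available.get(key)
        if reused is not None:
            out.append((dest, "copy", reused, None))   # reuse earlier result
        else:
            out.append((dest, op, a, b))
        # The write to dest kills expressions that read dest or live in dest.
        available = {k: v for k, v in available.items()
                     if v != dest and dest not in (k[1], k[2])}
        if reused is None and dest not in (a, b):
            available[key] = dest        # the fresh computation is now available
    return out

# Example: b = a*c; d = a*c  ==>  d becomes a copy of b.
print(cse([("b", "*", "a", "c"), ("d", "*", "a", "c")]))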
This document provides an overview of programming in C++ using Turbo C++. It covers the Turbo C++ integrated development environment, basic C++ program structure, modular programming using functions, control structures like if/else statements, and problem solving approaches. Examples are provided for calculating areas and volumes, temperature conversions, and other mathematical problems. Function prototypes, call by value vs reference, and selection structures like switch statements are also discussed. The document aims to teach basic C++ concepts and skills like breaking problems into modular functions.
This document provides an overview of programming in C++ using Turbo C++. It covers topics such as the structure of a C++ program, control structures, functions, classes, and problem solving techniques. Modular programming using functions is discussed, including passing parameters by value and by reference. Various programming problems are presented as examples, such as calculating mathematical values and converting between different units and scales. The use of functions, parameters, and modular programming design are emphasized throughout.
Code produced by straightforward compiling algorithms can often be made to run faster or take less space, or both. This improvement is achieved by program transformations that are traditionally called optimizations; compilers that apply code-improving transformations are called optimizing compilers.
This document discusses code optimization techniques performed by compilers. It describes how compilers apply transformations called optimizations to improve code performance and reduce code size. These optimizations are performed in multiple phases including at the source code level, intermediate code level, and target code level. Specific optimization techniques discussed include redundant instruction elimination, constant folding, algebraic transformations, copy propagation, common subexpression elimination, dead code elimination, and loop optimizations such as induction variable elimination and strength reduction.
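As a small illustration of constant folding and algebraic transformation, here is a hedged sketch over an invented tuple encoding of expression trees:

# An expression is a number, a variable name, or (op, left, right).
OPS = {"+": lambda x, y: x + y, "*": lambda x, y: x * y, "-": lambda x, y: x - y}

def fold(e):
    if not isinstance(e, tuple):
        return e
    op, l, r = e[0], fold(e[1]), fold(e[2])
    if isinstance(l, (int, float)) and isinstance(r, (int, float)):
        return OPS[op](l, r)               # fold: evaluate at compile time
    if op == "*" and 1 in (l, r):          # algebraic identity x*1 = x
        return l if r == 1 else r
    if op == "+" and 0 in (l, r):          # algebraic identity x+0 = x
        return l if r == 0 else r
    return (op, l, r)

# ("*", ("+", 2, 3), "x") folds to ("*", 5, "x"); ("+", "x", 0) simplifies to "x".
print(fold(("*", ("+", 2, 3), "x")), fold(("+", "x", 0)))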
This document introduces Edgar Barbosa, a senior security researcher who has worked on hardware-based virtualization rootkits and detecting such rootkits. It then provides an overview of control flow analysis (CFA), a static analysis technique used to analyze program execution paths. CFA involves constructing a control flow graph (CFG) from a disassembled binary. The document discusses basic block identification, CFG properties, and challenges like self-modifying code. It also introduces other CFA concepts like dominator trees, natural loops, strongly connected components, and interval analysis.
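Among the CFA concepts mentioned, dominators are easy to sketch: node d dominates n when every path from the entry to n passes through d. Below is a hedged fixed-point computation over a CFG given as a successor map; the diamond-shaped graph is invented for illustration.

def dominators(succ, entry):
    # dom(n) = {n} union intersection of dom(p) over predecessors p of n.
    nodes = set(succ)
    preds = {n: [p for p in succ if n in succ[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n])) if preds[n] else {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Diamond CFG: A->B, A->C, B->D, C->D. A dominates everything; D is
# dominated only by A and itself.
print(dominators({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}, "A"))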
This document describes subtyping for delimited continuations. It introduces shift0 and reset0 as variants of the shift and reset control operators that allow capturing contexts beyond the nearest delimiter. It presents a type system for shift0 and reset0 that uses effect annotations and subtyping to track multiple contexts on the evaluation stack. Examples are given of how this allows typing programs like partition that were not typable with previous systems. The document concludes that the type system ensures type soundness, termination and decidable type inference.
IRJET - Implementation of Material Requirement Planning (MRP) for a MULTI-L... (IRJET Journal)
This document proposes an implementation of material requirement planning (MRP) for multi-level flexible bills of materials (BOMs). When shortages occur in sub-assemblies or components, a linear programming model is used to formulate a new BOM with substituted or decreased components to still meet production requirements. The process is demonstrated through an example with a top assembly and three levels of BOMs. Key steps include identifying shortages, solving a linear program to determine an adjusted BOM that minimizes deviations from the standard BOM while meeting constraints, and updating planned order release tables if the new BOM is feasible. This approach allows for flexibility in BOMs while still meeting production schedules.
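A toy sketch of the LP idea, using scipy.optimize.linprog: when the standard component is short, choose quantities of the standard component and a substitute that meet the requirement while minimizing deviation from the standard BOM. All component names and numbers here are invented, not taken from the paper.

from scipy.optimize import linprog

need = 100           # units the assembly requires
stock_c1 = 60        # only 60 units of the standard component c1 in stock
# Variables x = [qty_c1, qty_c2]; the standard BOM would use 100 of c1,
# so every unit of substitute c2 counts as deviation.
c = [0.0, 1.0]
A_eq = [[1.0, 1.0]]                   # c1 + c2 must cover the requirement
b_eq = [need]
bounds = [(0, stock_c1), (0, None)]   # cannot use more c1 than is in stock

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # -> [60. 40.]: use all available c1, substitute 40 units of c2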
The document discusses code optimization techniques. It defines optimization as program transformations that improve code by reducing resource usage and improving speed. Optimization techniques covered include constant folding, constant propagation, common subexpression elimination, code movement, dead code elimination, and strength reduction. The goal of these techniques is to preserve program meaning while improving performance in a way that is worthwhile.
The document discusses functions in C programming. It covers defining and calling functions, passing arguments to functions, return statements, and different types of functions. Some key points include:
- Functions make code modular and reusable. Arguments can be passed by value or reference.
- A function is defined with a return type, name, and parameters. It is called by name with arguments. Return passes data back to the calling function.
- Functions can take arguments and return values, take arguments but not return, return values without arguments, or do neither.
- In common C calling conventions (such as cdecl), arguments are pushed right to left; the C standard itself leaves argument evaluation order unspecified. Function calls can be nested by calling one function from another.
Dead code elimination is a compiler optimization technique that removes code that can never be executed, such as unreachable code, or code with outputs that are never used. Removing dead code shrinks program size and reduces runtime. It works by analyzing the control flow and data dependencies of a program to identify code that is deemed dead. Both static and dynamic dead code elimination techniques exist.
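The unreachable-code half of the analysis reduces to graph reachability over the CFG, as in this minimal sketch (block names invented):

def reachable_blocks(succ, entry):
    # Keep only basic blocks reachable from the entry; the rest is dead.
    seen, stack = set(), [entry]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(succ.get(n, []))
    return seen

cfg = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": [], "orphan": ["exit"]}
print(reachable_blocks(cfg, "entry"))   # 'orphan' never appears: it can be deleted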
Compiler code optimizations help improve the performance of generated machine code in three ways:
1) Local optimizations improve individual basic blocks without considering control or data flow between blocks. This includes constant folding, propagation, and dead code elimination.
2) Global optimizations analyze control and data flow across basic blocks through techniques like common subexpression elimination.
3) Peephole optimizations make small, machine-specific improvements by examining one or two instructions at a time, such as replacing redundant loads and stores or using architectural idioms.
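As a tiny illustration of the third category, here is a hedged peephole pass over an invented store/load tuple format that drops a load immediately re-reading a just-stored value:

def peephole(code):
    # Slide a two-instruction window and drop ('load', reg, addr) when the
    # previous instruction was ('store', reg, addr): the value is already
    # in the register.
    out = []
    for ins in code:
        if (out and ins[0] == "load" and out[-1][0] == "store"
                and out[-1][1] == ins[1] and out[-1][2] == ins[2]):
            continue
        out.append(ins)
    return out

prog = [("store", "r1", "x"), ("load", "r1", "x"), ("add", "r1", "r2")]
print(peephole(prog))   # the redundant load of x disappears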
The document discusses dataflow analysis and liveness analysis. It defines liveness analysis as determining which variables are "live" or may be needed in the future at different points in a program. This allows optimizations like register allocation by mapping live variables that do not overlap in time to the same register. The document outlines the formal definition of liveness, including live-in and live-out variables at each node, and provides an algorithm to compute liveness information through a fixed point iteration on the control flow graph.
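The fixed-point algorithm is short enough to sketch directly; the use/def sets and the three-node example program below are invented for illustration.

def liveness(nodes, succ, use, defs):
    # live-in(n) = use(n) | (live-out(n) - def(n));
    # live-out(n) = union of live-in(s) over successors s. Iterate to fixpoint.
    live_in = {n: set() for n in nodes}
    live_out = {n: set() for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            out = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            inn = use[n] | (out - defs[n])
            if inn != live_in[n] or out != live_out[n]:
                live_in[n], live_out[n], changed = inn, out, True
    return live_in, live_out

# a = 1; b = a + 1; return b  -- 'a' is live only between nodes 1 and 2.
nodes = [1, 2, 3]
succ = {1: [2], 2: [3], 3: []}
use = {1: set(), 2: {"a"}, 3: {"b"}}
defs = {1: {"a"}, 2: {"b"}, 3: set()}
print(liveness(nodes, succ, use, defs)[0])   # live-in per node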
The document discusses code optimization and code generation in compilers. It covers the position of a code generator in the compiler model, code generation, the target machine architecture, instruction selection, register allocation, basic blocks, control flow graphs, common subexpression elimination, dead code elimination, and next-use information. The target machine has registers, instructions with opcodes and addressing modes, and a simple cost model. Code optimization aims to efficiently map source code to the target instruction set architecture.
The document discusses properties of context-free languages. It covers several topics:
1. Simplification of context-free grammars through removal of null productions, unit productions, and useless symbols.
2. Chomsky normal form, which restricts grammar rules to be of the form A → BC or A → a.
3. Turing machines, including their components, transition function, instantaneous descriptions, and examples of languages recognized by Turing machines (a minimal simulator sketch follows this list).
4. Programming techniques for Turing machines like using state information, multiple tracks, and subroutines.
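To ground the Turing-machine material, here is a hedged sketch of a tiny simulator; both the machine (it simply scans a run of a's and accepts at the first blank) and the encoding of the transition function are invented for illustration.

def run_tm(tape, delta, state="q0", accept="qacc", blank="_", max_steps=1000):
    # delta maps (state, symbol) -> (new_state, written_symbol, move).
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, blank))
        if key not in delta:
            return False                       # no move defined: reject
        state, tape[head], move = delta[key]
        head += 1 if move == "R" else -1
    return False                               # did not halt within the bound

delta = {("q0", "a"): ("q0", "a", "R"),        # keep scanning a's
         ("q0", "_"): ("qacc", "_", "R")}      # blank reached: accept
print(run_tm("aaa", delta), run_tm("ab", delta))   # True False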
The document discusses different types of flow control instructions in assembly language including conditional jump instructions, unconditional jump instructions, compare instructions, and looping structures. Conditional jump instructions like JG transfer control based on condition flags. Looping structures include for, while, and repeat loops. High-level language equivalents like if-then, if-then-else, and case statements are also covered.
This chapter discusses assignment operators, mathematical library functions, interactive input, symbolic constants, common errors, and debugging in C++. It covers using functions like sqrt() and includes like <cmath>, taking user input with cin, declaring symbolic constants with const, common errors like missing variables, and debugging methods like tracing and echo printing.
The document discusses various code optimization techniques that can be used to improve the efficiency of a program without changing its functionality. It covers techniques like using appropriate data types, avoiding global variables, using arrays instead of switch-case statements, combining loops, putting loops inside functions, and early termination of loops. Optimizing variables, control structures, functions and loops can help reduce memory usage and improve execution speed of a program.
Python Programming | JNTUA | UNIT 2 | Conditionals and Recursion | FabMinds
This document summarizes key concepts from lectures 9-11 on conditionals and recursion in Python programming. It discusses logical operators, relational operators, boolean expressions, floor division, modulus, conditional execution using if/else statements, chained and nested conditionals. Recursion is defined as a function calling itself and examples of valid and infinite recursion are provided. The document also covers taking keyboard input in Python using the input function.
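A minimal example in the spirit of those lectures, showing a valid recursion with a base case (omit the base case and the calls never stop, raising RecursionError) together with keyboard input:

def countdown(n):
    if n <= 0:            # base case: stop recursing
        print("Blastoff!")
    else:
        print(n)
        countdown(n - 1)  # recursive call moves toward the base case

countdown(3)

# Keyboard input: input() always returns a string, so convert as needed.
# n = int(input("Enter a number: "))
# print("even" if n % 2 == 0 else "odd")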
Loop optimization improves program performance by targeting the inner loops, where most execution time is spent. Common methods include loop-invariant code motion, induction-variable elimination and strength reduction, loop unrolling, and loop fusion. Loop-invariant code motion hoists computations that do not change across iterations out of the loop, avoiding repeated work. Induction-variable techniques replace expensive computations on loop counters with cheaper incremental updates. Loop unrolling replicates loop bodies to reduce loop-control overhead. Loop fusion merges adjacent loops that iterate over the same range, reducing the total loop overhead.
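Two of these transformations can be shown by hand at the source level; the data and the unroll factor of 4 below are arbitrary:

import math

data = list(range(8))

# Loop-invariant code motion: math.sqrt(2) does not depend on the loop.
scale = math.sqrt(2)                 # hoisted out of the loop
scaled = [x * scale for x in data]

# Loop unrolling: process four elements per iteration to cut loop overhead
# (illustrative in Python; compilers do this on generated code).
total, i = 0, 0
while i + 4 <= len(data):
    total += data[i] + data[i+1] + data[i+2] + data[i+3]
    i += 4
total += sum(data[i:])               # leftover elements
print(scaled, total)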
Keywords: data flow analysis, control dependency.
Program analysis is the method of computing properties of a program. It is useful for performing program optimization.
Presenter: Denis Gagne, Trisotech.
Abstract: In this session Denis will present a summary of the work done by the DMN 1.4 Revision Task Force (RTF) and open discussion on what should come next for the Decision Model and Notation standard.
The document summarizes the key changes in the DMN 1.4 specification. It introduces three new boxed expression types - conditional, iterator, and filter. It also describes new features like collection markers on decisions, current date/context FEEL functions, and improved rounding/text FEEL functions. DMN 1.4 was approved in December 2021 and will be published officially in early 2022, with the next version 1.5 planned for March 2023.
This slide set introduces combinational logic. We discuss analysis procedures and design procedures, and several adders, multiplexers, encoders, and decoders.
This document summarizes key concepts about combinational logic circuits. It defines combinational logic as circuits whose outputs depend only on the current inputs, in contrast to sequential logic which also depends on prior inputs. Common combinational circuits are described like half and full adders used for arithmetic, as well as decoders. The design process for combinational circuits is outlined involving specification, formulation, optimization and technology mapping. Implementation of functions using NAND and NOR gates is also discussed.
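For instance, a full adder (the arithmetic building block mentioned above) is just two Boolean equations; the sketch below also prints its truth table:

def full_adder(a, b, cin):
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry out
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))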
Combinational Logic Circuits with examples (1).pdf (RohitkumarYadav80)
This document provides an overview of combinational circuits including adders, subtractors, and code converters. It discusses the design process for combinational circuits and considerations like minimizing gates and propagation time. Specific circuit components are then explained, including half adders, full adders, half subtractors, and full subtractors. Their truth tables and logic diagrams are presented. Finally, code converters are briefly mentioned as an application of combinational circuits.
This document discusses flow chart programming for AVR microcontrollers using Flowcode. It begins by listing the topics to be covered, which include microcontrollers, AVR microcontrollers, flow charts, and Flowcode. It then provides information on microcontrollers and embedded systems in general. It discusses the architecture and features of AVR microcontrollers specifically. It also covers basic flowchart symbols and structures like sequence, decision, repetition, and case. Finally, it introduces Flowcode as a graphical programming language for microcontrollers that allows designing programs using flow charts that can then be simulated and downloaded to microcontrollers.
This document discusses code optimization techniques at various levels, from peephole optimizations within small windows of code to optimizations across entire programs and control flow graphs. It describes opportunities for optimization from programmers, intermediate code, and target code. Specific optimizations covered include constant folding, dead code elimination, common subexpression elimination, constant propagation, copy propagation, and loop optimizations like induction variable strength reduction and loop interchange. Global data flow analysis techniques like live variable analysis are also introduced.
The document discusses various techniques for compiler code optimization including local, global, and peep-hole optimizations. Local optimizations such as constant folding, propagation, and dead code elimination are performed within basic blocks. Global optimizations analyze control and data flow across basic blocks. Peep-hole optimizations make machine-specific improvements by considering a few instructions at a time. The goal of all these optimization techniques is to improve performance by generating more efficient executable code without changing program behavior.
This is a PowerPoint presentation on the topic "Peephole Optimization" from a Compiler Design course; it covers the topic in full.
The document discusses combinational logic circuits including decoders, encoders, multiplexers, demultiplexers, adders, subtractors, and magnitude comparators. It provides details on their design procedures, truth tables, logic diagrams, and implementations using basic logic gates. Combinational logic circuits have outputs that depend only on the current inputs and do not have memory elements.
Computer arithmetic: integer addition and subtraction (ripple-carry adder, carry look-ahead adder, etc.); multiplication (shift-and-add, Booth multiplier, carry-save multiplier, etc.); division (restoring and non-restoring techniques); floating point arithmetic.
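As an illustration of the first multiplication scheme, here is a sketch of shift-and-add multiplication on Python integers; Booth and carry-save multipliers refine this same idea in hardware:

def shift_and_add(a, b):
    # Add a shifted copy of the multiplicand for every set bit of the
    # multiplier -- the schoolbook binary algorithm.
    product = 0
    while b:
        if b & 1:            # current multiplier bit is 1
            product += a     # add the (already shifted) multiplicand
        a <<= 1              # shift multiplicand left
        b >>= 1              # consume one multiplier bit
    return product

print(shift_and_add(13, 11))   # 143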
The document discusses code optimization techniques in compilers. It covers the following key points:
1. Code optimization aims to improve code performance by replacing high-level constructs with more efficient low-level code while preserving program semantics. It occurs at various compiler phases like source code, intermediate code, and target code.
2. Common optimization techniques include constant folding, propagation, algebraic simplification, strength reduction, copy propagation, and dead code elimination. Control and data flow analysis are required to perform many optimizations.
3. Optimizations can be local within basic blocks, global across blocks, or inter-procedural across procedures. Representations like flow graphs, basic blocks, and DAGs are used to apply optimizations at each of these levels.
Design and minimization of reversible programmable logic arrays and its reali... (Sajib Mitra)
Reversible computing dissipates zero energy in terms of information loss at input and also it can detect error of circuit by keeping unique input-output mapping. In this paper, we have proposed a cost effective design of Reversible Programmable Logic Arrays (RPLAs) which is able to realize multi-output ESOP (Exclusive-OR Sum-Of-Product) functions by using a cost effective 3×3 reversible gate, called MG (MUX Gate). Also a new algorithm has been proposed for the calculation of critical path delay of reversible PLAs. The minimization processes consist of algorithms for ordering of output functions followed by the ordering of products. Five lower bounds on the numbers of gates, garbage and quantum costs of reversible PLAs are also proposed. Finally, we have compared the efficiency of proposed design with the existing one by providing benchmark functions analysis. The experimental results show that the proposed design outperforms the existing one in terms of numbers of gates, garbage, quantum costs and delay.
The document discusses Boolean algebra and its applications in combinational logic circuit design. It covers topics like Boolean expressions, standard forms (sum of products and product of sums), converting between forms, truth tables, and determining logic expressions from truth tables. Standard forms allow for simplification using techniques like Karnaugh maps. Boolean algebra is used to analyze and design basic combinational circuits like encoders, decoders, and adders.
This document discusses logic simplification using Karnaugh maps. It begins with an overview of Boolean algebra simplification techniques. It then covers standard forms such as sum-of-products (SOP) and product-of-sums (POS), and how to convert between different forms. The document also discusses mapping logic expressions to Karnaugh maps and using K-map rules for simplification. Truth tables and determining logic expressions from truth tables are also covered.
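For example, a sum-of-products expression can be read directly off a truth table, one minterm per row where the output is 1 (before any K-map minimization); the encoding below is an illustrative sketch:

def sop(names, truth):       # truth: map from input tuple -> 0/1
    terms = []
    for row, out in truth.items():
        if out:
            # Complemented literal (x') for a 0 bit, plain literal for a 1 bit.
            lits = [n if bit else n + "'" for n, bit in zip(names, row)]
            terms.append("".join(lits))
    return " + ".join(terms)

# XOR of two variables: 1 exactly when the inputs differ.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(sop(["A", "B"], xor))   # A'B + AB'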
Similar to CAiSE 2015 - Montali - Declarative Process Modeling in BPMN
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes where objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process that involves interacting candidate, application, job offer and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, conformance checking and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, but by relying on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare that allows constraints to have uncertain probabilities.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities.
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires to handle the two possibly contrasting forces of the cost of the alignment on the one hand and the likelihood of the model trace with respect to which the alignment is computed on the other.
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This case covers the common situation where event data are stored inside the data layer of an ERP or CRM system. This is usually handled using manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graph). The approach is code-less, and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm (3) the annotation of the data model with indications on which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data. This makes it extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs semantic web standard languages for the various steps, and the XES standard as the target format for the event logs.
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes ( DEC2H 2019) In conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
Presentation at EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
CAiSE 2015 - Montali - Declarative Process Modeling in BPMN
1. Declarative Process Modeling in BPMN
Marco Montali
Free University of Bozen-Bolzano
Joint work with Giuseppe De Giacomo, Fabrizio M. Maggi, Marlon Dumas
CAiSE’15
3. Trends in BPM
Declarative modeling
• Highly-variable BPs
• Metaphor: rules/constraints
Imperative modeling
• Traditional, repetitive BPs
• Metaphor: flow-chart
5. Imperative Modeling
Focus: how things must be done
• Explicit description of the process control-flow
Closed modeling
• All that is not explicitly modeled is forbidden
• Exceptions must all be considered at design time
10. Constraint-Based Modeling
Focus: what has to be accomplished
• Explicit description of the relevant business constraints
• Behavioral constraints, best practices, norms, rules, …
Open modeling
• All behaviors are possible, provided that they respect the constraints
• Control-flow left implicit
17. Towards a Suitable Trade-Off
In the literature: many hybrid notations
• Mix declarative and imperative constructs
• Lasagne integration vs spaghetti integration
This suggests that ever-new notations are needed, with a bad impact on understandability.
Do we really need completely new notations?
Our answer: NO!
18. Our Approach
Translate Declare into a notation that
• mediates between flexibility and control
• is a conservative extension of well-known imperative modeling languages (BPMN)
• looks familiar
• can be converted back into standard BPMN
But… why not plain BPMN?
19. Process Responsibilities
• History recognition: given the history of a running instance, compute the current state (or reject)
• Todo list: given the current state, tell which tasks can be executed next (including the possibility of termination)
• Update: given a state and a task, determine the new state
[Figure: a state S offering tasks A, B, C; executing a task updates S to Snew. A sketch of the three operations follows.]
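A minimal Python sketch (mine, not from the talk) of the three responsibilities, realized over a DFA given as a transition table; names such as Dfa and todo_list are hypothetical:

class Dfa:
    def __init__(self, initial, transitions, accepting):
        self.initial = initial          # initial state
        self.transitions = transitions  # dict: (state, task) -> state
        self.accepting = accepting      # set of accepting states

    def history_recognition(self, history):
        """Replay a trace; return the current state, or None to reject."""
        state = self.initial
        for task in history:
            state = self.transitions.get((state, task))
            if state is None:
                return None
        return state

    def todo_list(self, state):
        """Tasks executable next, plus whether termination is allowed."""
        tasks = {t for (s, t) in self.transitions if s == state}
        return tasks, state in self.accepting

    def update(self, state, task):
        """Execute a task in a state; return the new state."""
        return self.transitions[(state, task)]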
20. Declare as a Process?
Automata to the Rescue!
21. Declare and Temporal Logics
• Observation: Declare constraints are formalized using LTL over finite traces (LTLf)
• A Declare model is simply a big LTLf formula
• LTLf corresponds to the star-free fragment of regular expressions
• Intimate connection with finite-state automata
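For concreteness, here is the standard LTLf reading of the response template, instantiated on the running example's task names (my instantiation, not a formula shown on the slide):

\[
\varphi_{\mathit{resp}} \;=\; \Box\bigl(\mathit{pay} \rightarrow \Diamond(\mathit{receipt} \lor \mathit{invoice})\bigr)
\]

A whole Declare model is then the conjunction \(\varphi_1 \land \dots \land \varphi_n\) of its constraints' formulas, interpreted over finite traces.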
23. From Declare to DFA
Pipeline: LTLf → NFA (nondeterministic, via LTLf2aut) → DFA (deterministic, via determinization)
The result is a process!
• allowed tasks: alphabet; task: symbol; trace: word
• history recognition → word prefix recognition
• todo list → return next symbols
• update → transition
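The determinization step is the classical subset construction; a self-contained sketch (mine, not the authors' LTLf2aut implementation), assuming the NFA is given as a transition table:

from collections import deque

def determinize(initial, transitions, accepting, alphabet):
    """Subset construction. transitions: (state, symbol) -> set of states."""
    start = frozenset([initial])
    dfa_trans, dfa_accepting = {}, set()
    seen, queue = {start}, deque([start])
    while queue:
        macro = queue.popleft()          # one DFA state = a set of NFA states
        if macro & accepting:
            dfa_accepting.add(macro)
        for sym in alphabet:
            succ = frozenset(s2 for s in macro
                             for s2 in transitions.get((s, sym), ()))
            if not succ:
                continue                 # no move on this symbol: trace rejected
            dfa_trans[(macro, sym)] = succ
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return start, dfa_trans, dfa_accepting

The exponential blow-up discussed later comes precisely from this powerset of NFA states.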
24. Example
[Figure (Fig. 1 of the paper): the running Declare model over the tasks close order, pay, receipt, and invoice, with a response constraint: whenever a payment is done, a receipt or an invoice must eventually be produced. Under the open, declarative semantics it is possible to send a receipt or an invoice without paying, or to close an order without ever paying, and tasks can be executed several times (paying several times is the case of installments). Next to it, the corresponding DFA with states 0, 1, 2, where pay leads to the non-accepting state 2 until a receipt or an invoice is produced.]
25. Problem Solved (?)
Convert the Declare model into a DFA. The DFA:
• accepts all and only the traces accepted by the Declare model
• can be easily converted into workflow nets, BPMN, …
• is apt to be post-processed
• e.g., theory of regions to discover concurrent parts (cf. [Prescher et al., SIMPDA 2014])
A worked acceptance check follows.
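To make the first bullet concrete, a hand-built, didactic DFA for just the response constraint of the running example (two states instead of the three on the slide, since it ignores the model's other constraints; the encoding is mine):

# States: "ok" (accepting) and "pending" (a payment awaits a receipt/invoice).
TRANS = {("pending", "receipt"): "ok", ("pending", "invoice"): "ok",
         ("pending", "pay"): "pending", ("pending", "close order"): "pending",
         ("ok", "pay"): "pending"}
for t in ("receipt", "invoice", "close order"):
    TRANS[("ok", t)] = "ok"

def accepts(trace):
    """A trace is accepted iff it ends with no pending payment."""
    state = "ok"
    for task in trace:
        state = TRANS[(state, task)]
    return state == "ok"

print(accepts(["pay", "receipt", "close order"]))  # True
print(accepts(["close order", "pay"]))             # False: payment still pending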
26. Can you Digest Spaghetti?
Or: the Issue of Understandability
27. Declare 2 DFA: Complexity
Pipeline: LTLf → NFA (nondeterministic, via LTLf2aut) → DFA (deterministic, via determinization)
Each of the two steps may cause an exponential blow-up.
N.B.: this complexity is unavoidable (and is not just related to concurrency)
28. From [Prescher et al., SIMPDA 2014]
[Figure (Fig. 7 of Prescher et al.): the process mined out of the BPIC 2013 log, as (a) a Declare model and (b) the finite-state automaton derived from it.]
29. From [Prescher et al., SIMPDA 2014]
[Figure: zoom on (b), the finite-state automaton derived from the mined Declare model; its transitions range over the log events Queued, Accepted, Completed, and Unmatched, and the diagram is already spaghetti-like.]
30. From [Prescher et al., SIMPDA 2014]
[Figure: the same automaton once more, with the event names abbreviated to q(ueued), a(ccepted), c(ompleted), and u(nmatched); the tangled structure remains.]
33. BPMN-D Tasks
Notation      | Name           | Semantics
t             | Atomic task    | As in BPMN: perform t
IN {t1,…,tn}  | Inclusive task | Perform a task among t1, …, tn
EX {t1,…,tn}  | Exclusive task | Perform a task different from t1, …, tn
ANY           | Any task       | Perform any task from those available in the business context
Table 2: Overview of BPMN-D activity nodes
A flow connector is a binary, directed relation between nodes in the process. It indicates an ordering relationship between the connected nodes, and implicitly also the state …
34. BPMN-D Flow Connectors
Notation             | Name           | Semantics
A → B                | Sequence flow  | As in BPMN: node B is traversed right after A
A →[IN {t1,…,tn}] B  | Inclusive flow | B is traversed after A, with 0 or more repetitions of tasks from t1, …, tn in between
A →[EX {t1,…,tn}] B  | Exclusive flow | B is traversed after A, with 0 or more repetitions of tasks different from t1, …, tn in between
A →[ANY] B           | Any flow       | B is traversed after A, with 0 or more repetitions of tasks in between
Table 3: Overview of BPMN-D flow connectors
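Tables 2 and 3 translate naturally into data structures; a minimal Python sketch (type and field names are mine, not from the paper; the example node ids are hypothetical):

from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    ATOMIC = "t"   # exactly the named task
    IN = "IN"      # a task among the given set
    EX = "EX"      # a task outside the given set
    ANY = "ANY"    # any available task

@dataclass(frozen=True)
class TaskNode:
    mode: Mode
    tasks: frozenset = frozenset()   # singleton for ATOMIC, empty for ANY

@dataclass(frozen=True)
class FlowConnector:
    source: str                      # id of node A
    target: str                      # id of node B
    mode: Mode = Mode.ATOMIC         # ATOMIC here = plain BPMN sequence flow
    tasks: frozenset = frozenset()   # repeatable tasks allowed in between

# E.g., an inclusive flow as in Fig. 2 (node ids made up for illustration):
flow = FlowConnector("pay", "xor3", Mode.IN, frozenset({"receipt", "invoice"}))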
35. Example
[Figure (Fig. 2 of the paper): an example BPMN-D model for the running process, built from XOR gateways, the atomic tasks close order and pay, and IN {receipt, invoice}, EX {pay}, and IN {pay, close order} constructs. XOR gateways keep the standard BPMN notation and deferred-choice semantics (the choice is freely taken by the resources responsible for the process execution), and suffice to demonstrate the BPMN-D extensions and the translation from Declare. Σ denotes the set of all tasks that can be performed in the given business context.]
36. From BPMN-D to BPMN
• BPMN-D is just a “view” on top of (a fragment of) BPMN
• Once the set of available tasks is known…
• EX constructs can be converted into IN constructs via set-difference (see the sketch below)
• IN constructs can be modularly turned into standard BPMN
[Figure: translation patterns. An IN {t1,…,tn} task becomes a XOR split and join enclosing one branch per task t1…tn; an IN {t1,…,tn} flow becomes a XOR loop block that traverses tasks from t1…tn zero or more times before reaching B.]
38. Modular Unfolding
[Figure (Fig. 3 of the paper): the standard BPMN representation of the BPMN-D diagram of Fig. 2, where every IN/EX construct has been unfolded into XOR gateways and loops over close order, pay, receipt, and invoice.]
39. From Declare to BPMN-D
Pipeline: Declare model → constraint automaton (via Declare 2 CA) → BPMN-D model (via CA 2 BPMN-D)
• Constraint automaton: a DFA with constraints as edge labels
• Correct: the output accepts all and only the traces accepted by the input Declare model
40. Constraint Automaton
• A DFA can have multiple edges between the same two states
• They represent the same state change!
• A constraint automaton combines them into a single “constraint transition” that resembles BPMN-D:
• “t” → normal transition
• IN T → move with a symbol in the set T
• EX T → move with a symbol not in the set T
• ANY → move with any symbol
• Procedure: Declare to DFA, then “compact” the DFA into a constraint automaton (sketched below)
[Figure: a DFA over TASKS {a, b, c, d} whose parallel edges between S1 and S2 are compacted into constraint transitions such as IN {a, b} and EX {d}.]
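A sketch of the compaction (mine; in particular, the heuristic for choosing between IN and EX labels is my assumption, since the slide does not say how the paper picks):

from collections import defaultdict

def compact(transitions, tasks):
    """Group a DFA's parallel edges by (source, target) and relabel
    each group as one constraint transition.
    transitions: dict (state, task) -> state; tasks: the full alphabet."""
    by_pair = defaultdict(set)
    for (src, task), dst in transitions.items():
        by_pair[(src, dst)].add(task)
    ca = {}
    for (src, dst), labels in by_pair.items():
        if labels == tasks:
            ca[(src, dst)] = ("ANY", frozenset())
        elif len(labels) == 1:
            ca[(src, dst)] = ("t", labels)            # a normal transition
        elif len(tasks - labels) < len(labels):
            ca[(src, dst)] = ("EX", tasks - labels)   # fewer tasks to exclude
        else:
            ca[(src, dst)] = ("IN", labels)
    return ca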
41. Example
[Figure: the running Declare model (Fig. 1) and its three-state DFA once more, repeated as the input of the compaction step. Under the open semantics, receipts and invoices can be sent without paying, an order can be closed without ever paying, and tasks can be executed several times (paying several times is the case of installments).]
42. Example
[Figure: the result of the compaction, the constraint automaton with states 0, 1, 2, whose edges carry close order, pay, and the constraint labels IN {receipt, invoice}, EX {pay}, and IN {pay, close order}.]
43. From CA to BPMN-D
Linear translation, in 3 steps (sketched below):
1. Iterate through the states, generating process blocks
2. Interconnect the blocks through tasks obtained from the transitions
3. Remove unnecessary gateways that have only one input and one output
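A self-contained Python sketch of the three steps over a toy graph representation; every name here is my hypothetical scaffolding, not an API from the paper or its tooling:

def ca_to_bpmnd(states, initial, accepting, transitions):
    """transitions: list of (src, label, dst), where label is a
    constraint such as ('t', task), ('IN', T), ('EX', T), or ('ANY',)."""
    nodes, flows = [], []
    # Step 1: one XOR join + XOR split block per state; start event for
    # the initial state, termination choice for accepting states.
    for s in states:
        nodes += [f"join_{s}", f"split_{s}"]
        flows.append((f"join_{s}", f"split_{s}"))
        if s == initial:
            nodes.append("start")
            flows.append(("start", f"join_{s}"))
        if s in accepting:
            nodes.append(f"end_{s}")
            flows.append((f"split_{s}", f"end_{s}"))
    # Step 2: one BPMN-D task per constraint transition, wired between
    # the source split and the target join.
    for i, (src, label, dst) in enumerate(transitions):
        task = f"task_{i}:{label}"
        nodes.append(task)
        flows += [(f"split_{src}", task), (task, f"join_{dst}")]
    # Step 3 (peephole cleanup): drop gateways with exactly one incoming
    # and one outgoing flow, reconnecting their neighbours.
    for g in [n for n in nodes if n.startswith(("join_", "split_"))]:
        ins = [f for f in flows if f[1] == g]
        outs = [f for f in flows if f[0] == g]
        if len(ins) == 1 and len(outs) == 1:
            flows = [f for f in flows if f not in (ins[0], outs[0])]
            flows.append((ins[0][0], outs[0][1]))
            nodes.remove(g)
    return nodes, flows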
44. From CA to BPMN-D: States
1. State: introduces XOR split/join logic
2. Initial state: single start event of the process
3. Final state: choice of termination
[Figure: translation patterns. A state S becomes a XOR join followed by a XOR split; the initial state S0 is preceded by the single start event; a final state Sf additionally offers termination via a XOR split.]
45. From CA to BPMN-D: Transitions
4. Self-loop: no effective state change; executable 0..* times
5. Proper transition: state transformation through task execution
[Figure: translation patterns. A self-loop on S becomes a XOR loop block around the labeled task, executable 0..* times; a proper transition from S1 to S2 becomes the labeled task wired between the two state blocks.]
47. Example
[Figure: step 1 on the constraint automaton of the running example. XOR split/join blocks are generated for states 0, 1, and 2, not yet interconnected by tasks.]
48. Example
[Figure: step 2. The state blocks are interconnected through the tasks close order and pay and the constraint tasks IN {receipt, invoice}, EX {pay}, and IN {pay, close order} obtained from the transitions.]
49. Example
[Figure: the same intermediate model, singling out the XOR gateways that have only one input and one output.]
50. Example
[Figure: step 3. The unnecessary gateways are removed, yielding the final BPMN-D model, i.e., the diagram of Fig. 2.]
51. Conclusion
• BPMN-D: a conservative, “minor” extension of BPMN with declarative constructs
• Can be encoded back into standard BPMN
• Translation mechanism from Declare to BPMN-D
• Directly tackles all regular expressions / LDLf
• Lessons learnt:
• We don’t need completely new notations
• We cannot apply standard techniques off-the-shelf
52. Ongoing/Future Work
• Include parallel gateways and exceptions
• Exception handling derived from “compensation constraints” [De Giacomo et al., BPM 2014]
• Tooling and benchmarking with case studies
• Synthesis of “just sound” BPMN-D processes
• Compliant by design, but not necessarily accepting all the originally intended behaviors
• “Tuning knob” for the trade-off between understandability and coverage of behaviors