Paper presentation at the 16th International Conference on Business Process Management, Sydney, Australia, 13 September 2018. Paper available at: http://kodu.ut.ee/~dumas/pubs/bpm2018precision.pdf
Learning Accurate LSTM Models of Business Processes (Marlon Dumas)
Presentation delivered at the 17th International Conference on Business Process Management (BPM), Vienna, Austria, 3 September 2019. Paper available at: http://kodu.ut.ee/~dumas/pubs/bpm2019lstm.pdf
Presenter: Manuel Camargo
This document provides an overview of algorithm analysis and of determining the time complexity of algorithms. It explains that the running time of an algorithm can be estimated by counting the number of basic operations and expressing the result in asymptotic notation. Examples demonstrate how to analyze the runtime of simple algorithms with loops and nested loops. The key growth rates, such as constant, linear, quadratic, and exponential, are defined. The highest-order term determines the overall time complexity of an algorithm in Big O notation.
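The operation-counting idea summarized above can be made concrete with a small Python sketch (a hypothetical example, not taken from the slides): count how often the basic operation in a nested loop executes, then keep the highest-order term.

```python
def count_pairs(items):
    """Visit every ordered pair with a nested loop; the basic operation
    (the increment in the inner loop) executes n * n times, so the
    running time grows as O(n^2)."""
    n = len(items)
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # basic operation, executed once per (i, j) pair
    return ops

# For input size n the basic operation runs n^2 times; the
# highest-order term n^2 gives the Big O class O(n^2).
```

For `n = 10` items the counter reports exactly 100 executions, matching the n^2 prediction.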
This document presents a new approach called mixed S-D slicing that combines static and dynamic program slicing using object-oriented concepts in C++. Static slicing analyzes the entire program code but produces larger slices, while dynamic slicing produces smaller slices based on a specific execution but is more difficult to compute. The mixed S-D slicing aims to generate dynamic slices faster by leveraging object-oriented features like classes. An example C++ program is provided to demonstrate the S-D slicing approach using concepts like classes, inheritance, and polymorphism. The approach is intended to reduce complexity and aid in debugging object-oriented programs by combining static and dynamic slicing techniques.
Process Mining Reloaded: Event Structures as a Unified Representation of Proc... (Marlon Dumas)
Keynote talk at the 36th International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2015).
Screencast available at: https://youtu.be/9bQr0r_WaoE
- The document summarizes techniques for slicing object-oriented programs. It discusses static and dynamic slicing, and limitations of previous approaches.
- It proposes a new intermediate representation called the Object-Oriented System Dependence Graph (OSDG) to more precisely capture dependencies in object-oriented programs. The OSDG explicitly represents data members of objects.
- An edge-marking algorithm is presented for efficiently performing dynamic slicing of object-oriented programs using the OSDG. This avoids recomputing the entire slice after each statement.
Discovering Branching Conditions from Business Process Execution Logs (Marlon Dumas)
Paper presentation given at the International Conference on Fundamental Approaches to Software Engineering (FASE) in March 2013. The paper is available online.
Keynote: Machine Learning for Design Automation at DAC 2018 (Manish Pandey)
Manish Pandey gave a keynote talk on transforming EDA with machine learning and discussed opportunities and challenges. He described how machine learning can be applied across different design abstraction levels from formal verification to silicon engineering. Pandey also discussed using machine learning techniques like reinforcement learning and word embeddings to optimize formal verification, simulation, and mask synthesis. Finally, he outlined challenges with data availability and model development for machine learning in EDA.
New Design Architecture of Chaotic Secure Communication System Combined with ... (ijtsrd)
In this paper, the exponential synchronization of a secure communication system is introduced, and a novel secure communication design combined with a linear receiver is constructed to ensure the global exponential stability of the resulting error signals. In addition, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results.
Yeong-Jeu Sun, "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-1, December 2020. URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf
Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
Multi-Perspective Comparison of Business Process Variants Based on Event Logs (Marlon Dumas)
This document presents a method for multi-perspective comparison of business process variants based on event logs. The method involves constructing perspective graphs from different abstractions of event logs to analyze processes from different perspectives based on event attributes. Differential perspective graphs are then used to identify statistically significant differences between two event logs, representing different process variants. The method was experimentally applied to compare differences between divisions in an IT incident handling process using various abstractions and observations. The experiments revealed differences in activity statuses, control flows between countries, and control flow frequencies over time between the divisions.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Machine learning techniques can be applied in formal verification in several ways:
1) To enhance current formal verification tools by automating tasks like debugging, specification mining, and theorem proving.
2) To enable the development of new formal verification tools by applying machine learning to problems like SAT solving, model checking, and property checking.
3) Specific applications include using machine learning for debugging and root cause identification, learning specifications from runtime traces, aiding theorem proving by selecting heuristics, and tuning SAT solver parameters and selection.
Fast Algorithm for Computing the Discrete Hartley Transform of Type-II (ijeei-iaes)
This document presents a new fast algorithm for computing the discrete Hartley transform of type-II (DHT-II) using a radix-2 decimation-in-time approach. The algorithm decomposes the DHT-II computation into butterfly operations involving two DHT-IIs of length N/2. This allows an in-place implementation using a regular butterfly structure. The computational complexity of the new algorithm is analyzed and shown to require fewer operations than an existing DHT-II algorithm by Hu. A comparison of the algorithms demonstrates the new approach has better structural and computational complexity properties for real-time applications.
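For reference, the transform the paper accelerates can be written down directly. The sketch below is a naive O(N^2) DHT-II computed straight from one common definition (normalization and sign conventions vary between authors, and the paper's radix-2 butterfly decomposition is not reproduced here); such a direct version is mainly useful as a correctness check for a fast implementation.

```python
import math

def cas(theta):
    # cas(x) = cos(x) + sin(x), the Hartley kernel
    return math.cos(theta) + math.sin(theta)

def dht2(x):
    """Naive O(N^2) type-II discrete Hartley transform:
        X(k) = sum_n x(n) * cas(pi * (2n + 1) * k / N)
    The paper's radix-2 decimation-in-time algorithm evaluates the
    same transform via two length-N/2 DHT-IIs combined in a
    butterfly structure; this direct form is the reference."""
    N = len(x)
    return [sum(x[n] * cas(math.pi * (2 * n + 1) * k / N) for n in range(N))
            for k in range(N)]
```

As a sanity check, a constant input concentrates all energy in the k = 0 bin: `dht2([1, 1, 1, 1])` gives 4 at k = 0 and (numerically) 0 elsewhere.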
Metaheuristic Optimization for Automated Business Process Discovery (Marlon Dumas)
Research paper presentation at the 17th International Conference on Business Process Management (BPM'2019) in Vienna, 3 September 2019. Paper available at: http://kodu.ut.ee/~dumas/pubs/bpm2019-optimization.pdf
Presentation delivered by Adriano Augusto
Scalable Conformance Checking of Business Processes (Marlon Dumas)
This document discusses techniques for scalable conformance checking of business process models against event logs. It presents challenges with existing approaches related to scalability for large logs. The research aims to improve scalability while still providing a complete set of differences between the model and log. The approach compresses the model and log into Deterministic Finite Automata and a State Space Partitioning, then uses these compressed structures to efficiently compute optimal alignments and behavioral differences. An evaluation on real-world and artificial datasets demonstrates the approach outperforms traditional trace alignments in scalability for large logs.
Some Engg. Applications of Matrices and Partial Derivatives (SanjaySingh011996)
This document contains a submission by three students to Dr. Sona Raj Mam regarding partial differentiation, matrices and determinants, and eigenvectors and eigenvalues. It provides examples of how these mathematical concepts are applied in fields like engineering. Partial differentiation is used in economics to analyze demand and in image processing for edge detection. Matrices and determinants allow representing linear transformations in graphics software. Eigenvalues and eigenvectors have applications in areas like computer science, smartphone apps, and modeling structures in civil engineering. The document also provides real-world examples and references textbooks and websites for further information.
Robust PID Controller Design for Non-Minimum Phase Systems using Magnitude Op... (IRJET Journal)
This document discusses two approaches for designing a controller for non-minimum phase systems: 1) the magnitude optimum and multiple integration method, and 2) a numerical optimization approach. The magnitude optimum method uses areas calculated from the process step response to determine the PID controller parameters, eliminating the need to estimate process parameters directly. The numerical optimization approach formulates the controller design as an optimization problem to minimize sensitivity functions in the closed-loop system. Both approaches are presented as ways to design robust controllers for non-minimum phase systems.
This document discusses analyzing the time efficiency of algorithms. It covers:
1. Defining key terms like algorithms, input size, basic operations, worst/best/average cases.
2. Methods for analyzing efficiency including determining order of growth and asymptotic notation like O, Ω, Θ.
3. Examples of analyzing non-recursive algorithms like maximum element (O(n)), matrix multiplication (O(n^3)), and counting binary digits (O(log n)).
In summary, it provides foundations for analyzing the time complexity of algorithms using order of growth and asymptotic notation to evaluate efficiency classes like constant, logarithmic, linear, and exponential.
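Two of the examples named above can be sketched directly in Python (illustrative code, not taken from the document), with the operation count that justifies each efficiency class noted in the docstring:

```python
def max_element(a):
    """Scan the list once; the comparison (the basic operation)
    runs n - 1 times, so the algorithm is O(n)."""
    m = a[0]
    for v in a[1:]:
        if v > m:
            m = v
    return m

def binary_digit_count(n):
    """Count the binary digits of a positive integer by halving
    until zero; the loop body runs floor(log2 n) + 1 times,
    so the algorithm is O(log n)."""
    count = 0
    while n > 0:
        count += 1
        n //= 2
    return count
```

For example, `binary_digit_count(8)` returns 4, matching floor(log2 8) + 1.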
01 - Fundamentals of the Analysis of Algorithm Efficiency.pptx (Ishtiaq Rasool Khan)
This document discusses algorithms and their analysis. It begins by defining an algorithm as a sequence of unambiguous instructions to solve a problem in a finite amount of time. Key aspects of algorithms that are analyzed include correctness, time efficiency, space efficiency, and optimality. Common approaches to analysis involve theoretical analysis to determine asymptotic runtime, and empirical analysis. Methods for analyzing time efficiency include identifying the basic operation, determining costs for best, average, and worst cases, and deriving formulas for the number of times the basic operation is performed. Asymptotic analysis focuses on identifying the growth rate or order of an algorithm's runtime. Examples are provided to demonstrate analyzing simple algorithms using summation formulas. The document also discusses analyzing recursive algorithms.
This document discusses the analysis of algorithm efficiency. It begins by defining an algorithm and listing key attributes like correctness, time efficiency, space efficiency and optimality. It then discusses analyzing time efficiency theoretically by determining the basic operation and counting its repetitions as input size increases. Different cases like best-case, average-case and worst-case are examined. Key concepts in asymptotic analysis like Big O, Omega and Theta notation are introduced to classify algorithms by order of growth. Examples are provided to illustrate time complexity analysis for non-recursive algorithms.
Dr Omar Presentation of (on the solution of Multiobjective (1).ppt (eyadabdallah)
This document presents a solution algorithm for solving a multiobjective cutting stock problem in the aluminum industry where scrap is considered a fuzzy parameter. The problem involves casting molten aluminum into rods and cutting them into logs to meet customer demands while minimizing costs from inventory and scrap. The algorithm formulates the problem using fuzzy set concepts and models scrap as a fuzzy number. It then finds α-Pareto optimal solutions for different α-levels using a weighted objective function and nonlinear programming solved with branch-and-bound methods. An example demonstrates implementing the method.
The paper examines the problem of systems redesign within the context of passive electrical networks and, through analogies, also provides the means of addressing the re-design of mechanical networks. The problems addressed here are special cases of the more general network redesign problem. Redesigning autonomous passive electric networks involves changing the network's natural dynamics by modifying the types of elements, possibly their values, the interconnection topology, and possibly adding or eliminating parts of the network. We investigate the modelling of systems whose structure is not fixed but evolves during the system lifecycle. As such, this problem differs considerably from a standard control problem, since it involves changing the system itself without control, and it aims to achieve desirable system properties, as these may be expressed by the natural frequencies, through system re-engineering. In fact, this problem involves the selection of alternative values for dynamic and non-dynamic elements within a fixed interconnection topology and/or alteration of the network interconnection topology, with possible evolution of the cardinality of physical elements (increase of elements, branches). The aim of the paper is to define an appropriate representation framework that allows the deployment of control-theoretic tools for the re-engineering of properties of a given network. We use impedance and admittance modelling for passive electrical networks and develop a systems framework capable of addressing "life-cycle design issues" of networks, where the problems of alteration of existing topology and element values, as well as issues of growth or death of parts of the network, are addressed.
We use the Natural Impedance/Admittance (NI-A) models and establish a representation of the different types of transformations on such models. This representation provides the means for an appropriate formulation of natural frequencies assignment using the Determinantal Assignment Problem framework defined on appropriately structured transformations. The developed natural representations of transformations are expressed as additive structured transformations. For the simpler case of RL or RC networks, it is shown that the single-parameter variation problem (dynamic or non-dynamic) is equivalent to a Root Locus problem.
Cauchy’s Inequality based study of the Differential Equations and the Simple ... (IRJET Journal)
1. The paper studies the multivariate generalization of Cauchy's inequality 1 + x ≤ e^x, where x is a non-negative real number. This generalization can help solve certain ordinary differential equations (ODEs) and population dynamics problems.
2. The paper proves the multivariate generalization of the inequality and shows that equality holds only when the values are all equal to 0. It also analyzes some qualitative properties of solutions to ODE Cauchy problems using this generalization.
3. Different approaches are taken to directly prove the multivariate inequality, using notions of monotone functions, the Beppo Levi theorem, and the divided-differences mean value theorem. Repetitions among the variables are also allowed and considered.
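The multivariate form can be spot-checked numerically. The sketch below assumes the generalization takes the natural product form prod(1 + x_i) ≤ exp(sum x_i) for non-negative x_i (the paper's exact statement may differ), and verifies it on a grid, with equality exactly when every x_i is 0:

```python
import math
from itertools import product

def product_side(xs):
    """Left-hand side of the assumed inequality: prod(1 + x_i)."""
    p = 1.0
    for x in xs:
        p *= 1.0 + x
    return p

# Check prod(1 + x_i) <= exp(sum x_i) over a grid of non-negative
# triples; the scalar case 1 + x <= e^x applied factor by factor
# makes the product bound plausible.
for xs in product([0.0, 0.5, 1.0, 2.0], repeat=3):
    assert product_side(xs) <= math.exp(sum(xs)) + 1e-12
```

At the all-zero point both sides equal 1, which is the equality case noted above.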
Parametric sensitivity analysis of a mathematical model of facultative mutualism (IOSR Journals)
The complex dynamics of facultative mutualism is best described by a system of continuous non-linear first order ordinary differential equations. The methods of 1-norm, 2-norm, and infinity-norm will be used to quantify and differentiate the different forms of the sensitivity of model parameters. These contributions will be presented and discussed.
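The three norms named above are standard; a minimal Python sketch (illustrative, with a hypothetical sensitivity vector) shows how each one summarizes a vector of parameter sensitivities differently:

```python
def norm_1(v):
    """1-norm: total absolute sensitivity across all parameters."""
    return sum(abs(x) for x in v)

def norm_2(v):
    """2-norm: Euclidean magnitude of the sensitivity vector."""
    return sum(x * x for x in v) ** 0.5

def norm_inf(v):
    """Infinity-norm: the single most influential parameter."""
    return max(abs(x) for x in v)

s = [0.3, -1.2, 0.5]  # hypothetical parameter-sensitivity vector
```

For `s` above, the infinity-norm (1.2) singles out the second parameter as dominant, while the 1-norm (2.0) aggregates all three contributions.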
This document discusses fuzzy identification of systems and its applications to modeling and control. It begins by introducing fuzzy logic and fuzzy controllers. It then provides details on the format of fuzzy implications and reasoning algorithms using Takagi-Sugeno controllers. An identification algorithm is presented as a mathematical tool to build fuzzy models of systems. The document applies this fuzzy identification method to modeling a human operator's control of a water cleaning process and a converter in a steel-making process. Results show the fuzzy models accurately capture the operators' actions.
Dimensional analysis is a technique to reduce the number of variables in a physical problem by expressing them as dimensionless parameters. It enables scaling between experiments of different physical dimensions. The document discusses dimensional analysis methods including the Buckingham Pi Theorem and exponent method. It provides an example application to a hydraulic jump, identifying the relevant variables and deriving the dimensionless parameters of Reynolds number, Froude number, and depth ratio that the problem depends on.
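The dimensionless parameters derived for the hydraulic jump example are straightforward to compute; a short Python sketch (illustrative, not from the document) shows the two flow numbers and what they measure:

```python
import math

def reynolds(velocity, length, kinematic_viscosity):
    """Re = V * L / nu: ratio of inertial to viscous effects."""
    return velocity * length / kinematic_viscosity

def froude(velocity, depth, g=9.81):
    """Fr = V / sqrt(g * y): ratio of flow speed to gravity-wave
    speed; a hydraulic jump requires supercritical inflow, Fr > 1."""
    return velocity / math.sqrt(g * depth)
```

Both functions return pure numbers, which is what lets results scale between experiments of different physical size, as the Buckingham Pi Theorem guarantees.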
This document discusses asymptotic analysis and big-O notation for analyzing the time complexity of algorithms. It begins by defining key concepts like growth rate, asymptotic notations such as O(n), Ω(n) and Θ(n). It then provides examples of analyzing the time efficiency of different algorithms like finding the maximum element in an array and computing prefix averages. The document explains how to determine the asymptotic complexity by counting the total number of operations and expressing it using big-O notation. It also discusses properties of big-O notation like rules for dropping constant factors and lower order terms.
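The prefix-averages example mentioned above is the classic illustration of how counting operations separates two algorithms that compute the same result; a Python sketch (illustrative code):

```python
def prefix_averages_quadratic(a):
    """A[i] = average of a[0..i]; recomputing each prefix sum from
    scratch costs 1 + 2 + ... + n operations, i.e. O(n^2)."""
    return [sum(a[:i + 1]) / (i + 1) for i in range(len(a))]

def prefix_averages_linear(a):
    """Carrying a running sum does the same job with one addition
    per element, i.e. O(n)."""
    out, running = [], 0.0
    for i, v in enumerate(a):
        running += v
        out.append(running / (i + 1))
    return out
```

Dropping the constant factors and lower-order terms, as the big-O rules in the document prescribe, leaves n^2 versus n as the two growth rates.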
Predicting organic reaction outcomes with Weisfeiler-Lehman network (Kazuki Fujikawa)
This document discusses neural message passing networks for modeling quantum chemistry. It defines message passing networks as having message functions that compute messages from neighboring node states, vertex update functions that update node states based on accumulated messages, and a readout function that produces an output for the full graph. It provides examples of specific message, update, and readout functions used in existing message passing models like interaction networks and molecular graph convolutions.
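The message/update/readout decomposition described above can be shown with a deliberately tiny sketch (toy scalar node states and hand-rolled functions; real models use vector states and learned neural networks for all three roles):

```python
def message_passing_step(node_states, edges, message_fn, update_fn):
    """One round of message passing: each node gathers messages
    computed from its neighbours' states, then updates its own
    state from the accumulated (here: summed) messages."""
    inbox = {v: [] for v in node_states}
    for (u, v) in edges:  # directed edge u -> v
        inbox[v].append(message_fn(node_states[u]))
    return {v: update_fn(node_states[v], sum(inbox[v]))
            for v in node_states}

def readout(node_states):
    """Graph-level readout: here simply the sum over node states."""
    return sum(node_states.values())
```

With identity messages and additive updates on a two-node graph with edges in both directions, each node ends up holding the sum of both original states, and the readout aggregates them into one graph-level number.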
Keynote: Machine Learning for Design Automation at DAC 2018Manish Pandey
Manish Pandey gave a keynote talk on transforming EDA with machine learning and discussed opportunities and challenges. He described how machine learning can be applied across different design abstraction levels from formal verification to silicon engineering. Pandey also discussed using machine learning techniques like reinforcement learning and word embeddings to optimize formal verification, simulation, and mask synthesis. Finally, he outlined challenges with data availability and model development for machine learning in EDA.
New Design Architecture of Chaotic Secure Communication System Combined with ...ijtsrd
In this paper, the exponential synchronization of secure communication system is introduced and a novel secure communication design combined with linear receiver is constructed to ensure the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results. Yeong-Jeu Sun "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-1 , December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf Paper URL : https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
Multi-Perspective Comparison of Business Processes Variants Based on Event LogsMarlon Dumas
This document presents a method for multi-perspective comparison of business process variants based on event logs. The method involves constructing perspective graphs from different abstractions of event logs to analyze processes from different perspectives based on event attributes. Differential perspective graphs are then used to identify statistically significant differences between two event logs, representing different process variants. The method was experimentally applied to compare differences between divisions in an IT incident handling process using various abstractions and observations. The experiments revealed differences in activity statuses, control flows between countries, and control flow frequencies over time between the divisions.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Machine learning techniques can be applied in formal verification in several ways:
1) To enhance current formal verification tools by automating tasks like debugging, specification mining, and theorem proving.
2) To enable the development of new formal verification tools by applying machine learning to problems like SAT solving, model checking, and property checking.
3) Specific applications include using machine learning for debugging and root cause identification, learning specifications from runtime traces, aiding theorem proving by selecting heuristics, and tuning SAT solver parameters and selection.
Fast Algorithm for Computing the Discrete Hartley Transform of Type-IIijeei-iaes
This document presents a new fast algorithm for computing the discrete Hartley transform of type-II (DHT-II) using a radix-2 decimation-in-time approach. The algorithm decomposes the DHT-II computation into butterfly operations involving two DHT-IIs of length N/2. This allows an in-place implementation using a regular butterfly structure. The computational complexity of the new algorithm is analyzed and shown to require fewer operations than an existing DHT-II algorithm by Hu. A comparison of the algorithms demonstrates the new approach has better structural and computational complexity properties for real-time applications.
Metaheuristic Optimization for Automated Business Process DiscoveryMarlon Dumas
Research paper presentation at the 17th International Conference on Business Process Management (BPM'2019) in Vienna, 3 September 2019. Paper available at: http://kodu.ut.ee/~dumas/pubs/bpm2019-optimization.pdf
Presentation delivered by Adriano Augusto
Scalable Conformance Checking of Business ProcessesMarlon Dumas
This document discusses techniques for scalable conformance checking of business process models against event logs. It presents challenges with existing approaches related to scalability for large logs. The research aims to improve scalability while still providing a complete set of differences between the model and log. The approach compresses the model and log into Deterministic Finite Automata and a State Space Partitioning, then uses these compressed structures to efficiently compute optimal alignments and behavioral differences. An evaluation on real-world and artificial datasets demonstrates the approach outperforms traditional trace alignments in scalability for large logs.
Some Engg. Applications of Matrices and Partial DerivativesSanjaySingh011996
This document contains a submission by three students to Dr. Sona Raj Mam regarding partial differentiation, matrices and determinants, and eigenvectors and eigenvalues. It provides examples of how these mathematical concepts are applied in fields like engineering. Partial differentiation is used in economics to analyze demand and in image processing for edge detection. Matrices and determinants allow representing linear transformations in graphics software. Eigenvalues and eigenvectors have applications in areas like computer science, smartphone apps, and modeling structures in civil engineering. The document also provides real-world examples and references textbooks and websites for further information.
Robust PID Controller Design for Non-Minimum Phase Systems using Magnitude Op...IRJET Journal
This document discusses two approaches for designing a controller for non-minimum phase systems: 1) the magnitude optimum and multiple integration method, and 2) a numerical optimization approach. The magnitude optimum method uses areas calculated from the process step response to determine the PID controller parameters, eliminating the need to estimate process parameters directly. The numerical optimization approach formulates the controller design as an optimization problem to minimize sensitivity functions in the closed-loop system. Both approaches are presented as ways to design robust controllers for non-minimum phase systems.
This document discusses analyzing the time efficiency of algorithms. It covers:
1. Defining key terms like algorithms, input size, basic operations, worst/best/average cases.
2. Methods for analyzing efficiency including determining order of growth and asymptotic notation like O, Ω, Θ.
3. Examples of analyzing non-recursive algorithms like maximum element (O(n)), matrix multiplication (O(n^3)), and counting binary digits (O(log n)).
In summary, it provides foundations for analyzing the time complexity of algorithms using order of growth and asymptotic notation to evaluate efficiency classes like constant, logarithmic, linear, and exponential.
01 - Fundamentals of the Analysis of Algorithm Efficiency.pptxIshtiaq Rasool Khan
This document discusses algorithms and their analysis. It begins by defining an algorithm as a sequence of unambiguous instructions to solve a problem in a finite amount of time. Key aspects of algorithms that are analyzed include correctness, time efficiency, space efficiency, and optimality. Common approaches to analysis involve theoretical analysis to determine asymptotic runtime, and empirical analysis. Methods for analyzing time efficiency include identifying the basic operation, determining costs for best, average, and worst cases, and deriving formulas for the number of times the basic operation is performed. Asymptotic analysis focuses on identifying the growth rate or order of an algorithm's runtime. Examples are provided to demonstrate analyzing simple algorithms using summation formulas. The document also discusses analyzing recursive algorithms.
This document discusses the analysis of algorithm efficiency. It begins by defining an algorithm and listing key attributes like correctness, time efficiency, space efficiency and optimality. It then discusses analyzing time efficiency theoretically by determining the basic operation and counting its repetitions as input size increases. Different cases like best-case, average-case and worst-case are examined. Key concepts in asymptotic analysis like Big O, Omega and Theta notation are introduced to classify algorithms by order of growth. Examples are provided to illustrate time complexity analysis for non-recursive algorithms.
Dr Omar Presrntation of (on the solution of Multiobjective (1).ppteyadabdallah
This document presents a solution algorithm for solving a multiobjective cutting stock problem in the aluminum industry where scrap is considered a fuzzy parameter. The problem involves casting molten aluminum into rods and cutting them into logs to meet customer demands while minimizing costs from inventory and scrap. The algorithm formulates the problem using fuzzy set concepts and models scrap as a fuzzy number. It then finds α-Pareto optimal solutions for different α-levels using a weighted objective function and nonlinear programming solved with branch-and-bound methods. An example demonstrates implementing the method.
The paper examines the problem of systems redesign within the context of passive electrical networks and through analogies provides also the means of addressing issues of re-design of mechanical networks. The problem addressed here are special cases of the more general network redesign problem. Redesigning autonomous passive electric networks involves changing the network natural dynamics by modification of the types of elements, possibly their values, interconnection topology and possibly addition, or elimination of parts of the network. We investigate the modelling of systems, whose structure is not fixed but evolves during the system lifecycle. As such, this is a problem that differs considerably from a standard control problem, since it involves changing the system itself without control and aims to achieve the desirable system properties, as these may be expressed by the natural frequencies by system re-engineering. In fact, this problem involves the selection of alternative values for dynamic elements and non-dynamic elements within a fixed interconnection topology and/or alteration of the network interconnection topology and possible evolution of the cardinality of physical elements (increase of elements, branches). The aim of the paper is to define an appropriate representation framework that allows the deployment of control theoretic tools for the re-engineering of properties of a given network. We use impedance and admittance modelling for passive electrical networks and develop a systems framework that is capable of addressing “life-cycle design issues” of networks where the problems of alteration of existing topology and values of the elements, as well as issues of growth, or death of parts of the network are addressed.
We use the Natural Impedance/Admittance (NI-A) models and we establish a representation of the different types of transformations on such models. This representation provides the means for an appropriate formulation of natural frequencies assignment using the Determinantal Assignment Problem framework defined on appropriately structured transformations. The developed natural representations of transformations are expressed as additive structured transformations. For the simpler case of RL or RC networks, it is shown that the single parameter variation problem (dynamic or non-dynamic) is equivalent to Root Locus problems.
Cauchy’s Inequality based study of the Differential Equations and the Simple ...IRJET Journal
1. The paper studies the multivariate generalization of Cauchy's inequality 1 + x ≤ e^x, where x is a non-negative real number. This generalization can help solve certain ordinary differential equations (ODEs) and population dynamics problems.
2. The paper proves the multivariate generalization of the inequality and shows that equality holds only when the values are all equal to 0. It also analyzes some qualitative properties of solutions to ODE Cauchy problems using this generalization.
3. Different approaches are taken to directly prove the multivariate inequality using notions of monotone functions, Beppo Levi theorem, and divided differences mean value theorem. Allowed repetitions in the variables are also considered.
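The scalar inequality and one natural multivariate form can be checked numerically. Note that the product form below is an illustrative generalization (it follows by multiplying the univariate inequality termwise), not necessarily the exact form studied in the paper:

```python
import math
import random

def univariate_holds(x):
    # Cauchy's inequality: 1 + x <= e^x (checked here for x >= 0)
    return 1 + x <= math.exp(x)

def multivariate_holds(xs):
    # One natural multivariate form: prod(1 + x_i) <= exp(sum(x_i)),
    # obtained by multiplying the univariate inequality over all i.
    prod = 1.0
    for x in xs:
        prod *= (1 + x)
    return prod <= math.exp(sum(xs)) + 1e-12  # small tolerance for rounding

random.seed(0)
assert all(univariate_holds(random.uniform(0, 10)) for _ in range(1000))
assert all(multivariate_holds([random.uniform(0, 5) for _ in range(4)])
           for _ in range(1000))
```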
Parametric sensitivity analysis of a mathematical model of facultative mutualismIOSR Journals
The complex dynamics of facultative mutualism is best described by a system of continuous non-linear first order ordinary differential equations. The methods of 1-norm, 2-norm, and infinity-norm will be used to quantify and differentiate the different forms of the sensitivity of model parameters. These contributions will be presented and discussed.
This document discusses fuzzy identification of systems and its applications to modeling and control. It begins by introducing fuzzy logic and fuzzy controllers. It then provides details on the format of fuzzy implications and reasoning algorithms using Takagi-Sugeno controllers. An identification algorithm is presented as a mathematical tool to build fuzzy models of systems. The document applies this fuzzy identification method to modeling a human operator's control of a water cleaning process and a converter in a steel-making process. Results show the fuzzy models accurately capture the operators' actions.
Dimensional analysis is a technique to reduce the number of variables in a physical problem by expressing them as dimensionless parameters. It enables scaling between experiments of different physical dimensions. The document discusses dimensional analysis methods including the Buckingham Pi Theorem and exponent method. It provides an example application to a hydraulic jump, identifying the relevant variables and deriving the dimensionless parameters of Reynolds number, Froude number, and depth ratio that the problem depends on.
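The dimensionless groups named in the abstract are straightforward to compute. The upstream velocity, depth, and viscosity below are assumed illustrative values, and `jump_depth_ratio` applies the standard Bélanger relation for a hydraulic jump:

```python
import math

def reynolds(velocity, length, kinematic_viscosity):
    # Re = V L / nu : ratio of inertial to viscous forces
    return velocity * length / kinematic_viscosity

def froude(velocity, depth, g=9.81):
    # Fr = V / sqrt(g y) : ratio of inertial to gravitational forces
    return velocity / math.sqrt(g * depth)

def jump_depth_ratio(fr1):
    # Belanger equation for a hydraulic jump: y2/y1 = (sqrt(1 + 8 Fr1^2) - 1) / 2
    return (math.sqrt(1 + 8 * fr1**2) - 1) / 2

# Illustrative upstream conditions (assumed values):
V, y = 3.0, 0.2            # m/s, m
nu_water = 1.0e-6          # m^2/s for water at ~20 degC
print(f"Re = {reynolds(V, y, nu_water):.3g}")    # turbulent regime
print(f"Fr = {froude(V, y):.3g}")                # supercritical flow
print(f"y2/y1 = {jump_depth_ratio(froude(V, y)):.3g}")
```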
This document discusses asymptotic analysis and big-O notation for analyzing the time complexity of algorithms. It begins by defining key concepts like growth rate, asymptotic notations such as O(n), Ω(n) and Θ(n). It then provides examples of analyzing the time efficiency of different algorithms like finding the maximum element in an array and computing prefix averages. The document explains how to determine the asymptotic complexity by counting the total number of operations and expressing it using big-O notation. It also discusses properties of big-O notation like rules for dropping constant factors and lower order terms.
Predicting organic reaction outcomes with weisfeiler lehman networkKazuki Fujikawa
This document discusses neural message passing networks for modeling quantum chemistry. It defines message passing networks as having message functions that update node states based on neighboring node states, vertex update functions that update node states based on accumulated messages, and a readout function that produces an output for the full graph. It provides examples of specific message, update, and readout functions used in existing message passing models like interaction networks and molecular graph convolutions.
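The three ingredients just described (message, update, readout) can be sketched on a toy graph. The function names and the specific sum/tanh choices below are illustrative stand-ins, not the exact functions from any particular model:

```python
import math

def message(h_u, h_v):
    # Message function: depends on the two neighboring node states.
    return [a + b for a, b in zip(h_u, h_v)]

def update(h_v, m_v):
    # Vertex update: combine the old state with the accumulated messages.
    return [math.tanh(a + b) for a, b in zip(h_v, m_v)]

def readout(states):
    # Readout: permutation-invariant sum over all node states.
    dim = len(next(iter(states.values())))
    return [sum(h[i] for h in states.values()) for i in range(dim)]

def mpnn_step(adj, states):
    # One round of message passing over the whole graph.
    new_states = {}
    for v, neigh in adj.items():
        m_v = [0.0] * len(states[v])
        for u in neigh:
            m_uv = message(states[u], states[v])
            m_v = [a + b for a, b in zip(m_v, m_uv)]
        new_states[v] = update(states[v], m_v)
    return new_states

adj = {0: [1], 1: [0, 2], 2: [1]}                 # a 3-node path graph
h = {0: [0.1, 0.0], 1: [0.2, 0.3], 2: [0.0, -0.1]}
h = mpnn_step(adj, h)
print(readout(h))
```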
STATISTICAL ANALYSIS OF FUZZY LINEAR REGRESSION MODEL BASED ON DIFFERENT DIST...Wireilla
Using fuzzy linear regression model, the least squares estimation for linear regression (LR) fuzzy number is studied by Euclidean distance, Y-K distance and Dk distance respectively. It is concluded that the three different distances have the same coefficient of the least squares estimation. The data simulation shows the correctness of this conclusion.
Applications Of One Type Of Euler-Lagrange Fractional Differential EquationIRJET Journal
This document presents applications of one type of Euler-Lagrange fractional differential equation involving the composition of left Riemann-Liouville and right Caputo fractional derivatives of order α, where 0 < α < 1. First, some examples of ordinary harmonic oscillators described by second-order differential equations are transformed into this fractional differential equation form. Next, the expanded form of the fractional differential equation is obtained using finite differences and the definitions of the fractional derivatives. This is also expressed in matrix notation. Finally, the document describes using Matlab script to numerically solve this type of equation and graphically represent the approximate solutions for various values of α.
The document provides an overview of correlation and regression analysis, time series models, and cost indexes. It defines correlation, regression analysis, and their importance and applications. It discusses simple linear regression equations, assumptions, and hypothesis testing. It also covers multiple linear regression, moving averages, exponential smoothing, and quantitative measures for evaluating time series models. The document serves as the agenda for the Advanced Economics for Engineers course taught by Leemary Berrios, Irving Rivera, and Wilfredo Robles.
This document discusses algorithm analysis and asymptotic notation. It introduces algorithms for computing prefix averages in arrays. One algorithm runs in quadratic time O(n^2) by applying the definition directly. A more efficient linear time O(n) algorithm is also presented that maintains a running sum. Asymptotic analysis determines the worst-case running time of an algorithm as a function of the input size using big-O notation. This provides an analysis of algorithms that is independent of implementation details and hardware.
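The two prefix-average algorithms contrasted above can be sketched directly; this is a generic illustration of the quadratic versus linear versions:

```python
def prefix_averages_quadratic(xs):
    # Direct definition: A[i] = (x[0] + ... + x[i]) / (i + 1)  -> O(n^2),
    # because the inner sum re-scans the prefix for every i.
    return [sum(xs[:i + 1]) / (i + 1) for i in range(len(xs))]

def prefix_averages_linear(xs):
    # Maintain a running sum so each element is touched once -> O(n).
    result, running = [], 0.0
    for i, x in enumerate(xs):
        running += x
        result.append(running / (i + 1))
    return result

data = [1, 3, 5, 7]
assert prefix_averages_quadratic(data) == prefix_averages_linear(data)
print(prefix_averages_linear(data))   # [1.0, 2.0, 3.0, 4.0]
```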
STATISTICAL ANALYSIS OF FUZZY LINEAR REGRESSION MODEL BASED ON DIFFERENT DIST...ijfls
This document summarizes a study on statistical analysis of fuzzy linear regression models based on different distance measures. It analyzes least squares estimations and error terms for fuzzy linear regression models using Euclidean distance, Y-K distance, and kD distance. The study finds that the three distances produce the same coefficient estimates for the least squares regression model. Simulation data is used to validate this conclusion.
Solving Fuzzy Maximal Flow Problem Using Octagonal Fuzzy NumberIJERA Editor
In this paper, a general fuzzy maximal flow problem is discussed. A crisp maximal flow problem can be solved by two methods: linear programming modeling and the maximal flow algorithm. Here we fuzzify the maximal flow algorithm using octagonal fuzzy numbers introduced by S.U. Malini and Felbin C. Kennedy [26]. By ranking the octagonal fuzzy numbers it is possible to compare them, and using this we convert the fuzzy-valued maximal flow algorithm to a crisp-valued algorithm. It is shown that a better solution is obtained when the problem is solved using octagonal fuzzy numbers than when it is solved using trapezoidal fuzzy numbers. To illustrate this, a numerical example is solved and the obtained result is compared with existing results. If there is no uncertainty about the flow between source and sink, the proposed algorithm gives the same result as in crisp maximal flow problems.
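A minimal sketch of the rank-then-solve idea, assuming a plain-average ranking over the eight defining points of an octagonal fuzzy number (the paper's actual ranking function [26] is not reproduced here) and standard Edmonds-Karp for the resulting crisp maximal flow problem:

```python
from collections import deque

def rank_octagonal(o):
    # Crisp ranking score for an octagonal fuzzy number given by its eight
    # defining points (a1..a8). The plain average used here is an
    # illustrative stand-in for the ranking function of [26].
    return sum(o) / 8.0

def max_flow(n, fuzzy_cap, s, t):
    # Defuzzify capacities via ranking, then run Edmonds-Karp (BFS).
    cap = [[rank_octagonal(fuzzy_cap[u][v]) if fuzzy_cap[u][v] else 0.0
            for v in range(n)] for u in range(n)]
    flow = 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:                         # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 1e-9:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        bottleneck, v = float("inf"), t  # find the bottleneck capacity
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                            # push flow along the path
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# 3-node network 0 -> 1 -> 2 with octagonal fuzzy capacities:
oct_10 = (6, 7, 8, 9, 11, 12, 13, 14)   # ranks to 10.0
caps = [[None, oct_10, None],
        [None, None, oct_10],
        [None, None, None]]
print(max_flow(3, caps, 0, 2))   # 10.0
```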
Similar to Abstract-and-Compare: A Family of Scalable Precision Measures for Automated Process Discovery (20)
How GenAI will (not) change your business?Marlon Dumas
Not all new technology waves are the same. Some waves are vertical (3D printing, digital twins, blockchain) while others are horizontal (the PC in the 80s, the Web in the 90s). GenAI is a horizontal wave. The question is not if GenAI will impact my business, but what will be the scope of this impact. In this talk, we will go through a journey of collisions: GenAI colliding with customer service, clerical work, information search, content production, IT development, product design, and other knowledge work. A common thread to understand the impact of GenAI is to distinguish between descriptive use cases (search, summarize, expand, transcribe & translate) versus creative use.
Walking the Way from Process Mining to AI-Driven Process OptimizationMarlon Dumas
While generative AI grabs headlines, most organizations are yet to achieve continuous process improvement from predictive and prescriptive analytics.
Why? It’s largely about data, people, and a methodical approach to deploy AI to connect data and people. The good news is that if your organization has built a process mining capability, you are well placed to climb the ladder to achieve AI-driven process optimization. But to get there, you need a disciplined step-by-step approach along two tracks: a tactical management track and an operational management track.
First, it’s about predicting what will happen if you leave your process as-is, and what will happen if you implement a change in your process. At a tactical level, a predictive capability allows you to prioritize improvement opportunities. At an operational level, it allows you to predict issues, such as deadline violations. The challenges here are how to manage the inherent uncertainty of data-driven AI systems, and how to change your people and culture to manage processes proactively, rather than reactively. One thing is to deploy predictive dashboards, another entirely different thing is to get people to use them effectively to improve the processes.
Next, it’s about becoming preemptive: continuously optimizing your processes by leveraging streams of data-driven recommendations to trigger changes and actions. At the tactical level, this prescriptive capability allows you to implement the right changes to maximize competing KPIs. At the operational level, it means triggering interventions in your processes to “wow” customers and to meet SLAs in a cost-effective manner. The challenge here is how to help process owners, workers, and other stakeholders understand the causes of performance issues and how the recommendations generated by the AI-driven optimization system will tackle those causes.
And finally, as an icing on the cake, generative AI allows you to produce improvement scenarios to adapt to external changes. Importantly, the transformative potential of generative AI in the context of process improvement does not come from its ability to provide question-and-answer interfaces to query data. It comes from its ability to support continuous process adaptation by generating and validating hypotheses based on a holistic view of your organization.
In this talk, we will discuss how organizations are driving sustainable business value by strategically layering predictive, prescriptive, and generative AI onto a process mining foundation, one brick at a time.
Industry keynote talk by Marlon Dumas at the 5th International Conference on Process Mining (ICPM'2023), Rome, Italy, 25 October 2023
Discovery and Simulation of Business Processes with Probabilistic Resource Av...Marlon Dumas
In the field of business process simulation, the availability of resources is captured by assigning a calendar to each resource, e.g., Monday-Friday 9:00-18:00. Resources are assumed to be always available to perform activities during their calendar. This assumption often does not hold due to interruptions, breaks, or because resources time-share across multiple processes. A simulation model that captures availability via crisp time slots (a resource is either on or off during a slot) does not capture these behaviors, leading to inaccuracies in the simulation output. This paper presents a simulation approach wherein resource availability is modeled probabilistically. In this approach, each availability time slot is associated with a probability, allowing us to capture, for example, that a resource is available on Fridays between 14:00-15:00 with 90% probability and between 17:00-18:00 with 50% probability. The paper proposes an algorithm to discover probabilistic availability calendars from event logs. An empirical evaluation shows that simulation models with probabilistic calendars discovered from event logs, replicate the temporal distribution of activity instances and cycle times of a process more closely than simulation models with crisp calendars.
This presentation was delivered at the 5th International Conference on Process Mining (ICPM'2023), Rome, Italy, October 2023.
The paper is available at: https://easychair.org/publications/preprint/Rz9g
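The probabilistic calendar idea can be sketched in a few lines. The slot probabilities below reuse the example from the abstract; the dictionary-keyed data structure is an assumption for illustration, not the paper's representation:

```python
import random

# Probabilistic availability calendar: each (weekday, hour) slot carries the
# probability that the resource is actually available during that slot.
calendar = {
    ("Friday", 14): 0.9,   # available 14:00-15:00 with 90% probability
    ("Friday", 17): 0.5,   # available 17:00-18:00 with 50% probability
}

def is_available(calendar, weekday, hour, rng):
    # Sample availability for one simulated slot. A crisp calendar is the
    # special case where every probability is exactly 0 or 1.
    p = calendar.get((weekday, hour), 0.0)
    return rng.random() < p

rng = random.Random(42)
trials = 10_000
hits = sum(is_available(calendar, "Friday", 14, rng) for _ in range(trials))
print(f"Empirical availability at Friday 14:00: {hits / trials:.2f}")
```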
Can I Trust My Simulation Model? Measuring the Quality of Business Process Si...Marlon Dumas
Business Process Simulation (BPS) is an approach to analyze the performance of business processes under different scenarios. For example, BPS allows us to estimate what would be the cycle time of a process if one or more resources became unavailable. The starting point of BPS is a process model annotated with simulation parameters (a BPS model). BPS models may be manually designed, based on information collected from stakeholders and empirical observations, or automatically discovered from execution data. Regardless of its origin, a key question when using a BPS model is how to assess its quality. In this paper, we propose a collection of measures to evaluate the quality of a BPS model w.r.t. its ability to replicate the observed behavior of the process. We advocate an approach whereby different measures tackle different process perspectives. We evaluate the ability of the proposed measures to discern the impact of modifications to a BPS model, and their ability to uncover the relative strengths and weaknesses of two approaches for automated discovery of BPS models. The evaluation shows that the measures not only capture how close a BPS model is to the observed behavior, but they also help us to identify sources of discrepancies.
Presentation delivered by David Chapela-Campa at the BPM'2023 conference, Utrecht, September 2023.
Business Process Optimization: Status and PerspectivesMarlon Dumas
For decades, business process optimization has been largely about art and craft (and sometimes wizardry). Apart from narrowly scoped approaches to optimize resource allocation (often assuming that workers behave like robots), a lot of business process optimization relies on high-level guidelines, with A/B testing for idea validation, which is hard to scale to complex processes. As a result, managers end up settling for a "good enough" process. Can we do more? In this talk, we review recent work on the use of high-fidelity simulation models discovered from execution data. The talk also explores the possibilities (and perils) that LLMs bring to the field of business process optimization.
This talk was delivered at the Workshop on Data-Driven Business Process Optimization at the BPM'2023 conference.
Learning When to Treat Business Processes: Prescriptive Process Monitoring wi...Marlon Dumas
Paper presentation at the 35th International Conference on Advanced Information Systems Engineering (CAiSE'2023).
Abstract.
Increasing the success rate of a process, i.e. the percentage of cases that end in a positive outcome, is a recurrent process improvement goal. At runtime, there are often certain actions (a.k.a. treatments) that workers may execute to lift the probability that a case ends in a positive outcome. For example, in a loan origination process, a possible treatment is to issue multiple loan offers to increase the probability that the customer takes a loan. Each treatment has a cost. Thus, when defining policies for prescribing treatments to cases, managers need to consider the net gain of the treatments. Also, the effect of a treatment varies over time: treating a case earlier may be more effective than later in a case. This paper presents a prescriptive monitoring method that automates this decision-making task. The method combines causal inference and reinforcement learning to learn treatment policies that maximize the net gain. The method leverages a conformal prediction technique to speed up the convergence of the reinforcement learning mechanism by separating cases that are likely to end up in a positive or negative outcome, from uncertain cases. An evaluation on two real-life datasets shows that the proposed method outperforms a state-of-the-art baseline.
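The net-gain consideration described above reduces, in its simplest static form, to comparing expected benefit against cost. The uplift estimate and monetary figures below are invented for illustration (the paper's actual method learns time-varying policies via causal inference and reinforcement learning):

```python
def net_gain(uplift, benefit, cost):
    # uplift: estimated increase in P(positive outcome) caused by treating.
    # Expected gain from treating = uplift * benefit minus the treatment cost.
    return uplift * benefit - cost

def should_treat(uplift, benefit, cost):
    # Treat a case only when the expected net gain is positive.
    return net_gain(uplift, benefit, cost) > 0

# Loan-offer example (invented figures): an extra offer lifts take-up
# probability by 8%, a taken loan is worth 500, an extra offer costs 25.
print(should_treat(uplift=0.08, benefit=500.0, cost=25.0))   # True  (gain 15)
print(should_treat(uplift=0.03, benefit=500.0, cost=25.0))   # False (gain -10)
```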
Why am I Waiting Data-Driven Analysis of Waiting Times in Business ProcessesMarlon Dumas
Presentation of a research paper at the 35th International Conference on Advanced Information Systems Engineering (CAiSE) in Zaragoza Spain. The paper presents a classification of causes of waiting times in business processes and a method to automatically detect and quantify the presence of each of these causes in a business process recorded in an event log.
This talk introduces the concept of Augmented Business Process Management System: an ABPMS is a process-aware information system that relies on trustworthy AI technology to reason and act upon data, within a set of restrictions, with the aim to continuously adapt and improve a set of business processes with respect to one or more key performance indicators.
The talk describes the transition from existing process mining technology to AI-Augmented BPM as a pyramid, where predictive, prescriptive, conversational and reasoning capabilities are stacked up incrementally to reach the level of Augmented BPM.
Talk delivered at the AAAI'2023 Workshop on AI for Business Process Management.
Process Mining and Data-Driven Process SimulationMarlon Dumas
Guest lecture delivered at the Institut Teknologi Sepuluh on 8 December 2022.
This lecture gives an overview of process mining and simulation techniques, and how the two can be used together in process improvement projects.
Modeling Extraneous Activity Delays in Business Process SimulationMarlon Dumas
This paper presents a technique to enhance the fidelity of business process simulation models by detecting unexplained (extraneous) delays from business process execution data, and modeling these delays in the simulation model, via timer events.
The presentation was delivered at the 4th International Conference on Process Mining (ICPM'2022).
Paper available at: https://arxiv.org/abs/2206.14051
Business Process Simulation with Differentiated Resources: Does it Make a Dif...Marlon Dumas
Existing methods for discovering business process simulation models from execution data (event logs) assume that all resources in a pool have the same performance and share the same availability calendars. This paper proposes a method for discovering simulation models, wherein each resource is treated as an individual entity, with its own performance and availability calendar. An evaluation shows that simulation models with differentiated resources more closely replicate the distributions of cycle times and the work rhythm in a process than models with undifferentiated resources. The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_24
Prescriptive Process Monitoring Under Uncertainty and Resource ConstraintsMarlon Dumas
This paper presents an approach to trigger interventions at runtime, in order to improve the success rate of a process, when the number of resources who can perform these interventions is limited.
The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-16171-1_13
The presentation delivered at the 20th International Conference on Business Process Management (BPM'2022), in Muenster, Germany, September 2022.
Slides of a lecture delivered at the First Process Mining Summer School in Aachen, Germany, July 2022.
This lecture introduces techniques in the area of "task mining" with an emphasis on Robotic Process Mining. Robotic Process Mining (RPM) is a family of techniques to discover repetitive routines that can be automated using Robotic Process Automation (RPA) technology, by analyzing interactions between one or more workers and one or more software applications, during the performance of one or more tasks in a business process. In general, RPM techniques take as input logs of User Interactions (UI logs). These UI logs are recorded while workers interact with one or more applications, typically desktop applications. Based on these logs, RPM techniques produce specifications of one or more routines that can be automated using RPA or related tools.
Accurate and Reliable What-If Analysis of Business Processes: Is it Achievable?Marlon Dumas
This document discusses using event logs to generate business process simulation models. It describes traditional discrete event simulation approaches that discover simulation models from event logs recorded by information systems. Deep learning techniques are also discussed that can generate traces without an explicit process model. The document suggests that combining discrete event simulation and deep learning may produce more accurate simulations, but challenges remain around validating such hybrid approaches and testing them in previously unseen scenarios. More research is needed before these data-driven simulation methods can reliably predict the effects of interventions.
Learning Accurate Business Process Simulation Models from Event Logs via Auto...Marlon Dumas
Paper presentation at the International Conference on Advanced Information Systems Engineering (CAiSE).
This paper presents an approach to automatically discover business process simulation models from event logs by combining process mining and deep learning techniques.
Paper available at: https://link.springer.com/chapter/10.1007/978-3-031-07472-1_4
Process Mining: A Guide for PractitionersMarlon Dumas
This document presents a guide for practitioners on process mining. It introduces process mining and discusses its main use cases. These use cases are categorized into discovery oriented, future and change oriented, alignment oriented, variant oriented, and performance oriented. The document also provides a framework to classify use cases and discusses the business-oriented questions that can be answered using different process mining use cases, such as improving transparency, quality, agility, efficiency and conformance.
Process Mining for Process Improvement.pptxMarlon Dumas
Presentation of a research paper at the 16th International Conference on Research Challenges in Information Science (RCIS). The paper presents the results of an empirical study on how practitioners use process mining to identify business process improvement opportunities. The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-05760-1_13
Data-Driven Analysis of Batch Processing Inefficiencies in Business ProcessesMarlon Dumas
Slides of a research paper presentation at the 16th International Conference on Research Challenges in Information Science (RCIS).
The research paper presents an approach to analyze event logs of business processes in order to identify batched activities and to analyze the waiting times caused by these activities.
Paper available at: https://link.springer.com/chapter/10.1007/978-3-031-05760-1_14
Optimización de procesos basada en datosMarlon Dumas
Talk delivered at BPM Day Lima 2021.
In this talk, we will discuss emerging methods and applications in the field of data-driven process optimization. We will cover advances in the area of process mining, methods for building digital twins of processes, and predictive monitoring methods. Through examples and case studies, we will show how these methods can guide digital transformation and continuous process improvement initiatives. In particular, we will illustrate the use of these methods to: (1) analyze the performance of business processes so as to identify frictions and automation opportunities; (2) predict the impact of changes, and in particular, predict the impact of an automation initiative; (3) make predictions about process performance and adjust process execution so as to prevent SLA violations, customer complaints, and other undesirable events.
Process Mining and AI for Continuous Process ImprovementMarlon Dumas
Talk delivered at BPM Day Rio Grande do Sul on 11 November 2021.
Abstract.
Process mining is a technology that marries methods from business process management and from data science, to support operational excellence and digital transformation. Process mining tools can transform data extracted from enterprise systems, into visualizations and reports that allow managers to improve organizational performance along different dimensions, such as efficiency, quality, and compliance. In this talk, we will give an overview of the capabilities of process mining tools, and we will illustrate the benefits of process mining via several case studies in the fields of insurance, manufacturing, and IT service management.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity and then from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
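The Stokes terminal velocity mentioned above follows from balancing weight, buoyancy, and Stokes drag. The sphere and fluid properties below are assumed illustrative values chosen to stay in the low-Reynolds (Stokes) regime:

```python
import math

def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    # Balance weight, buoyancy and Stokes drag (6 pi mu r v):
    #   v_t = 2 r^2 (rho_p - rho_f) g / (9 mu)
    return 2 * radius**2 * (rho_particle - rho_fluid) * g / (9 * mu)

# Steel sphere (r = 1 mm) falling in glycerine (assumed properties):
v_t = stokes_terminal_velocity(radius=1e-3, rho_particle=7800.0,
                               rho_fluid=1260.0, mu=1.5)
print(f"terminal velocity ~ {v_t:.4f} m/s")

# Sanity check: the particle Reynolds number should be << 1 for Stokes' law.
Re = 1260.0 * v_t * 2e-3 / 1.5
print(f"Re ~ {Re:.3f}")
```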
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for the detection of highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin trapping technique.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
They monitor common gases, weather parameters, and particulates.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive, live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poorquality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 104 M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10−8 photons cm−2
s
−1
. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What are greenhouse gasses how they affect the earth and its environment what is the future of the environment and earth how the weather and the climate effects.
What is greenhouse gasses and how many gasses are there to affect the Earth.
Abstract-and-Compare: A Family of Scalable Precision Measures for Automated Process Discovery
1. Abstract-and-Compare:
a Family of Scalable Precision Measures
for Automated Process Discovery
Adriano Augusto, Abel Armas-Cervantes, Raffaele Conforti,
Marlon Dumas, Marcello La Rosa, and Daniel Reissner
2. Precision in Process Mining
Precision captures the extent to which the behaviour allowed by a process model
is observed in an event log:
— How much behaviour of a process model can be found in an event log?
[Diagram: the Event Log yields the Event Log Behaviour, the Process Model yields the Process Model Behaviour; the two behaviours are compared to obtain Precision]
3. State of the Art Precision Measures

Name                                        | Authors                   | Year
Set Difference Precision                    | Greco et al.              | 2006
Advanced Behavioural Appropriateness        | Rozinat and van der Aalst | 2008
Negative Events Precision                   | De Weerdt et al.          | 2011
Alignments-based ETC precision (one-align)  | Adriansyah et al.         | 2015
Projected Conformance Checking              | Leemans et al.            | 2016
Anti-alignment Precision                    | van Dongen et al.         | 2016
4. Five Axioms for Precision Measures
Tax, N., Lu, X., Sidorova, N., Fahland, D. and van der Aalst, W. (2018).
The imprecisions of precision measures in process mining.
— Axiom 1: given a log L and a process model M,
a precision measure is a deterministic function: Prec(L, M) in ℝ.
— Axiom 2: given a log L and two process models M1 and M2,
if the behaviour of L is fully contained in the behaviour of M1,
and the behaviour of M1 is fully contained in the behaviour of M2,
then Prec(L, M1) ≥ Prec(L, M2).
— Axiom 3: given a log L and two process models M1 and M2,
if the behaviour of L is fully contained in M1 and M2 is the flower process,
then Prec(L, M1) > Prec(L, M2).
— Axiom 4: given a log L and two process models M1 and M2,
if the behaviour of M1 is equal to the behaviour of M2,
then Prec(L, M1) = Prec(L, M2).
— Axiom 5: given two logs L1 and L2 and a process model M,
if the behaviour of L1 is fully contained in the behaviour of L2,
then Prec(L2, M) ≥ Prec(L1, M).
[Diagrams: nested behaviours L ⊆ M1 ⊆ M2 (Axiom 2), and L1 ⊆ L2 compared against M (Axiom 5)]
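Some of these axioms can be checked mechanically against a concrete measure. A minimal sketch in Python, using the strict edge-difference comparison defined later in this deck and hypothetical stand-in edge sets (the labels and sets below are illustrative, not from the paper):

```python
def precision(log_edges, model_edges):
    r"""Prec(L, M) = 1 - |M_e \ L_e| / |M_e| (strict graph comparison)."""
    return 1 - len(model_edges - log_edges) / len(model_edges)

# Hypothetical abstraction edge sets; nesting of behaviours is mirrored
# here by nesting of edge sets.
l_edges  = {'ab', 'bc'}
m1_edges = {'ab', 'bc', 'cd'}          # L behaviour contained in M1
m2_edges = {'ab', 'bc', 'cd', 'de'}    # M1 behaviour contained in M2

# Axiom 2: L ⊆ M1 ⊆ M2 implies Prec(L, M1) >= Prec(L, M2)
assert precision(l_edges, m1_edges) >= precision(l_edges, m2_edges)

# Axiom 5: a larger log (L1 ⊆ L2) can only raise precision
l1, l2 = {'ab'}, {'ab', 'bc'}
assert precision(l2, m2_edges) >= precision(l1, m2_edges)
```

Both assertions hold because enlarging the log edge set can only shrink the difference M_e \ L_e, while enlarging the model edge set can only grow it.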
9. State of the Art Precision Measures (2)

Name                                        | Authors                   | Year | A1  | A2 | A3 | A4  | A5
Set Difference Precision                    | Greco et al.              | 2006 | yes | ?  | no | yes | yes
Advanced Behavioural Appropriateness        | Rozinat and van der Aalst | 2008 | no  | ?  | no | yes | ?
Negative Events Precision                   | De Weerdt et al.          | 2011 | no  | no | ?  | ?   | ?
Alignments-based ETC precision (one-align)  | Adriansyah et al.         | 2015 | no  | no | no | no  | no
Projected Conformance Checking              | Leemans et al.            | 2016 | ?   | no | ?  | ?   | no
Anti-alignment Precision                    | van Dongen et al.         | 2016 | ?   | ?  | ?  | ?   | no

Tax, N., Lu, X., Sidorova, N., Fahland, D. and van der Aalst, W. (2018).
The imprecisions of precision measures in process mining.
12. Why so Challenging?
[Diagram: Event Log Behaviour and Process Model Behaviour are compared to obtain Precision]
The process model behaviour may be infinite.
The event log behaviour is always finite.
How to fairly compare an infinite behaviour against a finite one?
15. Objectives-Driven Approach Design
[Diagram: the Event Log behaviour and the Process Model behaviour are each mapped to the same Abstract Behaviour, and the two abstractions are compared to obtain Precision]
1. Use the same Behavioural Abstraction
2. Capture only Chunks of Behaviour
3. Control Behavioural Approximation
4. Be Noise Tolerant and Rapid
5. Must satisfy the Five Axioms
17. kth-order Markovian Abstraction:
a graphical representation of behavioural chunks (i.e. subtraces) of length k and their evolution.

Traces            | #
A, A, B           | x
A, B, B           | y
A, B, A, B, A, B  | z

[Diagram: 1st Order Markovian Abstraction of the log behaviour]
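As a sketch, the log-side construction above can be written directly: nodes are the subtraces (k-grams) of each trace, edges link consecutive overlapping subtraces, and a single '-' node marks trace start and end. A minimal Python version (traces written as strings for brevity; the function name is illustrative, not from the paper's tool):

```python
def markovian_edges(traces, k):
    """Edge set of the k-th order Markovian abstraction of an event log.
    Nodes are subtraces of length k (tuples); '-' is the initial/final node."""
    edges = set()
    for trace in traces:
        t = tuple(trace)
        if len(t) < k:                       # full traces shorter than k
            edges |= {('-', t), (t, '-')}
            continue
        grams = [t[i:i + k] for i in range(len(t) - k + 1)]
        edges.add(('-', grams[0]))
        edges |= set(zip(grams, grams[1:]))  # overlapping subtraces
        edges.add((grams[-1], '-'))
    return edges

log = ['AAB', 'ABB', 'ABABAB']               # the traces in the table above
e1 = markovian_edges(log, 1)
print(len(e1))                               # 6 edges: -→A, A→A, A→B, B→A, B→B, B→-
```

For this log the 1st-order abstraction has exactly six edges, which is the denominator used in the MAP1 computation later in the deck.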
26. 2nd Order Markovian Abstraction: from a Process Model
1. Turn the process into an automaton
2. Replay the automaton collecting all the subtraces of length k
   (and full traces of length < k)
3. Turn each subtrace into a node of the Markovian Abstraction
4. Connect the nodes representing overlapping subtraces (e.g. <a,b> and <b,a>),
   and connect to the initial node (-) the traces' prefixes and suffixes.

[Diagram: automaton with states s0, s1, and final state sf, with transitions labelled a and b]

Subtraces:
<a,b> <a,a> <b,a> <b,b> <a> <b>

[Diagram: resulting 2nd Order Markovian Abstraction]
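Steps 1-2 above can be sketched as a bounded replay: walk the automaton while remembering only the last k-1 labels, so loops terminate once every (state, window) combination has been visited. A minimal Python sketch; the automaton shape is an assumption reconstructed from the slide's subtrace set (a or b from s0 to a final state s1, which loops on a and b):

```python
from collections import deque

def model_subtraces(transitions, initial, finals, k):
    """Replay an automaton, collecting every subtrace of length k plus
    full traces shorter than k (a sketch of step 2 on the slide).
    transitions: dict state -> list of (label, next_state)."""
    grams, seen = set(), set()
    queue = deque([(initial, (), True)])   # (state, last k-1 labels, trace shorter than k?)
    while queue:
        state, window, short = queue.popleft()
        if short and state in finals and window:
            grams.add(window)              # full trace of length < k
        for label, nxt in transitions.get(state, []):
            if len(window) == k - 1:
                grams.add(window + (label,))          # a complete k-gram
            new_window = (window + (label,))[-(k - 1):] if k > 1 else ()
            new_short = short and len(window) + 1 < k
            item = (nxt, new_window, new_short)
            if item not in seen:                      # bounded: finite state space
                seen.add(item)
                queue.append(item)
    return grams

# assumed automaton: a|b from s0 to s1, then a|b looping on the final state s1
aut = {'s0': [('a', 's1'), ('b', 's1')], 's1': [('a', 's1'), ('b', 's1')]}
print(sorted(model_subtraces(aut, 's0', {'s1'}, 2)))
# [('a',), ('a', 'a'), ('a', 'b'), ('b',), ('b', 'a'), ('b', 'b')]
```

The result matches the subtrace set on the slide: all four 2-grams plus the two full traces of length 1.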
35. (Graph) Comparison
— Comparator 1:
Strict Graph Comparison (edge-set difference)
Precision(L, M) = 1 - |M_e \ L_e| / |M_e|
where L_e = edge set of the Log Markovian Abstraction,
and M_e = edge set of the Process Model Markovian Abstraction.
— Comparator 2 (implemented):
Hungarian Algorithm Graph Comparison (HGC), with the Levenshtein distance as cost function
Precision(L, M) = 1 - HGCcost / |M_e|
where HGCcost = matching cost of the Hungarian Algorithm applied to
the Markovian Abstractions of the Process Model and of the Log.
Note: Comparator 2 is equal to Comparator 1 when the process behaviour fully contains the log behaviour.
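Both comparators are easy to state in code. Comparator 1 is literal set arithmetic; for Comparator 2 only the Levenshtein cost function is sketched here, since a full Hungarian matching needs an assignment solver (e.g. scipy.optimize.linear_sum_assignment). A Python sketch with integers as stand-in edges:

```python
def precision_strict(log_edges, model_edges):
    r"""Comparator 1: Prec(L, M) = 1 - |M_e \ L_e| / |M_e|."""
    return 1 - len(model_edges - log_edges) / len(model_edges)

def levenshtein(a, b):
    """Edit distance between two label sequences: the cost of matching
    one abstraction edge against another in the HGC comparator."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# integers stand in for edges; sizes mirror the flower-model MAP1 numbers
# later in the deck: |M_e \ L_e| = 2, |M_e| = 8
assert precision_strict(set(range(6)), set(range(8))) == 0.75
assert levenshtein('ab', 'ba') == 2
```

When the model behaviour fully contains the log behaviour every log edge matches a model edge at zero cost, which is why the two comparators coincide in that case.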
36. kth-order Markovian Abstraction-Based Precision: MAPk

Traces            | #
A, A, B           | x
A, B, B           | y
A, B, A, B, A, B  | z

Process 1 (P1), Event Log (L), Flower Process (FP)

MAP1(L, P1) = 1 - 0/6  = 1.00    MAP1(L, FP) = 1 - 2/8   = 0.75
MAP2(L, P1) = 1 - 4/12 = 0.66    MAP2(L, FP) = 1 - 12/20 = 0.40
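The flower-process values above can be reproduced end-to-end. A self-contained Python sketch: the flower process over {A, B} allows every non-empty trace, so sampling all strings up to length 2k+1 saturates its k-th order abstraction (the helper names are illustrative):

```python
from itertools import product

def markovian_edges(traces, k):
    """Edge set of the k-th order Markovian abstraction (nodes = k-grams)."""
    edges = set()
    for trace in traces:
        t = tuple(trace)
        if len(t) < k:                       # full traces shorter than k
            edges |= {('-', t), (t, '-')}
            continue
        grams = [t[i:i + k] for i in range(len(t) - k + 1)]
        edges |= {('-', grams[0]), (grams[-1], '-')} | set(zip(grams, grams[1:]))
    return edges

def flower_edges(alphabet, k):
    # every non-empty trace is allowed, so short strings already cover
    # all possible nodes and edges of the abstraction
    traces = [p for n in range(1, 2 * k + 2)
              for p in product(alphabet, repeat=n)]
    return markovian_edges(traces, k)

def mapk(log_edges, model_edges):
    return 1 - len(model_edges - log_edges) / len(model_edges)

log = ['AAB', 'ABB', 'ABABAB']
print(mapk(markovian_edges(log, 1), flower_edges('AB', 1)))  # 0.75
print(mapk(markovian_edges(log, 2), flower_edges('AB', 2)))  # 0.4
```

The flower abstraction has 8 edges at order 1 and 20 at order 2, matching the denominators on the slide; P1 itself is not shown in the transcript, so only the flower-process values are recomputed here.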
41. Satisfiability of the Five Axioms

Name                                        | Authors                   | Year | A1  | A2  | A3   | A4  | A5
Set Difference Precision                    | Greco et al.              | 2006 | yes | yes | no   | yes | yes
Advanced Behavioural Appropriateness        | Rozinat and van der Aalst | 2008 | no  | ?   | no   | yes | ?
Negative Events Precision                   | De Weerdt et al.          | 2011 | no  | no  | ?    | ?   | ?
Alignments-based ETC precision (one-align)  | Adriansyah et al.         | 2015 | no  | no  | no   | no  | no
Projected Conformance Checking              | Leemans et al.            | 2016 | ?   | no  | ?    | ?   | no
Anti-alignment Precision                    | van Dongen et al.         | 2016 | ?   | ?   | ?    | ?   | no
kth-order Markovian Abstraction             | Augusto et al.            | 2018 | yes | yes | yes* | yes | yes

*Axiom 3 is satisfied for a given order k* (or higher orders).
k* = 2 for any process having at least one activity that cannot be executed twice consecutively.
42. Qualitative Evaluation on Artificial Data (1)

Traces               | #
A, B, D, E, I        | 1207
A, C, D, G, H, F, I  | 145
A, C, G, D, H, F, I  | 56
A, C, H, D, F, I     | 23
A, C, D, H, F, I     | 28

van Dongen, B., Carmona, J. and Chatain, T.
A unified approach for measuring precision and generalization based on anti-alignments, BPM 2016.

Model variants:
1. single trace
2. separate traces
3. flower model
4. optional G || optional H
5. G and H as self-loop activities
6. D as self-loop activity
7. all parallel activities
8. round robin
43. Qualitative Evaluation on Artificial Data (2)

Process             | Infinite Behaviour | Traces (max 2 loops) | SD | ETC | NE | PCC | AA | MAP1 | MAP2 | MAP3…MAP7
original model      | no                 | 6                    | 7  | 7   | 9  | 8   | 7  | 7    | 7    | 7
single trace        | no                 | 1                    | 8  | 8   | 6  | 8   | 8  | 7    | 8    | 8
separate traces     | no                 | 5                    | 8  | 8   | 8  | 7   | 8  | 7    | 8    | 8
opt. G || opt. H    | no                 | 12                   | 6  | 3   | 7  | 6   | 6  | 5    | 6    | 6
all parallel        | no                 | 362,880              | 1  | 2   | 2  | 2   | 3  | 2    | 2    | 2
round robin         | yes                | 27                   | 1  | 4   | 3  | 3   | 1  | 4    | 5    | 5
D self-loop         | yes                | 118                  | 1  | 6   | 4  | 5   | 4  | 5    | 4    | 4
G and H self-loops  | yes                | 362                  | 1  | 5   | 5  | 4   | 5  | 3    | 3    | 3
flower model        | yes                | 986,410              | 1  | 1   | 1  | 1   | 1  | 1    | 1    | 1

Processes are ranked by precision (the higher the rank, the more precise the process).
48. Real-Life Evaluation (setup)
— 20 real-life logs: 12 publicly available (at the 4TU data centre) and 8 proprietary
— Models discovered by three automated discovery algorithms
(Split Miner, Inductive Miner, and Structured Heuristics Miner)
— Qualitative comparison against ETC Precision (the only one feasible in a real-life context)
— Time performance comparison against ETC Precision
49. Real-Life Evaluation (results)
— MAPk easily distinguishes between process models with poor precision and high precision
— MAPk is suitable for quality assessment and (especially) comparison of process models
— MAPk can be over 10 times faster than ETC Precision
(avg. time 3.7s vs 60.0s, on models discovered by Split Miner)
53. Limitations
— The execution time grows in proportion to the order k
— How to choose k?
— The selection of the best k is not automated
54. Future Work
— Designing a complementary Markovian Abstraction-Based Fitness
— Exploring alternative comparison algorithms (e.g. graph bisimulation)
— Using the Markovian Precision in automated process discovery for reinforcement learning