Presentation of the paper "Discovering Unbounded Synchronization Conditions in Artifact-Centric Process Models" at the 9th International Workshop on Business Process Intelligence (BPI'2013) - http://www.win.tue.nl/bpi2013/
This document provides an overview of a toy model for simulating particle collisions. It describes sampling particle data from experimental measurements to generate events. A jet finding algorithm is used to cluster particles into jets using FastJet. The current status indicates particle generation works as expected but jet finding results appear buggy. Next steps involve analyzing jet distributions and performance of the jet finder on simulated events without embedded jets. Possible extensions include jet fragmentation.
Rx is a library for composing asynchronous and event-based programs using observable collections. It has a strong theoretical basis using the duality between the classic Iterator and Observer design patterns to simplify controlling asynchrony. Rx allows defining and combining asynchronous data streams through declarative query operators.
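The Iterator/Observer duality mentioned above can be sketched in a few lines. This is an illustrative stand-in for the pattern Rx formalizes, not the actual Rx API; the names `Observable`, `from_iterable`, and `map_` are made up for the sketch:

```python
# Minimal sketch of the Observable/Observer duality: instead of the
# consumer pulling values through an Iterator, the source pushes values
# to a registered observer, and operators compose declaratively.

class Observable:
    def __init__(self, subscribe_fn):
        self._subscribe_fn = subscribe_fn

    def subscribe(self, on_next):
        # Registers an observer callback and starts the flow of items.
        self._subscribe_fn(on_next)

    def map_(self, fn):
        # Declarative query operator: a new stream whose items are
        # transformed by fn, without pulling anything eagerly.
        return Observable(
            lambda on_next: self.subscribe(lambda x: on_next(fn(x)))
        )

def from_iterable(items):
    # Dual of the Iterator pattern: push each item to the observer.
    return Observable(lambda on_next: [on_next(x) for x in items])

received = []
from_iterable([1, 2, 3]).map_(lambda x: x * 10).subscribe(received.append)
# received == [10, 20, 30]
```

In real Rx the same composition style extends to asynchronous sources (events, timers, network responses), which is where the pattern pays off.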
Discovering Branching Conditions from Business Process Execution Logs - Marlon Dumas
Paper presentation given at the International Conference on Fundamental Approaches to Software Engineering (FASE) in March 2013.
Process Mining and Business Rule Mining - Marlon Dumas
Talk on process mining and business rule mining at the 1st Colombian BPM Forum, organized by the Universidad de los Andes (Bogotá), 29 November 2013 - http://forosisis.uniandes.edu.co/bpm/1er-forodebpm/
Automated Discovery of Structured Process Models: Discover Structured vs Disc... - Marlon Dumas
Research paper presentation at the 35th International Conference on Conceptual Modeling (ER'2016), Gifu, Japan, 15 Nov. 2016
Presentation delivered by Raffaele Conforti.
Paper available at: http://goo.gl/5EN3l2
Evidence-Based Business Process Management - Marlon Dumas
1. The document discusses trends in business process management, specifically the rise of evidence-based business process management using process mining techniques.
2. Process mining allows companies to analyze process data from event logs to understand their actual processes, quantify the impact of changes, and discover opportunities for improvement.
3. The techniques discussed include process discovery, conformance checking, predictive monitoring, and rule mining to provide insights into deviations, bottlenecks, and other process issues.
SC7 Hangout 3: Architecture of the BDE Pilot for Secure Societies - BigData_Europe
This document describes the architecture of a pilot for secure societies that uses big data techniques. It involves workflows for event detection, change detection, and a common workflow. The event detection workflow crawls news, uses Cassandra to store items, detects events using Spark, and performs location enrichment. The change detection workflow aggregates images, detects changes using Spark, and clusters changes. The common workflow converts data to RDF using GeoTriples, stores and queries data using Strabon and SemaGrow, and includes a user interface called Sextant.
Slides of the tutorial on Multi-Dimensional Process Analysis shown at the BPM 2022 conference in Muenster, Germany.
Processes are complex phenomena that emerge from the interplay of human actors, materials, data, and machines. Process science develops effective methods and techniques for studying and improving processes. The BPM field has developed mature methods and techniques for studying and improving process executions from the control-flow perspective, and the limitations of control-flow focused thinking are well-known. Current research explores concepts from related disciplines to study behavioral phenomena “beyond” control-flow. However, it remains challenging to relate models and concepts of other behavioral phenomena to the dominant control-flow oriented paradigm.
This tutorial introduces several recently developed simple models that naturally describe behavior beyond control-flow, but are inherently compatible with control-flow oriented thinking. We discuss the Performance Spectrum to study performance patterns and their propagation over time, Event Knowledge Graphs to study networks of behavior over data objects and actors, and Proclets as a formal model for reasoning over control-flow, data object, queue and actor behavior. For each model, we discuss which phenomena can be studied, which insights can be gained, which tools are available, and to which other fields they relate.
https://doi.org/10.1007/978-3-031-16103-2_3
Flink Forward SF 2017: Stefan Richter - Improvements for large state and reco... - Flink Forward
Stateful stream processing with exactly-once guarantees is one of Apache Flink's distinctive features, and the scale of state managed by Flink in production constantly grows. This leads to a couple of interesting challenges for state handling in Flink. In this talk, we present current and future developments to improve the handling of large state and recovery in Apache Flink. We show how to keep snapshots for large state swift and how to minimize negative effects on job performance through incremental and asynchronous checkpointing. Furthermore, we discuss how to greatly accelerate recovery under failures and for rescaling. In this context, we go into detail about improved execution graph recovery, caching state on task managers, and considering new features of modern storage architectures for our state backends.
The document describes an activity analysis and visualization project with the following objectives:
1. Build a system to support groups in learning how to work more effectively through visualizing collaboration data logs.
2. Develop different types of visualizations like activity radars and interaction networks to provide insights into participation, interactions, and timelines of events.
3. Apply data mining techniques to find frequent patterns and sequences of events that characterize aspects of teamwork.
The document provides an overview of the Network Simulator ns-2. It discusses:
1) The history and goals of ns-2 including supporting networking research, protocol design and comparison, and providing a collaborative environment.
2) Current projects using ns-2 including SAMAN to build robust networks and CONSER to extend ns-2's capabilities.
3) The components and functionality of ns-2 including modeling wired and wireless networks, various protocols, traffic sources, and queue management.
Cities are composed of complex systems with physical, cyber, and social components. Current work on extracting and understanding city events mainly relies on technology-enabled infrastructure to observe and record events. In this work, we propose an approach to leverage citizen observations of various city systems and services, such as traffic, public transport, water supply, weather, sewage, and public safety, as a source of city events. We investigate the feasibility of using such textual streams for extracting city events from annotated text. We formalize the problem of annotating social streams such as microblogs as a sequence labeling problem. We present a novel training data creation process for training sequence labeling models. Our automatic training data creation process utilizes instance-level domain knowledge (e.g., locations in a city, possible event terms). We compare this automated annotation process to a state-of-the-art tool that needs manually created training data and show that it has comparable performance in annotation tasks. An aggregation algorithm is then presented for event extraction from annotated text. We carry out a comprehensive evaluation of the event annotation and event extraction on a real-world dataset consisting of event reports and tweets collected over four months from the San Francisco Bay Area. The evaluation results are promising and provide insights into the utility of social streams for extracting city events.
This document discusses concepts related to data streams and real-time analytics. It begins with introductions to stream data models and sampling techniques. It then covers filtering, counting, and windowing queries on data streams. The document discusses challenges of stream processing like bounded memory and proposes solutions like sampling and sketching. It provides examples of applications in various domains and tools for real-time data streaming and analytics.
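One of the bounded-memory techniques mentioned above, sampling, can be made concrete with reservoir sampling, which keeps a uniform random sample of k items from a stream of unknown length in O(k) memory. A minimal sketch (Algorithm R; the function name is illustrative):

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Uniform random sample of k items from a stream of unknown
    length, using O(k) memory (Algorithm R)."""
    rng = rng or random.Random()
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace a reservoir slot with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# A single pass over a "stream" of 10,000 items, keeping only 5.
sample = reservoir_sample(range(10_000), 5, random.Random(42))
```

After processing item i, each item seen so far sits in the reservoir with equal probability k/i, which is why a single pass suffices.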
This document describes a method for predicting backgrounds to top-antitop quark events using b-tagging information. The method involves measuring b-tagging rates in gamma+jets events and applying those rates to other events to predict whether they would contain one or more b-tagged jets. The rates are measured as functions of muon momentum and position. Applying the rates to gamma+jets, jet, and lepton+jets data shows good agreement, validating the method. The method is then used to predict top-antitop and W+jets backgrounds in lepton+jets data and measure the top-antitop production cross section.
Logical clocks assign sequence numbers to distributed system events to determine causality without a global clock. Lamport's algorithm uses logical clocks to impose a partial ordering on events. Vector clocks extend this to also detect concurrent events that are not causally related, providing a full happened-before relation between all events. Each process maintains a vector clock that is incremented after local events and updated when receiving messages from other processes.
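The vector-clock mechanics described above can be sketched in a few lines: each process increments its own component on local events, attaches its clock to outgoing messages, and takes a component-wise maximum on receive. The function names below are illustrative:

```python
def new_clock(n):
    """Vector clock for a system of n processes."""
    return [0] * n

def local_event(clock, pid):
    # Each process counts its own events in its own component.
    clock[pid] += 1

def send(clock, pid):
    local_event(clock, pid)
    return list(clock)          # timestamp attached to the message

def receive(clock, pid, msg_clock):
    # Merge: component-wise maximum, then count the receive event itself.
    for i, t in enumerate(msg_clock):
        clock[i] = max(clock[i], t)
    local_event(clock, pid)

def happened_before(a, b):
    """True iff the event stamped a causally precedes the event stamped b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Two processes: P0 sends a message to P1, then P1 does a local event.
c0, c1 = new_clock(2), new_clock(2)
msg = send(c0, 0)        # c0 == [1, 0]
receive(c1, 1, msg)      # c1 == [1, 1]
local_event(c1, 1)       # c1 == [1, 2]
```

Two events are concurrent exactly when `happened_before` is false in both directions, which is the extra information vector clocks provide over Lamport clocks.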
- The document describes a method for understanding city traffic dynamics by utilizing sensor data that measures average speed and link travel time, as well as textual data from tweets and official traffic reports.
- It builds statistical models to learn normal traffic patterns from historical sensor data and identifies anomalies, then correlates anomalies with relevant traffic events extracted from tweets and reports.
- The method was evaluated on data collected for the San Francisco Bay Area, and it was able to scale to large real-world datasets by exploiting the problem structure and using Apache Spark for distributed processing. Events extracted from social media provided complementary information to sensor data for explaining traffic anomalies.
Sequential pattern mining is a technique for discovering frequent subsequences (patterns) in sequence databases. It involves the following steps:
1. Sorting the sequence database based on customer ID and timestamp.
2. Generating candidate sequential patterns of increasing length and calculating their support by scanning the database.
3. Finding maximal sequential patterns that are not subsequences of other higher-support patterns.
Algorithms like FreeSpan improve efficiency by projecting the database to generate candidate patterns and avoiding generating a huge number of candidates. Sequential pattern mining is useful for applications like market basket analysis, web usage mining, and episode mining in event sequences.
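The support-counting step above can be sketched as follows. This is a naive illustration of subsequence containment and support, not FreeSpan; the toy database and function names are made up:

```python
def is_subsequence(pattern, sequence):
    """True if the pattern's itemsets occur in order in the sequence,
    each contained in some strictly later transaction."""
    it = iter(sequence)
    return all(any(p <= t for t in it) for p in pattern)

def support(pattern, database):
    """Fraction of customer sequences that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

# Toy database: one sequence of transactions (itemsets) per customer,
# already sorted by customer ID and timestamp.
db = [
    [{"a"}, {"b", "c"}, {"d"}],
    [{"a"}, {"c"}, {"d"}],
    [{"b"}, {"d"}],
]

# The pattern <{a} {d}> occurs in the first two sequences -> support 2/3.
s = support([{"a"}, {"d"}], db)
```

Candidate generation then keeps only patterns whose support exceeds a chosen threshold, extending them by one item at a time.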
Multi-Perspective Comparison of Business Process Variants Based on Event Logs - Marlon Dumas
This document presents a method for multi-perspective comparison of business process variants based on event logs. The method involves constructing perspective graphs from different abstractions of event logs to analyze processes from different perspectives based on event attributes. Differential perspective graphs are then used to identify statistically significant differences between two event logs, representing different process variants. The method was experimentally applied to compare differences between divisions in an IT incident handling process using various abstractions and observations. The experiments revealed differences in activity statuses, control flows between countries, and control flow frequencies over time between the divisions.
Making Use of the Linked Data Cloud: The Role of Index Structures - Thomas Gottron
The intensive growth of the Linked Open Data Cloud has spawned a web of data in which a multitude of data sources provide huge amounts of valuable information across different domains. Nowadays, when accessing and using Linked Data, the challenging question is increasingly not whether relevant data is available, but where it can be found and how it is structured. Thus, index structures play an important role in making use of the information in the LOD cloud. In this talk I will address three aspects of Linked Data index structures: (1) a high-level view and categorization of index structures and how they can be queried and explored, (2) approaches for building index structures and the need to maintain them, and (3) some example applications which greatly benefit from indices over Linked Data.
[OpenInfra Days Korea 2018] (Track 4) Introduction to CloudEvents - event da... maximizing interoperability - OpenStack Korea Community
The document discusses CloudEvents, a new standard for describing event data formats. CloudEvents provides a common way to describe events across different systems by defining common metadata for events, including event type, source, and time. It aims to allow events to be delivered across various protocols and encodings. The standard defines an event as an occurrence paired with associated data, with context metadata providing information about the event source and time. It describes how events following the CloudEvents standard would be structured and can be implemented in different environments and languages.
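A minimal illustration of such an event, using the CloudEvents v1.0 context-attribute names (`specversion`, `type`, `source`, `id`, `time`, `data`); the concrete values below are made up:

```python
import json

# An occurrence paired with its data, plus context metadata describing
# the event type, its source, and when it happened.
event = {
    "specversion": "1.0",
    "type": "com.example.order.created",      # event type
    "source": "/shop/orders",                 # where it happened
    "id": "A234-1234-1234",                   # unique per source
    "time": "2018-06-28T12:00:00Z",
    "datacontenttype": "application/json",
    "data": {"orderId": 42, "total": 19.90},  # the occurrence's payload
}

# The same context metadata travels with the event regardless of the
# transport (HTTP, MQTT, Kafka, ...), e.g. serialized as JSON:
wire = json.dumps(event)
```

Because consumers only rely on the common attribute names, the same event can be routed across different protocols and encodings without the producer and consumer agreeing on a bespoke envelope.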
MongoDB Solution for Internet of Things and Big Data - Stefano Dindo
The Internet of Things is one of the most important market scenarios to invest in by 2020.
The Internet of Things makes it possible to bring people's real lives onto the Web through interaction with physical objects and spaces, exchanging large volumes of data.
The lab described the architecture needed to support Internet of Things projects, with a focus on how data is organized within MongoDB, the market-leading NoSQL database, to collect and analyze large volumes of data efficiently and in real time.
Hands-on lab for designing MongoDB solutions for the Internet of T... - festival ICT 2016
Companies must respond quickly to market changes driven by emerging scenarios (Internet of Things, social analysis, Industry 4.0, etc.) that increasingly require integration with new technologies. Since these are innovative projects that impact business processes and contexts, flexible solutions for collecting and analyzing data are essential.
MongoDB is a NoSQL database that offers flexibility, scalability, and simplified development. The lab illustrates how to design MongoDB architectures and carry out schema design for managing data in IoT and Big Data settings, with reference to real-world cases based on the cloud technologies needed to address an increasingly global market.
Event Detection and Characterization in Dynamic Graphs - Shebuti Rayana
The document presents a framework for event detection and characterization in dynamic graphs. It proposes an ensemble approach that uses multiple algorithms for event detection, including eigen-behavior based detection, a probabilistic approach, and SPIRIT. The algorithms produce different scores and rankings that are merged through consensus methods. The approach is evaluated on two datasets: a cyber network dataset with ground truths and a New York Times corpus without ground truths. Major events are successfully detected in both datasets.
Building Applications with Streams and Snapshots - J On The Beach
Stream processing has been traditionally associated with realtime analytics. Modern stream processors, like Apache Flink, however, go far beyond that and give us a new approach to build applications and services as a whole.
This talk shows how to build applications on *data streams*, *state*, and *snapshots* (point-in-time views of application state) using Apache Flink. Rather than separating computation (application) and state (database), Flink manages the application logic and state as a tight pair and uses snapshots for a consistent view of the application and its state. With features like Flink's queryable state, the stream processor and database effectively become one.
This application pattern has many interesting properties: Aside from having fewer moving parts, it supports very high event rates because of its tight integration between computation and state, and its simple concurrency and recovery model. At the same time, it exposes a powerful consistency model, allows for seamless forking/updating/rollback of online applications, generalizes across historic and real-time data, and easily incorporates event time semantics and handling of late data. Finally, it allows applications to be defined in an easy way via streaming SQL.
Haystax Technology Labs presentation of a white paper on advanced threat analytics at the 9th International Conference on Semantic Technology for Intelligence, Defense, and Security (STIDS).
Conversations with Search Engines (Ren et al. 2020) - Vaclav Kosar
To make voice search more natural, this paper compiles a new dataset and adds two novel modules to a QA architecture.
Authors: Pengjie Ren, Zhumin Chen, Zhaochun Ren, Evangelos Kanoulas, Christof Monz, Maarten de Rijke
+ Other mentioned papers
How GenAI will (not) change your business? - Marlon Dumas
Not all new technology waves are the same. Some waves are vertical (3D printing, digital twins, blockchain) while others are horizontal (the PC in the 80s, the Web in the 90s). GenAI is a horizontal wave. The question is not whether GenAI will impact your business, but what the scope of that impact will be. In this talk, we will go through a journey of collisions: GenAI colliding with customer service, clerical work, information search, content production, IT development, product design, and other knowledge work. A common thread for understanding the impact of GenAI is to distinguish between descriptive use cases (search, summarize, expand, transcribe & translate) and creative use cases.
Walking the Way from Process Mining to AI-Driven Process Optimization - Marlon Dumas
While generative AI grabs headlines, most organizations are yet to achieve continuous process improvement from predictive and prescriptive analytics.
Why? It’s largely about data, people, and a methodical approach to deploy AI to connect data and people. The good news is that if your organization has built a process mining capability, you are well placed to climb the ladder to achieve AI-driven process optimization. But to get there, you need a disciplined step-by-step approach along two tracks: a tactical management track and an operational management track.
First, it’s about predicting what will happen if you leave your process as-is, and what will happen if you implement a change in your process. At a tactical level, a predictive capability allows you to prioritize improvement opportunities. At an operational level, it allows you to predict issues, such as deadline violations. The challenges here are how to manage the inherent uncertainty of data-driven AI systems, and how to change your people and culture to manage processes proactively, rather than reactively. It is one thing to deploy predictive dashboards; it is another thing entirely to get people to use them effectively to improve the processes.
Next, it’s about becoming preemptive: continuously optimizing your processes by leveraging streams of data-driven recommendations to trigger changes and actions. At the tactical level, this prescriptive capability allows you to implement the right changes to maximize competing KPIs. At the operational level, it means triggering interventions in your processes to “wow” customers and to meet SLAs in a cost-effective manner. The challenge here is how to help process owners, workers, and other stakeholders understand the causes of performance issues and how the recommendations generated by the AI-driven optimization system will tackle those causes.
And finally, as icing on the cake, generative AI allows you to produce improvement scenarios to adapt to external changes. Importantly, the transformative potential of generative AI in the context of process improvement does not come from its ability to provide question-and-answer interfaces for querying data. It comes from its ability to support continuous process adaptation by generating and validating hypotheses based on a holistic view of your organization.
In this talk, we will discuss how organizations are driving sustainable business value by strategically layering predictive, prescriptive, and generative AI onto a process mining foundation, one brick at a time.
Industry keynote talk by Marlon Dumas at the 5th International Conference on Process Mining (ICPM'2023), Rome, Italy, 25 October 2023
More Related Content
Similar to Discovering Unbounded Synchronization Conditions in Artifact-Centric Process Models
SC7 Hangout 3: Architecture of the BDE Pilot for Secure SocietiesBigData_Europe
This document describes the architecture of a pilot for secure societies that uses big data techniques. It involves workflows for event detection, change detection, and a common workflow. The event detection workflow crawls news, uses Cassandra to store items, detects events using Spark, and performs location enrichment. The change detection workflow aggregates images, detects changes using Spark, and clusters changes. The common workflow converts data to RDF using GeoTriples, stores and queries data using Strabon and SemaGrow, and includes a user interface called Sextant.
Slides of the tutorial on Multi-Dimensional Process Analysis shown at the BPM 2022 conference in Muenster, Germany.
Processes are complex phenomena that emerge from the interplay of human actors, materials, data, and machines. Process science develops effective methods and techniques for studying and improving processes. The BPM field has developed mature methods and techniques for studying and improving process executions from the control-flow perspective, and the limitations of control-flow focused thinking are well-known. Current research explores concepts from related disciplines to study behavioral phenomena “beyond” control-flow. However, it remains challenging to relate models and concepts of other behavioral phenomena to the dominant control-flow oriented paradigm.
This tutorial introduces several recently developed simple models that naturally describe behavior beyond control-flow, but are inherently compatible with control-flow oriented thinking. We discuss the Performance Spectrum to study performance patterns and their propagation over time, Event Knowledge Graphs to study networks of behavior over data objects and actors, and Proclets as a formal model for reasoning over control-flow, data object, queue and actor behavior. For each model, we discuss which phenomena can be studied, which insights can be gained, which tools are available, and to which other fields they relate.
https://doi.org/10.1007/978-3-031-16103-2_3
Flink Forward SF 2017: Stefan Richter - Improvements for large state and reco...Flink Forward
Stateful stream processing with exactly-once guarantees is one of Apache Flink's distinctive features and we can observe that the scale of state that is managed by Flink in production constantly grows. This leads to a couple of interesting challenges for state handling in Flink. In this talk, we presents current and future developments to improve the handling of large state and recovery in Apache Flink. We show how to keep snapshots for large state swift and how to minimize negative effects on job performance through incremental and asynchronous checkpointing. Furthermore, we discuss how to greatly accelerate recovery under failures and for rescaling. In this context, we go into details about improved execution graph recovery, caching state on task managers, and considering new features of modern storage architectures for our state backends.
The document describes an activity analysis and visualization project with the following objectives:
1. Build a system to support groups in learning how to work more effectively through visualizing collaboration data logs.
2. Develop different types of visualizations like activity radars and interaction networks to provide insights into participation, interactions, and timelines of events.
3. Apply data mining techniques to find frequent patterns and sequences of events that characterize aspects of teamwork.
The document provides an overview of the Network Simulator ns-2. It discusses:
1) The history and goals of ns-2 including supporting networking research, protocol design and comparison, and providing a collaborative environment.
2) Current projects using ns-2 including SAMAN to build robust networks and CONSER to extend ns-2's capabilities.
3) The components and functionality of ns-2 including modeling wired and wireless networks, various protocols, traffic sources, and queue management.
Cities are composed of complex systems with physical, cyber, and social components. Current works on extracting and understanding city events mainly rely on technology enabled infrastructure to observe and record events. In this work, we propose an approach to leverage citizen observations of various city systems and services such as traffic, public transport, water supply, weather, sewage, and public safety as a source of city events. We investigate the feasibility of using such textual streams for extracting city events from annotated text. We formalize the problem of annotating social streams such as microblogs as a sequence labeling problem. We present a novel training data creation process for training sequence labeling models. Our automatic training data creation process utilizes instance level domain knowledge (e.g., locations in a city, possible event terms). We compare this automated annotation process to a state-of-the-art tool that needs manually created training data and show that it has comparable performance in annotation tasks. An aggregation algorithm is then presented for event extraction from annotated text. We carry out a comprehensive evaluation of the event annotation and event extraction on a real-world dataset consisting of event reports and tweets collected over four months from San Francisco Bay Area. The evaluation results are promising and provide insights into the utility of social stream for extracting city events.
This document discusses concepts related to data streams and real-time analytics. It begins with introductions to stream data models and sampling techniques. It then covers filtering, counting, and windowing queries on data streams. The document discusses challenges of stream processing like bounded memory and proposes solutions like sampling and sketching. It provides examples of applications in various domains and tools for real-time data streaming and analytics.
This document describes a method for predicting backgrounds to top-antitop quark events using b-tagging information. The method involves measuring b-tagging rates in gamma+jets events and applying those rates to other events to predict whether they would contain one or more b-tagged jets. The rates are measured as functions of muon momentum and position. Applying the rates to gamma+jets, jet, and lepton+jets data shows good agreement, validating the method. The method is then used to predict top-antitop and W+jets backgrounds in lepton+jets data and measure the top-antitop production cross section.
Logical clocks assign sequence numbers to distributed system events to determine causality without a global clock. Lamport's algorithm uses logical clocks to impose a partial ordering on events. Vector clocks extend this to also detect concurrent events that are not causally related, providing a full happened-before relation between all events. Each process maintains a vector clock that is incremented after local events and updated when receiving messages from other processes.
- The document describes a method for understanding city traffic dynamics by utilizing sensor data that measures average speed and link travel time, as well as textual data from tweets and official traffic reports.
- It builds statistical models to learn normal traffic patterns from historical sensor data and identifies anomalies, then correlates anomalies with relevant traffic events extracted from tweets and reports.
- The method was evaluated on data collected for the San Francisco Bay Area, and it was able to scale to large real-world datasets by exploiting the problem structure and using Apache Spark for distributed processing. Events extracted from social media provided complementary information to sensor data for explaining traffic anomalies.
Sequential pattern mining is a technique for discovering frequent subsequences (patterns) in sequence databases. It involves the following steps:
1. Sorting the sequence database based on customer ID and timestamp.
2. Generating candidate sequential patterns of increasing length and calculating their support by scanning the database.
3. Finding maximal sequential patterns that are not subsequences of other higher-support patterns.
Algorithms like FreeSpan improve efficiency by projecting the database to generate candidate patterns and avoiding generating a huge number of candidates. Sequential pattern mining is useful for applications like market basket analysis, web usage mining, and episode mining in event sequences.
Multi-Perspective Comparison of Business Processes Variants Based on Event LogsMarlon Dumas
This document presents a method for multi-perspective comparison of business process variants based on event logs. The method involves constructing perspective graphs from different abstractions of event logs to analyze processes from different perspectives based on event attributes. Differential perspective graphs are then used to identify statistically significant differences between two event logs, representing different process variants. The method was experimentally applied to compare differences between divisions in an IT incident handling process using various abstractions and observations. The experiments revealed differences in activity statuses, control flows between countries, and control flow frequencies over time between the divisions.
Making Use of the Linked Data Cloud: The Role of Index StructuresThomas Gottron
The intensive growth of the Linked Open Data Cloud has spawned a web of data where a multitude of data sources provide huge amounts of valuable information across different domains. Nowadays, when accessing and using Linked Data, the challenging question is more and more often not so much whether relevant data is available, but rather where it can be found and how it is structured. Thus, index structures play an important role in making use of the information in the LOD cloud. In this talk I will address three aspects of Linked Data index structures: (1) a high-level view and categorization of index structures and how they can be queried and explored, (2) approaches for building index structures and the need to maintain them, and (3) some example applications which greatly benefit from indices over Linked Data.
[OpenInfra Days Korea 2018] (Track 4) Introduction to CloudEvents - event data that maximizes interoperab...OpenStack Korea Community
The document discusses CloudEvents, a new standard for describing event data formats. CloudEvents provides a common way to describe events across different systems by defining common metadata for events, including event type, source, and time. It aims to allow events to be delivered across various protocols and encodings. The standard defines an event as an occurrence paired with associated data, with context metadata providing information about the event source and time. It describes how events following the CloudEvents standard would be structured and can be implemented in different environments and languages.
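The event structure described above can be illustrated with a minimal event in the CloudEvents JSON format (attribute names follow the CloudEvents 1.0 specification; the event type, source, and payload here are invented for illustration):

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

def make_cloud_event(event_type: str, source: str, data: dict) -> dict:
    """Build a minimal CloudEvents envelope (JSON format).

    'specversion', 'id', 'source' and 'type' are the required context
    attributes in the CloudEvents spec; 'time' and 'data' are optional.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid4()),                              # unique per event
        "source": source,                                # who produced it
        "type": event_type,                              # what occurred
        "time": datetime.now(timezone.utc).isoformat(),  # when it occurred
        "datacontenttype": "application/json",
        "data": data,                                    # the payload itself
    }

event = make_cloud_event(
    "com.example.order.created",   # hypothetical event type
    "/orders/service",             # hypothetical source URI
    {"order_id": "245BG"},
)
print(json.dumps(event, indent=2))
```

Because the context attributes are protocol-neutral, the same envelope can be bound to HTTP headers, Kafka record headers, and other transports.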
MongoDB Solution for Internet of Things and Big DataStefano Dindo
The Internet of Things is one of the most important market scenarios to invest in by 2020.
The Internet of Things brings people's real lives onto the Web through interaction with physical objects and spaces, exchanging large volumes of data.
During the lab, we described the architecture needed to support Internet of Things projects, with a focus on how data is organized within MongoDB, the market-leading NoSQL database, in order to collect and analyze large volumes of data efficiently and in real time.
Lab pratico per la progettazione di soluzioni MongoDB in ambito Internet of T...festival ICT 2016
Companies must respond quickly to market changes driven by emerging scenarios (Internet of Things, social analysis, Industry 4.0, etc.) that increasingly require integration with new technologies. Since these are innovative projects that affect business processes and contexts, it is essential to have flexible solutions for collecting and analyzing data.
MongoDB is a NoSQL database that offers flexibility, scalability, and simplified development. The lab illustrates how to design MongoDB architectures and carry out schema design for managing data in IoT and Big Data settings, with reference to real-world cases based on the cloud technologies needed to face an increasingly global market.
Event Detection and Characterization in Dynamic GraphsShebuti Rayana
The document presents a framework for event detection and characterization in dynamic graphs. It proposes an ensemble approach that uses multiple algorithms for event detection, including eigen-behavior based detection, a probabilistic approach, and SPIRIT. The algorithms produce different scores and rankings that are merged through consensus methods. The approach is evaluated on two datasets: a cyber network dataset with ground truths and a New York Times corpus without ground truths. Major events are successfully detected in both datasets.
Building Applications with Streams and SnapshotsJ On The Beach
Stream processing has been traditionally associated with realtime analytics. Modern stream processors, like Apache Flink, however, go far beyond that and give us a new approach to build applications and services as a whole.
This talk shows how to build applications on *data streams*, *state*, and *snapshots* (point-in-time views of application state) using Apache Flink. Rather than separating computation (application) and state (database), Flink manages the application logic and state as a tight pair and uses snapshots for a consistent view onto the application and its state. With features like Flink's queryable state, the stream processor and database effectively become one.
This application pattern has many interesting properties: Aside from having fewer moving parts, it supports very high event rates because of its tight integration between computation and state, and its simple concurrency and recovery model. At the same time, it exposes a powerful consistency model, allows for seamless forking/updating/rollback of online applications, generalizes across historic and real-time data, and easily incorporates event time semantics and handling of late data. Finally, it allows applications to be defined in an easy way via streaming SQL.
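The state-plus-snapshot pattern can be illustrated with a toy event loop (plain Python, not the Flink API; Flink's actual checkpointing is asynchronous and distributed, but the recovery idea is the same):

```python
import copy

class StreamApp:
    """Toy illustration of the state-plus-snapshot pattern: keyed state
    is updated per event, and a snapshot pairs the state with the stream
    position it reflects, so recovery = restore + replay."""
    def __init__(self):
        self.state = {}      # keyed state, here a running count per key
        self.offset = 0      # position in the input stream

    def on_event(self, key):
        self.state[key] = self.state.get(key, 0) + 1
        self.offset += 1

    def snapshot(self):
        # A consistent point-in-time view of the application's state.
        return self.offset, copy.deepcopy(self.state)

    def restore(self, snap, stream):
        # Recovery: load the snapshot, then replay events after its offset.
        self.offset, self.state = snap[0], copy.deepcopy(snap[1])
        for key in stream[self.offset:]:
            self.on_event(key)

stream = ["a", "b", "a", "c", "a"]
app = StreamApp()
for k in stream[:3]:
    app.on_event(k)
snap = app.snapshot()          # taken after 3 events
app2 = StreamApp()
app2.restore(snap, stream)     # a fresh instance recovers and catches up
```

Forking or rolling back an online application falls out of the same mechanism: restoring a different snapshot and replaying a different suffix of the stream.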
Haystax Technology Labs presentation of a white paper on advanced threat analytics at the 9th International Conference on Semantic Technology for Intelligence, Defense, and Security (STIDS).
Conversations with Search Engines (Ren et al. 2020)Vaclav Kosar
To make voice search more natural, this paper compiles a new dataset and implements two novel modules into a QA architecture.
Authors: Pengjie Ren, Zhumin Chen, Zhaochun Ren, Evangelos Kanoulas, Christof Monz, Maarten de Rijke
+ Other mentioned papers
Similar to Discovering Unbounded Synchronization Conditions in Artifact-Centric Process Models (20)
How GenAI will (not) change your business?Marlon Dumas
Not all new technology waves are the same. Some waves are vertical (3D printing, digital twins, blockchain) while others are horizontal (the PC in the 80s, the Web in the 90s). GenAI is a horizontal wave. The question is not if GenAI will impact my business, but what will be the scope of this impact. In this talk, we will go through a journey of collisions: GenAI colliding with customer service, clerical work, information search, content production, IT development, product design, and other knowledge work. A common thread to understand the impact of GenAI is to distinguish between descriptive use cases (search, summarize, expand, transcribe & translate) versus creative use.
Walking the Way from Process Mining to AI-Driven Process OptimizationMarlon Dumas
While generative AI grabs headlines, most organizations are yet to achieve continuous process improvement from predictive and prescriptive analytics.
Why? It’s largely about data, people, and a methodical approach to deploy AI to connect data and people. The good news is that if your organization has built a process mining capability, you are well placed to climb the ladder to achieve AI-driven process optimization. But to get there, you need a disciplined step-by-step approach along two tracks: a tactical management track and an operational management track.
First, it’s about predicting what will happen if you leave your process as-is, and what will happen if you implement a change in your process. At a tactical level, a predictive capability allows you to prioritize improvement opportunities. At an operational level, it allows you to predict issues, such as deadline violations. The challenges here are how to manage the inherent uncertainty of data-driven AI systems, and how to change your people and culture to manage processes proactively, rather than reactively. One thing is to deploy predictive dashboards, another entirely different thing is to get people to use them effectively to improve the processes.
Next, it’s about becoming preemptive: continuously optimizing your processes by leveraging streams of data-driven recommendations to trigger changes and actions. At the tactical level, this prescriptive capability allows you to implement the right changes to maximize competing KPIs. At the operational level, it means triggering interventions in your processes to “wow” customers and to meet SLAs in a cost-effective manner. The challenge here is how to help process owners, workers, and other stakeholders understand the causes of performance issues, and how the recommendations generated by the AI-driven optimization system will tackle those causes.
And finally, as an icing on the cake, generative AI allows you to produce improvement scenarios to adapt to external changes. Importantly, the transformative potential of generative AI in the context of process improvement does not come from its ability to provide question-and-answer interfaces to query data. It comes from its ability to support continuous process adaptation by generating and validating hypotheses based on a holistic view of your organization.
In this talk, we will discuss how organizations are driving sustainable business value by strategically layering predictive, prescriptive, and generative AI onto a process mining foundation, one brick at a time.
Industry keynote talk by Marlon Dumas at the 5th International Conference on Process Mining (ICPM'2023), Rome, Italy, 25 October 2023
Discovery and Simulation of Business Processes with Probabilistic Resource Av...Marlon Dumas
In the field of business process simulation, the availability of resources is captured by assigning a calendar to each resource, e.g., Monday-Friday 9:00-18:00. Resources are assumed to be always available to perform activities during their calendar. This assumption often does not hold due to interruptions, breaks, or because resources time-share across multiple processes. A simulation model that captures availability via crisp time slots (a resource is either on or off during a slot) does not capture these behaviors, leading to inaccuracies in the simulation output. This paper presents a simulation approach wherein resource availability is modeled probabilistically. In this approach, each availability time slot is associated with a probability, allowing us to capture, for example, that a resource is available on Fridays between 14:00-15:00 with 90% probability and between 17:00-18:00 with 50% probability. The paper proposes an algorithm to discover probabilistic availability calendars from event logs. An empirical evaluation shows that simulation models with probabilistic calendars discovered from event logs replicate the temporal distribution of activity instances and cycle times of a process more closely than simulation models with crisp calendars.
This presentation was delivered at the 5th International Conference on Process Mining (ICPM'2023), Rome, Italy, October 2023.
The paper is available at: https://easychair.org/publications/preprint/Rz9g
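The core idea, an availability probability per time slot, can be sketched as follows (the slot probabilities are taken from the example in the abstract; the slot granularity and sampling logic are assumptions about how such a calendar would be used inside a simulator):

```python
import random

# A probabilistic availability calendar: each (weekday, hour) slot carries
# the probability that the resource is actually available during that slot.
calendar = {
    ("Friday", 14): 0.9,   # available 14:00-15:00 with 90% probability
    ("Friday", 17): 0.5,   # available 17:00-18:00 with 50% probability
}

def is_available(calendar, weekday, hour, rng=random):
    """Sample availability for one slot occurrence; slots absent from
    the calendar are treated as probability 0 (never available)."""
    return rng.random() < calendar.get((weekday, hour), 0.0)

# During simulation each occurrence of a slot is sampled independently,
# so a crisp calendar is just the special case with probabilities 0 or 1.
rng = random.Random(42)
samples = [is_available(calendar, "Friday", 14, rng) for _ in range(1000)]
```

Over many simulated weeks, the observed availability rate of the Friday 14:00 slot converges to the 90% encoded in the calendar.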
Can I Trust My Simulation Model? Measuring the Quality of Business Process Si...Marlon Dumas
Business Process Simulation (BPS) is an approach to analyze the performance of business processes under different scenarios. For example, BPS allows us to estimate what would be the cycle time of a process if one or more resources became unavailable. The starting point of BPS is a process model annotated with simulation parameters (a BPS model). BPS models may be manually designed, based on information collected from stakeholders and empirical observations, or automatically discovered from execution data. Regardless of its origin, a key question when using a BPS model is how to assess its quality. In this paper, we propose a collection of measures to evaluate the quality of a BPS model w.r.t. its ability to replicate the observed behavior of the process. We advocate an approach whereby different measures tackle different process perspectives. We evaluate the ability of the proposed measures to discern the impact of modifications to a BPS model, and their ability to uncover the relative strengths and weaknesses of two approaches for automated discovery of BPS models. The evaluation shows that the measures not only capture how close a BPS model is to the observed behavior, but they also help us to identify sources of discrepancies.
Presentation delivered by David Chapela-Campa at the BPM'2023 conference, Utrecht, September 2023.
Business Process Optimization: Status and PerspectivesMarlon Dumas
For decades, business process optimization has been largely about art and craft (and sometimes wizardry). Apart from narrowly scoped approaches to optimize resource allocation (often assuming that workers behave like robots), a lot of business process optimization relies on high-level guidelines, with A/B testing for idea validation, which is hard to scale to complex processes. As a result, managers end up settling for a "good enough" process. Can we do more? In this talk, we review recent work on the use of high-fidelity simulation models discovered from execution data. The talk also explores the possibilities (and perils) that LLMs bring to the field of business process optimization.
This talk was delivered at the Workshop on Data-Driven Business Process Optimization at the BPM'2023 conference.
Learning When to Treat Business Processes: Prescriptive Process Monitoring wi...Marlon Dumas
Paper presentation at the 35th International Conference on Advanced Information Systems Engineering (CAiSE'2023).
Abstract.
Increasing the success rate of a process, i.e. the percentage of cases that end in a positive outcome, is a recurrent process improvement goal. At runtime, there are often certain actions (a.k.a. treatments) that workers may execute to lift the probability that a case ends in a positive outcome. For example, in a loan origination process, a possible treatment is to issue multiple loan offers to increase the probability that the customer takes a loan. Each treatment has a cost. Thus, when defining policies for prescribing treatments to cases, managers need to consider the net gain of the treatments. Also, the effect of a treatment varies over time: treating a case earlier may be more effective than later in a case. This paper presents a prescriptive monitoring method that automates this decision-making task. The method combines causal inference and reinforcement learning to learn treatment policies that maximize the net gain. The method leverages a conformal prediction technique to speed up the convergence of the reinforcement learning mechanism by separating cases that are likely to end up in a positive or negative outcome, from uncertain cases. An evaluation on two real-life datasets shows that the proposed method outperforms a state-of-the-art baseline.
Why am I Waiting Data-Driven Analysis of Waiting Times in Business ProcessesMarlon Dumas
Presentation of a research paper at the 35th International Conference on Advanced Information Systems Engineering (CAiSE) in Zaragoza Spain. The paper presents a classification of causes of waiting times in business processes and a method to automatically detect and quantify the presence of each of these causes in a business process recorded in an event log.
This talk introduces the concept of an Augmented Business Process Management System (ABPMS): a process-aware information system that relies on trustworthy AI technology to reason and act upon data, within a set of restrictions, with the aim to continuously adapt and improve a set of business processes with respect to one or more key performance indicators.
The talk describes the transition from existing process mining technology to AI-Augmented BPM as a pyramid, where predictive, prescriptive, conversational and reasoning capabilities are stacked up incrementally to reach the level of Augmented BPM.
Talk delivered at the AAAI'2023 Workshop on AI for Business Process Management.
Process Mining and Data-Driven Process SimulationMarlon Dumas
Guest lecture delivered at the Institut Teknologi Sepuluh on 8 December 2022.
This lecture gives an overview of process mining and simulation techniques, and how the two can be used together in process improvement projects.
Modeling Extraneous Activity Delays in Business Process SimulationMarlon Dumas
This paper presents a technique to enhance the fidelity of business process simulation models by detecting unexplained (extraneous) delays from business process execution data, and modeling these delays in the simulation model, via timer events.
The presentation was delivered at the 4th International Conference on Process Mining (ICPM'2022).
Paper available at: https://arxiv.org/abs/2206.14051
Business Process Simulation with Differentiated Resources: Does it Make a Dif...Marlon Dumas
Existing methods for discovering business process simulation models from execution data (event logs) assume that all resources in a pool have the same performance and share the same availability calendars. This paper proposes a method for discovering simulation models, wherein each resource is treated as an individual entity, with its own performance and availability calendar. An evaluation shows that simulation models with differentiated resources more closely replicate the distributions of cycle times and the work rhythm in a process than models with undifferentiated resources. The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_24
Prescriptive Process Monitoring Under Uncertainty and Resource ConstraintsMarlon Dumas
This paper presents an approach to trigger interventions at runtime, in order to improve the success rate of a process when the number of resources available to perform these interventions is limited.
The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-16171-1_13
The presentation was delivered at the 20th International Conference on Business Process Management (BPM'2022) in Muenster, Germany, September 2022.
Slides of a lecture delivered at the First Process Mining Summer School in Aachen, Germany, July 2022.
This lecture introduces techniques in the area of "task mining", with an emphasis on Robotic Process Mining. Robotic Process Mining (RPM) is a family of techniques to discover repetitive routines that can be automated using Robotic Process Automation (RPA) technology, by analyzing interactions between one or more workers and one or more software applications during the performance of one or more tasks in a business process. In general, RPM techniques take as input logs of User Interactions (UI logs). These UI logs are recorded while workers interact with one or more applications, typically desktop applications. Based on these logs, RPM techniques produce specifications of one or more routines that can be automated using RPA or related tools.
Accurate and Reliable What-If Analysis of Business Processes: Is it Achievable?Marlon Dumas
This document discusses using event logs to generate business process simulation models. It describes traditional discrete event simulation approaches that discover simulation models from event logs recorded by information systems. Deep learning techniques are also discussed that can generate traces without an explicit process model. The document suggests that combining discrete event simulation and deep learning may produce more accurate simulations, but challenges remain around validating such hybrid approaches and testing them in previously unseen scenarios. More research is needed before these data-driven simulation methods can reliably predict the effects of interventions.
Learning Accurate Business Process Simulation Models from Event Logs via Auto...Marlon Dumas
Paper presentation at the International Conference on Advanced Information Systems Engineering (CAiSE).
This paper presents an approach to automatically discover business process simulation models from event logs by combining process mining and deep learning techniques.
Paper available at: https://link.springer.com/chapter/10.1007/978-3-031-07472-1_4
Process Mining: A Guide for PractitionersMarlon Dumas
This document presents a guide for practitioners on process mining. It introduces process mining and discusses its main use cases. These use cases are categorized into discovery oriented, future and change oriented, alignment oriented, variant oriented, and performance oriented. The document also provides a framework to classify use cases and discusses the business-oriented questions that can be answered using different process mining use cases, such as improving transparency, quality, agility, efficiency and conformance.
Process Mining for Process Improvement.pptxMarlon Dumas
Presentation of a research paper at the 16th International Conference on Research Challenges in Information Science (RCIS). The paper presents the results of an empirical study on how practitioners use process mining to identify business process improvement opportunities. The paper is available at: https://link.springer.com/chapter/10.1007/978-3-031-05760-1_13
Data-Driven Analysis of Batch Processing Inefficiencies in Business ProcessesMarlon Dumas
Slides of a research paper presentation at the 16th International Conference on Research Challenges in Information Science (RCIS).
The research paper presents an approach to analyze event logs of business processes in order to identify batched activities and to analyze the waiting times caused by these activities.
Paper available at: https://link.springer.com/chapter/10.1007/978-3-031-05760-1_14
Optimización de procesos basada en datosMarlon Dumas
Talk given at BPM Day Lima 2021.
In this talk we discuss emerging methods and applications in the field of data-driven process optimization. We cover advances in process mining, methods for building digital twins of processes, and predictive monitoring methods. Through examples and case studies, we show how these methods can guide digital transformation and continuous process improvement initiatives. In particular, we illustrate the use of these methods to: (1) analyze the performance of business processes so as to identify friction points and automation opportunities; (2) predict the impact of changes, and in particular the impact of an automation initiative; (3) make predictions about process performance and adjust process execution so as to prevent SLA violations, customer complaints, and other undesirable events.
Process Mining and AI for Continuous Process ImprovementMarlon Dumas
Talk delivered at BPM Day Rio Grande do Sul on 11 November 2021.
Abstract.
Process mining is a technology that marries methods from business process management and from data science, to support operational excellence and digital transformation. Process mining tools can transform data extracted from enterprise systems, into visualizations and reports that allow managers to improve organizational performance along different dimensions, such as efficiency, quality, and compliance. In this talk, we will give an overview of the capabilities of process mining tools, and we will illustrate the benefits of process mining via several case studies in the fields of insurance, manufacturing, and IT service management.
Discovering Unbounded Synchronization Conditions in Artifact-Centric Process Models
1. Discovering Synchronization Conditions in Artifact-Centric Process Models
Viara Popova and Marlon Dumas
University of Tartu
2. Artifact Model Discovery Tool
Raw Log (text or DB) → Discover Artifact Types & Associations → Extract Artifact Logs → Discover GSM per Artifact → Discover Intra-Artifact Conditions → Discover Synchronization Conditions → GSM Artifact Model
BizArtifact (Barcelona)
3. Raw Logs
2008-12-09T08:20:01.527+01:00 Received_order 245BG „Metallica” „Dead Magnetic”
2008-12-09T20:11:15.342+01:00 Received_order 246BL „Ray Baretto” „Acid”
2008-12-10T08:22:01.427+01:00 Sent_quote 245BG
2008-12-10T08:30:01.427+01:00 Sent_quote 246BL
2008-12-11T11:20:14.534+01:00 Accepted_quote 246BL
...
Each entry consists of a timestamp, an event type, and one or more data attributes.
4. Raw Logs → Artifact Logs
1. Discover entities: functional dependencies → primary keys
2. Discover relationships: inclusion dependencies → foreign keys, multiplicities
3. Discover artifact types
4. Extract the log of each artifact
Example (diagram): a Customer PO is related to Material Orders (MO) with multiplicity 1..*; each artifact type has its own artifact log.
5. Discovered GSM: Material Order (MO)
- t1: Create MO (on create)
- t2: Send MO to supplier (on t1 complete)
- t3: Receive supplier response (on t2 complete)
- t4: Receive items (on t3 complete)
- t5: Receive invoice (on t3 complete)
- t6: Reassign supplier (on t3 complete)
- t7: Close MO (on t4 complete and t5 complete)
Each task tn produces a milestone "tn complete".
One artifact, pure control-flow.
6. GSM Discovery Tool Chain
Raw Log (text or DB) → Discover Artifact Types & Associations → Extract Artifact Logs → Discover GSM per Artifact → Discover Intra-Artifact Conditions → Discover Synchronization Conditions → GSM Artifact Model
7. Composition of guard sentries
A guard sentry includes:
1. Intra-artifact data conditions, e.g. "weight > 100", "response = positive", discovered using the ProM decision miner or BranchMiner
2. Inter-artifact synchronization conditions
8. GSM with intra-artifact conditions (MO)
- t1: Create MO (on create)
- t2: Send MO to supplier (on t1 complete)
- t3: Receive supplier response (on t2 complete)
- t4: Receive items (on t3 complete & positive response)
- t5: Receive invoice (on t3 complete & positive response)
- t6: Reassign supplier (on t3 complete & negative response)
- t7: Close MO (on t4 complete and t5 complete)
9. Inter-artifact synchronization conditions
Points of synchronization between artifacts: the transition to a new state of one artifact is triggered by the states of related instances of another artifact.
Examples:
- A paper can only be evaluated when at least three reviews are completed.
- A meeting can only be confirmed when at least half of the members have confirmed participation.
Akin to completion conditions in BPMN multi-instance activities.
13. Find synchronization points
Heuristic to find probable points: the activity level of S is the average number of events in the secondary artifact happening immediately before S.
Example trace: A B C D E S F G D S; the two occurrences of S have activity levels 2 and 1, respectively.
The higher the activity level, the more likely S is a synchronization point.
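The activity-level heuristic on this slide can be sketched directly (assuming, for illustration, that D and E are the secondary-artifact events in the example trace):

```python
def activity_level(trace, sync_event, secondary_events):
    """Average number of secondary-artifact events immediately
    preceding each occurrence of sync_event in the trace."""
    counts = []
    for i, e in enumerate(trace):
        if e != sync_event:
            continue
        # Count the unbroken run of secondary events just before this S.
        run, j = 0, i - 1
        while j >= 0 and trace[j] in secondary_events:
            run += 1
            j -= 1
        counts.append(run)
    return sum(counts) / len(counts) if counts else 0.0

# Slide example: the first S is preceded by two secondary events (D, E)
# and the second S by one (D), giving activity level (2 + 1) / 2 = 1.5.
trace = ["A", "B", "C", "D", "E", "S", "F", "G", "D", "S"]
print(activity_level(trace, "S", {"D", "E"}))  # → 1.5
```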
14. Find conditions for point S
For each execution of S: take a snapshot of the current states of the related instances.
Feature vector: for each event type T in the secondary artifact, how many instances were in state T (i.e., T was their last executed event) when S was executed.
Positive examples: one vector per execution of S.
Negative examples: one vector per execution of any other event in the main artifact, plus one vector per execution of an event in the secondary artifact.
Example trace: A B C D E S F G D S
Positive examples for S: (B:0, D:1, E:1), (B:0, D:1, E:0)
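The snapshot-based feature extraction can be sketched as follows (the interleaved trace and the instance ids m1, m2 are invented for illustration; the slide does not specify which events belong to the secondary artifact):

```python
def build_samples(trace, sync_event, event_types):
    """Label snapshots: one positive example per execution of the
    synchronization point S, one negative example per other event.
    Each snapshot counts, per secondary event type T, how many
    secondary instances have T as their last executed event."""
    last_event = {}          # secondary instance id -> last event type
    pos, neg = [], []
    for event, instance in trace:
        vec = {t: sum(1 for s in last_event.values() if s == t)
               for t in event_types}
        (pos if event == sync_event else neg).append(vec)
        if instance is not None:      # secondary event: update its instance
            last_event[instance] = event
    return pos, neg

# Hypothetical interleaved trace: main-artifact events carry instance None,
# secondary-artifact events carry the id of the instance they belong to.
trace = [("A", None), ("B", None), ("C", None), ("D", "m1"), ("E", "m2"),
         ("S", None), ("F", None), ("G", None), ("D", "m2"), ("S", None)]
pos, neg = build_samples(trace, "S", ["B", "D", "E"])
print(pos)   # → [{'B': 0, 'D': 1, 'E': 1}, {'B': 0, 'D': 2, 'E': 0}]
```

The positive vectors are then fed, together with the negative ones, to a classifier that separates S-executions from non-S snapshots.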
15. Find conditions for point S
Refinements:
- Remove redundant samples
- Balance the number of positive and negative examples
- Decision tree → synchronization conditions
Scoring of each candidate condition:
- Quality of the decision tree (F-measure)
- Size of the decision tree (normalized)
- Activity level (normalized)
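The three scoring criteria can be combined as sketched below (the F-measure formula is standard; the equal weighting and the inversion of tree size are assumptions, since the slide does not specify how the criteria are aggregated):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall for a candidate condition."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def score(f, tree_size, activity, max_size, max_activity,
          weights=(1/3, 1/3, 1/3)):
    """Combine the slide's three criteria into one score: decision-tree
    F-measure, (inverted, normalized) tree size, and normalized activity
    level. Equal weights are an illustrative assumption."""
    w_f, w_s, w_a = weights
    size_term = 1 - tree_size / max_size       # smaller trees score higher
    activity_term = activity / max_activity    # higher activity scores higher
    return w_f * f + w_s * size_term + w_a * activity_term
```

For instance, a condition with F-measure 0.9, a tree of size 3 out of a maximum 10, and activity level 1.5 out of a maximum 2.0 would score (0.9 + 0.7 + 0.75) / 3 under equal weights.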
16. GSM with synchronization conditions (PO)
- t1: Record PO (on create)
- t2: Analyze PO (on t1 complete)
- t3: Generate MOs (on t2 complete)
- t4: Assemble Product (on t3 complete & all MOs fulfilled)
- t5: Receive payment (on t3 complete)
- t6: Notify customer (on t3 complete & MOs unfulfilled)
- t7: Close PO (on t4 complete and t5 complete)
Method and tool for reverse-engineering an artifact-centric model from logs, including:
- Artifact types
- Lifecycles (GSM)
- Guards
Method and tool for checking conformance between models and logs: detects inconsistencies between an artifact-centric model and logs.
Method and tool for repairing non-conforming models: determines the smallest set of changes to repair a non-conformant model.