Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, and conformance checking and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, relying instead on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
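To give a flavour of what constraint-based conformance checking looks like, here is a minimal, hypothetical sketch of evaluating a single Declare-style "response" constraint (every occurrence of activity a must eventually be followed by activity b) over a toy event log. The function name and log format are illustrative assumptions, not the techniques presented in the talk.

```python
# Hypothetical sketch: checking a Declare-style response(a, b) constraint,
# i.e. every 'a' in a trace is eventually followed by a 'b'.

def satisfies_response(trace, a, b):
    """True if every occurrence of a is eventually followed by b."""
    expecting_b = False
    for activity in trace:
        if activity == a:
            expecting_b = True
        elif activity == b:
            expecting_b = False
    return not expecting_b

# A toy event log: each trace is a sequence of activity labels.
log = [
    ["register", "check", "pay"],
    ["register", "check"],  # violates response(check, pay)
]

conforming = [t for t in log if satisfies_response(t, "check", "pay")]
```

Real declarative process mining tools compile such constraints to automata or temporal-logic formulas rather than hand-written checks, but the semantics of a single constraint reduce to a pass like this one.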
Here are some tips for breaking down work in an agile way:
- Focus on delivering value to users. Each story and task should provide some value.
- Iterate frequently. Stories and tasks should be small enough that you can complete and release them within a sprint or two.
- Get early feedback. Small slices allow testing work sooner and adjusting based on feedback.
- Prioritize flexibility. Small slices give you options to reorder or drop work as priorities change.
- Estimate costs accurately. Tasks should take 1-5 days; if a task takes longer, it may need further breakdown. Consider spikes for technical challenges.
- Refactor when repetitive. If pieces of work are very similar, look for ways to simplify them through refactoring.
Visualize the right product with User Story Mapping, by Agile Montréal
""I think we should build this." "Ah, but where do we start?" "Where does this feature fit in the flow?"
Defining THE right product is never easy. User Story Mapping is a simple and powerful framework that sparks the right conversations and lets you co-create the vision of the product to build.
Come (re)discover this tool hands-on by building a map together!"
Workshop by Jean-Christophe Lauffer and Alain Benoit at Agile Tour Montréal 2018
This document discusses different aspects of interaction design and prototyping. It covers conceptual design, which transforms user requirements into a conceptual model. It also discusses different types of prototyping like low and high fidelity, as well as compromises in prototyping. Finally, it discusses how prototypes can be used to support the design process by answering questions and testing ideas.
The document discusses user-centered design and provides guidance on creating personas, scenarios, and user stories to represent users and their goals when interacting with a product. It describes personas as research-based archetypes that capture a user's goals and behaviors. Scenarios are stories that illustrate interactions and help understand a user's goals, motivations, and actions. User stories are simple descriptions of specific user needs following the "As a <user>, I want <goal> so that <benefit>" format. The document provides templates and examples for documenting each of these representations.
This was a talk given by Jenny Wong and Danilo Sato at Agile Brazil 2011:
Agile teams learned that splitting teams into silos and functional areas is not a productive way to develop software. Having cross-functional teams improves collaboration and enables faster delivery of value. The same arguments can be applied to user stories. The way in which business requirements and features are translated into user stories plays an important role in the software delivery process. This session will discuss the benefits of working with small user stories, and present different ways to split requirements into user stories.
Build – Measure – Learn is one of the most important mechanisms of agile software development. However, this mechanism is often crippled in today's projects, where traditional approaches to requirements gathering bloat product backlogs until they can no longer be prioritized in a meaningful way. The results are customers who are not interested in iteration results, releases to production that happen only at the end of the project, and customer feedback that arrives when it is already too late and the budget is burned up.
Story mapping is a method that aligns user stories along desirable outcomes, so that customers can give meaningful feedback sooner and release to production can happen earlier. The method helps slice and prioritize user stories, and addresses the product design aspect that is missing when working with a product backlog alone. It is highly visual and facilitates shared product ownership among the product owner, the team, and the customer.
This presentation provides an introduction to the concept of story mapping, with examples and experience gathered from the author's own projects.
This document discusses interaction design basics and provides guidance on screen design and layout principles. It recommends grouping related items logically and physically close together, ordering items in a natural sequence, using alignment and white space to make screens readable, and considering both local and global navigation structures. The document emphasizes understanding users and scenarios to design effective interactions rather than just interfaces.
One of the most persistent factors limiting the impact of user research in business is that projects often stop with a catalog of findings and implications rather than generating opportunities that directly enable the findings. We’ve long heard the lament, “Well, we got this report, and it just sat there. We didn’t know what to do with it.”
But design research (or ethnography, or user research, or whatever the term du jour may be) has also become standard practice, as opposed to something exceptional or innovative. That means that designers are increasingly involved in using contextual research to inform their design work.
Ongoing acceptance of user research has increased the ranks of designers and others who feel comfortable conducting research. But analysis and synthesis are a more slippery skill set, and we see how easy it is for teams to ignore (more out of frustration than anything malicious) data that doesn't immediately seem actionable. This workshop gives people the tools to take control over synthesis and ideation themselves by breaking the work down into a manageable framework and process.
In this session, you'll:
Collaborate in teams to experience an effective framework for synthesizing raw field data.
Gain perspective on the difference between surface observations and deeper, interpreted insights.
Learn how to move from data to insights to opportunities.
Get techniques for generating ideas and strategies across a broad scope of business and design concerns.
Focus on individual and group analysis to create a top-line report.
Brainstorm on patterns, cluster analysis, and diagrams to rethink problems.
Prioritize findings and create new opportunities.
The March 2017 event featured a talk by business designer Christian Rudolph on service design for the circular economy. Christian runs the consultancy ‘next cycle’ and focusses on resource-intensive business models in his work.
Recently, IDEO and the Ellen MacArthur Foundation released their Circular Design Guide, which introduced more designers to the approach. So it was time for us to discuss the concept's implications for service designers. Christian has years of experience in consulting industry heavyweights like Philips and BASF, and in helping them transform from linear, product-focussed to circular, service-oriented businesses. The evening event took place on Wednesday, March 22nd.
Introduction to Business Process Monitoring and Process Mining, by Marlon Dumas
Two-day course delivered at the Chinese Business Process Management (BPM) Summer School in Jinan, China, 23-24 August 2018. The course introduces a range of techniques, tools, and algorithms for process monitoring and mining.
Presentation document accompanying the talk by Annie Le Guern Porchet, director of the municipal media library of Languidic. Round table no. 2, "better welcoming audiences far removed from culture", at the Rencontres Nationales des Bibliothécaires Musicaux, organized by ACIM, Clermont-Ferrand, 14-15 March 2016
The document provides guidance on business process modelling and mapping. It defines business process modelling and the three main types of process models. Process mapping is described as a technique to diagrammatically model processes by representing the steps, participants, and decision logic through a visual map. The document then provides instructions on how to produce a process map, including identifying boundaries and participants, drawing the initial flow, and adding and reviewing details like swimlanes and decision points. An example process map is also included to demonstrate a completed map.
These slides provide an introduction to usability testing. This well-known method in user-centred design is used to improve products, by having participants interact with these products and by measuring their performances and responses.
I presented this topic as a guest lecturer to first-year Psychology students at the University of Twente on February 6th, 2017. Providing examples and best practices from Dutch digital design agency Mirabeau, I explained to them the required steps for the preparation, the moderation, and the analysis of usability tests. Moreover, I highlighted the importance of psychologists' knowledge, (research) methods, and skills for design, which I believe to be invaluable.
User experience can be drastically elevated by combining data science insights with user-based insights from research. On its own, data analytics can leave themes and correlations difficult to explain and makes it hard to provide accurate recommendations. For example, themes identified via large global surveys and usage data can be better understood with UX insights from focused user research, such as user interviews and/or cognitive walkthroughs. This presentation will highlight the complementary nature of data science and UX and will focus on the benefits of bringing the two disciplines together. This will be buttressed with practical examples of enterprise projects and applications that combined data and skills from the two disciplines, guidance on how the two disciplines can better work together, and the skills needed to improve as a UX professional when working with data science teams.
The application of User-Centered Design in various fields, especially in Architecture and Design. Based on Don Norman's book, The Design of Everyday Things.
Presentation from https://events.epam.com/events/mobile-people-open-android-meetup
Describes how alert dialogs can be treated in MVVM architecture and how we can use Android Databinding plugin to control their appearance.
This document discusses various aspects of face-to-face communication and how they compare to computer-mediated communication. It addresses topics like nonverbal cues, turn-taking, establishing common ground, and maintaining context in both synchronous and asynchronous text-based communication. Key challenges in computer-mediated communication include lack of nonverbal cues, weaker back channels, loss of granularity and pace, and difficulties maintaining sequencing and context in discussions.
"How to write better User Stories" por @jrhuertawebcat
This document discusses improving user stories by following best practices like the INVEST acronym. It explains that user stories address common requirements gathering pitfalls by focusing on delivering value to end users, using their language, and enabling prioritization and incremental development. The document provides guidelines for writing "good" user stories, including having context, value, and acceptance criteria, as well as being independent, negotiable, estimable, small in size, and testable. It also identifies potential "user story smells" to avoid.
Slides supporting the book "Process Mining: Discovery, Conformance, and Enhancement of Business Processes" by Wil van der Aalst. See also http://springer.com/978-3-642-19344-6 (ISBN 978-3-642-19344-6) and the website http://www.processmining.org/book/start providing sample logs.
This document provides an overview of different types of human-computer interfaces discussed in a university lecture. It describes 12 interfaces: command-based, WIMP/GUI, multimedia, virtual reality, information visualization, web, consumer electronics, mobile, speech, pen, touch, and air-based gestures. For each interface, it discusses key characteristics, examples, research and design considerations. The goal is to help students understand different interface approaches and important user experience factors to consider in interface design.
User Stories in Action: Boosting the Value of Products, by Marco Avendaño
This talk explores user stories as a fundamental tool in agile development for significantly boosting the value of products.
Through examples and real cases, it unpacks the power of user stories to understand user needs, improve collaboration within the team, and guarantee the delivery of products that not only meet but exceed expectations.
User Story Mapping (USM) helps teams get a common understanding of requirements from the user's perspective to facilitate backlog creation. It improves backlog quality and team communication. USM creates a map with user stories arranged in a usage flow. Each story follows the "As a <user>, I want <goal> so that <benefit>" format. Together, the mapped stories provide an overview of a product from the user experience while maintaining granular stories for planning and testing.
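The story template described above can be sketched as a tiny data structure; the `UserStory` class and its field names below are illustrative assumptions, not part of any USM tool.

```python
# Illustrative sketch of the "As a <user>, I want <goal> so that <benefit>"
# story template; the class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class UserStory:
    user: str
    goal: str
    benefit: str

    def render(self) -> str:
        """Produce the canonical one-line story format."""
        return f"As a {self.user}, I want {self.goal} so that {self.benefit}"

story = UserStory(
    user="frequent traveller",
    goal="to save my seat preference",
    benefit="I do not re-enter it on every booking",
)
```

Keeping stories in a structured form like this makes it trivial to lay them out along the usage flow of a map while still tracking each story individually for planning and testing.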
Content Design, UI Architecture and Content-UI-Mapping, by Wolfram Nagel
When you want to gather, manage, and publish content and display it independently on any user interface and/or target channel, you need a system that supports "Content Design and Content UI Mapping". Content and user interfaces can be planned, assembled modularly, and structured in a similar manner, comparable to bricks in a building-block system. Content basically runs through three steps until it reaches its recipient: gathering, management, and output. A mapping has to occur at the intersections of these three steps.
This is the extended slides version on the topic.
There's also an article on the topic: https://medium.com/@wolframnagel/content-design-and-ui-mapping-a35af8cac3f6#.3ylkxrakf
- The document discusses the history and evolution of software process research from the 1980s to present day, including the development of process modeling languages, standards like CMMI, and agile methods like Scrum.
- It notes that while early attempts at fully automated software engineering environments did not succeed, many useful tools were developed for configuration management, testing, project management, and more.
- Current challenges discussed include coping with faster development speeds, integrating research ideas into practical tools, educating engineers, and addressing social and technical aspects together.
Velocity. Agility. Python. (PyCon APAC 2017), by Sian Lerk Lau
Speaker Notes Edition.
There are often conflicts of interest between the velocity and the agility of a project.
Some see velocity as merely the time taken to complete a task, whilst others see agility, which requires responding quickly to change while meeting a task's objectives, as a significant contributing factor behind delays.
As a Python developer and an engineering process facilitator, I would like to share my journey in discovering the beauty of velocity and agility in continuous software delivery.
At the end of this talk, I hope to lead all attendees to reflect on the engineering process and organisational culture in their respective workplaces, so they can deliver quality Python projects with velocity and agility in mind.
Supporting Knowledge Workers With Adaptive Case Management, by Nathaniel Palmer
• Why Empowering Knowledge Workers is the Management Challenge of the 21st Century
• How Case Management Offers Sanity in the Face of IT Consumerization and BYOD
• Where Cloud, Big Data, Mobile and Social Computing Intersect With Case Management
• Case Management Market Landscape, Categorization, and Use Case Patterns
This document discusses hybrid process models that combine both declarative and imperative modeling techniques. It notes that different parts of a process may be more or less flexible, so a hybrid approach can model this better than a single paradigm. Examples of hybrid approaches include mixing Petri nets with declarative constraints, or having flexible regions within an otherwise rigid workflow. The document also discusses evaluating hybrid modeling through human modeling experiments and automated discovery techniques using event logs. It suggests that while hybrid models seem promising, more work is needed to determine how best to apply different techniques.
This document discusses the role of compliance in security audits. It covers information security compliance and its relationship to reputation, regulation, revenue, resilience, and recession proofing. It provides an overview of ISO 27001, including its family of standards and steps for implementing an information security management system. Finally, it discusses common myths about ISO 27001 and memory techniques for quick revision like mnemonics, sentence aids, workflow diagrams, and color coding.
This document outlines a proposed solution to improve the performance of alignment-based conformance checking for process mining by using a decomposition approach. It discusses decomposing large event logs and process models into smaller subcomponents to allow for conformance checking in parallel. The key challenges are ensuring the merged results from subcomponents are complete and correspond to the exact overall solution, improving performance, developing effective decomposition strategies, and extending the approach to other process modeling notations and data-aware models. The current state of the investigation is described, which involves further developing the recomposition framework, addressing issues with conformance metrics, and evaluating different decomposition strategies.
Seminar given by Marco Montali on 31/05/2016 at the Department of Computer Science, University of Verona. Title: Data-aware business processes - balancing between expressiveness and verifiability.
This document provides an overview of key object-oriented concepts including encapsulation, abstraction, inheritance, and polymorphism. Encapsulation involves grouping related data and procedures together and controlling access to an object's data. Abstraction focuses on common behaviors across objects rather than implementation details. Inheritance organizes similar objects in a hierarchy to consolidate common functionality. Polymorphism allows modifying inherited behaviors in descendant classes. The document uses examples of reactive substances like Mentos and Diet Coke to illustrate these concepts.
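The four concepts can be made concrete with a minimal Python sketch using the document's reactive-substances example (class and method names here are illustrative, not taken from the slides):

```python
class Substance:
    """Abstraction: only the behavior common to all substances is exposed."""
    def __init__(self, name):
        self._name = name  # encapsulation: state is kept behind methods

    @property
    def name(self):
        return self._name

    def react_with(self, other):
        # default behavior shared through inheritance
        return f"{self.name} mixes quietly with {other.name}"

class Mentos(Substance):
    pass  # inherits the default reaction unchanged

class DietCoke(Substance):
    def react_with(self, other):
        # polymorphism: a descendant modifies inherited behavior
        if isinstance(other, Mentos):
            return f"{self.name} erupts on contact with {other.name}!"
        return super().react_with(other)

coke = DietCoke("Diet Coke")
mint = Mentos("Mentos")
print(coke.react_with(mint))   # overridden behavior
print(mint.react_with(coke))   # inherited default behavior
```

Both objects answer the same message, `react_with`, but each in its own way, which is exactly the point of polymorphism the document makes.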
This document discusses event-driven business process management using jBPM and Drools. It provides an example of how software development tools and infrastructure can be combined with jBPM 5 and Drools to gain real-time understanding of processes and increase agility. Traditional BPM systems struggle with change, complexity, and flexibility, but event-driven BPM addresses these issues. The document also outlines a logistics company use case and how a rules-based solution could dynamically manage shipments and routing.
The document discusses the principles and practices of continuous delivery. It defines continuous delivery as making software deployable at any time by having automated tests, builds, and deployments. This allows for faster feedback and release cycles. The document outlines topics like collaboration, version control, continuous integration, automated deployments, monitoring, and migrating databases to support continuous delivery. It emphasizes keeping code releasable at all times and separating work in progress from releasable code.
Getting software released to users can be risky, time-consuming and painful. The solution is the ability to deliver reliable software continuously through build, test and deployment automation, and through improved collaboration between developers, testers and operations. In this tutorial we will present principles and technical practices that enable teams to incrementally deliver software of high quality and value into production whenever they want, and extremely fast. The size of the project or the complexity of its code base does not matter.
In the first half of the tutorial we will introduce the concepts of continuous delivery through continuous integration and automation of the build, test, and deployment process. We will also go through some basic principles and patterns for building automatable applications (architecture). We will cover experiences with team collaboration patterns and, lastly, techniques such as an easy and comprehensible version control strategy.
In the second half of the tutorial we will work with automated provisioning of agile infrastructure, including the use of tools (Puppet) to automate the management of testing and production environments. We will go through some scripting lessons exemplifying how to implement zero-downtime deploys (… and rollback, if something goes wrong!), with examples in both Bash and Ruby. Along with controlling the start, stop, and restart lifecycles during deploys, we will also show some simple techniques for backups, logging, error handling, monitoring, and verification of application health that can make the automation more robust.
We will also use servers "in the cloud" to demonstrate different techniques, and we hope to make it a fun day and to deliver software (examples) several times throughout the workshop.
Required knowledge: Agile/Lean basics, Linux basics, version control basics, Maven basics.
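One common zero-downtime pattern the tutorial alludes to is the release-directory-plus-symlink switch. A minimal Python sketch of that pattern (the tutorial uses Bash and Ruby; this is only an illustration of the idea, and works on POSIX systems where `os.replace` renames atomically):

```python
import os
import tempfile

def deploy(base, release_id):
    """Activate a release by atomically repointing the 'current' symlink.
    Keeping previous releases on disk is what makes rollback instant."""
    release = os.path.join(base, "releases", release_id)
    os.makedirs(release, exist_ok=True)
    tmp = os.path.join(base, "current.tmp")
    current = os.path.join(base, "current")
    os.symlink(release, tmp)
    os.replace(tmp, current)  # atomic rename: no window with a dead link
    return os.path.realpath(current)

def rollback(base, previous_id):
    # rollback is just a deploy of an already-existing release
    return deploy(base, previous_id)

base = tempfile.mkdtemp()
deploy(base, "v1")
deploy(base, "v2")   # zero-downtime switch from v1 to v2
rollback(base, "v1")  # something went wrong: point back to v1
```

Because the switch is a single rename, requests always see either the old or the new release, never a half-deployed state.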
Invited talk on "Data and Processes: a Challenging, though Necessary, Marriage" at the 14th Italian Conference on Artificial Intelligence (#AI4), related to the "Marco Somalvico 2015 Award".
Technology is enabling greater product offerings in financial services but also bringing challenges of managing increasingly complex systems over time. Legacy systems can be difficult to migrate and modernize due to their age and number of products supported. Business knowledge is lost when outsourcing increases. Systems are becoming old yet critical, and supporting outdated closed products is costly. Compliance requirements also continuously add management challenges. When redesigning such systems, it is important to focus on information rather than individual applications or technologies, separate stable and changing elements, and design for continuous change, monitoring and knowledge distribution across teams.
Technology is enabling greater product offerings in financial services but also bringing challenges of managing aging complex systems over long periods. To manage constant change, a good architecture separates elements that change frequently from those that don't. Microservices principles allow isolation while avoiding silos. Focusing on information modeling rather than representations reduces issues when characteristics change. Monitoring and auditing must be built-in due to regulatory requirements. Drawing from open source practices helps manage ongoing versions and releases in a complex environment. Ultimately new systems may be needed to fully take advantage of new technologies and avoid accumulating further technical debt from old systems.
1. Transactional Black Belts often face different challenges than manufacturing Black Belts as processes are less defined and changes are harder to reverse in human-centered environments.
2. A key difference is that cycle time is a more useful overall measure than defect count for selecting projects in service/transaction environments where establishing a baseline process is often needed.
3. The training presented teaches how to model service and transaction processes, understand decision-making reliability, measure and improve cycle time, and manage change to achieve greater process efficiency.
This document provides an overview of key concepts in human-computer interaction (HCI). It defines HCI and related fields like usability engineering. The document discusses HCI in the design process using models like the waterfall model and spiral lifecycle model. It presents a general framework for HCI including the human, task, computer, environment, and interface. The document also describes Norman's action cycle model and sources of errors. Finally, it discusses design rules, principles, guidelines, and standards for HCI like those from Shneiderman, Norman and Nielsen.
1. Scenario based design uses narratives or stories to describe how users will interact with a system. These scenarios help designers understand user needs and how people will accomplish tasks with the system.
2. Scenarios are both concrete, providing specific examples of usage, and flexible, allowing for refinement and elaboration. This helps designers manage the fluid nature of design situations.
3. Considering scenarios promotes a work-oriented design process focused on the needs of users. Scenarios also help designers reflect on and evaluate their work throughout the design process.
The document discusses integrating social software like wikis with business process management systems using shared spaces. It summarizes the experience of two Swedish companies in developing such systems. One system used a static structure for shared spaces around each process instance. The other used a dynamic structure based on visualizing the process map. Both approaches had advantages for certain types of processes. Going forward, the authors propose further developing a state-oriented theoretical view of business processes to inform shared space architectures.
Empirical Methods in Software Engineering - an Overview by alessio_ferrari
A first introductory lecture on empirical methods in software engineering. It includes:
1) Motivation for empirical software engineering studies
2) How to define research questions
3) Measures and data collection methods
4) Formulating theories in software engineering
5) Software engineering research strategies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
Similar to Constraints for Process Framing in Augmented BPM (20)
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes where objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process that involves interacting candidate, application, job offer and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare that allows constraints to have uncertain probabilities.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities.
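The system of inequalities in point 3 can be made concrete for two constraints. In the sketch below the probabilities and the choice of which scenario is inconsistent are invented for illustration; in ProbDeclare proper, LTLf reasoning determines scenario consistency:

```python
# Two probabilistic constraints C1 (prob. p1) and C2 (prob. p2). A scenario
# fixes, per constraint, whether it is satisfied (1) or violated (0). Suppose
# LTLf reasoning has shown scenario '00' (both violated) to be logically
# inconsistent. The scenario masses x_s must then satisfy:
#   x11 + x10 + x01 + x00 = 1    (masses form a distribution)
#   x11 + x10 = p1               (C1 holds with probability p1)
#   x11 + x01 = p2               (C2 holds with probability p2)
#   x00 = 0                      (inconsistent scenarios get no mass)
def scenario_distribution(p1, p2):
    # solving the system above by substitution
    x11 = p1 + p2 - 1
    x10 = 1 - p2
    x01 = 1 - p1
    dist = {"11": x11, "10": x10, "01": x01, "00": 0.0}
    if any(v < 0 for v in dist.values()):
        raise ValueError("constraint probabilities are jointly unsatisfiable")
    return dist

dist = scenario_distribution(0.8, 0.3)  # masses: x11 = 0.1, x10 = 0.7, x01 = 0.2
```

With one inconsistent scenario the system has a unique solution; in general, more constraints or fewer inconsistencies can leave a family of admissible distributions rather than a single one.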
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
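The key delegation in the paper is to hand arithmetic data conditions to an off-the-shelf SMT solver (such as Z3). As a toy stand-in for that kind of query, the sketch below decides satisfiability of a conjunction of linear guards over a single real variable; real DPN soundness checking needs a full SMT solver for multi-variable conditions, so this is only an illustration of the question being asked:

```python
import math

def satisfiable(bounds):
    """Decide satisfiability over the reals of a conjunction of one-variable
    linear guards, each given as (op, c) with op in {'<', '<=', '>', '>='}.
    A toy version of the query a soundness checker delegates to SMT."""
    lo, hi = -math.inf, math.inf
    lo_strict = hi_strict = False
    for op, c in bounds:
        if op in (">", ">="):
            if c > lo or (c == lo and op == ">"):
                lo, lo_strict = c, (op == ">")
        else:
            if c < hi or (c == hi and op == "<"):
                hi, hi_strict = c, (op == "<")
    if lo < hi:
        return True
    return lo == hi and not lo_strict and not hi_strict

# a hypothetical guard on a DPN transition: amount > 10 and amount <= 5 is
# unsatisfiable, so the transition can never fire -- a source of unsoundness
satisfiable([(">", 10), ("<=", 5)])   # False
satisfiable([(">=", 0), ("<", 100)])  # True
```

An unsatisfiable guard means dead behavior in the net, which is precisely the kind of defect soundness analysis is meant to surface.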
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires handling the two possibly contrasting forces of the cost of the alignment on the one hand, and the likelihood of the model trace with respect to which the alignment is computed on the other.
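The two contrasting forces can be sketched by scoring candidate model traces with a weighted mix of alignment cost and likelihood. Everything below (the stochastic language, the log trace, the use of edit distance as the cost, and the mixing weight) is an invented illustration, not the paper's method:

```python
def edit_cost(trace, model_trace):
    """Levenshtein distance as a stand-in for alignment cost."""
    m, n = len(trace), len(model_trace)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (trace[i - 1] != model_trace[j - 1]))
    return d[m][n]

def rank(trace, stochastic_language, weight=0.5):
    """Score model traces by weighted cost minus weighted likelihood;
    lower is better. The weight is a modeling choice, not a fixed value."""
    scored = [(weight * edit_cost(trace, mt) - (1 - weight) * p, mt, p)
              for mt, p in stochastic_language.items()]
    return sorted(scored)

# hypothetical stochastic language of a Workflow net: model traces with
# probabilities summing to 1 (activities abbreviated to single letters)
lang = {("a", "b", "c"): 0.7, ("a", "c"): 0.2, ("a", "b", "b", "c"): 0.1}
best = rank(("a", "c", "c"), lang)[0]  # cheapest-and-likeliest model trace
```

Here the log trace is equally close (cost 1) to two model traces, and the likelihood term breaks the tie in favor of the more probable one, showing how the two forces interact.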
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This case covers the common situation where event data are stored inside the data layer of an ERP or CRM system. This is usually handled using manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graph). The approach is code-less, and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest; (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm; (3) the annotation of the data model with indications of which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data. This makes it extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs semantic web standard languages for the various steps, and the XES standard as the target format for the event logs.
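The pipeline itself is code-less (the real mappings are written in OBDA standards over an ontology), but the three conceptual steps can be conveyed in miniature with a Python sketch. All column names and the dictionary-based mapping are invented for illustration:

```python
from collections import defaultdict

# Steps (2) and (3) in miniature: a declarative mapping from source columns
# to the event-log vocabulary (case id, activity, timestamp). In onprom this
# role is played by OBDA mappings plus annotations on the data model.
MAPPING = {"case": "order_id", "activity": "status", "timestamp": "changed_at"}

def extract_log(rows, mapping=MAPPING):
    """Materialize an XES-like log: traces grouped by case, with events
    ordered by timestamp."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[mapping["case"]]].append(
            (row[mapping["timestamp"]], row[mapping["activity"]]))
    return {case: [activity for _, activity in sorted(events)]
            for case, events in grouped.items()}

# rows as they might sit in the data layer of an ERP system
rows = [
    {"order_id": 1, "status": "created", "changed_at": "2021-01-01"},
    {"order_id": 1, "status": "shipped", "changed_at": "2021-01-03"},
    {"order_id": 2, "status": "created", "changed_at": "2021-01-02"},
]
log = extract_log(rows)  # {1: ['created', 'shipped'], 2: ['created']}
```

Changing only the mapping, not the extraction code, yields a different log over the same database, which is the "multiple perspectives on the same reality" point made above.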
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes (DEC2H 2019), in conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
Presentation at EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
Presentation at BPM 2019, focused on a data-aware extension of BPMN encompassing read-write and read-only data, and on SMT-techniques for effectively tackling parameterized verification of the resulting integrated models.
Presentation at BPM 2019, proposing object-centric behavioral constraints as a modeling approach to capture real-life processes with many-to-many and one-to-many relations, and bringing forward a formal semantics and automated reasoning techniques grounded in temporal description logics.
More from Faculty of Computer Science - Free University of Bozen-Bolzano (20)
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... by Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
BREEDING METHODS FOR DISEASE RESISTANCE.pptx by RASHMI M G
Plant breeding for disease resistance is a strategy to reduce crop losses caused by disease. Plants have an innate immune system that allows them to recognize pathogens and provide resistance. However, breeding for long-lasting resistance often involves combining multiple resistance genes.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
ANAMOLOUS SECONDARY GROWTH IN DICOT ROOTS.pptx by RASHMI M G
This document covers abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
The binding of cosmological structures by massless topological defects by Sérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... by AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
ESR spectroscopy in liquid food and beverages.pptx by PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of food is still required to provide consumers with the necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during its processing. The ESR spin trapping technique is useful for the detection of highly unstable radicals in food. The assessment of the antioxidant capability of liquid food and beverages is mainly performed by the spin trapping technique.
20240520 Planning a Circuit Simulator in JavaScript.pptx
Constraints for Process Framing in Augmented BPM
1. 09/09/2022 - KES 2022, Verona, Italy
AI4BPM 2022, Münster, Germany
Constraints for process framing in Augmented BPM
Marco Montali, Free University of Bozen-Bolzano, Italy
2. What do we do
We develop foundational and applied techniques grounded in artificial intelligence, logics, and formal methods, to create intelligent agents and information systems that combine processes and data.
3. How to attack these challenges?
Artificial Intelligence: knowledge representation, automated reasoning, computational logics
Information Systems: business process management, data management, decision management
Formal Methods: infinite-state systems, verification, Petri nets
Data Science: process mining, data access and integration
6. Augmented BPM
An increased availability of business process execution data, combined with advances in AI, has laid the ground for the emergence of information systems where the execution flows are not pre-determined, adaptations do not require explicit changes to software applications, and improvement opportunities are autonomously discovered, validated, and enabled on-the-fly.
7. Augmented BPM System
AI-empowered, trustworthy, and process-aware
information system that
reasons and acts upon data within a set of
constraints and assumptions
with the aim to
continuously adapt and improve a set of business
processes with respect to one or more performance
indicators
8. Augmented BPM System
Strongly related to
BPM and integrative AI
Thu 09:00-10:00
Keynote by
Chiara Ghidini
Data, Conceptual Knowledge,
and AI: What can they do
together?
14. Features of an ABPMS
Framed autonomy
ABPMS acts autonomously
• Lifecycle steps performed proactively and continuously
ABPMS acts “within its frame”
• Maximally permissive, goal-driven strategy
15. Features of an ABPMS
Framed autonomy
What does this mean?
Hard vs soft constraints,
reframing, meta-framing, …
16. Features of an ABPMS
Conversationally actionable
Autonomy does not mean isolation
• Need of continuous conversation with human principal(s)
Conversational
• Language-based communication with humans (proactive
and reactive)
Actionable
• Interaction leads to actual decision making
17. Features of an ABPMS
Conversationally actionable
Which focus?
Actions, goals, intentions, but also models!
18. Features of an ABPMS
Adaptive
Motto: react to changes and self-improve
• Prediction
• Instance- and model-level adaptations
20. Features of an ABPMS
Adaptive
• Multi-objective optimisation
• Evaluation of trade-offs
21. Features of an ABPMS
Adaptive
What if there are multiple
principals?
23. Features of an ABPMS
Explainable
An ABPMS should be trustworthy
• “trust is the willingness of a party to be vulnerable to the actions of another
party based on the expectation that the other will perform a particular action
important to the trustor, irrespective of the ability to monitor or control that
other party” [MayerEtAl,TAMR95]
How to be trustworthy?
• Fair
• Explainable
• Auditable
• Safe
Need of specific focus on “process-aware” systems
30. Σ*
Is this enough?
“A process is (not) a point mass in a vacuum” (cit.)
time
cases
traditional view
one process instance
one case
32. Σ*
Is this enough?
“A process is (not) a point mass in a vacuum” (cit.)
multi-case/object-centric view
one process instance
many cases
time
cases/objects
33. Σ*
Is this enough?
“A process is (not) a point mass in a vacuum” (cit.)
multi-process view
many process instances
one case
time
cases/objects
34. More in general…
Wed 10:30-12:30
Tutorial by
Dirk Fahland
Multi-dimensional
process analysis
42. Outline
• the declarative approach
• monitoring and operational support
• dealing with uncertainty
• dealing with multiple processes
• dealing with multiple objects
How to deal with multiple processes and cases? [____,CAiSE22]
Interested in uncertainty for procedural processes? See our papers at the main track (presentations Tue and Wed afternoon)
44. Which Italian food do you like best?
(Axes: complexity increasing; predictability and repetitiveness decreasing)
Control: degree to which a central orchestrator decides how to execute the process
Flexibility: degree to which process stakeholders locally decide how to execute the process
Lasagna processes vs spaghetti processes
57. Constraint templates
Constraint types defined on activity placeholders, each with a specific meaning
• … then instantiated on actual activities (by grounding)
Dimensions
• Activities: how many are involved
• Time: temporal orientation (past, future, either) and strength (when)
• Expectation: negative vs positive
Much richer than the precedence flow relation of imperative languages
58. Declare specification
A set of constraints (templates grounded on the activities of interest)
• Constraints have to be all satisfied globally over each complete trace
• Compositional approach by conjunction
59. Flexible shopper in Declare
“Whenever you close an order, you have to pay later at least once”
Activities: Pick item (i), Close order (c), Pay (p)
1. <> (empty trace)
2. <i,i,i>
3. <i,i,i,c,p>
4. <i,i,i,c,p,p>
5. <i,i,i,p,c>
6. <i,c,p,i,i,c,p>
Without constraints: accepts all of {i, c, p}*
60. Flexible shopper in Declare
“Whenever you close an order, you have to pay later at least once”
response(Close order, Pay)
1. <> (empty trace)
2. <i,i,i>
3. <i,i,i,c,p>
4. <i,i,i,c,p,p>
5. <i,i,i,p,c>
6. <i,c,p,i,i,c,p>
The response constraint accepts traces 1-4 and 6, and rejects trace 5 (the close is never followed by a pay)
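The response template on this slide can be checked directly from its definition. A minimal Python sketch (the function name `satisfies_response` and the single-letter activity encoding are illustrative, not from the deck):

```python
def satisfies_response(trace, a, b):
    """response(a, b): every occurrence of a is eventually followed by a later b."""
    for i, event in enumerate(trace):
        if event == a and b not in trace[i + 1:]:
            return False
    return True

# The six traces from the slide: i = pick item, c = close order, p = pay.
traces = [
    [],                                   # 1. empty trace
    ["i", "i", "i"],                      # 2.
    ["i", "i", "i", "c", "p"],            # 3.
    ["i", "i", "i", "c", "p", "p"],       # 4.
    ["i", "i", "i", "p", "c"],            # 5. close never followed by pay
    ["i", "c", "p", "i", "i", "c", "p"],  # 6.
]
verdicts = [satisfies_response(t, "c", "p") for t in traces]
```

Only trace 5 violates the constraint; the empty trace satisfies it vacuously.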
61. Interaction among constraints
Aka hidden dependencies [____,TWEB2010]
Activities: Pick item, Close order, Pay, Cancel order
If you cancel the order, you cannot pay for it
If you close the order, you must pay for it
To close an order, you must first pick an item
62. Interaction among constraints
Aka hidden dependencies [____,TWEB2010]
Implied: cannot cancel and confirm!
63. Interaction among constraints
Key questions
1. How to characterise the language of a Declare specification?
2. How to understand whether a specification is correct?
65. Back to the roots
Patterns in Linear
Temporal Logic
(LTL)
66. LTL: a logic for infinite traces
Logic interpreted over infinite traces; each state indicates which propositions hold
φ ::= A | ¬φ | φ1 ∧ φ2 | ○φ | φ1 U φ2
• A: atomic propositions; ¬, ∧: Boolean connectives
• ○φ: at the next step φ holds
• φ1 U φ2: at some point φ2 holds, and φ1 holds until φ2 does
Derived operators:
• ◊φ ≡ true U φ (φ eventually holds)
• □φ ≡ ¬◊¬φ (φ always holds)
• φ1 W φ2 ≡ (φ1 U φ2) ∨ □φ1 (φ1 holds forever or until φ2 does)
67. LTL: a logic for infinite traces
Can be seamlessly extended with past-tense operators
68. LTL: a logic for infinite traces
When a trace t satisfies a formula φ is now formally defined: t ⊧ φ
70. Semantics of Declare via LTL
Atomic propositions are activities; each state contains a single activity
Each constraint is an LTL formula (built from template formulae)
Activities: delete order, close order, pay, pick item
71. Semantics of Declare via LTL
Example: response(close order, pay) becomes □(close → ◊pay)
72. Semantics of Declare via LTL
Atomic propositions are activities; each state contains a single activity
A Declare specification is the conjunction of its constraint formulae:
□(close → ◊pay) ∧ □(close → ◊item) ∧ □(cancel → ¬◊pay)
73. An unconventional use of logics!
From …
Temporal logics for specifying (un)desired properties of
a dynamic system
… to …
Temporal logics for specifying the dynamic system itself
75. Wait a moment…
response(close order, pay): □(close → ◊pay)
Just what we needed!
But recall: each process instance should complete in a finite number of steps!
76. LTLf: LTL over finite traces
[DeGiacomoVardi,IJCAI2013]
LTL interpreted over finite traces
Same syntax of LTL: φ ::= A | ¬φ | φ1 ∧ φ2 | ○φ | φ1 U φ2
In LTL, there is always a next moment… in LTLf, the contrary: the last instant has no successor!
• ○φ (strong next): the next step exists, and at the next step φ holds
• ●φ ≡ ¬○¬φ (weak next): if the next step exists, then at the next step φ holds
• Last ≡ ¬○true: the last instant in the trace
• □φ: φ always holds from the current to the last instant
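The finite-trace semantics sketched here can be prototyped with a small recursive evaluator. A minimal Python sketch (the tuple encoding of formulas and all function names are illustrative, not from the deck):

```python
def holds(phi, trace, i=0):
    """Evaluate an LTLf formula at position i of a finite trace.
    Formulas are nested tuples: ('true',), ('atom', a), ('not', f),
    ('and', f, g), ('next', f), ('wnext', f), ('until', f, g)."""
    op = phi[0]
    if op == 'true':
        return True
    if op == 'atom':
        return i < len(trace) and trace[i] == phi[1]
    if op == 'not':
        return not holds(phi[1], trace, i)
    if op == 'and':
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == 'next':    # strong next: the next step must exist
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == 'wnext':   # weak next: vacuously true at the last instant
        return i + 1 >= len(trace) or holds(phi[1], trace, i + 1)
    if op == 'until':
        return any(holds(phi[2], trace, k)
                   and all(holds(phi[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    raise ValueError(op)

# Derived operators, as on the previous slides.
def eventually(f): return ('until', ('true',), f)
def always(f): return ('not', eventually(('not', f)))
def implies(f, g): return ('not', ('and', f, ('not', g)))

# response(c, p) = always(c -> eventually p)
resp = always(implies(('atom', 'c'), eventually(('atom', 'p'))))
```

On the shopper traces this gives the same verdicts as before: <i,i,i,c,p> satisfies the formula, <i,i,i,p,c> does not, and the empty trace satisfies it vacuously.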
77. Look the same, but they are not the same
Many researchers: misled by moving from infinite to finite traces
In [____,AAAI14], we studied why!
• People typically focus on “patterns”, not on the entire logic
• Many of such patterns in BPM, reasoning about actions, planning, etc. are “insensitive to infinity”
78. Quiz: does this specification accept traces?
(Model: activities a, b, c, d, with multiplicity annotations 0..1 and 1..*)
85. Quiz: does this specification accept traces?
(Model: activities a, b)
Only the empty trace <>, due to finite-trace semantics
86. How to do this, systematically?
Declare specification: encoded in LTLf
LTLf: the star-free fragment of regular expressions
Regular expressions: intimately connected to finite-state automata
• No exotic automata models over infinite structures!
• Just the good-old deterministic finite-state automata (DFAs) you know from a typical course in programming languages and compilers
87. From Declare to automata
Pipeline: LTLf formula φ → NFA (nondeterministic, via LTLf2aut) → DFA (deterministic, via determinization)
• LTLf/LDLf formulas can be translated into equivalent NFAs: t ⊧ φ iff t ∈ L(Aφ)
• LTLf/LDLf to NFA: exponential; NFA to DFA: exponential
• Often the NFA/DFA corresponding to LTLf/LDLf formulas are in fact small!
• Compile reasoning into automata-based procedures: a process engine!
[DeGiacomoVardi,IJCAI2013] [____,TOSEM2022]
88. Vision realised!
The same LTLf → NFA → DFA pipeline, used as a process engine
[DeGiacomoVardi,IJCAI2013] [____,TOSEM2022]
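The NFA-to-DFA step of this pipeline is the classical subset construction. A minimal Python sketch (the toy NFA for "eventually close" and all names are illustrative, not from the deck):

```python
from itertools import chain

def determinize(alphabet, delta, start, finals):
    """Subset construction: turn an NFA into an equivalent DFA.
    delta maps (state, symbol) to a set of successor states."""
    start_set = frozenset([start])
    dfa_delta, seen, todo = {}, {start_set}, [start_set]
    while todo:
        S = todo.pop()
        for a in alphabet:
            # The DFA successor is the union of all NFA successors.
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return start_set, dfa_delta, dfa_finals

def accepts(start, dfa_delta, dfa_finals, trace):
    """Run the DFA on a finite trace and report acceptance."""
    state = start
    for a in trace:
        state = dfa_delta[(state, a)]
    return state in dfa_finals

# Hypothetical NFA for the LTLf formula <>c ("eventually close order")
# over the alphabet {i, c, p}: state 1 records that c has occurred.
nfa_delta = {(0, 'i'): {0}, (0, 'p'): {0}, (0, 'c'): {0, 1},
             (1, 'i'): {1}, (1, 'p'): {1}, (1, 'c'): {1}}
start, d_delta, d_finals = determinize({'i', 'c', 'p'}, nfa_delta, 0, {1})
```

The resulting DFA accepts exactly the finite traces containing at least one c.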
91. Few lines of code
[____,TOSEM2022] (G. De Giacomo et al.)
Fig. 1: definition of the transition function δ over LDLf formulas (cases for tt, ff, propositions, ∧, ∨, ⟨ρ⟩φ and [ρ]φ), where E(φ) recursively replaces certain marked atoms by their expansions
Fig. 2: NFA construction:
algorithm LDLf2NFA
  input: LDLf formula φ
  output: NFA A(φ) = (2^P, S, s0, ϱ, Sf)
  s0 ← {φ}                                  ▷ set the initial state
  Sf ← {∅}                                  ▷ set final states
  if δ(φ, ε) = true then Sf ← Sf ∪ {s0}     ▷ check if initial state is also final
  S ← {s0, ∅}; ϱ ← ∅
  while S or ϱ change do
    for s ∈ S do
      if s′ ⊨ ⋀_{ψ∈s} δ(ψ, Π) then          ▷ add new state and transition
        S ← S ∪ {s′}
        ϱ ← ϱ ∪ {(s, Π, s′)}
        if ⋀_{ψ∈s′} δ(ψ, ε) = true then     ▷ check if new state is also final
          Sf ← Sf ∪ {s′}
92. Few lines of code
[____,TOSEM2022]
Automata manipulations are much easier to handle than in the infinite case, with huge performance improvements [ZhuEtAl,IJCAI17] [Westergaard,BPM11]
93. Constraint automata
Template: pre-compiled into a DFA
Constraint: grounds the template DFA on specific activities
(Figure: two ground constraint DFAs over Σ = {i, c, d, p}, for the activities pick item, close order, delete order, pay.)
99. From local automata to global automaton
Entire specification: product automaton of all local automata
• Corresponds to the automaton of the conjunction of all formulae
• Many optimisations available
A Declare specification is consistent if and only if its global automaton is non-empty
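The product-and-emptiness check described here can be sketched in a few lines. A minimal Python sketch (the DFA encoding and the example constraints `exist_a`/`absent_a` are illustrative, not from the deck):

```python
from collections import deque

def product(dfas):
    """Synchronous product of complete DFAs sharing one alphabet.
    Each DFA is a triple (initial, delta, finals), with delta[(state, symbol)] total."""
    init = tuple(d[0] for d in dfas)
    def step(joint, a):
        return tuple(d[1][(q, a)] for d, q in zip(dfas, joint))
    def accepting(joint):
        return all(q in d[2] for d, q in zip(dfas, joint))
    return init, step, accepting

def consistent(dfas, alphabet):
    """A specification is consistent iff its global (product) automaton is
    non-empty, i.e. some accepting joint state is reachable (BFS)."""
    init, step, accepting = product(dfas)
    seen, todo = {init}, deque([init])
    while todo:
        joint = todo.popleft()
        if accepting(joint):
            return True
        for a in alphabet:
            nxt = step(joint, a)
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return False

SIGMA = {'a', 'b'}
# existence(a): at least one a must occur.
exist_a = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}, {1})
# absence(a): a must never occur (state 2 is a violation sink).
absent_a = (0, {(0, 'a'): 2, (0, 'b'): 0, (2, 'a'): 2, (2, 'b'): 2}, {0})
```

Combining existence(a) with absence(a) yields an empty product automaton, i.e. an inconsistent specification.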
100. Constraints: hard or soft?
Logically: hard
Conceptually: not so clear
• Model level: mix of constraints of different nature
• Physical, best practices, policies, legal, … [AdamoEtAl,InfSys2021]
• Hard at the IS level <-> hard or soft in reality
• Manual task vs user-interaction task
• Ontological reversal [BaskervilleEtAl,MISQ2020]
101. Constraints: hard or soft?
An ABPMS needs to account for deviations, at runtime
103. (Anticipatory) monitoring
Track a running process execution to check conformance to a reference process model
• Goal: detect and report fine-grained feedback and deviations as early as possible
One of the main operational support tasks
• Complementary to predictive monitoring!
(Setup: an evolving trace t is fed to a monitor equipped with the Declare specification; the monitor returns continuous feedback.)
105. Fine-grained feedback
Refined analysis of the “truth value” of a constraint, looking into (all) possible futures
Consider a partial trace t and a constraint C: is C satisfied by t, and does it remain satisfied in every possible continuation?
106. RV-LTL truth values
[BauerEtAl,InfCom2010]
C is permanently satisfied if t satisfies C and, no matter how t is extended, C will stay satisfied
C is currently satisfied if t satisfies C but there is a continuation of t that violates C
107. RV-LTL truth values
[BauerEtAl,InfCom2010]
C is currently violated if t violates C but there is a continuation that leads to satisfying C
C is permanently violated if t violates C and, no matter how t is extended, C will stay violated
108. RV-LTL on finite traces
[____,BPM2011] [____,BPM2014] [____,TOSEM2022]
Suffixes of the current trace: each with unbounded, finite length
• Again standard DFAs -> all formulae of LTLf are monitorable
Each state of the DFA: colored with an RV-LTL value via simple reachability checks
(Figure: the ground constraint DFAs over Σ = {i, c, d, p} for the activities pick item, close order, delete order, pay.)
109. RV-LTL on finite traces
[____,BPM2011] [____,BPM2014] [____,TOSEM2022]
(Figure: the same DFAs with each state colored by its RV-LTL value: cs = currently satisfied, ps = permanently satisfied, cv = currently violated, pv = permanently violated.)
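The state-coloring step can be sketched directly as the reachability checks the slide mentions. A minimal Python sketch (the two example DFAs, for response(c, p) and absence(c), are illustrative, not from the deck):

```python
def reachable(state, delta, alphabet):
    """All DFA states reachable from `state` (including itself)."""
    seen, todo = {state}, [state]
    while todo:
        q = todo.pop()
        for a in alphabet:
            n = delta[(q, a)]
            if n not in seen:
                seen.add(n)
                todo.append(n)
    return seen

def rv_color(state, delta, finals, alphabet):
    """RV-LTL value of a DFA state, via reachability:
    ps = accepting and every reachable state is accepting
    cs = accepting but a non-accepting state is reachable
    cv = non-accepting but an accepting state is reachable
    pv = non-accepting and no accepting state is reachable"""
    reach = reachable(state, delta, alphabet)
    if state in finals:
        return 'ps' if reach <= finals else 'cs'
    return 'cv' if reach & finals else 'pv'

SIGMA = {'c', 'p', 'o'}
# response(c, p): state 0 = no pending obligation, state 1 = pending pay.
resp_delta = {(0, 'c'): 1, (0, 'p'): 0, (0, 'o'): 0,
              (1, 'c'): 1, (1, 'p'): 0, (1, 'o'): 1}
# absence(c): state 2 is the permanent-violation sink.
abs_delta = {(0, 'c'): 2, (0, 'p'): 0, (0, 'o'): 0,
             (2, 'c'): 2, (2, 'p'): 2, (2, 'o'): 2}
```

A monitor then simply runs the colored DFA on the evolving trace and reports the color of the current state.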
123. Can we do more?
LTLf (linear temporal logic over finite traces) = FOL over finite traces = star-free regular expressions
LDLf (linear dynamic logic over finite traces) [DeGiacomoVardi,IJCAI2013] = MSOL over finite traces = regular expressions = finite-state automata
124. Can we do more? (same hierarchy)
[____,BPM2014] [____,TOSEM2022]
127. From constraints to metaconstraints
[____,BPM2014] [____,TOSEM2022]
LDLf expresses RV-LTL monitoring states of LDLf constraints
• Support for metaconstraints predicating over the monitoring status of
other constraints
Example: a form of “contrary-to-duty” process constraint
• If constraint C1 gets permanently violated, eventually satisfy a
compensation constraint C2
Interesting open problem: relationship with normative frameworks and
defeasible reasoning
• [Governatori,EDOC2015] -> LTL cannot express normative notions
• [AlechinaEtAl,FLAP2018] -> not true!
128. Tooling
(Fig. 10 of [____,TOSEM2022]: screenshot of one of the LDL Monitor clients; the monitor provider returns a two-part response, starting with temporal information.)
Fully implemented as part of the RuM toolkit (rulemining.org)
129. Adding event attributes and arithmetics
[____,AAAI2022]
Study of LTLf over numerical variables with arithmetic conditions
• Undecidability around the corner
Identification of decidable fragments, tuning condition language and variable interaction
• Lifting of automata-based techniques
• SMT reasoners to deal with conditions
130. Challenging Declare
Frequencies and uncertainty
• Best practices: constraints that must hold in the majority, but not necessarily all, cases. Example: 90% of the orders are shipped via truck.
• Outlier behaviors: constraints that only apply to very few, but still conforming, cases. Example: only 1% of the orders are canceled after being paid.
• Constraints involving external parties: contain uncontrollable activities for which only partial guarantees can be given. Example: in 8 cases out of 10, the customer accepts the order and also pays for it.
132. Declare is crisp
Crisp semantics: an execution trace conforms to the model if it satisfies every constraint in the model
close
order
1..1
accept
refuse
134. ProbDeclare
Crisp and uncertain constraints [____,BPM2020] [____,InfSys2022]
ProbDeclare constraint over Σ: a triple ⟨φ, ⋈, p⟩
• φ: process condition, an LTLf formula over Σ
• ⋈: probability operator, one of { =, ≠, ≤, ≥, <, > }
• p: probability reference value, a number in [0,1]
(Example diagram: close order with 1..1 existence, accept, refuse; probability annotations {0.8}, {0.3}, {0.9})
Well-behaved fragment of full probabilistic LTLf [____,AAAI2020]
139. From traces to stochastic languages and logs
A stochastic language over Σ is a function ρ : Σ* → [0,1] such that ∑_{τ∈Σ*} ρ(τ) = 1
• finite if finitely many traces get a non-zero probability
A log can be seen as a finite stochastic language (probabilities from frequencies)
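Turning a log into a finite stochastic language is a one-liner over trace frequencies. A minimal Python sketch (the function name and the toy log are illustrative, not from the deck):

```python
from collections import Counter

def stochastic_language(log):
    """View an event log as a finite stochastic language: each distinct trace
    is mapped to its relative frequency, so the values sum up to 1."""
    counts = Counter(tuple(trace) for trace in log)
    total = len(log)
    return {trace: n / total for trace, n in counts.items()}

# Toy log: three <c,p> traces and one <c> trace.
log = [['c', 'p'], ['c', 'p'], ['c', 'p'], ['c']]
rho = stochastic_language(log)
```

Here rho assigns 0.75 to <c,p> and 0.25 to <c>.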
140. Semantics of ProbDeclare
A stochastic language ρ satisfies a ProbDeclare model if:
• for every crisp constraint φ and every trace τ ∈ Σ* with non-zero probability, we have that τ ⊧ φ
• for every probabilistic constraint ⟨φ, ⋈, p⟩, we have ∑_{τ∈Σ*, τ⊧φ} ρ(τ) ⋈ p
141. Key challenge: again, interplay of constraints
142. Dealing with “n” probabilistic constraints
Constraint scenario
Declares which probabilistic constraints must hold, and which are violated
• Constraint violated <-> its negated version holds
Denotes a “process variant”
• All in all: up to 2^n scenarios, denoting different variants
Example (constraints 1 {0.8}, 2 {0.3}, 3 {0.9} over close order with 1..1 existence, accept, refuse): 8 scenarios, (0,0,0) through (1,1,1)
143. Reasoning over scenarios is tricky
Interplay between logic and probabilities
Scenario (1,1,1): there cannot be traces that satisfy all constraints at once
• inconsistent scenario -> no satisfying trace -> 0 probability!
145. Logical reasoning within scenarios
LTLf and automata to the rescue
A scenario maps to an LTLf characteristic formula: a trace belongs to a scenario if and only if it satisfies the characteristic formula of the scenario
Φ(S^M_{b1…bn}) = ⋀_{ψ∈C} ψ ∧ ⋀_{i∈{1,…,n}} { φi if bi = 1; ¬φi if bi = 0 }
• Conjunction of formulae, one per constraint…
• Does the constraint hold in the scenario? Yes -> take its LTLf process condition; No -> take its negation
Reasoning via automata, as for standard LTLf
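Scenario membership can be sketched by evaluating each constraint on a trace and collecting the bits. A minimal Python sketch of the running example; the reading of constraint 3 as a not-coexistence between accept and refuse is an assumption inferred from the scenario labels on the later slides, and all names are illustrative:

```python
def response(trace, a, b):
    """response(a, b): every a is eventually followed by a later b."""
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

# Constraints of the running example, in the slides' order:
# (1) response(close, accept), (2) response(close, refuse),
# (3) not-coexistence(accept, refuse)  <- assumption, see lead-in.
constraints = [
    lambda t: response(t, 'close', 'accept'),
    lambda t: response(t, 'close', 'refuse'),
    lambda t: not ('accept' in t and 'refuse' in t),
]

def scenario_of(trace, constraints):
    """Each trace belongs to exactly one scenario: bit b_i is 1 iff the trace
    satisfies constraint i (otherwise it satisfies the negated version)."""
    return tuple(int(c(trace)) for c in constraints)
```

Under this reading, <close order> falls into scenario 001 and <close order, accept, refuse> into scenario 110, matching the conformance-checking slides later on.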
147. Reasoning over scenarios is tricky
Interplay between logic and probabilities
0.8 + 0.3 > 1 -> there must be traces where a closed order is accepted and refused
• that is, there must be traces where accept and refuse coexist
• the corresponding scenario should have a non-zero probability (if constraint values agree)
150. The true meaning of a ProbDeclare model
From probabilistic constraints to scenario probability distributions
With n constraints: variable xi, for i ∈ {0,…,2^n − 1}, denotes the probability that a trace belongs to scenario i
A ProbDeclare model constrains the legal probability distributions over scenarios: the solutions of the system below are exactly the distributions compatible with the logical and probabilistic characterization of the constraints. For M = ⟨Σ, C, ⟨⟨φ1, ⋈1, p1⟩, …, ⟨φn, ⋈n, pn⟩⟩⟩, the system L_M of inequalities over variables xi (i in binary format) is:
• xi ≥ 0, for 0 ≤ i < 2^n (9)
• ∑_{i=0}^{2^n−1} xi = 1 (10)
• (∑_{i : jth position of i is 1} xi) ⋈j pj, for 0 ≤ j < n (11)
• xi = 0, for every inconsistent scenario Si (12)
151. Solutions of the system:
• One solution -> a fixed probability distribution
• (Possibly infinitely) many solutions -> a family of probability distributions
• No solution -> inconsistent specification
152. Computing probability distributions
1. check for consistency (running example: constraints 1 {0.8}, 2 {0.3}, 3 {0.9})
scenario (1)(2)(3) | consistent? | probability
000 | N | 0
001 | Y |
010 | N | 0
011 | Y |
100 | N | 0
101 | Y |
110 | Y |
111 | N | 0
158. Computing probability distributions
3. solve
• Scenario 011: close and refuse
• Scenario 101: close and accept
• Scenario 110: close and get a decision change
159. Scenarios in action
Conformance checking
Consistent scenarios and their probabilities: 011 (close and refuse) 0.2; 101 (close and accept) 0.7; 110 (close and get a decision change) 0.1; all other scenarios 0
160. Trace <close order>: belongs to scenario 001, which has probability 0 -> NO (non-conforming)
161. Scenarios in action
Conformance checking (same scenario probabilities as before)
162. Trace <close order, accept, refuse>: belongs to scenario 110, which has probability 0.1 -> YES (conforming, outlier)
163. Scenarios in action
Probabilistic monitoring
• One global monitor per scenario
• Monitors used in parallel: if multiple return the same verdict, aggregate their probability
• Interesting a-priori vs posterior reading of probabilities
164. Human interpretability is an interesting open challenge
165. From traces to logs
Stochastic conformance (granularity: scenario)
1. From the ProbDeclare specification: compute the consistent scenarios and the specification distribution
2. From the log: compute the log distribution
3. Compare the two distributions via the (earth mover's) distance
• Can be refined through trace alignments
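At scenario granularity, both distributions live over the same finite set, so with unit ground distance the earth mover's distance reduces to the total variation distance. A minimal Python sketch (this simplification and all names are assumptions for illustration, not the paper's exact construction):

```python
def scenario_distance(spec_dist, log_dist):
    """Earth mover's distance between two distributions over the same finite
    set of scenarios, with unit ground distance between distinct scenarios
    (this coincides with the total variation distance)."""
    keys = set(spec_dist) | set(log_dist)
    return 0.5 * sum(abs(spec_dist.get(k, 0.0) - log_dist.get(k, 0.0))
                     for k in keys)

# Specification distribution from the running example vs a hypothetical
# log distribution estimated from trace frequencies.
spec = {(0, 1, 1): 0.2, (1, 0, 1): 0.7, (1, 1, 0): 0.1}
log = {(0, 1, 1): 0.3, (1, 0, 1): 0.7}
```

A distance of 0 means the log reproduces the specification's scenario mix exactly; refining via trace alignments would replace the unit ground distance with a trace-level cost.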
171. Declare discovery
Given the log, and all possible constraints grounded on the activities in the log: which to keep?
172. Template algorithm for ProbDeclare discovery
[____,PMHandbook2022]
1. Select templates of interest
2. Compute metrics for corresponding constraints (grounded on log activities)
3. Filter based on minimum thresholds
4. Redundant constraints?
• Keep the most liberal if metrics are better for it
• Keep the most restrictive in case of equal metrics
5. Incompatible constraints?
• Keep only the one with better metrics
6. Further processing to ensure consistency, minimality, …
trace support = (# traces that "interestingly" satisfy the constraint) / (total # traces in the log)
Consistency guaranteed only for 100% trace-based support
173. Template algorithm for ProbDeclare discovery
[____,InfSys2022]
1. Select templates of interest
2. Compute metrics for the corresponding constraints (grounded on log activities)
3. Filter based on minimum/maximum thresholds
4. Redundant constraints?
• Keep the most liberal if metrics are better for it
• Keep the most restrictive in case of equal metrics
5. Use support as a basis for constraint probability
• Consistency guaranteed by construction
trace support = (# traces that "interestingly" satisfy the constraint) / (total # traces in the log)
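Step 2 can be sketched for one template. Here "interestingly satisfies" is approximated as non-vacuous satisfaction (the activating task actually occurs in the trace), which is an assumption of this sketch, not the exact metric of the cited works:

```python
def satisfies_response(trace, a, b):
    """response(a, b): every occurrence of a is eventually followed by a b."""
    return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

def trace_support(log, a, b):
    """Share of traces that non-vacuously satisfy response(a, b):
    a occurs (the constraint is activated) and the constraint holds."""
    interesting = [t for t in log if a in t and satisfies_response(t, a, b)]
    return len(interesting) / len(log)

log = [["close order", "accept"],
       ["close order", "refuse"],
       ["close order", "accept", "refuse"]]
# In the ProbDeclare variant, this support directly becomes the probability
# attached to response(close order, accept):
print(trace_support(log, "close order", "accept"))  # 0.666... (2 of 3 traces)
```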
174. Idea: conversing on log skeletons
Log
Discovery
[Verbeek, STTT2021]
Log skeleton [Verbeek, STTT2021]
Declare-like specification with frequencies
175. Idea: conversing on log skeletons
Reasoning on constraints and their frequencies
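One of the log-skeleton relations can be sketched directly from a log. The "always before" relation below follows the intuition of Verbeek's relations; the exact definitions in the STTT2021 paper differ in details, so treat this as an approximation:

```python
def always_before(log, a, b):
    """b always-before a: in every trace, each occurrence of a is preceded
    by at least one occurrence of b (vacuously true if a never occurs)."""
    return all(b in trace[:i]
               for trace in log
               for i, x in enumerate(trace) if x == a)

log = [["register", "submit", "mark as eligible"],
       ["register", "submit"],
       ["register"]]
print(always_before(log, "submit", "register"))          # True
print(always_before(log, "mark as eligible", "submit"))  # True
print(always_before(log, "register", "submit"))          # False
```

Attaching occurrence frequencies to such relations is what turns the result into a Declare-like specification with frequencies.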
176. Processes are not flat
Figure 1: Structure of order, item, and package data objects in an order-to-delivery scenario where items from different orders are carried in several packages
event log for orders
timestamp overall log order o1 order o2 order o3
2019-09-22 10:00:00 create order o1 create order
2019-09-22 10:01:00 add item i1,1 to order o1 add item
2019-09-23 09:20:00 create order o2 create order
2019-09-23 09:34:00 add item i2,1 to order o2 add item
2019-09-23 11:33:00 create order o3 create order
2019-09-23 11:40:00 add item i3,1 to order o3 add item
2019-09-23 12:27:00 pay order o3 pay order
2019-09-23 12:32:00 add item i1,2 to order o1 add item
2019-09-23 13:03:00 pay order o1 pay order
2019-09-23 14:34:00 load item i1,1 into package p1 load item
2019-09-23 14:45:00 add item i2,2 to order o2 add item
2019-09-23 14:51:00 load item i3,1 into package p1 load item
2019-09-23 15:12:00 add item i2,3 to order o2 add item
2019-09-23 15:41:00 pay order o2 pay order
2019-09-23 16:23:00 load item i2,1 into package p2 load item
2019-09-23 16:29:00 load item i1,2 into package p2 load item
2019-09-23 16:33:00 load item i2,2 into package p2 load item
2019-09-23 17:01:00 send package p1 send package send package
2019-09-24 06:38:00 send package p2 send package send package
2019-09-24 07:33:00 load item i2,3 into package p3 load item
2019-09-24 08:46:00 send package p3 send package
2019-09-24 16:21:00 deliver package p1 deliver package deliver package
2019-09-24 17:32:00 deliver package p2 deliver package deliver package
2019-09-24 18:52:00 deliver package p3 deliver package
2019-09-24 18:57:00 accept delivery p3 accept delivery
2019-09-25 08:30:00 deliver package p1 deliver package deliver package
2019-09-25 08:32:00 accept delivery p1 accept delivery accept delivery
2019-09-25 09:55:00 deliver package p2 deliver package deliver package
2019-09-25 17:11:00 deliver package p2 deliver package deliver package
2019-09-25 17:12:00 accept delivery p2 accept delivery accept delivery
Table 1: Overall log of an order-to-delivery scenario whose structure is illustrated in Figure 1, and its flattening using the viewpoint of orders.
177. Processes are not flat
(data model, as in Figure 1: Order contains Item (1-to-many); Item carried in Package (many-to-1); instances: orders o1, o2, o3; items i1,1, i1,2, i2,1, i2,2, i2,3, i3,1; packages p1, p2, p3)
178. Processes are not flat
179. Processes are not flat
(directly-follows graph discovered from the flattened order log, with frequencies:
create order (3), add item (6), pay order (3), load item (6),
send package (5), deliver package (11), accept delivery (5);
frequencies are inflated by flattening, e.g. "send package" occurs 5 times
in the flattened log although only 3 packages were sent)
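The duplication effect of flattening can be sketched on an excerpt of Table 1. The event-to-order mapping below is derived by hand from the package contents (p1 carries items of both o1 and o3) and is an assumption of the sketch:

```python
# Object-centric events as (activity, related orders).
events = [
    ("create order", ["o1"]), ("add item", ["o1"]), ("pay order", ["o1"]),
    ("create order", ["o3"]), ("add item", ["o3"]), ("pay order", ["o3"]),
    ("load item", ["o1"]), ("load item", ["o3"]),
    ("send package", ["o1", "o3"]),      # p1 carries i1,1 (o1) and i3,1 (o3)
    ("deliver package", ["o1", "o3"]),
    ("accept delivery", ["o1", "o3"]),
]

def flatten(events):
    """Flatten onto the order viewpoint: one trace per order; an event
    relating to several orders is duplicated in each of their traces."""
    traces = {}
    for activity, orders in events:
        for o in orders:
            traces.setdefault(o, []).append(activity)
    return traces

traces = flatten(events)
# "send package" happened once here, yet appears twice after flattening
# (convergence), which is exactly what inflates the discovered frequencies.
print(sum(t.count("send package") for t in traces.values()))  # 2
```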
181. Need for a 3D model
time
objects
activities
Object-centric behavioral
constraints
[____,DL2017] [____,BPM2019]
182. Object-centric behavioral constraints
Dimension 1: data model to classify and relate objects
• classes
• relationship types
• multiplicities (one-to-one, one-to-many, many-to-many)
(figure, from Artale, Calvanese, Montali, van der Aalst: job hiring OCBC model with classes Person, Candidate, Application, Job Offer, Job Profile; relationships made by, responds to, refers to; activities register data, submit, mark as eligible, post offer, cancel hiring, determine winner; activity-class relationships is about, creates, promotes, stops, closes)
183. Object-centric behavioral constraints
Dimension 2: activities
• activities
• activity-class relationship types
• multiplicities
Notation: green, italic sans-serif points out relationships between tasks and classes; blue, underlined italics indicates co-reference relationships through which instances of tasks are related by the constraint, depending on the objects they manipulate.
In particular, the relevant constraints for our job hiring example are:
C.1 The register data task is about a Person.
C.2 A Job Offer is created by executing the post offer task.
C.3 A Job Offer is closed by determining the winner.
C.4 A Job Offer is stopped by canceling the hiring.
C.5 An Application is created by executing the submit task.
C.6 An Application is promoted by marking it as eligible.
C.7 An Application can be submitted only if, beforehand, the data about the Candidate who made that Application have been registered.
C.8 A winner can be determined for a Job Offer only if at least one Application responding to that Job Offer has been previously marked as eligible.
C.9 For each Application responding to a Job Offer, if the Application is marked as eligible then a winner must be finally determined for that Job Offer, and this is done only once for that Job Offer.
C.10 When a winner is determined for a Job Offer, Applications responding to that Job Offer cannot be marked as eligible anymore.
C.11 A Job Offer closed by a determine winner task cannot be stopped by executing the cancel hiring task (and vice-versa).
184. Object-centric behavioral constraints
Dimension 2: activities
185. Object-centric behavioral constraints
Emergent object lifecycles
Application (many) vs. Job Offer (one)
186. Object-centric behavioral constraints
Dimension 3: the process
• constraints…
C.5 An Application is created by executing the submit task.
C.6 An Application is promoted by marking it as eligible.
C.7 An Application can be submitted only if, beforehand, the data about the Candidate who made that Application have been registered.
C.8 A winner can be determined for a Job Offer only if at least one Application responding to that Job Offer has been previously marked as eligible.
C.9 For each Application responding to a Job Offer, if the Application is marked as eligible then a winner must be finally determined for that Job Offer, and this is done only once for that Job Offer.
C.10 When a winner is determined for a Job Offer, Applications responding to that Job Offer cannot be marked as eligible anymore.
C.11 A Job Offer closed by a determine winner task cannot be stopped by executing the cancel hiring task (and vice-versa).
2.2 Capturing the Job Hiring Example with Case-Centric Notations
The most fundamental issue when trying to capture the job hiring example of Section 2.1 using a case-centric notation is to identify what the case is. This, in turn, determines the orchestration point for the process, that is, which participant coordinates process instances corresponding to different case objects. This problem is apparent when looking at BPMN, which specifies that each process should correspond to a single locus of control, i.e., confined within a single pool.
In our example, we have two participants: candidates (in turn responsible for managing Applications), and the job hiring organisation (in turn responsible for the management of JobOffers). However, neither of the two can act as the unique locus of control for the process: on the one hand, candidates may simultaneously create and manage different applications for different job offers; on the other hand, the organisation…
187. Object-centric behavioral constraints
Dimension 3: the process
• constraints…
• …with data co-referencing
object co-referencing
relation co-referencing
188. Object co-referencing on response
(a) Co-reference of response over an object class:
Every time an instance a1 of A1 is executed on some object o of type O (i.e., with R1(a1, o)), then an instance a2 of A2 must be executed afterwards on the same object o (i.e., with R2(a2, o)).
(figure: the basic types of temporal constraints between activities: response, unary-response, non-response, precedence, unary-precedence, non-precedence, responded-existence, non-coexistence; e.g., response(A, B): if A is executed, then B must be executed afterwards)
189. Relation co-referencing on response
(b) Co-reference of response over a relationship:
Every time an instance a1 of A1 is executed on some object o1 of type O1 (i.e., with R1(a1, o1)), then an instance a2 of A2 must be executed afterwards on some object o2 of type O2 (i.e., with R2(a2, o2)) that relates to o1 via R (i.e., having R(o1, o2) at the moment of execution of a2).
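Checking variant (a) over a flat event list can be sketched as follows. The event encoding (timestamp, activity, object) is hypothetical and collapses the R1/R2 relationships into a single object reference:

```python
def object_coref_response(events, a1, a2):
    """Object co-reference on response: every a1 executed on object o must be
    followed, later in time, by an a2 executed on the same object o."""
    events = sorted(events)  # order events by timestamp
    for i, (_, act, obj) in enumerate(events):
        if act == a1 and not any(act2 == a2 and obj2 == obj
                                 for _, act2, obj2 in events[i + 1:]):
            return False
    return True

events = [(0, "submit", "app1"),
          (1, "submit", "app2"),
          (2, "mark as eligible", "app1")]
print(object_coref_response(events, "submit", "mark as eligible"))
# False: app2 was submitted but never marked as eligible
```

Variant (b) would additionally need the relationship R between objects, evaluated at the moment a2 is executed.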
190. Object-centric behavioral constraints
Dimension 3: the process
• constraints…
• …with data co-referencing
191. Object-centric behavioral constraints
Dimension 3: the process
• constraints…
• …with data co-referencing
192. Semantics and formalization
Process execution: temporal knowledge graph
Data model: description logics
Object-centric constraints: temporal description logics
(figure: temporal knowledge graph over time points t0…t9, with objects o1:Order, ol1–ol3:Order Line, d1–d2:Delivery; events co1:Create Order, pi1–pi3:Pick Item, wi1–wi3:Wrap Item, po1:Pay Order, di1–di2:Deliver Items; and relations creates, fills, contains, prepares, closes, refers to, results in)
193. Achieved and ongoing results
Reasoning [____,BPM2019]
• Direct approach -> undecidable
• Careful “object-centric” reformulation
-> decidable in EXPTIME (same as reasoning on UML diagrams)
Monitoring (ongoing)
• Hybrid reasoning (closed on the past, open on the future)
Discovery (ongoing)
• Construction of trace views
• Standard discovery on views
• Object-centric reconciliation
194. Conclusions
Augmented BPM: a framework for the intelligent management
of processes at the intersection of AI and BPM
Central task: framing
Declarative approach: a solid basis for framing with uncertainty, data, objects, and their interactions
•Reasoning via well-established formalisms and techniques
Foundations well understood, effort needed towards engineering
195. Thanks to Wil van der Aalst, Anti Alman, Alessandro Artale, Federico Chesani, Giuseppe De
Giacomo, Riccardo De Masellis, Claudio Di Ciccio, Marlon Dumas, Dirk Fahland, Paolo Felli,
Alessandro Gianola, Alisa Kovtunova, Fabrizio Maggi, Andrea Marrella, Paola Mello, Jan
Mendling, Fabio Patrizi, Rafael Penaloza, Maja Pesic, Andrey Rivkin, Michael Westergaard