Advanced course in logic and computation at ESSLLI 2017, by Calvanese and Montali, summarizing the main technical results obtained in our 6-year research on the verification of data-aware processes. Part 6/6: exploiting DCDSs - models, methods, concrete systems.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel... (IJERD Editor)
This document summarizes a research paper that proposes a new approach called CBSW (Chernoff Bound based Sliding Window) for mining frequent itemsets from data streams. CBSW uses concepts from the Chernoff bound to dynamically determine the window size for mining frequent itemsets. It monitors boundary movements in a synopsis data structure to detect changes in the data stream and adjusts the window size accordingly. Experimental results demonstrate the effectiveness of CBSW in mining frequent itemsets from high-speed data streams.
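The summary above says CBSW derives the window size from the Chernoff bound, but does not reproduce the paper's formula. As a rough illustration only, a common Hoeffding/Chernoff-style sample-size bound for estimating an itemset's support within an error tolerance can be sketched as follows (the function name and parameters are ours, not CBSW's):

```python
import math

def chernoff_window_size(epsilon, delta):
    """Smallest window size n such that, by a Hoeffding/Chernoff-style
    bound, the observed support of an itemset deviates from its true
    support by more than `epsilon` with probability at most `delta`."""
    return math.ceil((1.0 / (2 * epsilon ** 2)) * math.log(2.0 / delta))

# A tighter error tolerance or a smaller failure probability forces a
# larger window -- the lever a bound-based method adjusts dynamically.
print(chernoff_window_size(0.05, 0.01))  # -> 1060
```

The qualitative point matches the summary: as the stream changes and tighter guarantees are needed, the bound dictates a larger window, and vice versa.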
The document discusses approaches for modeling processes and data, focusing on ensuring state-boundedness. It describes the concept of data-centric dynamic systems (DCDSs) and how subclasses with decidable state-boundedness correspond to variants of Petri nets. While checking state-boundedness is undecidable in general, the document outlines strategies like identifying syntactic conditions or designing methods that guarantee it. It also discusses how modeling languages can combine unboundedly many cases while retaining verification decidability through data isolation and relative boundedness.
Data and Processes: Can we Marry Them . . . and Make the Marriage Last? (INRIA-CEDAR)
Data and processes are just two sides of the same coin, and for several activities related to the analysis and design of systems it is essential to capture both static and dynamic aspects in a uniform way. In recent years, we have seen various proposals that aim at marrying these two aspects, and that consider both the process controlling the dynamics and the manipulation of data as equally central. We present data-centric dynamic systems (DCDSs), a pristine model that abstracts from the specific features of concrete formalisms proposed in the literature. We discuss recent results on the decidability of verification of expressive (first-order) temporal properties over such systems. We also present some variations and extensions of the model that make it attractive both as a theoretical tool and for concrete realizations.
This document discusses enabling technologies for interoperability between geographic information systems (GIS). It addresses problems at the syntactic, structural, and semantic levels of integration that must be solved to achieve fully interoperable GIS. At the syntactic level, standards like XML are used to integrate different data types. At the structural level, mediator systems use mapping rules to integrate heterogeneous data structures. The most difficult problem is semantic integration, where the meanings and contexts of concepts must be resolved. Ontologies and semantic modeling with XML and RDF can help describe information semantically and perform semantic translation between contexts to enable intelligent information integration.
Iaetsd a survey on one class clustering (Iaetsd Iaetsd)
This document presents a new method for performing one-to-many data linkage called the One Class Clustering Tree (OCCT). The OCCT builds a tree structure with inner nodes representing features of the first dataset and leaves representing similar features of the second dataset. It uses splitting criteria and pruning methods to perform the data linkage more accurately than existing indexing techniques. The OCCT approach induces a decision tree using a splitting criterion and performs pre-pruning to determine which branches to trim. It then compares entities to match them between the two datasets and produces a final result.
The document discusses big data opportunities and challenges. It begins with an introduction to the author and their research interests related to large scale data management. It then provides an overview of what big data is, how it has evolved, and some of the key opportunities it provides such as improved customer analytics and optimization. However, big data also presents challenges across the entire data workflow from collection to analysis to storage. These include issues of data heterogeneity, velocity, quality, as well as limitations of traditional relational databases for large scale data.
[ADBIS 2021] - Optimizing Execution Plans in a Multistore (Chiara Forresi)
Multistores are data management systems that enable query processing across different database management systems (DBMSs); besides the distribution of data, complexity factors like schema heterogeneity and data replication must be resolved through integration and data fusion activities. In a recent work [2], we have proposed a multistore solution that relies on a dataspace to provide the user with an integrated view of the available data and enables the formulation and execution of GPSJ (generalized projection, selection and join) queries. In this paper, we propose a technique to optimize the execution of GPSJ queries by finding the most efficient execution plan on the multistore. In particular, we devise three different strategies to carry out joins and data fusion, and we build a cost model to enable the evaluation of different execution plans. Through the experimental evaluation, we are able to profile the suitability of each strategy to different multistore configurations, thus validating our multi-strategy approach and motivating further research on this topic.
Invited presentation on "Verification of Parameterized Data-Aware Dynamic Systems" at the First Workshop on Parameterized Verification (PV 2014), satellite event of the 25th International Conference on Concurrency Theory (CONCUR 2014).
Disaster relief organizations face challenges in efficiently distributing aid due to logistical obstacles and lack of communication networks. Mesh networks, which allow devices to connect directly and transmit messages without central nodes, could provide an alternative. Coupled with blockchain technology, which records transactions in a decentralized digital ledger, relief organizations could gain better visibility into demand and ensure communication remains possible for those in need during emergencies.
A Novel Integrated Framework to Ensure Better Data Quality in Big Data Analyt... (IJECEIAES)
With the advent of big data analytics, healthcare systems are increasingly adopting analytical services that ultimately generate massive loads of highly unstructured data. We reviewed existing systems and found few solutions that address the problems of data variety, data uncertainty, and data speed. It is important that error-free data arrives at the analytics stage, yet existing systems offer single-handed solutions for single platforms. We therefore introduce an integrated framework with the capability to address all three problems in a single execution. Using synthetic healthcare big data, our investigation found that the proposed system, built on a deep learning architecture, offers better optimization of computational resources. The study outcome shows a comparatively better response time and a higher accuracy rate than the optimization techniques widely found and practiced in the literature.
On Tracking Behavior of Streaming Data: An Unsupervised Approach (Waqas Tariq)
In recent years, data streams have attracted considerable attention from researchers in many domains. These researchers share a common difficulty when discovering unknown patterns within data streams: concept change, that is, points where the underlying distribution of the data changes over time. Various methods have been proposed to detect changes in a data stream, but most rest on the unrealistic assumption that data labels are available to the learning algorithm. In real-world problems, labels for streaming data are rarely available, which is why the data stream community has recently turned to the unsupervised setting. This study starts from the observation that unsupervised approaches to learning from data streams are not yet mature; they provide only mediocre performance, especially on multi-dimensional data streams. In this paper, we propose a method for Tracking Changes in the behavior of instances using the Cumulative Density Function, abbreviated TrackChCDF. Our method accurately detects change points along an unlabeled data stream and also determines the trend of the data, called closing or opening. The advantages of our approach are threefold. First, it detects change points accurately. Second, it works well on multi-dimensional data streams. Third, it determines the type of change, namely the closing or opening of instances over time, which has broad applications in fields such as economics, the stock market, and medical diagnosis. We compare our algorithm to the state-of-the-art method for concept change detection in data streams, and the results are very promising.
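The abstract does not reproduce TrackChCDF's statistic, but the general idea of comparing empirical CDFs to flag a distribution change can be sketched with the classic two-sample Kolmogorov-Smirnov gap (a stand-in of ours, not the paper's method):

```python
def empirical_cdf(sample, x):
    """Fraction of observations in `sample` that are <= x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def max_cdf_gap(reference, current):
    """Largest vertical distance between the empirical CDFs of two
    windows (the two-sample Kolmogorov-Smirnov statistic); a large
    gap signals that the underlying distribution has changed."""
    points = sorted(set(reference) | set(current))
    return max(abs(empirical_cdf(reference, x) - empirical_cdf(current, x))
               for x in points)

ref = [0.1, 0.2, 0.3, 0.4, 0.5]   # reference window
cur = [1.1, 1.2, 1.3, 1.4, 1.5]   # current window: shifted distribution
print(max_cdf_gap(ref, cur))       # -> 1.0 (complete separation)
```

A threshold on this gap turns it into an unsupervised change detector: no labels are needed, only the two windows of raw values.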
This document outlines a proposed solution to improve the performance of alignment-based conformance checking for process mining by using a decomposition approach. It discusses decomposing large event logs and process models into smaller subcomponents to allow for conformance checking in parallel. The key challenges are ensuring the merged results from subcomponents are complete and correspond to the exact overall solution, improving performance, developing effective decomposition strategies, and extending the approach to other process modeling notations and data-aware models. The current state of the investigation is described, which involves further developing the recomposition framework, addressing issues with conformance metrics, and evaluating different decomposition strategies.
Professor Steve Roberts; The Bayesian Crowd: scalable information combinati... (Ian Morgan)
Professor Steve Roberts, Machine learning research group and Oxford-Man Institute + Alan Turing Institute. Steve gave this talk on the 24th January at the London Bayes Nets meetup.
Combining a co-occurrence-based and a semantic measure for entity linking (Besnik Fetahu)
The document presents a novel approach for entity linking that combines a semantic connectivity score (SCS) and a co-occurrence-based measure (CBM). SCS measures relatedness of entity pairs in knowledge graphs using Katz index, while CBM approximates co-occurrence in web resources. Evaluation on news documents shows the combined approach improves precision and recall over individual methods. The authors conclude it provides a scalable entity linking technique that correctly links entities marked as unrelated by human evaluators.
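The SCS component relies on the Katz index, which scores relatedness by summing walks between two nodes, discounting longer walks geometrically. A minimal truncated-Katz sketch on a toy graph (graph, damping factor, and cutoff are illustrative choices, not the paper's settings):

```python
def katz_index(adj, u, v, beta=0.5, max_len=4):
    """Truncated Katz index: sum over walk lengths l of beta**l times
    the number of walks of length l from u to v in adjacency matrix `adj`."""
    n = len(adj)
    walks = [row[:] for row in adj]            # walk counts of length 1
    score = beta * walks[u][v]
    for length in range(2, max_len + 1):
        # multiply by adj once more to get counts of the next walk length
        walks = [[sum(walks[i][k] * adj[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        score += beta ** length * walks[u][v]
    return score

# Tiny knowledge graph: a triangle over entities 0, 1, 2.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(katz_index(A, 0, 2))  # -> 1.4375
```

The untruncated index is the (u, v) entry of (I - beta*A)^(-1) - I; the truncated sum above converges to it as max_len grows, provided beta is below the reciprocal of the largest eigenvalue of A.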
Ontology Tutorial: Semantic Technology for Intelligence, Defense and Security (Barry Smith)
Dr. Barry Smith is the director of the National Center for Ontological Research. He discussed how semantic technology can help solve the problem of data silos by enabling data from different sources to be integrated and analyzed together. Ontologies, or controlled vocabularies, can be used to semantically enhance data by tagging it in an interoperable way. This allows the data to be retrieved, understood, and used by others even if they were not involved in creating the data. The semantic enhancement approach aims to break down silos incrementally by coordinating the creation of ontologies and linking datasets through shared terms.
Data Linkage is an important step that can provide valuable insights for evidence-based decision making, especially for crucial events. Performing sensible queries across heterogeneous databases containing millions of records is a complex task that requires a complete understanding of each contributing database’s schema to define the structure of its information. The key aim is to approximate the structure and content of the induced data into a concise synopsis in order to extract and link meaningful data-driven facts. We identify four major research issues in Data Linkage: the costs associated with pairwise matching, record matching overheads, restrictions on the semantic flow of information, and the limitations of single-order classification. In this paper, we give a literature review of research in Data Linkage. The purpose of this review is to establish a basic understanding of Data Linkage and to discuss the background of the Data Linkage research domain. In particular, we focus on the literature related to recent advancements in approximate matching algorithms at the attribute level and the structure level; their efficiency, functionality, and limitations are critically analysed, and open problems are exposed.
This document discusses techniques for detecting duplicate records from multiple web databases. It begins with an abstract describing an unsupervised approach that uses classifiers like the weighted component similarity summing classifier and support vector machine along with a Gaussian mixture model to iteratively identify duplicate records. The document then provides details on related work, including probabilistic matching models, supervised and unsupervised learning techniques, distance-based techniques, rule-based approaches, and methods for improving efficiency like blocking and the sorted neighborhood approach.
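The weighted component similarity summing classifier mentioned above scores a record pair by combining per-field similarities with per-field weights. A minimal sketch of that idea, with hypothetical fields, weights, and similarity functions of our own choosing:

```python
def wcss_score(record_a, record_b, weights, sims):
    """Weighted component similarity summing: score a record pair as the
    sum of per-field similarities, each weighted by field importance."""
    return sum(weights[f] * sims[f](record_a[f], record_b[f]) for f in weights)

def exact(x, y):
    """Similarity 1.0 on exact match, else 0.0 (good for years, codes)."""
    return 1.0 if x == y else 0.0

def jaccard(x, y):
    """Token-set Jaccard similarity (tolerates reordered/extra words)."""
    a, b = set(x.lower().split()), set(y.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

weights = {"title": 0.7, "year": 0.3}        # hypothetical importance
sims = {"title": jaccard, "year": exact}
r1 = {"title": "data linkage survey", "year": "2014"}
r2 = {"title": "a survey of data linkage", "year": "2014"}
print(round(wcss_score(r1, r2, weights, sims), 3))  # -> 0.72
```

In the unsupervised pipeline the summary describes, such scores would feed a classifier (e.g. an SVM or a Gaussian mixture) that iteratively separates duplicate from non-duplicate pairs.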
A systems engineering methodology for wide area network selection (Alexander Decker)
This document describes a study that applies the Analytic Hierarchy Process (AHP) to help a company select the best wide area network (WAN) solution based on their requirements. The document provides background on AHP and reviews related literature on using multi-criteria decision making for selection problems. It then outlines the steps of AHP, including constructing a hierarchy, performing pairwise comparisons, and calculating weights and consistency. Finally, it describes how AHP could be applied to help the hypothetical company evaluate WAN alternatives and select the optimal solution.
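The AHP steps the summary lists (pairwise comparisons, weights, consistency) can be sketched numerically. This uses the row geometric mean approximation to the principal eigenvector and Saaty's random index for a 3x3 matrix; the criteria and judgments are hypothetical, not from the study:

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise comparison matrix, via the row
    geometric mean approximation to the principal eigenvector."""
    n = len(M)
    gms = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(M, weights, random_index=0.58):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1); 0.58 is
    Saaty's random index for n = 3. CR < 0.1 is conventionally acceptable."""
    n = len(M)
    lam = sum(sum(M[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index

# Hypothetical WAN criteria: cost vs. bandwidth vs. reliability.
M = [[1.0,   3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w], round(consistency_ratio(M, w), 3))
```

Each WAN alternative would get the same treatment per criterion; the final ranking is the weight-combined score across the hierarchy.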
This document summarizes research posters being presented at a computer science and electrical engineering department research review. It describes 8 posters presented by BS, MS, and PhD students. The posters cover topics such as identifying political affiliations in blogs, statistically weighted visualization hierarchies, voter verifiable optical-scan voting, predictive caching in mobile networks, generating statistical volume models, predicting appropriate semantic web terms, approximating online social network community structure, and utilizing semantic policies for managing BGP route dissemination.
A semantic framework and software design to enable the transparent integratio... (Patricia Tavares Boralli)
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Published papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
This research paper addresses the challenges of mining frequent items over data streams with a variable window size and limited memory. To detect points of context change in the streaming transactions, we developed a two-level window structure that supports fixing the window size on the fly, controls heterogeneity, and ensures homogeneity among the transactions added to the window. To minimize memory utilization and computational cost and to improve the scalability of the process, this design allows the coverage, or support, to be fixed at the window level. In this paper we introduce an incremental method for mining frequent item-sets from the window together with a context variation analysis approach. The complete technique is named Mining Frequent Item-sets using Variable Window Size fixed by Context Variation Analysis (MFI-VWSCVA). There are clear boundaries between frequent and infrequent item-sets within specific item-sets. In this design we use window-size change to represent concept drift in the information stream: in other words, whenever the window size cannot be set effectively, the item-set is treated as infrequent. The experiments we executed and documented show that the designed algorithm is considerably more efficient than existing approaches.
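The incremental mining step the abstract describes, i.e. updating frequent item-set counts as transactions enter the window, can be sketched in a few lines. This toy version counts item-sets only up to pairs and ignores the context-variation machinery; names and the support threshold are illustrative:

```python
from collections import Counter
from itertools import combinations

def update_counts(counts, transaction, max_size=2):
    """Incrementally add one transaction's item-sets (singletons and
    pairs here) to the window's running counts."""
    for size in range(1, max_size + 1):
        for itemset in combinations(sorted(transaction), size):
            counts[itemset] += 1

def frequent(counts, min_support):
    """Item-sets whose count in the current window meets the support."""
    return {s for s, c in counts.items() if c >= min_support}

counts = Counter()
window = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
for t in window:
    update_counts(counts, t)
print(sorted(frequent(counts, 2)))  # -> [('a',), ('a', 'b'), ('a', 'c'), ('b',), ('c',)]
```

In a real sliding window, an analogous decrement would run when a transaction expires, which is what keeps the cost incremental rather than requiring a full rescan.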
BI-TEMPORAL IMPLEMENTATION IN RELATIONAL DATABASE MANAGEMENT SYSTEMS: MS SQ... (lyn kurian)
Traditional database management systems (DBMSs) serve as the computational storage and reservoir for large amounts of information. The data accumulated by these systems is the information valid at the present time: data that is true now. Past data is information that was kept in the database at an earlier time, data held to have existed and to have been valid at some point before now. Future data is information expected to be valid at a future time instant, data that will be true and valid at some point after now. The commercial DBMSs used by organizations and individuals today, such as MS SQL Server, Oracle, DB2, Sybase, and Postgres, do not provide models to support and process (retrieve, modify, insert, and remove) past and future data.
The implementation of bi-temporal modelling in Microsoft SQL Server is important for understanding how a relational database management system handles the bi-temporal property of data. In a bi-temporal database, saved data is never deleted; additional values are always appended. The paper therefore explores one of the ways bi-temporal handling of data can be built. It aims to lay out the core concepts of bi-temporal data storage and the querying techniques used in a bi-temporal relational DBMS, from data structures to normalized storage, and on to the extraction or slicing of data.
The unlimited growth of data makes relational data complicated to manage and store. Developers working on commercial and industrial applications should therefore know how bi-temporal concepts apply to relational databases, especially given the increased flexibility they bring both to bi-temporal storage and to data analysis. The paper accordingly demonstrates how bi-temporal data structures and their operations are applied in a relational database management system.
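Setting SQL Server specifics aside, the core bi-temporal idea the abstract describes, append-only rows carrying both a valid-time and a transaction-time interval, can be sketched in a few lines of Python (the schema, keys, and dates are hypothetical):

```python
import datetime as dt

FOREVER = dt.date.max

# Append-only bi-temporal store: rows are never deleted; a correction
# closes the old row's transaction period and appends a new row.
rows = [
    # (key, value, valid_from, valid_to, tx_from, tx_to)
    ("acct1", 100, dt.date(2020, 1, 1), FOREVER, dt.date(2020, 1, 1), dt.date(2020, 6, 1)),
    ("acct1", 150, dt.date(2020, 1, 1), FOREVER, dt.date(2020, 6, 1), FOREVER),
]

def as_of(rows, valid_at, known_at):
    """Bi-temporal slice: what the database said at `known_at` about the
    state of the world at `valid_at` (half-open intervals on both axes)."""
    return [(k, v) for (k, v, vf, vt, tf, tt) in rows
            if vf <= valid_at < vt and tf <= known_at < tt]

print(as_of(rows, dt.date(2020, 3, 1), dt.date(2020, 3, 1)))  # what we believed then
print(as_of(rows, dt.date(2020, 3, 1), dt.date(2021, 1, 1)))  # after the correction
```

The same slicing is what a bi-temporal relational schema implements with extra interval columns and WHERE clauses over both time dimensions.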
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing the relationships between objects, to better represent real-world processes in which objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process involving interacting candidate, application, job offer, and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
More Related Content
Similar to Verification of Data-Aware Processes at ESSLLI 2017 6/6 - Exploiting DCDSs: Models, Methods, Concrete Systems
[ADBIS 2021] - Optimizing Execution Plans in a MultistoreChiara Forresi
Multistores are data management systems that enable query processing across different database management systems (DBMSs); besides the distribution of data, complexity factors like schema heterogeneity and data replication must be resolved through integration and data fusion activities. In a recent work [2], we have proposed a multistore solution that relies on a dataspace to provide the user with an integrated view of the available data and enables the formulation and execution of GPSJ (generalized projection, selection and join) queries. In this paper, we propose a technique to optimize the execution of GPSJ queries by finding the most efficient execution plan on the multistore. In particular, we devise three different strategies to carry out joins and data fusion, and we build a cost model to enable the evaluation of different execution plans. Through the experimental evaluation, we are able to profile the suitability of each strategy to different multistore configurations, thus validating our multi-strategy approach and motivating further research on this topic.
Invited presentation on "Verification of Parameterized Data-Aware Dynamic Systems" at the First Workshop on Parameterized Verification (PV 2014), satellite event of the 25th International Conference on Concurrency Theory (CONCUR 2014).
Disaster relief organizations face challenges in efficiently distributing aid due to logistical obstacles and lack of communication networks. Mesh networks, which allow devices to connect directly and transmit messages without central nodes, could provide an alternative. Coupled with blockchain technology, which records transactions in a decentralized digital ledger, relief organizations could gain better visibility into demand and ensure communication remains possible for those in need during emergencies.
A Novel Integrated Framework to Ensure Better Data Quality in Big Data Analyt...IJECEIAES
With advent of Big Data Analytics, the healthcare system is increasingly adopting the analytical services that is ultimately found to generate massive load of highly unstructured data. We reviewed the existing system to find that there are lesser number of solutions towards addressing the problems of data variety, data uncertainty, and data speed. It is important that an errorfree data should arrive in analytics. Existing system offers single-hand solution towards single platform. Therefore, we introduced an integrated framework that has the capability to address all these three problems in one execution time. Considering the synthetic big data of healthcare, we carried out the investigation to find that our proposed system using deep learning architecture offers better optimization of computational resources. The study outcome is found to offer comparatively better response time and higher accuracy rate as compared to existing optimization technqiues that is found and practiced widely in literature.
On Tracking Behavior of Streaming Data: An Unsupervised ApproachWaqas Tariq
In the recent years, data streams have been in the gravity of focus of quite a lot number of researchers in different domains. All these researchers share the same difficulty when discovering unknown pattern within data streams that is concept change. The notion of concept change refers to the places where underlying distribution of data changes from time to time. There have been proposed different methods to detect changes in the data stream but most of them are based on an unrealistic assumption of having data labels available to the learning algorithms. Nonetheless, in the real world problems labels of streaming data are rarely available. This is the main reason why data stream communities have recently focused on unsupervised domain. This study is based on the observation that unsupervised approaches for learning data stream are not yet matured; namely, they merely provide mediocre performance specially when applied on multi-dimensional data streams. In this paper, we propose a method for Tracking Changes in the behavior of instances using Cumulative Density Function; abbreviated as TrackChCDF. Our method is able to detect change points along unlabeled data stream accurately and also is able to determine the trend of data called closing or opening. The advantages of our approach are three folds. First, it is able to detect change points accurately. Second, it works well in multi-dimensional data stream, and the last but not the least, it can determine the type of change, namely closing or opening of instances over the time which has vast applications in different fields such as economy, stock market, and medical diagnosis. We compare our algorithm to the state-of-the-art method for concept change detection in data streams and the obtained results are very promising.
This document outlines a proposed solution to improve the performance of alignment-based conformance checking for process mining by using a decomposition approach. It discusses decomposing large event logs and process models into smaller subcomponents to allow for conformance checking in parallel. The key challenges are ensuring the merged results from subcomponents are complete and correspond to the exact overall solution, improving performance, developing effective decomposition strategies, and extending the approach to other process modeling notations and data-aware models. The current state of the investigation is described, which involves further developing the recomposition framework, addressing issues with conformance metrics, and evaluating different decomposition strategies.
Professor Steve Roberts; The Bayesian Crowd: scalable information combinati...Ian Morgan
Professor Steve Roberts, Machine learning research group and Oxford-Man Institute + Alan Turing Institute. Steve gave this talk on the 24th January at the London Bayes Nets meetup.
Combining a co-occurrence-based and a semantic measure for entity linkingBesnik Fetahu
The document presents a novel approach for entity linking that combines a semantic connectivity score (SCS) and a co-occurrence-based measure (CBM). SCS measures relatedness of entity pairs in knowledge graphs using Katz index, while CBM approximates co-occurrence in web resources. Evaluation on news documents shows the combined approach improves precision and recall over individual methods. The authors conclude it provides a scalable entity linking technique that correctly links entities marked as unrelated by human evaluators.
Ontology Tutorial: Semantic Technology for Intelligence, Defense and SecurityBarry Smith
Dr. Barry Smith is the director of the National Center for Ontological Research. He discussed how semantic technology can help solve the problem of data silos by enabling data from different sources to be integrated and analyzed together. Ontologies, or controlled vocabularies, can be used to semantically enhance data by tagging it in an interoperable way. This allows the data to be retrieved, understood, and used by others even if they were not involved in creating the data. The semantic enhancement approach aims to break down silos incrementally by coordinating the creation of ontologies and linking datasets through shared terms.
Data Linkage is an important step that can provide valuable insights for evidence-based decision making, especially for crucial events. Performing sensible queries across heterogeneous databases containing millions of records is a complex task that requires a complete understanding of each contributing database's schema to define the structure of its information. The key aim is to approximate the structure and content of the source data into a concise synopsis in order to extract and link meaningful data-driven facts. We identify four major research issues in Data Linkage: the costs associated with pairwise matching, record matching overheads, restrictions on the semantic flow of information, and the limitations of single-order classification. In this paper, we give a literature review of research in Data Linkage. The purpose of this review is to establish a basic understanding of Data Linkage and to discuss the background of the Data Linkage research domain. In particular, we focus on the literature related to recent advancements in approximate matching algorithms at the attribute level and the structure level. Their efficiency, functionality, and limitations are critically analysed, and open problems are identified.
This document discusses techniques for detecting duplicate records from multiple web databases. It begins with an abstract describing an unsupervised approach that uses classifiers like the weighted component similarity summing classifier and support vector machine along with a Gaussian mixture model to iteratively identify duplicate records. The document then provides details on related work, including probabilistic matching models, supervised and unsupervised learning techniques, distance-based techniques, rule-based approaches, and methods for improving efficiency like blocking and the sorted neighborhood approach.
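The weighted component similarity summing idea mentioned above can be illustrated with a small sketch. This is not the paper's implementation: the field names, per-field weights, choice of token-level Jaccard similarity, and the decision threshold are all assumptions made for illustration.

```python
# Hedged sketch of a weighted component similarity summing (WCSS) style
# duplicate-record classifier: score = weighted sum of per-field similarities.

def jaccard(a, b):
    """Token-level Jaccard similarity between two field values."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def wcss_score(rec1, rec2, weights):
    """Weighted sum of per-field similarities over the weighted fields."""
    return sum(w * jaccard(rec1[f], rec2[f]) for f, w in weights.items())

def is_duplicate(rec1, rec2, weights, threshold=0.75):
    return wcss_score(rec1, rec2, weights) >= threshold

weights = {"title": 0.6, "author": 0.4}  # assumed weights, summing to 1
r1 = {"title": "data linkage survey", "author": "j smith"}
r2 = {"title": "a survey of data linkage", "author": "j smith"}
print(wcss_score(r1, r2, weights), is_duplicate(r1, r2, weights))
```

In the unsupervised setting the survey describes, such scores would seed an iterative scheme (e.g. training an SVM on confidently classified pairs) rather than a fixed threshold.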
A systems engineering methodology for wide area network selection, by Alexander Decker
This document describes a study that applies the Analytic Hierarchy Process (AHP) to help a company select the best wide area network (WAN) solution based on their requirements. The document provides background on AHP and reviews related literature on using multi-criteria decision making for selection problems. It then outlines the steps of AHP, including constructing a hierarchy, performing pairwise comparisons, and calculating weights and consistency. Finally, it describes how AHP could be applied to help the hypothetical company evaluate WAN alternatives and select the optimal solution.
This document summarizes research posters being presented at a computer science and electrical engineering department research review. It describes 8 posters presented by BS, MS, and PhD students. The posters cover topics such as identifying political affiliations in blogs, statistically weighted visualization hierarchies, voter verifiable optical-scan voting, predictive caching in mobile networks, generating statistical volume models, predicting appropriate semantic web terms, approximating online social network community structure, and utilizing semantic policies for managing BGP route dissemination.
A semantic framework and software design to enable the transparent integration..., by Patricia Tavares Boralli
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
International Journal of Engineering and Science Invention (IJESI), by inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Submitted papers undergo double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This research paper addresses the challenges of mining frequent itemsets over data streams with a variable window size and low memory space. To detect points of context change in the streaming transactions, we developed a two-level window structure that supports adjusting the window size instantly, controls heterogeneity, and ensures homogeneity among the transactions added to the window. To minimize memory utilization and computational cost and to improve the scalability of the process, the design allows the coverage (support) to be fixed at the window level. We introduce an approach for incremental mining of frequent itemsets from the window together with a context-variation analysis; the complete technique presented here is named Mining Frequent Item-sets using Variable Window Size fixed by Context Variation Analysis (MFI-VWSCVA). There are clear boundaries between frequent and infrequent itemsets in specific itemset collections. In this design, changes in window size represent conceptual drift in the information stream: whenever the window size cannot be set effectively, an itemset will be infrequent. The experiments we executed and documented show that the proposed algorithm is considerably more efficient than existing ones.
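The core bookkeeping behind such a scheme can be sketched compactly. This is an illustrative reconstruction, not the paper's algorithm: the context-variation analysis that decides *when* to resize is reduced here to an explicit `set_size()` call, and the class name and support threshold are assumptions.

```python
from collections import Counter, deque

# Hedged sketch: incremental frequent-item mining over a sliding window of
# transactions, with a resizable window (resizing stands in for the paper's
# drift-driven window adjustment).

class SlidingWindowMiner:
    def __init__(self, size, min_support=0.5):
        self.size, self.min_support = size, min_support
        self.window = deque()      # the most recent transactions
        self.counts = Counter()    # incrementally maintained item counts

    def _evict(self):
        while len(self.window) > self.size:
            for item in self.window.popleft():
                self.counts[item] -= 1

    def add(self, transaction):
        self.window.append(transaction)
        self.counts.update(transaction)
        self._evict()

    def set_size(self, size):
        """Resize the window, e.g. after a detected context change."""
        self.size = size
        self._evict()

    def frequent_items(self):
        n = len(self.window) or 1
        return {i for i, c in self.counts.items() if c / n >= self.min_support}

m = SlidingWindowMiner(size=3, min_support=0.5)
for t in [["a", "b"], ["a", "c"], ["a"], ["b", "c"]]:
    m.add(t)
print(m.frequent_items())   # computed over the last 3 transactions only
```

Counts are updated incrementally on arrival and eviction, so memory stays proportional to the window content rather than the full stream.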
BI-TEMPORAL IMPLEMENTATION IN RELATIONAL DATABASE MANAGEMENT SYSTEMS: MS SQL Server, by lyn kurian
Traditional database management systems (DBMSs) are the computational storage and reservoir of large amounts of information. The data accumulated by these database systems is the information valid at the present time: data that is true now. Past data is information that was kept in the database at an earlier time, data that held in the past and was valid at some point before now. Future data is information supposed to be valid at a future time instant, data that will be true in the near future and valid at some point after now. The commercial DBMSs used today by organizations and individuals, such as MS SQL Server, Oracle, DB2, Sybase, and Postgres, do not provide models to support and process (retrieve, modify, insert, and remove) past and future data.
Implementing bi-temporal modelling in Microsoft SQL Server shows how a relational database management system handles data with the bi-temporal property. In a bi-temporal database, saved data is never deleted; additional values are always appended. The paper therefore explores one of the ways in which bi-temporal handling of data can be built. It aims to lay out the core concepts of bi-temporal data storage and the querying techniques used in a bi-temporal relational DBMS, from data structures and normalized storage to the extraction or slicing of data.
The unlimited growth of data makes relational data complicated to manage and store. Developers working on commercial and industrial applications should therefore know how bi-temporal concepts apply to relational databases, especially given the increased flexibility they offer both in bi-temporal storage and in analyzing data. The paper accordingly demonstrates how bi-temporal data structures and their operations are applied in a relational database management system.
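The append-only, two-timeline idea can be illustrated outside SQL with a minimal sketch. This is not the paper's MS SQL Server implementation: the class, key names, and the sentinel end date are assumptions. Each row carries a valid-time interval (when the fact was true in the world) and a transaction-time stamp (when the database learned it), and corrections append new rows instead of deleting old ones.

```python
from datetime import date

# Hedged sketch of an append-only bi-temporal store with "as of" queries.

class BiTemporalTable:
    def __init__(self):
        self.rows = []  # rows are never deleted, only appended

    def insert(self, key, value, valid_from, valid_to, tx_time):
        self.rows.append(dict(key=key, value=value, valid_from=valid_from,
                              valid_to=valid_to, tx_time=tx_time))

    def as_of(self, key, valid_at, known_at):
        """Value valid at `valid_at`, as the database knew it at `known_at`."""
        candidates = [r for r in self.rows
                      if r["key"] == key
                      and r["tx_time"] <= known_at
                      and r["valid_from"] <= valid_at < r["valid_to"]]
        if not candidates:
            return None
        # the most recently recorded fact wins
        return max(candidates, key=lambda r: r["tx_time"])["value"]

t = BiTemporalTable()
t.insert("salary", 100, date(2020, 1, 1), date(9999, 1, 1), date(2020, 1, 5))
# retroactive correction recorded later, without deleting the old row
t.insert("salary", 120, date(2020, 1, 1), date(9999, 1, 1), date(2020, 3, 1))
print(t.as_of("salary", date(2020, 2, 1), known_at=date(2020, 2, 1)))  # 100
print(t.as_of("salary", date(2020, 4, 1), known_at=date(2020, 4, 1)))  # 120
```

The same query for the same valid time returns different answers depending on `known_at`, which is exactly the slicing behaviour bi-temporal storage is meant to support.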
Similar to Verification of Data-Aware Processes at ESSLLI 2017 6/6 - Exploiting DCDSs: Models, Methods, Concrete Systems (20)
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes where objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process involving interacting candidate, application, job offer, and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The Presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, conformance checking and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, but relying on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
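The LTLf-over-finite-traces machinery mentioned in the last two talks can be made concrete on the smallest possible example: checking a single DECLARE constraint on a finite trace. The sketch below is illustrative (it hand-codes the two-state automaton for `response(a, b)` rather than deriving it from a temporal formula).

```python
# Hedged sketch: checking the DECLARE constraint response(a, b)
# ("every occurrence of a is eventually followed by b") on a finite trace.
# This mirrors the automaton-based view of LTLf used in declarative
# process mining: the check walks the trace once, tracking one bit.

def response_holds(trace, a, b):
    pending = False          # is there an 'a' still awaiting a later 'b'?
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending       # finite-trace semantics: no 'a' may stay open

print(response_holds(["a", "c", "b"], "a", "b"))   # satisfied
print(response_holds(["a", "b", "a"], "a", "b"))   # violated: trailing 'a'
```

Conformance checking and monitoring against constraint models reduce to running such (automaton) checks, one per constraint, over each trace or trace prefix.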
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare in which constraints are equipped with probabilities, capturing uncertainty.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities.
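The scenario computation of point 3 can be illustrated on a two-constraint example. This is a toy reconstruction, not the paper's procedure: the probabilities and, crucially, the assumption that LTLf reasoning has ruled out the scenario violating both constraints are invented so that the system of equations has a unique solution.

```python
from itertools import product

# Hedged sketch of ProbDeclare-style scenario reasoning. Two probabilistic
# constraints hold with probabilities p1, p2; a scenario fixes which
# constraints hold. Scenario probabilities x must satisfy:
#   sum of x over scenarios where c1 holds = p1   (same for c2),
#   sum of all x = 1,  and  x = 0 for logically inconsistent scenarios.

p1, p2 = 0.8, 0.3
inconsistent = {(False, False)}   # assumed ruled out by LTLf reasoning

# With x_FF = 0 the remaining 3-unknown linear system is:
#   x_TT + x_TF = p1,  x_TT + x_FT = p2,  x_TT + x_TF + x_FT = 1
# Adding the first two and subtracting the third gives x_TT directly.
x_TT = p1 + p2 - 1
probs = {
    (True, True): x_TT,
    (True, False): p1 - x_TT,
    (False, True): p2 - x_TT,
    (False, False): 0.0,
}
for scenario in product([True, False], repeat=2):
    print(scenario, round(probs[scenario], 3))
```

With more constraints the system is generally underdetermined and is handled as a system of inequalities, as the summary says; the toy case just shows how logical inconsistency pins down the distribution.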
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires handling two possibly contrasting forces: the cost of the alignment on the one hand, and the likelihood of the model trace with respect to which the alignment is computed on the other.
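The trade-off between alignment cost and model-trace likelihood can be sketched with a toy ranking. This is not the paper's measure: the plain edit distance, the linear cost-minus-probability objective, and the enumerated model traces are assumptions for illustration (real stochastic nets may have infinitely many traces, which the paper handles symbolically).

```python
from functools import lru_cache

# Hedged sketch: rank candidate model traces by combining alignment cost
# (edit distance) with the probability the stochastic model assigns them.

def edit_distance(s, t):
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        return min(d(i - 1, j) + 1,            # delete from s
                   d(i, j - 1) + 1,            # insert into s
                   d(i - 1, j - 1) + (s[i - 1] != t[j - 1]))  # substitute
    return d(len(s), len(t))

def best_alignment(log_trace, model_traces, cost_weight=1.0):
    """model_traces: {trace (tuple of activities): probability}."""
    return min(model_traces,
               key=lambda mt: cost_weight * edit_distance(log_trace, mt)
                              - model_traces[mt])

model = {("a", "b", "c"): 0.7, ("a", "c"): 0.2, ("a", "b", "b", "c"): 0.1}
print(best_alignment(("a", "c"), model))
```

A cheap alignment to an unlikely model trace can lose to a slightly costlier alignment to a very likely one, which is exactly the tension the paper studies.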
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
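The game-theoretic core of strategy synthesis can be hinted at with the classical backward attractor construction on a finite game graph. This is a generic textbook sketch, not the paper's data-aware algorithm (which must additionally handle infinite variable domains); states, edges, and the reachability objective are invented.

```python
# Hedged sketch: the controller's winning region for a reachability
# objective, computed as the usual backward attractor fixpoint.

def attractor(states, edges, controlled, goal):
    """States from which the controller can force reaching `goal`.
    edges: dict state -> list of successors.
    controlled: states where the controller chooses the move;
    elsewhere the environment chooses."""
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in win:
                continue
            succ = edges.get(s, [])
            if not succ:
                continue
            ok = (any(t in win for t in succ) if s in controlled
                  else all(t in win for t in succ))
            if ok:
                win.add(s)
                changed = True
    return win

states = {"s0", "s1", "s2", "s3"}
edges = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s2"], "s3": []}
print(attractor(states, edges, controlled={"s0"}, goal={"s3"}))
```

A winning strategy falls out of the computation: in each controlled winning state, pick any successor that is already in the winning region.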
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This case covers the common situation where event data are stored inside the data layer of an ERP or CRM system. This is usually handled using manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graph). The approach is code-less, and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm (3) the annotation of the data model with indications on which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data. This makes extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs semantic web standard languages for the various steps, and the XES standard as the target format for the event logs.
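The mapping-based extraction pipeline can be caricatured in a few lines. This is only a sketch of the idea, not onprom (which uses OBDA mappings, OWL ontologies, and the XES standard); the schema, table and column names, and the mapping format below are invented for illustration.

```python
import sqlite3

# Hedged sketch: deriving an event log from a legacy relational database
# via declarative mappings (here: activity name -> SQL yielding case id
# and timestamp), instead of ad-hoc ETL code.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, created TEXT);
    CREATE TABLE shipments(order_id INTEGER, shipped TEXT);
    INSERT INTO orders VALUES (1, '2021-01-01'), (2, '2021-01-03');
    INSERT INTO shipments VALUES (1, '2021-01-05');
""")

mappings = {
    "create order": "SELECT id, created FROM orders",
    "ship order":   "SELECT order_id, shipped FROM shipments",
}

def extract_log(conn, mappings):
    events = []
    for activity, sql in mappings.items():
        for case_id, ts in conn.execute(sql):
            events.append((case_id, activity, ts))
    events.sort(key=lambda e: (e[0], e[2]))   # group by case, order by time
    return events

for event in extract_log(conn, mappings):
    print(event)
```

Changing the perspective on the same data (e.g. treating shipments rather than orders as cases) amounts to swapping the mappings, not rewriting extraction code, which is the point of the declarative approach.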
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes ( DEC2H 2019) In conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
Presentation ad EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
More from Faculty of Computer Science - Free University of Bozen-Bolzano (20)
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems, by University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Authoring a personal GPT for your research and practice: How we created the QUAL-E..., by Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr..., by Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
The binding of cosmological structures by massless topological defects, by Sérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
The use of Nauplii and metanauplii Artemia in aquaculture (brine shrimp).pptx, by MAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and '70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for the cultivation of fish, crustacean, and shellfish larvae. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represent another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect the formation of micronuclei inside the cells of nearly every multicellular organism. Micronuclei form during chromosomal separation at metaphase.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Verification of Data-Aware Processes at ESSLLI 2017 6/6 - Exploiting DCDSs: Models, Methods, Concrete Systems
1. Verification of Data-Aware Processes
Exploiting DCDSs: models, methods, concrete systems
Diego Calvanese, Marco Montali
Research Centre for Knowledge and Data (KRDB)
Free University of Bozen-Bolzano, Italy
KRDB
1
29th European Summer School in Logic, Language, and Information
(ESSLLI 2017)
Toulouse, France – 17–28 July 2017
2. The story so far State-boundedness Boundedness and resources Unbounded systems Concrete systems
Outline
1 The story so far
2 Checking/ensuring state boundedness
3 Boundedness and resources
4 Unbounded systems
5 Towards concrete systems
Calvanese, Montali (FUB) Verification of Data-Aware Processes ESSLLI 2017 – 24–28/07/2017 (1/20)
4. The story so far, with main references
The need of combining (business) processes and data.
[Calvanese, De Giacomo, and Montali 2013]
A pristine formalism for data-aware business processes: DCDS.
[Bagheri Hariri, Calvanese, De Giacomo, et al. 2013; Montali and Calvanese 2016]
Suitable verification logics for data-aware processes.
[Bagheri Hariri, Calvanese, De Giacomo, et al. 2013; Calvanese, De Giacomo,
Montali, and Patrizi 2017]
Corresponding characterization theorems.
[Calvanese, De Giacomo, Montali, and Patrizi 2017]
A decidability map, with an unexpected dichotomy between
µLA and LTL-FOA.
[Bagheri Hariri, Calvanese, De Giacomo, et al. 2013; Calvanese, De Giacomo,
Montali, and Patrizi 2017]
Note: Incorrect results in [Bagheri Hariri, Calvanese, De Giacomo, et al. 2013;
Okamoto 2010] fixed in [Calvanese, De Giacomo, Montali, and Patrizi 2017].
6. How to check/ensure state boundedness?
Theorem
Checking whether a DCDS is state-/run-bounded is:
Decidable for a given bound.
Undecidable for an unknown bound.
Three possible strategies:
Single out classes of DCDSs for which checking state-/run-boundedness
is decidable.
Identify sufficient syntactic conditions that are decidable to check, and that guarantee state-/run-boundedness (cf. syntactic conditions for chase termination in data exchange).
Devise modeling methodologies that guarantee state boundedness.
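The "decidable for a given bound" direction can be pictured as a plain search: enumerate the (suitably abstracted) reachable states and reject the candidate bound as soon as some state exceeds it. A minimal Python sketch in that spirit — the state/successor encoding is illustrative, not the actual DCDS abstraction:

```python
from collections import deque

def violates_bound(initial, successors, b, max_steps=10_000):
    """Breadth-first search over an (abstracted) transition system.

    Returns True as soon as some reachable state holds more than b
    facts, i.e. the candidate bound b is refuted; returns False if the
    explored fragment stays within the bound.
    """
    seen = {initial}
    queue = deque([initial])
    steps = 0
    while queue and steps < max_steps:
        state = queue.popleft()   # state = frozenset of facts
        steps += 1
        if len(state) > b:
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy system: each action adds one fresh fact, up to three facts total.
def succ(state):
    return [frozenset(state | {len(state)})] if len(state) < 3 else []

assert violates_bound(frozenset(), succ, 2) is True   # bound 2 refuted
assert violates_bound(frozenset(), succ, 3) is False  # bound 3 suffices
```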
7. DCDSs with decidable state-boundedness
Fact
DCDSs using only unary relations correspond to variants of Petri nets.
The specific variant depends on the features used in the DCDS.
Note: State-boundedness relates to boundedness in Petri nets.
Petri nets with name management
Decidable boundedness.
[Rosa-Velardo and Frutos-Escrig 2011]
(Figure: a ν-Petri net over places p1–p5, whose arc labels carry name variables x, y and fresh-name variables ν1, ν2.)
[Montali and Rivkin 2016]
Translation to DCDSs and µLP verification.
Reset-Transfer Nets
Undecidable boundedness.
[Dufourd, Jancar, and Schnoebelen 1999]
(Figure: a reset/transfer net over places p0–p4, with reset/transfer arcs on place p2.)
[Bagheri Hariri, Calvanese, Deutsch, et al. 2014]
“Lossy” correspondence with DCDSs.
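For plain place/transition nets (without name management), the classical Karp-Miller criterion gives a feel for how such boundedness checks work: a net is unbounded iff some firing path ends in a marking that strictly covers an earlier marking on the same path. A depth-bounded toy Python sketch of that criterion — not the richer algorithm needed for the ν-PNs above, and a "bounded" answer here is only valid up to the search depth:

```python
def is_unbounded(initial, transitions, depth=20):
    """Search for the classical unboundedness witness in a plain P/T net:
    a firing path whose end marking strictly covers an earlier marking on
    the same path. Markings are tuples of token counts; transitions are
    (consume, produce) vectors over the places."""
    def covers(m, n):
        return m != n and all(a >= b for a, b in zip(m, n))

    def dfs(marking, path):
        if any(covers(marking, earlier) for earlier in path):
            return True
        if len(path) >= depth:
            return False
        for consume, produce in transitions:
            if all(m >= c for m, c in zip(marking, consume)):
                nxt = tuple(m - c + p for m, c, p in
                            zip(marking, consume, produce))
                if dfs(nxt, path + [marking]):
                    return True
        return False

    return dfs(initial, [])

# Self-replenishing transition: consumes from p0, refills p0 and adds to p1.
assert is_unbounded((1, 0), [((1, 0), (1, 1))]) is True
# The token just moves from p0 to p1: bounded.
assert is_unbounded((1, 0), [((1, 0), (0, 1))]) is False
```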
8. Attacking state-boundedness
The class of DCDSs with decidable state-boundedness is very restrictive:
these variants of Petri nets correspond to DCDSs with only unary relations,
limited use of negation, no or limited joins, . . .
How to check/guarantee that a DCDS is state-bounded?
Sufficient, syntactic conditions:
Extract a data flow graph from
the DCDS.
Check sources of unboundedness
through this graph.
See [Bagheri Hariri, Calvanese, De Giacomo,
et al. 2013] and [Bagheri Hariri, Calvanese,
Deutsch, et al. 2014].
State-boundedness by design:
Design methods for state-bounded
DCDSs. In [Solomakhin et al. 2013]:
Processes are bound to evolving
business objects (artifacts).
Each business object manipulates boundedly many data.
(New) business objects pick their
names from a fixed pool of ids.
More sophisticated techniques in
[Montali and Calvanese 2016; Calvanese,
Montali, et al. 2014].
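The sufficient syntactic conditions operate on a dependency graph extracted from the DCDS; the rough intuition is to look for cyclic dependencies that pass through service calls, since such cycles can pump unboundedly many fresh values into the states. A hedged Python sketch of that intuition (the graph encoding is mine, not the actual dataflow construction of the cited papers):

```python
def has_feeding_cycle(edges):
    """edges: (src, dst, via_service_call) triples between relations.
    Flags a cycle that traverses at least one service-call edge -- the
    kind of dependency that can pump unboundedly many fresh values."""
    graph = {}
    for src, dst, via in edges:
        graph.setdefault(src, []).append((dst, via))

    def dfs(node, start, used_call, visited):
        for dst, via in graph.get(node, []):
            if dst == start and (used_call or via):
                return True
            if dst not in visited:
                if dfs(dst, start, used_call or via, visited | {dst}):
                    return True
        return False

    return any(dfs(n, n, False, {n}) for n in graph)

# R feeds S through a service call, and S feeds back into R: risky.
assert has_feeding_cycle([("R", "S", True), ("S", "R", False)]) is True
# Plain copy cycle with no service call: no fresh values are pumped.
assert has_feeding_cycle([("R", "S", False), ("S", "R", False)]) is False
```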
9. State-boundedness in concrete process modeling languages
Classical BPM languages/suites
Central notion of case representing a process instance.
Each case carries its own case data, in isolation from the other cases (e.g.,
order details, customer address, . . . ).
Cases interact by accessing a central, persistent data storage.
Artifact-centric approaches:
Central notion of business object gluing data and behaviour together.
All data relevant to a business object are attached to it.
Processes may query multiple business objects at once, to determine the
possible next steps.
External and internal stakeholders. . .
New cases/business objects are created upon events issued by external
stakeholders (e.g., new order request).
But then they are bound to internal resources, responsible for progressing
the corresponding process instances.
11. RIAW-nets [Montali and Rivkin 2016]
(Figure: a RIAW-net for a repair process — tasks check, in-house repair (do repair, write summary), external repair (start shipping, write report), prepare package, assemble, print receipt — with ν-generated case ids carried by tokens and resource places for an HW expert, a shipping clerk, and a secretary.)
RIAW-nets = ν-PNs + workflow nets
Emitter transition generating a new process id when fired.
Control-flow name matching to selectively spawn/synch tokens using their id.
Resource places to bound the number of simultaneously coexisting active
process instances! (but unboundedly many over time).
Decidability of model checking via translation to state-bounded DCDSs.
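The effect of resource places can be mimicked in a few lines: case ids come from an unbounded ν-style generator, but a case can only be spawned while a resource token is free. A toy Python sketch (class and method names are illustrative):

```python
import itertools

class ResourcePool:
    """Toy abstraction of a RIAW-net resource place: unboundedly many
    case ids over time, but at most `capacity` simultaneously active."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()
        self._fresh = itertools.count()   # the nu-style name generator

    def spawn(self):
        if len(self.active) >= self.capacity:
            return None                   # emitter blocked: no free resource
        cid = next(self._fresh)
        self.active.add(cid)
        return cid

    def complete(self, cid):
        self.active.remove(cid)           # token returns to the resource place

pool = ResourcePool(capacity=2)
a, b = pool.spawn(), pool.spawn()
assert (a, b) == (0, 1)
assert pool.spawn() is None               # a third case must wait...
pool.complete(a)
assert pool.spawn() is not None           # ...but ids keep growing over time
```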
13. Data isolation and case unboundedness
What if the number of simultaneously active cases cannot be bounded?
In [Montali and Calvanese 2016; Calvanese, Montali, et al. 2014], we show that
decidability of model checking can be retained if the system obeys:
relative boundedness (each case manipulates boundedly many data);
data isolation (cases interact very weakly).
(Figure: example database instance with groups 12, 4, and 431 in states out/in/running, each with boundedly many member tuples referencing their group — cases are relatively bounded and largely isolated.)
Modeling guidelines to guarantee data isolation and relative boundedness:
1 Queries must be navigational (no arbitrary access to relations).
2 1-to-many relations require a number restriction on the “many” side.
3 Each case cannot create a chain of tuples of unbounded length.
4 Cases can share tuples only in a controlled way (no construction of chains).
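Guideline 3 is checkable on a per-case basis: take the reference graph among the tuples a case manipulates and measure its longest chain, which the designer must keep below a constant fixed at design time. A minimal sketch (the encoding of tuple references is an assumption):

```python
def longest_chain(refs):
    """refs: tuple-id -> referenced tuple-id (or None), within one case.
    Returns the length of the longest reference chain; cycles are cut
    off so the measure is always finite."""
    def depth(t, seen):
        if t is None or t in seen:
            return 0
        return 1 + depth(refs.get(t), seen | {t})
    return max((depth(t, set()) for t in refs), default=0)

# An order referencing a group that references nothing: chain of length 2.
assert longest_chain({"order7": "group4", "group4": None}) == 2
```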
14. Beyond State-Boundedness
Question
Are there classes of DCDSs that are unbounded, but still amenable to
verification?
Key result in [Abdulla et al. 2016].
Recency-bounded data-aware processes
Unbounded DB, but only the most recently inserted/accessed values can be
bound to parameters.
Verification via under-approximation
Decidability by focusing only on runs that are k-recency-bounded for an
explicitly given k.
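The recency-bounded under-approximation can be pictured as a sliding window over values: only the k most recently inserted/accessed values remain bindable to action parameters. A toy Python sketch in that spirit (not the actual construction of [Abdulla et al. 2016]):

```python
from collections import OrderedDict

class RecencyWindow:
    """Only the k most recently inserted/accessed values may be bound to
    action parameters; older values fall outside the analyzed runs."""
    def __init__(self, k):
        self.k = k
        self._vals = OrderedDict()

    def touch(self, v):                  # value inserted or accessed
        self._vals.pop(v, None)
        self._vals[v] = True
        while len(self._vals) > self.k:
            self._vals.popitem(last=False)   # evict the least recent value

    def bindable(self, v):
        return v in self._vals

w = RecencyWindow(k=2)
for v in ("a", "b", "c"):
    w.touch(v)
assert not w.bindable("a")      # "a" is no longer among the 2 most recent
w.touch("b")                    # re-accessing a value keeps it recent
w.touch("d")
assert w.bindable("b") and not w.bindable("c")
```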
Open problem
Investigate the relationships between all such results and those where the initial
DB is not fixed, and verification is studied for every possible initial DB.
15. Incorporation of datatypes
Databases have datatypes
Numeric domains, domain-specific predicates, arithmetic.
Many coordination algorithms and auctions require dense orders.
Processes with costs and payment policies require integers and arithmetic.
Dense orders combine well with state-boundedness
Data-aware, state-bounded distributed systems with reals [Calvanese, Delzanno,
and Montali 2015]:
OK to include dense linear orders: minor extension to the standard
DCDS abstraction technique. Intuition. . .
Rigid > relation over the entire domain −→ non-rigid GreaterThan relation over active domain elements.
No hope to include the successor relation (or integers):
2 data slots are sufficient to encode two counters.
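The intuition above — trading the rigid > over all reals for a GreaterThan relation over the active domain — amounts to abstracting each state into the order type of the values it mentions, so that states with the same relative order collapse. A toy sketch of that abstraction:

```python
def order_signature(state):
    """Abstract a state mentioning real numbers into the induced
    GreaterThan relation over its *active domain*: only the relative
    order of the values survives, not the values themselves."""
    vals = sorted(set(state.values()))
    rank = {v: i for i, v in enumerate(vals)}
    return frozenset((a, b) for a in state for b in state
                     if rank[state[a]] > rank[state[b]])

# Two states with different reals but the same relative order become
# indistinguishable after abstraction:
assert order_signature({"x": 1.5, "y": 3.2}) == \
       order_signature({"x": -7.0, "y": 0.01})
# ...while reversing the order is still observable:
assert order_signature({"x": 2.0, "y": 1.0}) != \
       order_signature({"x": 1.0, "y": 2.0})
```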
Discrete orders and arithmetic combine well with run-boundedness
Ongoing work. . .
17. Relational multiagent systems and commitments
Relational MAS [Montali, Calvanese, and De Giacomo 2014]
Agents have names and hold/manipulate local, state-bounded DBs.
Agents exchange data using their names for addressing.
An institutional agent manages agent creation and deletion.
Due to state-boundedness: unboundedly many agents can dynamically enter
into the system, but at each moment only boundedly many are active.
(Figure: an example run of a relational MAS — customer Alice registers with seller John (ACCEPT-REG), then pays items i1 and i2 (PAY-CC(i1), PAY-BT(Alice, i2)) via her bank; John's local DB tracks the paid items, Alice's DB the items she owns, and the institutional agent tracks delivery commitment instances (DeliveryCC, DeliveryC) that progress from active to satisfied upon deliver(i1, . . . ) by a carrier.)
Relational commitments
In the same work: first
proposal for modeling and
verifying interaction
protocols based on
relational commitments,
i.e., commitments with
data payload and multiple
instances.
18. daphne: implementing DCDSs with relational technology
(Figure: daphne architecture — a DB engine, a flow engine, and a service manager operate on the DCDS specification and current state, persisted in an RDBMS.)
Native modeling and execution of DCDSs using relational DBMSs:
SQL-like syntax for DCDSs with datatypes.
Automated translation into relational DBMSs, as (temporal) tables,
constraints, and stored procedures.
Java APIs to support enactment and integration with concrete services.
Native explicit model checking of DCDSs using relational DBMSs:
Same model for execution and verification!
Special tables for storing the RTS induced by a DCDS.
Factoring of tables into temporal and atemporal parts.
Computation of equality commitments and value recycling in services.
Java APIs for RTS construction and search.
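The execution side can be pictured with plain relational technology: the DCDS state lives in tables, and an action is a guarded SQL update. A minimal sqlite3 sketch of the idea (schema and action are illustrative, not daphne's actual SQL-like syntax):

```python
import sqlite3

# A one-relation DCDS state stored in an RDBMS, with an action rendered
# as a guarded bulk SQL update (illustrative names only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders(id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, "open"), (2, "open"), (3, "shipped")])

def approve_all_open(conn):
    """Action: every open order becomes approved (guard + bulk effect)."""
    conn.execute("UPDATE Orders SET status = 'approved' "
                 "WHERE status = 'open'")

approve_all_open(conn)
rows = sorted(conn.execute("SELECT id, status FROM Orders"))
assert rows == [(1, "approved"), (2, "approved"), (3, "shipped")]
```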
19. BAUML: artifact-centric processes with UML
(Excerpt from an IEEE TKDE article on BAUML, embedded in the slide:
Fig. 1: class diagram with the artifacts and objects involved in the submission of articles to conferences — Submission (subclasses PendingReviewSub, ReviewedSub, AcceptedSubmission, RejectedSubmission, WithdrawnSub), Author (subclasses User, NonUser), Conference, and Session.
Fig. 2: state machine diagram for artifact Submission — a submitted paper starts in PendingReviewSubmission and, upon Review Submission, moves to AcceptedSubmission [success] or RejectedSubmission [failure]; before that, an author may Withdraw Submission, leading to WithdrawnSubmission. All transitions correspond to external events.
Fig. 3: state machine diagram for artifact Author — authors are created as User or NonUser; a NonUser becomes a User upon external event Promote to User.
Figs. 4–5: activity diagrams refining events Submit Paper (register a new submission, then add authors until no more are to be added) and Review Submission (evaluate; on failure add a comment, on success assign a session). Each activity is an atomic task with an OCL operation contract, e.g.:
Listing 1. Contract for task RegisterNewSubmission
operation RegisterNewSubmission(subId: Natural, title: String, conf: String)
pre: Conference.allInstances()->exists(c | c.name=conf)
and not Submission.allInstances()->exists(s | s.id=subId and s.conference.name=conf)
post: PendingReviewSubmission.allInstances()->exists(s | s.oclIsNew() and s.id=subId
and s.title=title and s.submissionDate=today() and s.conference.name=conf and result=s))
BAUML approach
Business objects, states, associations and attributes: UML class diagrams.
Business object lifecycle: UML statechart diagram.
Complex event triggering a lifecycle transition: UML activity diagram.
Tasks modeled as OCL operation contracts.
In [Calvanese, Montali, et al. 2014]: methodology to guarantee decidability of
model checking (see before). Estañol's PhD thesis: BAUML to DCDS!
20. raw-sys: marrying workflow nets and databases
(Figure: raw-sys architecture — several cases, each a workflow net with tasks and a local database, reading from and writing to a shared global database.)
raw-sys model [De Masellis et al. 2017]:
Data-aware processes using well-known formalisms:
Data: global and local relational databases.
Process control-flow: workflow nets, enriched with:
Guards (queries over the DBs).
STRIPS-like actions with external inputs from an infinite domain, invoked
upon firing net transitions.
raw-sys verification [De Masellis et al. 2017]:
Map of (un)decidability, exploiting translation to DCDSs.
Encoding into planning systems to handle reachability problems.
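A raw-sys step can be pictured as: evaluate the guard query against the database, then apply STRIPS-like add/delete effects instantiated with an externally supplied binding. A toy Python sketch (the set-of-facts encoding is an assumption):

```python
def fire(db, guard, add, delete, binding):
    """One raw-sys-style step (names are illustrative): check the guard
    against the database, then apply STRIPS-like add/delete effects
    instantiated with an external input binding."""
    if not guard(db, binding):
        return db                       # transition not enabled
    return (db - {d(binding) for d in delete}) | {a(binding) for a in add}

db = {("Request", "r1", "pending")}
new_db = fire(
    db,
    guard=lambda db, b: ("Request", b["r"], "pending") in db,
    add=[lambda b: ("Request", b["r"], "approved")],
    delete=[lambda b: ("Request", b["r"], "pending")],
    binding={"r": "r1"},    # external input from an infinite domain
)
assert new_db == {("Request", "r1", "approved")}
```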
21. db-nets: marrying colored Petri nets and databases
(Figure: a db-net for taxi booking — transitions Proceed To Booking, Reserve(tid, pn), Create Booking, AddBooking(sid, tid, νpdid, n, a, t), Finalize Booking over places such as FreeDrivers, Reserved Taxi, Pickup Data, and Phone Number, with ν-variables binding fresh values; the persistence layer holds tables TAXI(TID: int, PlateNum: string, IsFree: bool), BOOKING(BID: int, TaxiID: int, PickupID: int, PhoneID: int), PHONE(PID: int, Phone: string), and PICKUP DATA(PDID: int, Address: string, Time: date).)
db-net model [Montali and Rivkin 2017], three layers:
1 Persistence: relational database with constraints.
2 Data logic: queries and actions over the persistence layer.
3 Control: colored Petri net with ν-variables, enriched with view places and
transition-action bindings to inspect/update the persistence layer.
Note: Natural formalization of contemporary process modeling suites!
22. db-nets: marrying colored Petri nets and databases (cont.)
db-nets execution, simulation, verification [Montali and Rivkin 2017]:
Foundational results thanks to translation to DCDSs.
Ongoing implementation effort inside www.cpntools.org.
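A distinctive db-net ingredient is the view place: a marking that is never stored in the net but recomputed by a query over the persistence layer. Loosely following the taxi example above, a minimal sqlite3 sketch (schema simplified, names illustrative):

```python
import sqlite3

# Persistence layer with a simplified TAXI table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Taxi(tid INTEGER, plate TEXT, free INTEGER)")
conn.executemany("INSERT INTO Taxi VALUES (?, ?, ?)",
                 [(1, "AB123", 1), (2, "CD456", 0), (3, "EF789", 1)])

def view_place_free_drivers(conn):
    """Query-defined marking: one colored token per free taxi."""
    return {(tid, plate) for tid, plate in
            conn.execute("SELECT tid, plate FROM Taxi WHERE free = 1")}

assert view_place_free_drivers(conn) == {(1, "AB123"), (3, "EF789")}

# Firing a transition updates the persistence layer, and the marking of
# the view place changes accordingly:
conn.execute("UPDATE Taxi SET free = 0 WHERE tid = 1")
assert view_place_free_drivers(conn) == {(3, "EF789")}
```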
23. Acknowledgements
Thanks to the many people who contributed interesting ideas, suggestions, and
discussions, and who collaborated on the presented results.
Giuseppe De Giacomo
Fabio Patrizi
Babak Bagheri Hariri
Riccardo De Masellis
Alin Deutsch
Paolo Felli
Rick Hull
Maurizio Lenzerini
Alessio Lomuscio
Andy Rivkin
Ario Santoso
24. Thank you for your attention!
25. References I
[1] Diego Calvanese, Giuseppe De Giacomo, and Marco Montali.
“Foundations of Data-Aware Process Analysis: A Database Theory
Perspective”. In: Proc. of the 32nd ACM SIGACT SIGMOD SIGAI Symp.
on Principles of Database Systems (PODS). ACM Press, 2013, pp. 1–12.
[2] Babak Bagheri Hariri, Diego Calvanese, Giuseppe De Giacomo, et al.
“Verification of Relational Data-Centric Dynamic Systems with External
Services”. In: Proc. of the 32nd ACM SIGACT SIGMOD SIGAI Symp. on
Principles of Database Systems (PODS). Extended version available at
http://arxiv.org/abs/1203.0024. 2013, pp. 163–174.
[3] Marco Montali and Diego Calvanese. “Soundness of Data-Aware,
Case-Centric Processes”. In: Int. J. on Software Tools for Technology
Transfer (2016). doi: 10.1007/s10009-016-0417-2.
[4] Diego Calvanese, Giuseppe De Giacomo, Marco Montali, and
Fabio Patrizi. “First-Order mu-Calculus over Generic Transition Systems
and Applications to the Situation Calculus”. In: Information and
Computation (2017). To appear.
26. References II
[5] Keishi Okamoto. “Comparing Expressiveness of First-Order Modal
µ-calculus and First-Order CTL*”. In: RIMS Kokyuroku 1708 (2010),
pp. 1–14.
[6] Fernando Rosa-Velardo and David de Frutos-Escrig. “Decidability and
Complexity of Petri Nets with Unordered Data”. In: Theoretical
Computer Science 412.34 (2011), pp. 4439–4451.
[7] Marco Montali and Andrey Rivkin. “Model Checking Petri Nets with
Names Using Data-Centric Dynamic Systems”. In: Formal Aspects of
Computing (2016), pp. 1–27.
[8] Catherine Dufourd, Petr Jancar, and Philippe Schnoebelen. “Boundedness of
Reset P/T Nets”. In: Proc. of the 26th Int. Coll. on Automata,
Languages and Programming (ICALP). Vol. 1644. Lecture Notes in
Computer Science. Springer, 1999, pp. 301–310.
[9] Babak Bagheri Hariri, Diego Calvanese, Alin Deutsch, et al.
“State-Boundedness in Data-Aware Dynamic Systems”. In: Proc. of the
14th Int. Conf. on the Principles of Knowledge Representation and
Reasoning (KR). AAAI Press, 2014.
27. References III
[10] Dmitry Solomakhin et al. “Verification of Artifact-Centric Systems:
Decidability and Modeling Issues”. In: vol. 8274. Lecture Notes in
Computer Science. Springer, 2013, pp. 252–266.
[11] Diego Calvanese, Marco Montali, et al. “Verifiable UML Artifact-Centric
Business Process Models”. In: Proc. of the 23rd Int. Conf. on
Information and Knowledge Management (CIKM). 2014, pp. 1289–1298.
doi: 10.1145/2661829.2662050.
[12] Parosh Aziz Abdulla et al. “Recency-Bounded Verification of Dynamic
Database-Driven Systems”. In: Proc. of the 35th ACM SIGACT
SIGMOD SIGAI Symp. on Principles of Database Systems (PODS). ACM
Press, 2016.
[13] Diego Calvanese, Giorgio Delzanno, and Marco Montali. “Verification of
Relational Multiagent Systems with Data Types”. In: Proc. of the 29th
AAAI Conf. on Artificial Intelligence (AAAI). AAAI Press, 2015,
pp. 2031–2037.
28. References IV
[14] Marco Montali, Diego Calvanese, and Giuseppe De Giacomo.
“Verification of Data-Aware Commitment-Based Multiagent System”. In:
Proc. of the 13th Int. Conf. on Autonomous Agents and Multiagent
Systems (AAMAS). IFAAMAS, 2014, pp. 157–164.
[15] Riccardo De Masellis et al. “Add Data into Business Process Verification:
Bridging the Gap between Theory and Practice”. In: Proc. of the 31st
AAAI Conf. on Artificial Intelligence (AAAI). AAAI Press, 2017,
pp. 1091–1099. url:
http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14627.
[16] Marco Montali and Andrey Rivkin. “DB-Nets: on The Marriage of
Colored Petri Nets and Relational Databases”. In: LNCS Transactions on
Petri Nets and Other Models of Concurrency (2017). To appear.