The document discusses representing and querying norm states using temporal ontology-based data access (OBDA). It presents the QUEN framework which models norms and their state transitions declaratively on top of a relational database. QUEN has three layers: 1) an ontological layer representing norms, 2) a specification of norm state transitions in response to database events, and 3) a legacy relational database storing events. It demonstrates QUEN on an example of patient data access consent, modeling authorizations and their lifecycles. Norm state queries are answered directly over the database using the declarative specifications without materializing states.
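A minimal sketch of the idea under invented names (the event table, transition rules, and consent lifecycle below are illustrative, not QUEN's actual formalism): the state of a norm is derived on demand by replaying declaratively specified transitions over the stored events, so no state table is ever materialized.

```python
from datetime import date

# Hypothetical event log, as it might sit in a legacy relational table.
# Each row: (norm_id, event, timestamp).
EVENTS = [
    ("auth-42", "consent_given",   date(2024, 1, 10)),
    ("auth-42", "consent_paused",  date(2024, 3, 1)),
    ("auth-42", "consent_resumed", date(2024, 4, 2)),
]

# Declarative transition specification: (current_state, event) -> next_state.
TRANSITIONS = {
    (None,        "consent_given"):   "active",
    ("active",    "consent_paused"):  "suspended",
    ("suspended", "consent_resumed"): "active",
    ("active",    "consent_revoked"): "revoked",
}

def norm_state(norm_id: str, at: date):
    """Answer a norm-state query directly over the event log: replay the
    declared transitions up to the query time, materializing nothing."""
    state = None
    for nid, event, ts in sorted(EVENTS, key=lambda r: r[2]):
        if nid == norm_id and ts <= at:
            state = TRANSITIONS.get((state, event), state)
    return state

print(norm_state("auth-42", date(2024, 3, 15)))  # suspended
print(norm_state("auth-42", date(2024, 5, 1)))   # active
```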
Discrete sequential prediction of continuous actions for deep RL, by Jie-Han Chen
This paper proposes a method called SDQN (Sequential Deep Q-Network) to solve continuous action problems using a value-based reinforcement learning approach. SDQN discretizes continuous actions into sequential discrete steps. It transforms the original MDP into an "inner MDP" between consecutive discrete steps and an "outer MDP" between states. SDQN uses two Q-networks: an inner Q-network to estimate state-action values for each discrete step, and an outer Q-network to estimate values between states. It updates the networks using Q-learning for the inner networks and regression to match the last inner Q to the outer Q. The method is tested on a multimodal environment and several MuJoCo tasks, where it outperforms the compared baselines.
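A schematic of the sequential action construction described above (the scoring function is a deterministic stand-in for the inner Q-network; the bin count and action dimensionality are invented):

```python
import numpy as np

BINS = np.linspace(-1.0, 1.0, 5)   # discretized levels for each action dimension
ACTION_DIM = 3

def inner_q(state, partial_action, candidate):
    """Stand-in for the inner Q-network: scores appending one more
    discretized component to the partial action. SDQN would use a neural
    network here; a fixed deterministic function keeps the sketch runnable."""
    x = np.concatenate([state, partial_action, [candidate]])
    return float(np.tanh(x).sum())

def select_action(state):
    """Greedily build a continuous action one dimension at a time: each
    step of this 'inner MDP' is an ordinary discrete argmax over bins.
    Training would update inner_q by Q-learning along these steps and
    regress the last inner Q onto the outer Q(s, a) between states."""
    partial = []
    for _ in range(ACTION_DIM):
        scores = [inner_q(state, np.array(partial), b) for b in BINS]
        partial.append(float(BINS[int(np.argmax(scores))]))
    return np.array(partial)

print(select_action(np.zeros(4)))   # a 3-dimensional continuous action
```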
This document contains a biography and credentials for Dr. Tyrone W A Grandison. It lists his educational background which includes a BSc, MSc, and PhD from various universities. It outlines his work experience including 10 years at IBM and current work for the White House. It provides details on his recognition and awards in computer science and engineering. It notes his publications record of over 100 papers and 47 patents.
The document proposes a framework called Fast Forward With Degradation (FFWD) to handle load peaks in streaming applications using load shedding techniques. FFWD uses a load manager, load shedding filter, and policies to monitor resource usage, determine when load shedding is needed, and minimize output quality degradation. The load manager computes the throughput needed for stability based on the arrival and service rates. It leverages queuing theory and models the system's utilization and queue size to determine the required throughput to avoid overloading. FFWD aims to mitigate high resource usage during peaks while avoiding uncontrolled event loss and degradation of output quality.
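The summary mentions queuing theory; as a minimal illustration (assuming an M/M/1-style model, which the slides may refine), stability requires utilization rho = lambda/mu < 1, so the load manager can derive the service throughput needed for a target utilization and the fraction of events to shed when capacity falls short:

```python
def required_throughput(arrival_rate: float, target_utilization: float = 0.8) -> float:
    """Service rate (events/s) needed to keep utilization rho = lambda/mu
    at or below the target; rho < 1 is necessary for a stable queue."""
    assert 0 < target_utilization < 1
    return arrival_rate / target_utilization

def mm1_queue_length(arrival_rate: float, service_rate: float) -> float:
    """Expected number of queued events in an M/M/1 system:
    Lq = rho^2 / (1 - rho)."""
    rho = arrival_rate / service_rate
    assert rho < 1, "unstable: arrivals exceed service capacity"
    return rho * rho / (1 - rho)

lam = 900.0                                   # observed arrival rate (events/s)
mu = required_throughput(lam, 0.8)            # 1125 events/s keeps rho at 0.8
print(mu, mm1_queue_length(lam, mu))          # Lq = 0.64 / 0.2 = 3.2

# During a peak, if the real capacity mu_max falls short, the shedding
# filter must drop the excess fraction of incoming events:
mu_max = 1000.0
shed_fraction = max(0.0, 1.0 - (0.8 * mu_max) / lam)
print(shed_fraction)                          # drop about 11% of events
```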
No system connected to the Internet is completely safe. Cyber-attacks are the norm: everyone with a web presence is attacked multiple times each week. To further complicate this scenario, government entities have been found to be weakening Web security protocols and compromising business systems in the interest of national security, and hyper-competitive companies have been caught engaging in cyber-espionage. Detecting these attacks in real time is difficult for a number of reasons, the primary ones being the dynamism and ingenuity of attackers and the nature of contemporary real-time attack detection systems. In this talk, I will share insights on an alternative: recognizing attacks within a short period after the incident using audit analysis.
Detecting network intrusions protects a computer network from unauthorized users, possibly including insiders. The intrusion-detector learning task is to build a predictive model (i.e., a classifier) capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections.
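A minimal sketch of such a connection classifier (the features and labeling rule are synthetic stand-ins for KDD-Cup-style connection records):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for connection records:
# duration, bytes sent, bytes received, failed-login rate.
n = 2000
X = rng.exponential(scale=[10, 1e4, 1e4, 0.1], size=(n, 4))
# Toy labeling rule: unusually heavy traffic or many failed logins is "bad".
y = ((X[:, 1] > 1.5e4) | (X[:, 3] > 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["normal", "attack"]))
```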
1. Real-time systems are systems where the correctness depends on both the logical result and the time at which the results are produced.
2. Real-time systems have performance deadlines where computations and actions must be completed. Deadlines can be time-driven or event-driven.
3. Real-time systems are classified as hard, firm, or soft depending on how critical it is to meet deadlines. They are used in applications like medical equipment, automotive systems, and avionics; a classical schedulability check for hard deadlines is sketched below.
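The list above stops at the hard/firm/soft classification; as one classical, concrete example of reasoning about hard deadlines (from the scheduling literature, not from this document), here is the Liu-Layland rate-monotonic utilization test:

```python
def rm_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """Sufficient (not necessary) rate-monotonic schedulability test
    (Liu & Layland, 1973). tasks = [(worst_case_exec_time, period), ...],
    with each task's deadline equal to its period."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Three periodic tasks, e.g. sensor sampling, control loop, logging.
print(rm_schedulable([(1, 10), (2, 20), (3, 50)]))  # U = 0.26 <= 0.780 -> True
```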
The audit report summarizes an audit of the smart contracts for Strips Finance. The auditors identified 6 issues in total, including 1 high severity vulnerability, 1 finding allowing improved logic, 3 medium severity issues around reentrancy, price calculations, and admin keys, and 1 low severity possible reentrancy issue. The report provides detailed descriptions of each issue and its status.
- The document discusses verification of Data-Centric Dynamic Systems (DCDSs), which combine data and processes. DCDSs have a data layer consisting of a relational database and constraints, and a process layer consisting of condition-action rules that can call external services.
- The verification problem involves checking if a DCDS satisfies a temporal/dynamic property, given as a formula in first-order μ-calculus. However, unrestricted first-order quantification and even simple temporal properties can make verification undecidable.
- The talk proposes restricting quantification and properties to obtain decidable verification, by constructing a finite-state abstraction of the DCDS transition system that soundly and completely represents its behavior; a toy version of the abstraction idea is sketched below.
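A deliberately tiny illustration of the finite-state abstraction idea (not the actual construction from the talk): a data variable over an infinite domain is collapsed into the finitely many properties the restricted formulas can distinguish, and the abstract state space is explored exhaustively.

```python
from collections import deque

# Toy data-aware system: one variable x over the integers. Rules:
#   rule 1: if x != 0 then x := -x
#   rule 2: x := fresh()   (an external service may return any value)
# Abstraction: track only whether x is "zero", "pos", or "neg".

def successors(s: str) -> set:
    out = set()
    if s != "zero":
        out.add({"pos": "neg", "neg": "pos"}[s])   # rule 1: x := -x
    out.update({"zero", "pos", "neg"})             # rule 2: x := fresh()
    return out

# Finite-state abstraction built by BFS: faithful for properties that
# only speak about the abstract predicates.
seen, frontier = {"zero"}, deque(["zero"])
while frontier:
    s = frontier.popleft()
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

print(seen)   # {'zero', 'pos', 'neg'}: 3 abstract states instead of infinitely many
```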
Credit Default Swap (CDS) Rate Construction by Machine Learning Techniques, by Zhongmin Luo
1. Financial institutions need to construct proxy CDS rates for counterparties lacking liquid CDS quotes, which are required for CVA pricing, CVA risk charge calculation, etc;
2. Existing CDS Proxy Methods do not meet regulatory requirements and are vulnerable to arbitrage;
3. After investigating the 8 most popular Machine Learning algorithms, we show that Machine Learning techniques can be used to construct reliable CDS proxies that meet regulatory requirements while being free from the above problems;
4. Feature variable selection can be critical for the performance of CDS-proxy construction methods;
5. The effects of feature variable correlations on classification performance have to be investigated for financial data; a minimal cross-validation comparison is sketched after this list.
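A minimal sketch of the kind of classifier comparison described above (synthetic data stands in for the rating/market features, and 3 algorithms stand in for the paper's 8):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the setup: features such as sector/region/rating
# indicators and market observables, label = rating bucket of the proxy.
X, y = make_classification(n_samples=1500, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0)),
                    ("k-NN", KNeighborsClassifier())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>13}: {scores.mean():.3f} +/- {scores.std():.3f}")
```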
This document provides an overview and agenda for a presentation on creating a "Hello World" program with Cisco's Data in Motion (DMo) software. The presentation introduces DMo and how it can manage and analyze data at the edge. It discusses how DMo represents a paradigm shift with edge intelligence and provides examples of railway and utilities use cases. The document explains DMo's programming model involving dynamic data definitions, patterns, conditions, and actions. It also demonstrates how to set up a DMo instance, create timer and event rules to read a light sensor and control an LED based on the sensor readings.
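The summary describes rules built from patterns, conditions, and actions; the sketch below imitates that shape in plain Python. The sensor reading, threshold, and LED calls are hypothetical stand-ins, not the DMo API:

```python
import time

# Hypothetical device interface: stand-ins, not the actual DMo API.
def read_light_sensor() -> int:
    return 512                       # pretend ADC reading from the sensor

def set_led(on: bool) -> None:
    print("LED", "on" if on else "off")

# Rules in the condition/action style the summary describes:
# act when the light level crosses a (made-up) threshold.
RULES = [
    {"condition": lambda lux: lux < 300,  "action": lambda: set_led(True)},
    {"condition": lambda lux: lux >= 300, "action": lambda: set_led(False)},
]

# Timer rule: poll once per second (three iterations here; a device
# deployment would loop indefinitely).
for _ in range(3):
    reading = read_light_sensor()
    for rule in RULES:
        if rule["condition"](reading):
            rule["action"]()
    time.sleep(1)
```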
Basil is a scalable Byzantine fault-tolerant (BFT) database that introduces transactions and concurrency control to BFT systems. It addresses limitations of prior BFT systems such as leader bottlenecks, limited scalability, and lack of transactional semantics. Basil employs a client-driven commit approach and sharding to enable leaderless operation and linear scalability. It guarantees Byzantine serializability for safety and Byzantine independence for liveness. Experiments show Basil outperforms prior BFT databases and scales linearly with the number of shards.
Self-adaptive container monitoring with performance-aware Load-Shedding policies, by Rolando Brondolin, PhD student in System Architecture at Politecnico di Milano
Customers often switch or unsubscribe (churn) from their telecom providers for a variety of reasons, ranging from unsatisfactory service and better pricing from competitors to customers moving to different cities. Therefore, telecom companies are interested in analyzing the patterns of customers who churn from their services and using the resulting analysis to determine which customers are more likely to unsubscribe in the future. One such company is Telco Systems, which is interested in identifying the precise patterns of its churning customers and has provided the customer data for this project.
Data Quality Challenges & Solution Approaches in Yahoo!’s Massive Data, by DATAVERSITY
Data is among Yahoo!'s most strategic assets - from user engagement and insights data to revenue and billing data. Three years ago, Yahoo! invested in a Data Quality program.
By applying industry principles and techniques, the Data Quality program has provided proactive and reactive system solutions to Audience data issues and their root causes. It addresses the technical challenges of data quality at scale and engages the rest of the organization in the solution: from product teams, through the data stack (data sourcing, ETL, aggregations, and analytics), to the analyst and science teams who consume the data. This methodology is now being scaled to all data across Yahoo!, including Search and Display Advertising.
The document discusses container monitoring and proposes a framework called FFWD (Fast Forward With Degradation) to address load peaks in container-based streaming applications. FFWD uses load-shedding techniques to mitigate high resource usage. It consists of a Load Manager that computes the required throughput to ensure stability, a load shedding (LS) filter that determines where to shed load, and policies that specify how much load to shed to minimize quality degradation while avoiding uncontrolled loss of events. The Load Manager monitors utilization and queue size based on queuing theory to dynamically adjust the throughput. FFWD aims to control resource usage while maximizing accuracy of monitored metrics during overloading conditions.
BigDansing presentation slides for SIGMOD 2015, by Zuhair Khayyat
BigDansing is a system for cleansing big data. It takes a declarative approach, using logical operators to detect violations of quality rules and generate repairs. The system is optimized for scalability using techniques like shared scans, fast inequality joins, and distributed computation of equivalence classes. It aims to provide an abstraction layer that hides complexity while enabling scalability and portability across centralized and distributed environments. Experiments showed it can cleanse data several orders of magnitude faster than alternative approaches.
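A toy, single-machine version of rule-violation detection (the rules and tuples are invented; BigDansing runs the same logical operators distributedly and replaces the quadratic scan with fast inequality joins):

```python
from itertools import combinations, permutations

rows = [
    {"id": 1, "zip": "10001", "city": "NYC",     "salary": 3000, "tax_rate": 10},
    {"id": 2, "zip": "10001", "city": "Newark",  "salary": 4000, "tax_rate": 15},
    {"id": 3, "zip": "60601", "city": "Chicago", "salary": 5000, "tax_rate": 12},
]

# Rule 1, a functional dependency zip -> city: two tuples with the same
# zip code must agree on the city.
fd_violations = [(a["id"], b["id"]) for a, b in combinations(rows, 2)
                 if a["zip"] == b["zip"] and a["city"] != b["city"]]

# Rule 2, a denial constraint with inequalities: nobody may earn less yet
# be taxed at a higher rate.
dc_violations = [(a["id"], b["id"]) for a, b in permutations(rows, 2)
                 if a["salary"] < b["salary"] and a["tax_rate"] > b["tax_rate"]]

print(fd_violations)  # [(1, 2)]: same zip, different city
print(dc_violations)  # [(2, 3)]: earns less than row 3 but taxed at a higher rate
```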
The document discusses various techniques for managing the scope of a software project, including defining the system scope, establishing a requirements baseline, prioritizing requirements and use cases, and managing stakeholder expectations. It emphasizes the importance of scope management in selecting requirements for each development iteration and controlling what features and functions will be included in the project. It also outlines the roles of a requirements manager and product champion in helping to define and maintain an appropriate project scope.
This document discusses quality by design (QbD) approaches for biopharmaceutical development. QbD focuses on designing quality into the product and process based on an understanding of critical quality attributes and critical process parameters. Key aspects of QbD include identifying critical attributes and parameters, using tools like design of experiments to understand their impact, defining a design space, and ensuring robustness through continuous monitoring and improvement. Statistical tools and multidisciplinary teams are important for successful QbD implementation.
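As a tiny illustration of the design-of-experiments tool mentioned above (the factors, levels, and responses are invented): a 2^2 full factorial over two coded process parameters, with main effects estimated from the corner runs.

```python
from itertools import product

# Hypothetical process parameters at coded levels -1/+1, e.g. temperature
# and pH, with a pretend measured critical quality attribute per run.
levels = [-1, 1]
design = list(product(levels, levels))           # 2^2 full factorial
response = {(-1, -1): 91.0, (1, -1): 94.5, (-1, 1): 88.0, (1, 1): 92.5}

def main_effect(factor_index: int) -> float:
    """Main effect = mean response at the high level minus mean at the low."""
    hi = [response[r] for r in design if r[factor_index] == 1]
    lo = [response[r] for r in design if r[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print("factor 1 (e.g. temperature) effect:", main_effect(0))  # +4.0
print("factor 2 (e.g. pH) effect:         ", main_effect(1))  # -2.5
```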
Disaster and Recovery Business Impact Analysis System .docx, by duketjoy27252
Disaster and Recovery
Business Impact Analysis
System Description/Purpose
Impact to business if degradation
Estimated Downtime
Resource Requirements.
Business Contingency Plan
Incident Response Policy
Purpose
Identifying and Reporting Incidents
Mitigation and Containment
Questions?
Overview
Shawn Kirkland
Purpose
Determine mission/business processes and recovery criticality.
Identify resource requirements.
Identify recovery priorities for system resources.
System Description/Purpose
Impact to business if degradation
Estimated Downtime
Resource Requirements.
Business Impact Analysis
Shawn Kirkland
Determine mission/business processes and recovery criticality. Mission/business processes supported by the system are identified and the impact of a system disruption to those processes is determined along with outage impacts and estimated downtime. The downtime should reflect the maximum that an organization can tolerate while still maintaining the mission.
Identify resource requirements. Realistic recovery efforts require a thorough evaluation of the resources required to resume mission/business processes and related interdependencies as quickly as possible. Examples of resources that should be identified include facilities, personnel, equipment, software, data files, system components, and vital records.
Identify recovery priorities for system resources. Based upon the results from the previous activities, system resources can more clearly be linked to critical mission/business processes. Priority levels can be established for sequencing recovery activities and resources.
This document is used to build the Dream Landing’s Database Server Information System Contingency Plan (ISCP) and is included as a key component of the ISCP. It also may be used to support the development of other contingency plans associated with the system, including, but not limited to, the Disaster Recovery Plan (DRP) or Cyber Incident Response Plan.
Operating System: Microsoft Windows Server 2008 R2
Application: Microsoft SQL Server 2008 Enterprise Edition
Hardware: Dell R720
Location: Server rack in the second-floor server room
Connection: System administrator connects via local area network; other users connect remotely
DR Method: 1 full backup weekly and dailies every day, 3 hours after close of business
System Description
Shawn Kirkland
The Dream Landing’s database server comprises Microsoft SQL Server 2008 Enterprise Edition running on Microsoft Windows Server 2008 R2; this platform is housed on a Dell R720 server-class system. The database server sits in the server rack in the second-floor server room. Local administrators connect directly through the local area network; other users connect indirectly through the web server. Daily snapshot backups are taken every day, 3 hours after close of business.
Impact table. Mission/Business Process: Query customer record. Description: Database retrieval of a customer record.
Apoorva Javadekar - Role of Reputation For Mutual Fund Flows
According to Apoorva Javadekar, from this presentation we can conclude that there is some second-half risk-shifting by funds with a bad reputation, and that fund flow heterogeneity could be explained by the presence of loss-averse investors.
Data Protection Compliance Check - Outsourcing - Part 2 "Paper" (C2P relation..., by Tommy Vandepitte
Outsourcing data processing operations entails specific risks and requirements under the law and under sound risk management.
Therefore a set of three templates is developed to look at outsourcing of data processing operations:
(1) the (internal) organisation of the controller including policies and procedures,
(2) the relationship between the controller and the processor, mainly via the agreement and
(3) the (internal) organisation of the processor.
This template aims to give guidance to a check on a specific relationship between a controller and a processor, thus limiting the scope.
The DPCC contains checklists. They aim to provide some guidance in the check. However, be aware that some (parts of) checklists may not apply and that no checklist ever includes all possible relevant questions. So check with open eyes.
This template addresses that relationship looking at several stages from the controller side
(a) in the selection,
(b) in the agreement and
(c) in (the follow-up of) the performance.
This template should be used in a risk-based fashion. Therefore it is expected that critical, key, and/or high-risk outsourced data processing operations of the controller are submitted to a check with priority.
The result of this check is, hopefully, a certain comfort in the application of the controller’s procedures and rules with regard to outsourcing data processing operations. If such comfort is not found, it should be determined whether amendments can be made, through a change to the agreement or the follow-up mechanisms, or through better discipline in applying them. Also, lessons may be learned with regard to the effectiveness of the controller’s procedures and rules.
The document discusses the Standard Transfer Specification (STS), which is a secure message protocol for prepayment meters. It has been an industry standard since 1994 and is now also an IEC standard. There are approximately 5 million STS meters installed worldwide. The STS Association oversees the STS standard and provides key management and product certification services to support its use. The association is working to enhance the STS and integrate it further into the family of IEC 62055 standards for prepayment metering.
Multiple objectives in Collaborative Filtering (RecSys 2010), by Tamas Jambor
This document discusses using multiple objectives in collaborative filtering recommender systems. It proposes a framework that optimizes for an accuracy baseline objective while also considering additional user and system objectives. Specifically, it explores promoting less popular, "long tail" items from a user perspective, and incorporating item availability constraints from a system perspective to reduce waiting times. Experiments show the approach can improve these other objectives with only minor losses to recommendation accuracy. The framework provides a flexible way to optimize collaborative filtering for multiple goals.
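A minimal re-ranking sketch of the accuracy-versus-long-tail trade-off (the scores, popularity counts, and weighting are invented, not the paper's exact objective):

```python
# Predicted rating (accuracy objective) and popularity for candidate items.
candidates = {
    "item_a": {"score": 4.6, "popularity": 9800},
    "item_b": {"score": 4.5, "popularity": 120},   # long-tail item
    "item_c": {"score": 4.2, "popularity": 40},    # long-tail item
}

def rerank(cands, tail_weight: float = 0.3):
    """Blend accuracy with a long-tail bonus: a higher tail_weight promotes
    less popular items at a small cost in predicted accuracy."""
    max_pop = max(c["popularity"] for c in cands.values())
    def blended(item):
        c = cands[item]
        tail_bonus = 1.0 - c["popularity"] / max_pop   # 0 for the most popular
        return (1 - tail_weight) * c["score"] + tail_weight * 5.0 * tail_bonus
    return sorted(cands, key=blended, reverse=True)

print(rerank(candidates, tail_weight=0.0))  # pure accuracy: a, b, c
print(rerank(candidates, tail_weight=0.3))  # long tail promoted: b, c, a
```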
This document summarizes a PhD research project that developed a system dynamics model to evaluate transport safety policies for commercial motorcycle operation in Nigeria. The research aimed to (1) identify factors contributing to safety problems and their relationships, and (2) develop a dynamic model to understand how driver behavior develops and is influenced over time. The research involved interviews, data collection and analysis in Nigeria to develop a causal loop diagram and stock and flow model. The model was used to test scenarios such as increasing enforcement capacity, removing expensive vehicle ownership options, and increasing prosecution rates. Key findings indicated that system dynamics modeling was useful, entry methods into the trade significantly impacted problems, and combining measures had more leverage than single interventions.
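A minimal stock-and-flow simulation in the spirit of the model described (the stock, rates, and policy lever are invented for illustration):

```python
# Stock: non-compliant riders; flows: entry into the trade and removal via
# enforcement. One policy scenario raises enforcement capacity.
def simulate(years: int = 10, entry_rate: float = 500.0,
             enforcement_capacity: float = 300.0) -> list:
    riders = 2000.0
    history = []
    for _ in range(years):
        removals = min(enforcement_capacity, 0.4 * riders)  # capacity-limited outflow
        riders += entry_rate - removals                     # Euler step, dt = 1 year
        history.append(round(riders))
    return history

print(simulate())                            # baseline: the stock keeps growing
print(simulate(enforcement_capacity=600.0))  # scenario: stock declines toward ~1250
```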
There are several challenges in evaluating information retrieval systems, including the subjectivity and dynamic nature of relevancy judgments. Common evaluation metrics include precision, recall, F-measure, mean average precision, discounted cumulative gain, and normalized discounted cumulative gain. These metrics are calculated using test collections consisting of queries, relevant documents, and system rankings to measure how closely systems can match human relevance assessments at different levels of a ranked list.
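For concreteness, minimal implementations of three of the listed metrics under binary relevance (a simplification; graded relevance changes the gain term in nDCG):

```python
import math

def precision_recall(ranked, relevant, k):
    """Precision@k and recall for a ranked list against a relevant set."""
    hits = sum(1 for d in ranked[:k] if d in relevant)
    return hits / k, hits / len(relevant)

def ndcg(ranked, relevant, k):
    """Binary-gain nDCG@k: each hit is discounted by log2(rank + 1)."""
    dcg = sum(1 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal

ranked = ["d3", "d1", "d7", "d2", "d9"]       # system ranking
relevant = {"d1", "d2", "d5"}                 # human relevance judgments
p, r = precision_recall(ranked, relevant, k=5)
print(p, r, ndcg(ranked, relevant, k=5))      # 0.4, 0.667, ~0.498
```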
Semantic Integration of Patient Data and Quality Indicators based on openEHR ..., by Kathrin Dentler
The document discusses using openEHR archetypes to semantically integrate patient data and quality indicators. It describes mapping clinical data from a Dutch hospital database onto openEHR archetypes and SNOMED CT codes. Quality indicators were then formalized using the archetypes and constraints to query the structured patient data and calculate indicator results. The approach demonstrated the potential for open archetypes to enable sharable, computable quality indicator definitions and integration of heterogeneous clinical data sources.
Analysis of the Privacy-Preserving and Content-Protecting Location-Based Queries, by kavidhapr
This document proposes a two-stage solution for secure location-based queries that improves performance. The first stage uses oblivious transfer to privately determine the user's location within a public grid. The second stage uses private information retrieval for the user to efficiently retrieve an appropriate data block from the private grid. The solution introduces a formal security model and analyzes the security of the novel protocol. It aims to achieve privacy protection for both the user and server in location-based services.
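The paper's protocol combines oblivious transfer with computational private information retrieval; as a far simpler illustration of the PIR idea alone, here is the classic two-server XOR scheme, which assumes two non-colluding servers (this is a toy, not the paper's protocol):

```python
import secrets
from functools import reduce

db = [0, 1, 1, 0, 1, 0, 0, 1]    # public database of bits, held by both servers
i = 4                             # the index the client wants, kept private

# Client: a uniformly random selection vector, and a copy with bit i flipped.
q1 = [secrets.randbelow(2) for _ in db]
q2 = q1.copy()
q2[i] ^= 1

def server_answer(query):
    """Each (non-colluding) server XORs the bits its query selects;
    a single query is uniformly random, so it reveals nothing about i."""
    return reduce(lambda a, b: a ^ b,
                  (bit for bit, take in zip(db, query) if take), 0)

# XORing the two answers cancels everything except db[i].
assert server_answer(q1) ^ server_answer(q2) == db[i]
print("recovered bit:", server_answer(q1) ^ server_answer(q2))
```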
The document discusses challenges with modeling processes that involve multiple interacting objects. Conventional process modeling approaches encourage separating objects and focusing on one object type per process, which can lead to issues when objects interact. The document proposes modeling objects as first-class citizens and capturing relationships between objects to better represent real-world processes, where objects correlate and influence each other. It provides examples of how conventional case-centric modeling can struggle to accurately capture a hiring process that involves interacting candidate, application, job offer, and other objects.
Slides of our BPM 2022 paper on "Reasoning on Labelled Petri Nets and Their Dynamics in a Stochastic Setting", which received the best paper award at the conference. Paper available here: https://link.springer.com/chapter/10.1007/978-3-031-16103-2_22
Slides of the keynote speech on "Constraints for process framing in Augmented BPM" at the AI4BPM 2022 International Workshop, co-located with BPM 2022. The keynote focuses on the problem of "process framing" in the context of the new vision of "Augmented BPM", where BPM systems are augmented with AI capabilities. This vision is described in a manifesto, available here: https://arxiv.org/abs/2201.12855
Keynote speech at KES 2022 on "Intelligent Systems for Process Mining". I introduce process mining, discuss why process mining tasks should be approached by using intelligent systems, and show a concrete example of this combination, namely (anticipatory) monitoring of evolving processes against temporal constraints, using techniques from knowledge representation and formal methods (in particular, temporal logics over finite traces and their automata-theoretic characterization).
Presentation (jointly with Claudio Di Ciccio) on "Declarative Process Mining", as part of the 1st Summer School in Process Mining (http://www.process-mining-summer-school.org). The presentation summarizes 15 years of research in declarative process mining, covering declarative process modeling, reasoning on declarative process specifications, discovery of process constraints from event logs, conformance checking, and monitoring of process constraints at runtime. This is done without ad-hoc algorithms, relying instead on well-established techniques at the intersection of formal methods, artificial intelligence, and data science.
1. The document discusses representing business processes with uncertainty using ProbDeclare, an extension of Declare that allows constraints to have uncertain probabilities.
2. ProbDeclare models contain both crisp constraints that must always hold and probabilistic constraints that hold with some probability. This leads to multiple possible "scenarios" depending on which constraints are satisfied.
3. Reasoning involves determining which scenarios are logically consistent using LTLf, and computing the probability distribution over scenarios by solving a system of inequalities defined by the constraint probabilities; a tiny instance is sketched after this list.
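A deliberately tiny instance of point 3 (the constraints and probabilities are invented, and realistic models need a solver for the resulting inequalities rather than an assignment by inspection):

```python
from itertools import product

# Two probabilistic constraints over traces of an activity "a":
#   C1 = "a occurs at least once"   (holds with probability 0.8)
#   C2 = "a never occurs"           (holds with probability 0.2)
p = {"C1": 0.8, "C2": 0.2}

# A scenario fixes, for each constraint, whether it holds. Logical
# reasoning (LTLf in the paper; hardcoded here) discards impossible
# combinations: C1 and C2 are mutually exclusive and jointly exhaustive.
def consistent(holds_c1: bool, holds_c2: bool) -> bool:
    return holds_c1 != holds_c2

scenarios = [s for s in product([True, False], repeat=2) if consistent(*s)]
print(scenarios)                     # [(True, False), (False, True)]

# Each constraint's probability pins down the total mass of the scenarios
# in which it holds; with only two consistent scenarios, the system of
# inequalities collapses to a direct assignment.
mass = {(True, False): p["C1"], (False, True): p["C2"]}
assert abs(sum(mass.values()) - 1.0) < 1e-9
print(mass)
```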
Presentation on "From Case-Isolated to Object-Centric Processes - A Tale of Two Models" as part of the Hasselt University BINF Research Seminar Series (see https://www.uhasselt.be/en/onderzoeksgroepen-en/binf/research-seminar-series).
Invited seminar on "Modeling and Reasoning over Declarative Data-Aware Processes" as part of the KRDB Summer Online Seminars 2020 (https://www.inf.unibz.it/krdb/sos-2020/).
Presentation of the paper "Soundness of Data-Aware Processes with Arithmetic Conditions" at the 34th International Conference on Advanced Information Systems Engineering (CAiSE 2022). Paper available here: https://doi.org/10.1007/978-3-031-07472-1_23
Abstract:
Data-aware processes represent and integrate structural and behavioural constraints in a single model, and are thus increasingly investigated in business process management and information systems engineering. In this spectrum, Data Petri nets (DPNs) have gained increasing popularity thanks to their ability to balance simplicity with expressiveness. The interplay of data and control-flow makes checking the correctness of such models, specifically the well-known property of soundness, crucial and challenging. A major shortcoming of previous approaches for checking soundness of DPNs is that they consider data conditions without arithmetic, an essential feature when dealing with real-world, concrete applications. In this paper, we attack this open problem by providing a foundational and operational framework for assessing soundness of DPNs enriched with arithmetic data conditions. The framework comes with a proof-of-concept implementation that, instead of relying on ad-hoc techniques, employs off-the-shelf established SMT technologies. The implementation is validated on a collection of examples from the literature, and on synthetic variants constructed from such examples.
Presentation of the paper "Probabilistic Trace Alignment" at the 3rd International Conference on Process Mining (ICPM 2021). Paper available here: https://doi.org/10.1109/ICPM53251.2021.9576856
Abstract:
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model. Alignment-based approaches for conformance checking have so far used crisp process models as a reference. Recent probabilistic conformance checking approaches check the degree of conformance of an event log as a whole with respect to a stochastic process model, without providing alignments. For the first time, we introduce a conformance checking approach based on trace alignments using stochastic Workflow nets. This requires handling two possibly contrasting forces: the cost of the alignment on the one hand, and the likelihood of the model trace with respect to which the alignment is computed on the other.
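A minimal scoring sketch of that trade-off (the traces, probabilities, costs, and linear combination are invented for illustration, not the paper's formulation):

```python
import math

# Candidate model traces with their stochastic-net probabilities and the
# edit cost of aligning the observed trace to each (all values invented).
candidates = [
    {"trace": ["a", "b", "c"], "prob": 0.70, "cost": 2},
    {"trace": ["a", "c"],      "prob": 0.05, "cost": 1},
]

def alignment_score(c, alpha: float = 0.5) -> float:
    """Balance the two forces: low alignment cost versus high likelihood
    of the reference model trace (via its negative log-probability)."""
    return alpha * c["cost"] + (1 - alpha) * -math.log(c["prob"])

best = min(candidates, key=alignment_score)
print(best["trace"])   # ['a', 'b', 'c']: costlier to align, but far likelier
```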
Presentation of the paper "Strategy Synthesis for Data-Aware Dynamic Systems with Multiple Actors" at the 7th International Conference on Principles of Knowledge Representation and Reasoning (KR 2020). Paper available here: https://proceedings.kr.org/2020/32/
Abstract: The integrated modeling and analysis of dynamic systems and the data they manipulate has been long advocated, on the one hand, to understand how data and corresponding decisions affect the system execution, and on the other hand to capture how actions occurring in the systems operate over data. KR techniques proved successful in handling a variety of tasks over such integrated models, ranging from verification to online monitoring. In this paper, we consider a simple, yet relevant model for data-aware dynamic systems (DDSs), consisting of a finite-state control structure defining the executability of actions that manipulate a finite set of variables with an infinite domain. On top of this model, we consider a data-aware version of reactive synthesis, where execution strategies are built by guaranteeing the satisfaction of a desired linear temporal property that simultaneously accounts for the system dynamics and data evolution.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the 18th Int. Conference on Business Process Management (BPM 2020). Paper available here: https://doi.org/10.1007/978-3-030-58666-9_3
Abstract: Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Presentation of the paper "Extending Temporal Business Constraints with Uncertainty" at the CAiSE2020 Forum. The paper is available here: https://link.springer.com/chapter/10.1007/978-3-030-58135-0_8
Abstract: Conformance checking is a fundamental task to detect deviations between the actual and the expected courses of execution of a business process. In this context, temporal business constraints have been extensively adopted to declaratively capture the expected behavior of the process. However, traditionally, these constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, we equip business constraints with a natural, probabilistic notion of uncertainty. We discuss the semantic implications of the resulting framework and show how probabilistic conformance checking and constraint entailment can be tackled therein.
Presentation of the paper "Modeling and Reasoning over Declarative Data-Aware Processes with Object-Centric Behavioral Constraints" at the 17th Int. Conference on Business Process Management (BPM 2019). Paper available here: https://link.springer.com/chapter/10.1007/978-3-030-26619-6_11
Abstract
Existing process modeling notations ranging from Petri nets to BPMN have difficulties capturing the data manipulated by processes. Process models often focus on the control flow, lacking an explicit, conceptually well-founded integration with real data models, such as ER diagrams or UML class diagrams. To overcome this limitation, Object-Centric Behavioral Constraints (OCBC) models were recently proposed as a new notation that combines full-fledged data models with control-flow constraints inspired by declarative process modeling notations such as DECLARE and DCR Graphs. We propose a formalization of the OCBC model using temporal description logics. The obtained formalization allows us to lift all reasoning services defined for constraint-based process modeling notations without data, to the much more sophisticated scenario of OCBC. Furthermore, we show how reasoning over OCBC models can be reformulated into decidable, standard reasoning tasks over the corresponding temporal description logic knowledge base.
Keynote speech at the Belgian Process Mining Research Day 2021. I discuss the open, critical challenge of data preparation in process mining, considering the case where the original event data are implicitly stored in (legacy) relational databases. This case covers the common situation where event data are stored inside the data layer of an ERP or CRM system. This is usually handled using manual, ad-hoc, error-prone ETL procedures. I propose instead to adopt a pipeline based on semantic technologies, in particular the framework of ontology-based data access (also known as virtual knowledge graph). The approach is code-less, and relies on three main conceptual steps: (1) the creation of a data model capturing the relevant classes, attributes, and associations in the domain of interest; (2) the definition of declarative mappings from the source database to the data model, following the ontology-based data access paradigm; (3) the annotation of the data model with indications of which classes/associations/attributes provide the relevant notions of case, events, event attributes, and event-to-case relation. Once this is done, the framework automatically extracts the event log from the legacy data. This makes it extremely smooth to generate logs by taking multiple perspectives on the same reality. The approach has been operationalized in the onprom tool, which employs semantic web standard languages for the various steps, and the XES standard as the target format for the event logs.
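A schematic of the annotation-driven extraction (the SQL mapping and table names are hypothetical; onprom expresses this with OBDA mappings and ontology annotations rather than Python, but the key point survives: the log is derived from the database, not hand-built):

```python
from datetime import datetime

# Hypothetical mapping: which query yields events, and how each event
# links to its case via the annotated case identifier.
EVENT_MAPPING = """
    SELECT order_id   AS case_id,
           status     AS activity,
           changed_at AS timestamp
    FROM order_status_history
"""

def fake_run_sql(query: str):
    # Stand-in for a database cursor, returning rows the mapping would yield.
    return [
        {"case_id": 1, "activity": "created", "timestamp": datetime(2024, 1, 1)},
        {"case_id": 2, "activity": "created", "timestamp": datetime(2024, 1, 2)},
        {"case_id": 1, "activity": "shipped", "timestamp": datetime(2024, 1, 3)},
    ]

def extract_log(run_sql):
    """Group the mapped events by case, ordered by time: an XES-style log."""
    log = {}
    for e in sorted(run_sql(EVENT_MAPPING), key=lambda e: e["timestamp"]):
        log.setdefault(e["case_id"], []).append(e["activity"])
    return log

print(extract_log(fake_run_sql))   # {1: ['created', 'shipped'], 2: ['created']}
```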
Keynote speech at the 7th International Workshop on DEClarative, DECision and Hybrid approaches to processes ( DEC2H 2019) In conjunction with BPM 2019.
This is a talk about the combined modeling and reasoning techniques for decisions, background knowledge, and work processes.
The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language.
In this keynote, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings, through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.
Presentation at "Ontology Make Sense", an event in honor of Nicola Guarino, on how to integrate data models with behavioral constraints, an essential problem when modeling multi-case real-life work processes evolving multiple objects at once. I propose to combine UML class diagrams with temporal constraints on finite traces, linked to the data model via co-referencing constraints on classes and associations.
Presentation ad EDOC 2019 on monitoring multi-perspective business constraints accounting for time and data, with a specific focus on the (unsolvable in general) problem of conflict detection.
1) The document discusses business process management and how conceptual modeling and process mining can help understand and improve digital enterprises.
2) Process mining techniques like process discovery from event logs, decision mining, and social network mining can provide insights into how processes are executed in reality.
3) Replay techniques can enhance process models with timing information and detect deviations to help align actual behaviors with expected behaviors.
Presentation at BPM 2019, focused on a data-aware extension of BPMN encompassing read-write and read-only data, and on SMT-techniques for effectively tackling parameterized verification of the resulting integrated models.
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie Wells
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend...Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
This presentation by Professor Alex Robson, Deputy Chair of Australia’s Productivity Commission, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Carrer goals.pptx and their importance in real lifeartemacademy2
Career goals serve as a roadmap for individuals, guiding them toward achieving long-term professional aspirations and personal fulfillment. Establishing clear career goals enables professionals to focus their efforts on developing specific skills, gaining relevant experience, and making strategic decisions that align with their desired career trajectory. By setting both short-term and long-term objectives, individuals can systematically track their progress, make necessary adjustments, and stay motivated. Short-term goals often include acquiring new qualifications, mastering particular competencies, or securing a specific role, while long-term goals might encompass reaching executive positions, becoming industry experts, or launching entrepreneurial ventures.
Moreover, having well-defined career goals fosters a sense of purpose and direction, enhancing job satisfaction and overall productivity. It encourages continuous learning and adaptation, as professionals remain attuned to industry trends and evolving job market demands. Career goals also facilitate better time management and resource allocation, as individuals prioritize tasks and opportunities that advance their professional growth. In addition, articulating career goals can aid in networking and mentorship, as it allows individuals to communicate their aspirations clearly to potential mentors, colleagues, and employers, thereby opening doors to valuable guidance and support. Ultimately, career goals are integral to personal and professional development, driving individuals toward sustained success and fulfillment in their chosen fields.
XP 2024 presentation: A New Look to Leadershipsamililja
Presentation slides from XP2024 conference, Bolzano IT. The slides describe a new view to leadership and combines it with anthro-complexity (aka cynefin).
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
This presentation by OECD, OECD Secretariat, was made during the discussion “Competition and Regulation in Professions and Occupations” held at the 77th meeting of the OECD Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc...SkillCertProExams
• For a full set of 760+ questions. Go to
https://skillcertpro.com/product/databricks-certified-data-engineer-associate-exam-questions/
• SkillCertPro offers detailed explanations to each question which helps to understand the concepts better.
• It is recommended to score above 85% in SkillCertPro exams before attempting a real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get life time access and life time free updates
• SkillCertPro assures 100% pass guarantee in first attempt.
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc...
Representing and querying norm states using temporal ontology-based data access
1. Representing and Querying Norm States Using Temporal Ontology-Based Data Access
Evellin Cardoso, Marco Montali, Diego Calvanese
Free University of Bozen-Bolzano, Italy
14.–17. Example: access consent to patient data
[Sequence diagram, built up over four slides, involving a patient, a health vault provider, and third parties, connected through a disclosure token. Each interaction is recorded as a timestamped relation:]
• allow — the patient allows disclosure: Allowed(pid,hid,discid,tpid,t)
• send creds — the provider sends credentials to the third party: SentCred(hid,tpid,discid,t)
• request — the third party requests patient data: ReqData(tpid,hid,reqid,discid,t)
• access — the third party accesses the data: Accessed(tpid,hid,reqid,discid,t)
18. Example: access consent to patient data
SentCred(hid,tpid,discid,t)
Allowed(pid,hid,discid,tpid,t)
ReqData(tpid,hid,reqid,discid,t)
Accessed(tpid,hid,reqid,discid,t)
Who are the involved agents? Where is the notion of “disclose authorization”? How many authorizations have been created? What is their current status? Is third-party xyz authorised now to access certain data?
19. QUEN in a nutshell
• Full relational (first-order) modeling of norms.
• Three conceptual layers:
  1. Ontological layer of norms: norms as explicit relations.
  2. Norm state evolution: declarative specification of norm state transitions as induced by the raw database facts.
  3. Legacy relational database: tuples with timestamps as implicit events.
• “Virtual norm store”: virtual both in terms of data and in terms of queries and answers.
21. Upper knowledge
[UML class diagram of the upper ontology: the class Normative Primitive carries the timestamp attributes crt: timestamp and ext, det, dit: timestamp [0..1]; its subclass Violable Normative Primitive adds vit: timestamp [0..1]. Authorization, Prohibition, Power, and Commitment are the norm types, with prohibitions and commitments being the violable ones. Each normative primitive is related to Agent through the "is expector for" and "is expectee for" associations (an agent can play each role for * primitives, each primitive having 1 expector and 1 expectee) and to Thing through the "targets" association (* primitives target 1 thing). The upper knowledge is split into a static and a dynamic part.]
23. Lifecycle of norm types
[State diagram (Figure 1, from [9]): a norm instance enters the created state via the create transition; from there it becomes detached when the antecedent holds ("when ante") and expired when the antecedent can never hold ("when never ante"); it is discharged when the consequent holds ("when cons") and violated when the consequent can never hold ("when never cons"). The violated state exists only for prohibition and commitment.]
[Paper excerpt accompanying the figure, partially cut in the slide: each event type corresponds to a relation with a timestamp column, whose tuples record the different instances recorded in the system for that event. Example 1 (inspired from [9]): the following information schema captures three event types related to the request of access to patient data within a sanitary organization.]
24. Step 1/3: Domain-specific norm types
[UML class diagram extending the upper ontology with domain-specific classes: Disclosure Auth specializes Authorization; Third Party, HealthVault Provider, and Patient specialize Agent; Disclosure Token specializes Thing. A Disclosure Auth is "used by" a Third Party (qualifying the domain-specific expector), "given by" a HealthVault Provider (qualifying the expectee), and "attached to" a Disclosure Token (qualifying the target, with multiplicity 0..1 to 1); a Patient "emits" disclosure tokens (1 to *).]
26. Step 2/3: Norm State Transitions
We take inspiration from Custard [Chopra and Singh, AAMAS 2016]. Each norm type N in the lower ontology comes with a corresponding QUEN specification. Let:
• Rd be a domain-specific relation that is attached to N and is a sub-relation of the expector relation in On (thus qualifying the domain-specific expector for N);
• Rc be a domain-specific relation that is attached to N and is a sub-relation of the expectee relation in On (thus qualifying the domain-specific expectee for N);
• Rt be a domain-specific relationship that is attached to N and is a sub-relation of the target relation in On (thus qualifying the domain-specific target for N).
A QUEN lifecycle specification for this combination of elements has the following form:

  T N  Rd d  Rc c  Rt o
    create     Q_cr(d, c, o, t_cr)
    expire     Q_ex[d, c, o, t_cr](t_ex)
    detach     Q_de[d, c, o, t_cr](t_de)
    discharge  Q_di[d, c, o, t_cr, t_de](t_di)
    [violate   Q_vi[d, c, o, t_cr, t_de](t_vi)]

where the bracketed subscripts denote query parameters bound by the other transitions, and the last line is only present if T is a violable norm type (prohibition or commitment). The header line lives at the level of the static/dynamic KB, while the transition queries are evaluated over the underlying relational DB.
28. Step 2/3: Norm Lifecycle
[The domain-specific model of Step 1 (Disclosure Auth with its "used by", "given by", and "attached to" relations; Patient emits Disclosure Token), shown side by side with the event schema it has to be connected to:]
SentCred(hid,tpid,discid,t)
Allowed(pid,hid,discid,tpid,t)
ReqData(tpid,hid,reqid,discid,t)
Accessed(tpid,hid,reqid,discid,t)
29.–30. Step 2/3: Norm Lifecycle
[Figure 4: QUEN lifecycle specification of the disclosure authorization on top of the database schema of Example 1:]

  authorization DisclosureAuth  used by tp  given by h  attached to d
    create     SELECT c.tpid AS tp, c.hid AS h, c.discid AS d, c.t AS tcr
               FROM SentCred c, Allowed a
               WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid
    detach     SELECT r.t AS tde FROM ReqData r
               WHERE r.discid = d AND r.t > tcr
    discharge  SELECT a.t AS tdi FROM Accessed a
               WHERE a.discid = d AND a.t ≥ tde + 1 AND a.t ≤ tde + 10

[Paper excerpt, partially cut in the slide: ...where object constructors simply use (abbreviations of) the names of the corresponding endpoint classes. Notice that this mapping also implicitly populates the Patient class with pat(pid), given that the domain of emits is Patient, as dictated by the ontology.]
[Paper excerpt, Section "From Lifecycle Specifications to Mappings": as a preliminary step for the translation, we need to define how a query with parameters can be suitably merged with a query providing those parameters, so as to obtain a standard SQL query as result. This is done by simply computing their join (in the standard SQL sense). Specifically, let Q1(x⃗, t1) be a query without parameters...]
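A minimal sketch of this join-based merge on the running example (our illustration, derived from Figure 4): merging the parametric detach query with the create query that supplies its parameters d and tcr yields a standard SQL query computing the detach time of every disclosure authorization.

  -- the create query as a derived table, joined with the detach condition
  SELECT cr.tp, cr.h, cr.d, cr.tcr, r.t AS tde
  FROM (SELECT c.tpid AS tp, c.hid AS h, c.discid AS d, c.t AS tcr
        FROM SentCred c, Allowed a
        WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid) cr,
       ReqData r
  WHERE r.discid = cr.d AND r.t > cr.tcr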
31.–32. Step 3/3: Add explicit mappings
[The domain-specific model again, now focusing on the Patient class and the emits relation, to be populated from the source relation:]
Allowed(pid,hid,discid,tpid,t)
33. Step 3/3: Add explicit mappings
Figure 4 focuses on the DisclosureAuth class and surrounding relations (which implicitly include also the endpoint classes attached to those relations, given that UML univocally links the endpoint classes to each binary relation). However, it does not directly mention the Patient class, nor the corresponding emits relation. The underlying database schema introduced in Example 1 actually provides us the raw data to characterize the extension of such elements: it is enough to inspect the Allowed relation and filter it by retaining the pid and discid fields. We can then construct the following mapping:

  SELECT pid, discid FROM Allowed
    ⟿  emits(pat(pid), dtoken(discid))
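In the same spirit (an illustrative addition of ours, not on the slide), the same source query could also populate the endpoint classes explicitly, making explicit what the emits mapping already implies via the ontology:

  SELECT pid, discid FROM Allowed
    ⟿  Patient(pat(pid)) ∧ DisclosureToken(dtoken(discid))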
35.–36. QUEN components
[Architecture diagram: the static upper KB (agents/norms), the dynamic upper KB (norm states), and the domain-specific KB sit on top of the norm state transition specification and the mappings, which connect them to the DB schema; users pose temporal SPARQL queries such as DiscloseAuth(x) ∧ Detached(x)@[t1, t2). Slide 36 shows the same picture annotated with a question mark: how can these components be realized?]
37.–38. Ontology-based data access: query answering by rewriting (conceptual framework)
[Diagram: an ontology is connected through mappings to the underlying data sources. An ontological query q is rewritten with respect to the ontology (rewriting), unfolded with respect to the mappings into an SQL query (unfolding), evaluated over the relational data sources (evaluation), and the relational answer is translated back into an ontological answer (result translation).]
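To make this concrete on the running example (our illustration, not on the slides): since DisclosureAuth is a subclass of Authorization, the ontological query q(x) = Authorization(x) is rewritten into a union that also asks for DisclosureAuth(x); the create mapping of Figure 4 then unfolds this into plain SQL over the legacy tables, e.g.:

  -- retrieves the tuples identifying disclosure-authorization instances
  SELECT DISTINCT c.tpid, c.hid, c.discid
  FROM SentCred c, Allowed a
  WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid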
39. Temporal OBDA
Extension of the classical OBDA paradigm with (metric) time:
• Facts have an attached time interval.
• Static ontology: OWL 2 QL.
• Temporal ontology: non-recursive Datalog extended with metric temporal logic operators.
• Temporal mappings indicate how to extract facts and their interval extreme timestamps from the underlying database.
• Support for temporal SPARQL.
• Ongoing implementation effort inside Ontop.
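As an illustration of a temporal mapping (hypothetical syntax of ours; the concrete temporal mapping language may differ), the detached interval of each disclosure authorization could be extracted by pairing its detach and discharge times:

  source: -- SQL over the legacy DB, producing a fact and its interval endpoints
    SELECT cr.d, r.t AS tde, ac.t AS tdi
    FROM (SELECT c.tpid AS tp, c.hid AS h, c.discid AS d, c.t AS tcr
          FROM SentCred c, Allowed a
          WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid) cr,
         ReqData r, Accessed ac
    WHERE r.discid = cr.d AND r.t > cr.tcr
      AND ac.discid = cr.d AND ac.t ≥ r.t + 1 AND ac.t ≤ r.t + 10
  target: Detached(dauth(d)) @ [tde, tdi)   -- dauth is a hypothetical object constructor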
40. Making QUEN Operational
[The QUEN components diagram again: the static and dynamic upper KBs, the domain-specific KB, the norm state transition specification, and the mappings are automatically encoded into a temporal OBDA specification over the DB schema, so that temporal SPARQL queries can be answered directly over the database.]
45. Debugging
A QUEN specification of a norm lifecycle may be wrong:
• Ambiguous transition: multiple timestamps for the same transition, violating functionality of the corresponding timestamp attribute.
• State superposition: a norm in two states at the same time.
We cannot reason about this in general, but we can debug whether such issues arise given a concrete database:
• transform these checks into queries;
• if answers are returned, there is an issue.
Example: fetch norms that are simultaneously discharged and violated.
[Paper excerpt, partially cut in the slide: ...we can exploit the TOBDA framework to have a fine-grained understanding of such a root cause, using standard techniques [13]. Specifically, it is possible to automatically construct a SQL query that, once submitted to the underlying database, returns those norm instances that have at least two creation times (and similarly for the other time attributes). The case of state superposition can instead be simply handled by formulating suitable semantic queries that retrieve those norm instances that are simultaneously present in two states. By inspecting the temporal mappings, a case of state superposition can only arise if the norm instance simultaneously undergoes a transition to two different states. Hence, to retrieve all norm instances that experienced a superposition of the states violated and discharged (and when this undesired superposition arose), we can issue the following query:]
Qdv(n, t) = violated(n)@[t, t1) ∧ discharged(n)@[t, t2)
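A minimal sketch (ours, based on the create query of Figure 4) of the ambiguity check for the create transition of DisclosureAuth: a self-join of the create query returns the norm instances having at least two distinct creation times.

  SELECT DISTINCT q1.tp, q1.h, q1.d
  FROM (SELECT c.tpid AS tp, c.hid AS h, c.discid AS d, c.t AS tcr
        FROM SentCred c, Allowed a
        WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid) q1,
       (SELECT c.tpid AS tp, c.hid AS h, c.discid AS d, c.t AS tcr
        FROM SentCred c, Allowed a
        WHERE c.discid = a.discid AND c.tpid = a.tpid AND c.hid = a.hid) q2
  WHERE q1.tp = q2.tp AND q1.h = q2.h AND q1.d = q2.d AND q1.tcr <> q2.tcr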
46. Conclusion
QUEN framework:
• Relational modeling of norms and their evolution at the ontological level.
• Conceptual link to the underlying legacy DB.
• Operational thanks to an automated encoding into temporal OBDA: a "virtual norm state store".
• An example of OBDA with a fixed target ontology.
Future work:
• Implementation (ongoing effort).
• From offline to online: streaming and operational support!