This document provides an overview of a learning package on formal specifications for services and service compositions. It discusses the need for formal service specifications to promote reusability. It addresses issues like the frame problem in fully specifying how a service affects the world. It provides an example of a money withdrawal service to illustrate these issues and shows how specifications of composite services can be automatically derived from specifications of the underlying component services. The goal is to represent services in a logic-based way that facilitates reasoning about their inputs, outputs, preconditions and effects.
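The composite-spec derivation can be illustrated with a small STRIPS-style sketch. All names here (`ServiceSpec`, `compose`, and the withdrawal sub-services) are hypothetical illustrations of the idea, not the learning package's actual formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceSpec:
    """IOPE-style specification: Inputs, Outputs, Preconditions, Effects."""
    name: str
    inputs: frozenset
    outputs: frozenset
    pre: frozenset   # conditions that must hold before invocation
    eff: frozenset   # conditions guaranteed to hold afterwards

def compose(s1: ServiceSpec, s2: ServiceSpec) -> ServiceSpec:
    """Derive the spec of the sequential composition s1 ; s2.

    Preconditions of s2 already established by s1's effects need not be
    demanded of the caller; effects of s1 persist unless s2 asserts their
    negation (a simple STRIPS-style answer to the frame problem).
    """
    negated = frozenset("¬" + e if not e.startswith("¬") else e[1:] for e in s2.eff)
    return ServiceSpec(
        name=f"{s1.name};{s2.name}",
        inputs=s1.inputs | (s2.inputs - s1.outputs),
        outputs=s1.outputs | s2.outputs,
        pre=s1.pre | (s2.pre - s1.eff),
        eff=s2.eff | (s1.eff - negated),
    )

# Hypothetical money-withdrawal composition: check the balance, then dispense.
check = ServiceSpec("checkBalance", frozenset({"account", "amount"}),
                    frozenset({"ok"}), frozenset({"validAccount"}),
                    frozenset({"balanceChecked"}))
dispense = ServiceSpec("dispenseCash", frozenset({"ok"}), frozenset({"cash"}),
                       frozenset({"balanceChecked"}),
                       frozenset({"¬balanceChecked", "cashDispensed"}))
withdraw = compose(check, dispense)
```

Note how the composite's precondition drops `balanceChecked` (established internally by the first service) and its effects drop effects the second service negates.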
S-CUBE LP: SOA Migration: Study of Theory and Practice (virtual-campus)
This document summarizes key findings from a learning package on SOA migration. It discusses what is known from academic literature and industry surveys. From the systematic literature review, 8 families of SOA migration approaches were identified. Interviews with industry architects found that migrations in practice typically follow a "bowl-shaped" approach focused on forward engineering and wrapping existing assets. Migrations aim to enable reuse or integration and are driven by enterprise architecture.
S-CUBE LP: Online Testing for Proactive Adaptation (virtual-campus)
This document discusses online testing for proactive adaptation of service-based applications. It describes how online testing can be used to predict failures through monitoring services and applications during operation. This allows issues to be detected early and adaptations to be made proactively before failures occur externally. Two approaches are discussed: PROSA predicts violations of quality of service by testing stateless services, while JITO predicts violations of interaction protocols for conversational services. Online testing extends traditional testing into the operational phase to improve failure prediction accuracy and allow more proactive adaptation for service-based applications.
S-CUBE LP: Mining Lifecycle Event Logs for Enhancing SBAs (virtual-campus)
The document discusses using process mining techniques to analyze service-based application (SBA) event logs to extract knowledge that can improve SBA analysis. It describes applying sequential pattern mining algorithms PrefixSpan and MiSTA to a real-world SBA event log from VRESCo to find frequently occurring sequences of invoked services. The extracted patterns showed services that are often invoked together and this inferred knowledge could be used to enhance SBAs through tools like service recommendation.
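A toy illustration of the idea, not the actual PrefixSpan or MiSTA algorithms: count how often one service is invoked before another across logged sequences, keeping pairs that meet a minimum support threshold. The `frequent_pairs` helper and the log data are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(logs, min_support):
    """Find ordered service pairs (a invoked before b) occurring in at
    least min_support logged sequences; a naive stand-in for sequential
    pattern mining."""
    counts = Counter()
    for seq in logs:
        seen = set()
        # combinations preserves order, so (a, b) means a occurred before b
        for pair in combinations(seq, 2):
            if pair not in seen:
                seen.add(pair)
                counts[pair] += 1
    return {p for p, c in counts.items() if c >= min_support}

logs = [["login", "search", "book"],
        ["login", "book"],
        ["search", "login"]]
patterns = frequent_pairs(logs, min_support=2)
```

Patterns like `("login", "book")` surviving the threshold are the kind of "often invoked together" knowledge a service recommender could exploit.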
S-CUBE LP: Multi-layer Monitoring and Adaptation of Service Based Applications (virtual-campus)
This document provides an overview of a learning package on cross-layer adaptation and monitoring of service-based applications from a multi-layered perspective. It presents a framework that integrates monitoring techniques from different layers and identifies adaptation strategies across layers. The key components of the framework include monitoring and correlating events, analyzing adaptation needs, identifying multi-layer adaptation strategies, enacting adaptations, and evaluating adaptations using a medical imaging case study. The approach aims to enable holistic reasoning and coordinated adaptation across the software and infrastructure layers of service-based applications.
S-CUBE LP: Techniques for design for adaptation (virtual-campus)
This document describes a learning package on designing and migrating service-based applications. It discusses techniques for designing applications to enable self-adaptation. It presents three motivating scenarios involving supply chains, wine production, and mobile users that require different types of adaptation. The key aspects of adaptable service-based applications are life cycles, adaptation strategies, triggers, and the association between strategies and triggers. Guidelines are provided for modeling triggers, realizing strategies, and relating them through various design approaches like built-in, abstraction-based, and dynamic adaptation.
S-CUBE LP: Service Discovery and Task Models (virtual-campus)
The document describes a learning package on service discovery and task models. It discusses using task models to help select services that fit with a user's goals and constraints. A two-stage approach to task-based service discovery is presented: 1) specifying a user task model with a description, ConcurTaskTree diagram, and associated services; and 2) discovering services using the task model. The task model captures the task hierarchy, types, and temporal relationships. Services are matched based on analyzing subtasks and associated service classes.
S-CUBE LP: Process Performance Monitoring in Service Compositions (virtual-campus)
This document describes process performance monitoring in service compositions. It discusses monitoring a single BPEL process using a resource event model and complex event definitions to calculate performance metrics. It also covers monitoring across partner processes by specifying a monitoring agreement based on a BPEL4Chor choreography model. Key events are correlated using identifiers. A prototype implements monitoring using an Apache ODE BPEL engine and ESPER CEP engine.
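The event-correlation step might be sketched as follows, assuming hypothetical `ProcessStarted`/`ProcessEnded` event types carrying a correlation id `cid`; this is a plain-Python stand-in for the ESPER CEP rules, not the prototype's actual code:

```python
def durations(events):
    """Pair ProcessStarted/ProcessEnded events by correlation id and
    return a process-duration metric per process instance."""
    starts, metrics = {}, {}
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "ProcessStarted":
            starts[e["cid"]] = e["ts"]
        elif e["type"] == "ProcessEnded" and e["cid"] in starts:
            metrics[e["cid"]] = e["ts"] - starts.pop(e["cid"])
    return metrics

events = [
    {"cid": "c1", "type": "ProcessStarted", "ts": 0},
    {"cid": "c2", "type": "ProcessStarted", "ts": 2},
    {"cid": "c1", "type": "ProcessEnded",   "ts": 5},
    {"cid": "c2", "type": "ProcessEnded",   "ts": 5},
]
metrics = durations(events)
```

The same key-based correlation idea extends to partner processes, where the monitoring agreement fixes which identifiers link events across participants.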
S-CUBE LP: Monitoring Adaptation of Service-based Applications (virtual-campus)
This document describes a framework for monitoring and adapting service-based applications based on user context. It discusses different types of user context that are important to consider, including direct contexts like role and cognition as well as related contexts like time and location. An ontology is presented for representing user context models. The framework uses annotations in service specifications to identify parts related to context, and event calculus rules to specify application behavior. It aims to select, modify or generate new monitoring rules based on user context to check application behavior.
Romanov moscow-boston-22.03, Business rules for profit increasing in mobile co... (Victor Romanov)
Increasing a mobile company's profit by fitting its services to customer consumption profiles, using business rules and automatic rule extraction from client history.
1) The document proposes using formal concept analysis and non-monotonic business rules to create a customer relationship simulation model for telecommunications companies.
2) It would analyze customer data to discover uniform customer profiles and extract business rules for matching specific service offerings to each customer category.
3) This approach aims to increase profits and customer retention by better fitting services to individual customer consumption profiles based on personal data and usage patterns.
L = {&lt;M,s&gt; : s ∈ L(M), |L(M)| = 2}. Prove that L ∉ SD by a reduc.docx (croysierkathey)
L = {&lt;M,s&gt; : s ∈ L(M), |L(M)| = 2}. Prove that L ∉ SD by a reduction from ¬H.
R(&lt;M,w&gt;) =
1. Define M#(x):
1.a If x = a or x = b, accept
1.b Save x
1.c Replace x with w
1.d Run M on w
1.e Restore x
1.f Accept x
2. Return &lt;M#,a&gt;
If there were an oracle Mₒ that could semidecide L, then C = Mₒ(R(&lt;M,w&gt;)) =
Mₒ(&lt;M#,a&gt;) could semidecide ¬H:
&lt;M,w&gt; ∈ ¬H: M# accepts a and b at 1.a, and on every other input loops forever at
1.d (since M never halts on w). Thus L(M#) = {a,b}, and Mₒ accepts &lt;M#,a&gt; because
a ∈ {a,b} and |{a,b}| = 2.
&lt;M,w&gt; ∉ ¬H: M# accepts a and b at 1.a, proceeds through 1.d on every other input,
and accepts it at 1.f. Thus L(M#) = Σ*, and Mₒ does not accept &lt;M#,a&gt; because
|Σ*| ≠ 2.
But no TM can semidecide ¬H, so Mₒ cannot exist.
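For illustration only, the reduction can be mimicked in Python by representing machines as callables. A real TM may never halt, so we only exercise a halting M here; all names are invented:

```python
def R(M, w):
    """Build M# from <M, w>: accept a and b outright; on any other input,
    run M on w first and accept only if that simulation returns."""
    def M_sharp(x):
        if x in ("a", "b"):
            return True      # step 1.a: a and b are always accepted
        M(w)                 # step 1.d: diverges exactly when M loops on w
        return True          # step 1.f: accept once the simulation returns
    return M_sharp, "a"

# A machine that halts on every input, so <M, w> ∉ ¬H:
halting_M = lambda inp: True
M_sharp, s = R(halting_M, "w")
# Because M halts on w, M# accepts everything: L(M#) = Σ*, and |Σ*| ≠ 2,
# so the pair <M#, a> falls outside L, as the proof requires.
```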
BSA/520 v4
Gail Industries Case Study
Gail Industries: Smallville Collections Processing Entity Case Study
This case study will be used to complete your assignments throughout the course. Some sections of the case study will be necessary in multiple assignments. See the assignment instructions for specific assignment requirements.
Introduction to Gail Industries
Gail Industries is a partner to many Fortune 1000 companies and governments around the world. Gail Industries’ role is to manage essential aspects of their clients’ operations while interacting with and supporting the people their clients serve. They manage millions of digital transactions every day for various back office processing contracts.
One of Gail Industries’ clients is the city of Smallville. Smallville, despite its name, is a metropolis seated in the heart of the nation. The city has 2.5 million residents, and the greater Smallville metropolitan area has a population of about 4 million people.
Overview of the Operations of Smallville Collections Processing Entity (SCOPE)
Summary of Services Provided
Collections Processing
The Smallville Collections Processing Entity (SCOPE) provides collections processing services to the city of Smallville. SCOPE receives tax payments, licensing fees, parking tickets, and court costs for this major municipality. The city of Smallville sends out invoices and other collections notices, and SCOPE processes payments received through the mail, through an online payment website, and through an interactive voice response (IVR) system. Payments are in the form of checks, debit cards, and credit cards. After processing invoices, SCOPE deposits the monies into the bank account for the city.
SCOPE is responsible for ensuring the security of the mail that comes into the possession of all employees, subcontractors, and agents at its processing facility, located within Smallville. Controls and procedures for money and mail handling are established by SCOPE to ensure payments are accounted for, from the earliest point received through processing and deposit. These controls and procedures provide:
1. Assurances for proper segregation of duties
2. The design and use ...
The document describes a method for identifying services from requirements. It involves two steps: 1) Identifying business services from business process and data models. This can be done top-down from requirements or bottom-up from existing systems. 2) Decomposing business services into candidate services and operations using techniques like activity diagrams, use cases, and class diagrams. The goal is to discover reusable services that encapsulate business logic and data.
The presentation summarizes an approach for analyzing requirements and developing a solution for moving an economic region's payment system from 3-day clearing to real-time clearing. The hybrid waterfall/agile methodology includes strategic analysis, requirements engineering, iterative development, and change management. Key aspects are validating requirements through stakeholder engagement, prototyping, and delivering functionality incrementally based on priorities. Risks of the hybrid approach include potential loss of flexibility or control over requirements.
The document describes several SQL projects including a banking database application called the Piggy Bank Project, an Adventure Works repair job database, and an SSIS/SSRS project to import data into a database and create reports. Key tasks discussed include stored procedures to make withdrawals and pay interest from bank accounts, queries to generate billing reports for repair jobs, and SSIS packages to import product, vendor, order and other data along with SSRS reports on top sales and sales by year.
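A minimal sketch of the withdrawal logic using Python's stdlib `sqlite3` in place of a T-SQL stored procedure; the table layout and function name are assumptions, not the Piggy Bank Project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.commit()

def withdraw(conn, account_id, amount):
    """Debit an account inside a transaction; refuse overdrafts.
    `with conn:` commits on success and rolls back on exception."""
    with conn:
        row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                           (account_id,)).fetchone()
        if row is None or row[0] < amount:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, account_id))

withdraw(conn, 1, 30.0)   # balance drops from 100.0 to 70.0
```

Wrapping the check-then-update in one transaction is the point: a concurrent withdrawal cannot slip between the balance check and the debit.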
1. The document describes various configuration settings related to tax determination, output determination, revenue account determination, credit management, route determination, availability check, and transfer of requirement in SAP.
2. It provides step-by-step instructions on configuring these areas using transaction codes and IMG paths.
3. Key areas covered include defining tax categories, regional codes, and tax rules; maintaining condition techniques for output determination; defining rules for revenue account and GL account assignment; and configuring credit control areas, risk categories, and credit checks.
The art of the event streaming application: streams, stream processors and sc... (confluent)
The document discusses event streaming applications and microservices. It introduces event streaming as an architectural style where applications are composed of loosely coupled services that communicate asynchronously through streams of events. Key aspects covered include handling state using event streams and Kafka Streams, building applications as bounded contexts with choreography and orchestration, and establishing pillars for instrumentation, control and operations. Overall the document promotes event streaming as a paradigm that addresses complexity by providing simplicity and scalability through convergent data and logic processing.
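The stateful-aggregation idea can be sketched in a few lines of plain Python, as a toy stand-in for a Kafka Streams KTable; the class and event names are invented:

```python
from collections import defaultdict

class StreamAggregator:
    """Fold a stream of (key, amount) payment events into per-key running
    totals, the way a Kafka Streams aggregation materializes a KTable."""
    def __init__(self):
        self.state = defaultdict(float)   # local state store, keyed by entity

    def process(self, key, amount):
        """Consume one event, update state, and emit the new total downstream."""
        self.state[key] += amount
        return key, self.state[key]

agg = StreamAggregator()
for key, amount in [("alice", 10.0), ("bob", 5.0), ("alice", 2.5)]:
    agg.process(key, amount)
```

In a real deployment the state store would be changelog-backed and partitioned by key, which is what lets the aggregation scale out and recover after failure.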
Kafka summit SF 2019 - the art of the event-streaming app (Neil Avery)
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed realtime database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking and Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building on this, I explain how to build common business functionality by stepping through patterns for scalable payment processing, running it on rails (instrumentation and monitoring), and control flow (start, stop, pause). Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs, and methods for governance and self-service. You will leave the talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
Accounting is easy to understand. This tutorial is designed for those who know little or nothing about accounting. We describe it with hands-on examples. After viewing it, students or business people can understand what the reports look like and what important information a structured accounting system can provide. At the end we show how to generate reports using accounting software.
As part of our team's enrollment in the Data Science Super Specialization course under UpX Academy, we submitted many projects for our final assessments; one of them was a Telecom Churn Analysis Model.
The input data was provided by UpX Academy and the language we used is R. As part of the project, our main objectives were:
-> To predict customer churn.
-> To highlight the main variables/factors influencing customer churn.
-> To use various ML algorithms to build prediction models and evaluate their accuracy and performance.
-> To find the best model for our business case and provide an executive summary.
To address this business problem, we followed a thorough approach, starting with a detailed exploratory data analysis (box plots, bar plots, etc.).
We then built as many classification models as fit our business case (logistic regression, kNN, decision trees, random forest, SVM) and also tried a Cox proportional hazards survival model. For each model, we boosted performance by applying various tuning techniques.
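As a minimal stand-in for the R models described above, here is a plain gradient-descent logistic regression on an invented toy churn dataset; the feature names and data are made up, and the project itself used R:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Stochastic-gradient-descent logistic regression from scratch."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            g = p - yi                        # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Classify as churn (1) when the linear score is positive."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# toy data: [monthly_minutes_scaled, support_calls_scaled] -> churned?
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
```

Real churn work adds held-out evaluation and the model comparison the project describes; this only shows the fitting mechanics.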
As we are all still learning these concepts, please feel free to provide feedback on our work. Any suggestions are most welcome. :)
Thanks!
Presentation of the paper "Specification and Verification of Commitment-Regulated Data-Aware Multiagent Systems" at the 29th Italian Conference on Computational Logic (CILC 2014)
This document provides a system test script for testing the payables process integration with the target application system. It includes 16 test sequences that cover key payables functions like opening payable periods, defining banks, invoice entry, payments, refunds, month-end processing and more. Each sequence includes the test step, expected results, and status. Century date compliance testing is also recommended. The document aims to ensure all customizations, interfaces and extensions are functioning correctly for the payables process.
1. A process is a set of activities and tasks performed by roles to produce deliverables, while a model is a representation of a system.
2. Processes are important as they impose consistency, structure activities to reduce errors, and facilitate large teamwork. Models are also important representations.
3. Iterations involve updating the same entity gradually, while increments extend current models by adding new entities. Both are used in projects through planned iterations and increments.
4. The document discusses process maps for requirements, design, implementation, and testing which form the basis for project work.
5. UML diagrams like use case diagrams are used within iterations to
SAP BPC Concepts
SAP Business process consolidation
SAP Business objects overview
SAP Consolidation overview
from Verity Solutions
http://www.verity-sol.com
The document discusses several new features in Oracle R12 related to order management. It covers cascading attributes from headers to lines, customer acceptance tracking, deferred cost of goods sold, exception management, multi-org access, sales agreements, actual ship dates, and parallel pick release. Key points include how each feature works and how to set them up in the R12 system.
Online Cloud based Accounting Software for Personal or Small Business (Ashim Sikder)
This document provides an introduction to accounting concepts and accounting software. It explains why accounting data is kept for business management and lists common accounting reports like the ledger, income/expense statement, trial balance, profit and loss report, and balance sheet. It also covers accounting transactions, the chart of accounts, and provides 10 sample transactions to demonstrate how accounting entries are made.
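The double-entry mechanics behind a trial balance can be sketched as follows; the account names and amounts are invented sample transactions, not the tutorial's own:

```python
from collections import defaultdict

ledger = defaultdict(float)   # signed balances: debits positive, credits negative

def post(debit_account, credit_account, amount):
    """Double-entry posting: every debit has an equal, offsetting credit."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("Cash", "Capital", 1000.0)    # owner invests cash in the business
post("Inventory", "Cash", 300.0)   # buy stock for cash
post("Cash", "Sales", 150.0)       # make a cash sale

# The trial balance: signed balances across all accounts must sum to zero,
# which is exactly the check accounting software performs.
trial_balance = sum(ledger.values())
```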
Acceptance tests are created to test a system from the user's perspective and ensure business requirements are met. They examine inputs, outputs, and state changes of the external system interfaces without relying on implementation details. Creating acceptance tests early in the development process and coding with the tests provides quick feedback to prevent rework when tests fail. The tests are created collaboratively by customers, testers, and developers and can be automated for regression testing to improve quality.
S-CUBE LP: Analysis Operations on SLAs: Detecting and Explaining Conflicting ... (virtual-campus)
Three key types of conflicts can occur within temporal-aware WS-Agreement documents:
- Inconsistencies between terms, parts of terms, or creation constraints that are defined in overlapping time periods, making it impossible to satisfy all constraints simultaneously.
- Dead terms, where a guarantee term's qualifying condition can never be satisfied within the specified time periods due to contradictions with other terms or constraints.
- Ludicrous terms, where a guarantee term's service level objective cannot be fulfilled even when its qualifying condition is met, again due to contradictions arising from overlapping time periods.
The approach detects these three types of conflicts if and only if the involved terms or constraints are defined within overlapping time periods.
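The overlap test at the core of this detection might be sketched like this; the term structure and the naive "constraints differ" contradiction check are illustrative assumptions, not the learning package's actual analysis operations:

```python
def overlaps(p1, p2):
    """Half-open validity periods (start, end) overlap iff each starts
    before the other ends."""
    return p1[0] < p2[1] and p2[0] < p1[1]

def conflicting_pairs(terms):
    """Report pairs of guarantee terms whose constraints contradict AND whose
    validity periods overlap; non-overlapping terms can never conflict."""
    found = []
    for i, t1 in enumerate(terms):
        for t2 in terms[i + 1:]:
            if (t1["constraint"] != t2["constraint"]
                    and overlaps(t1["period"], t2["period"])):
                found.append((t1["name"], t2["name"]))
    return found

terms = [
    {"name": "T1", "period": (0, 10),  "constraint": "availability>=99"},
    {"name": "T2", "period": (5, 15),  "constraint": "availability<90"},
    {"name": "T3", "period": (20, 30), "constraint": "availability<90"},
]
pairs = conflicting_pairs(terms)
```

T1 and T2 demand incompatible availability over the shared window (5, 10), so they are flagged; T3 contradicts T1 too, but its period never overlaps, so no conflict arises.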
S-CUBE LP: Chemical Modeling: Workflow Enactment based on the Chemical Metaphor (virtual-campus)
This document provides an overview of a chemical metaphor for workflow enactment in large-scale heterogeneous environments. It discusses problems with current workflow enactment approaches and requirements for improvement. Specifically, it proposes modeling workflow enactment like chemical reactions, which are autonomous, distributed, concurrent and adaptive to local conditions. Resources are represented as "resource quantums" and a coordination model is formalized using the pi-calculus. This approach aims to provide more autonomy, adaptation and distribution for workflow enactment in complex environments.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for Scalable payment processing Run it on rails: Instrumentation and monitoring Control flow patterns (start, stop, pause) Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and most importantly, how it all fits together at scale.
Accounting is very easy to understand. This tutorial is design for them who does not know about accounting or has limited accounting knowledge. We have described it with hands on example. After viewing this students or business man can understand how reports look like and what important information can bring by structural accounting system. At the end we have shown, how to generate reports using accounting software.
As part of our team's enrollment for Data Science Super Specialization course under UpX Academy, we submitted many projects for our final assessments, one of them was Telecom Churn Analysis Model.
The input data was provided by UpX academy and language we used is R. As part of the project, our main objective was :-
-> To predict Customer Churn.
-> To Highlight the main variables/factors influencing Customer Churn.
-> To Use various ML algorithms to build prediction models, evaluate the accuracy and performance of these models.
-> Finding out the best model for our business case & providing executive Summary.
To address the mentioned business problem, we tried to follow a thorough approach. We did a detailed level Exploratory Data Analysis which consists of various Box Plots, Bar Plots etc..
Further we tried our best to build as many Classification models possible which fits our business case (Logistic Regression/kNN/Decision Trees/Random Forest/SVM) and also tried to touch Cox Hazard Survival analysis Model. Later for every model we tried to boost their performances by applying various performance tuning techniques.
As we all are still into our learning mode w.r.t these concepts & starting new, please feel free to provide feedback on our work. Any suggestions are most welcome... :)
Thanks!!
Presentation of the paper "Specification and Verification of Commitment-Regulated Data-Aware Multiagent Systems" at the 29th Italian Conference on Computational Logic (CILC 2014)
This document provides a system test script for testing the payables process integration with the target application system. It includes 16 test sequences that cover key payables functions like opening payable periods, defining banks, invoice entry, payments, refunds, month-end processing and more. Each sequence includes the test step, expected results, and status. Century date compliance testing is also recommended. The document aims to ensure all customizations, interfaces and extensions are functioning correctly for the payables process.
1. A process is a set of activities and tasks performed by roles to produce deliverables, while a model is a representation of a system.
2. Processes are important as they impose consistency, structure activities to reduce errors, and facilitate large teamwork. Models are also important representations.
3. Iterations involve updating the same entity gradually, while increments extend current models by adding new entities. Both are used in projects through planned iterations and increments.
4. The document discusses process maps for requirements, design, implementation, and testing which form the basis for project work.
5. UML diagrams like use case diagrams are used within iterations to
SAP BPC Concepts
SAP Business process consolidation
SAP Business objects overview
SAP Consolidation overview
from Verity Solutions
http://www.verity-sol.com
The document discusses several new features in Oracle R12 related to order management. It covers cascading attributes from headers to lines, customer acceptance tracking, deferred cost of goods sold, exception management, multi-org access, sales agreements, actual ship dates, and parallel pick release. Key points include how each feature works and how to set them up in the R12 system.
Online Cloud based Accounting Software for Personal or Small BusinessAshim Sikder
This document provides an introduction to accounting concepts and accounting software. It explains why accounting data is kept for business management and lists common accounting reports like the ledger, income/expense statement, trial balance, profit and loss report, and balance sheet. It also covers accounting transactions, the chart of accounts, and provides 10 sample transactions to demonstrate how accounting entries are made.
Acceptance tests are created to test a system from the user's perspective and ensure business requirements are met. They examine inputs, outputs, and state changes of the external system interfaces without relying on implementation details. Creating acceptance tests early in the development process and coding with the tests provides quick feedback to prevent rework when tests fail. The tests are created collaboratively by customers, testers, and developers and can be automated for regression testing to improve quality.
Similar to S-CUBE LP: Formal Specifications for Services and Service Compositions (20)
S-CUBE LP: Analysis Operations on SLAs: Detecting and Explaining Conflicting ...virtual-campus
Here are the key types of conflicts that can occur within temporal-aware WS-Agreement documents:
- Inconsistencies between terms, parts of terms, or creation constraints that are defined in overlapping time periods, making it impossible to satisfy all constraints simultaneously.
- Dead terms, where a guarantee term's qualifying condition can never be satisfied within the specified time periods due to contradictions with other terms or constraints.
- Ludicrous terms, where a guarantee term's service level objective cannot be fulfilled even when its qualifying condition is met, again due to contradictions arising from overlapping time periods.
The approach is to detect these three types of conflicts if and only if the involved terms or constraints are defined within overlapping time
S-CUBE LP: Chemical Modeling: Workflow Enactment based on the Chemical Metaphorvirtual-campus
This document provides an overview of a chemical metaphor for workflow enactment in large-scale heterogeneous environments. It discusses problems with current workflow enactment approaches and requirements for improvement. Specifically, it proposes modeling workflow enactment like chemical reactions, which are autonomous, distributed, concurrent and adaptive to local conditions. Resources are represented as "resource quantums" and a coordination model is formalized using the pi-calculus. This approach aims to provide more autonomy, adaptation and distribution for workflow enactment in complex environments.
S-CUBE LP: Quality of Service-Aware Service Composition: QoS optimization in ...virtual-campus
This document discusses quality of service (QoS) optimization in service-based processes. It describes how to select and optimize composed web services to satisfy QoS constraints. The key aspects covered are QoS definition for web services, optimization at both the local service selection level and global process level, and rebinding services to maintain QoS as processes execute.
S-CUBE LP: The Chemical Computing model and HOCL Programmingvirtual-campus
This document provides an overview of the Chemical Computing model and the Higher Order Chemical Language (HOCL). It describes the vision of chemical computing using multiset rewriting to express inherently parallel problems. The Gamma language is presented as the first to capture chemical programming. The γ-calculus improved on Gamma by making it higher order and modeling reaction rules as active molecules. HOCL is then presented as a language based on γ-calculus, allowing active molecules to capture and produce other active molecules. Examples are given to demonstrate the chemical approach.
S-CUBE LP: Executing the HOCL: Concept of a Chemical Interpretervirtual-campus
The document describes an interpreter for a chemical language called Higher Order Chemical Language (HOCL) based on the chemical computing model. The interpreter uses a production system approach with RETE pattern matching to enable efficient execution of the chemical language. Key constructs of the language include passive molecules to represent facts, active molecules to represent rules, and solutions to represent independent computational threads. The interpreter was implemented using Jess rule engine and experiences showed the importance of random conflict resolution and intelligent compilation for chemical modeling applications.
S-CUBE LP: SLA-based Service Virtualization in distributed, heterogenious env...virtual-campus
The document describes SLA-based service virtualization (SSV) in distributed, heterogeneous environments. SSV uses a meta-negotiation component for SLA management, a meta-broker for diverse broker management, and automatic service deployment for virtualizing resources on clouds. It presents the SSV architecture and how it can be extended to Federated Cloud Management using a two-level brokering approach for cloud selection and optimal VM placement. The SSV and FCM architectures aim to provide a unified system for managing different service infrastructures through SLA-based user interaction and an autonomic system for inner interactions.
S-CUBE LP: Impact of SBA design on Global Software Developmentvirtual-campus
This document provides an overview of a learning package about designing and migrating service-based applications and the impact of service-based application design on global software development. It discusses how service-oriented architecture (SOA), cloud computing, and agile service networks can help address challenges with global software development by facilitating collaboration across geographic boundaries. Specifically, it outlines how SOA can support increased modularity, clear work division, and standards adoption to help distribute development tasks.
S-CUBE LP: Self-healing in Mixed Service-oriented Systemsvirtual-campus
This document provides an overview of self-healing in mixed service-oriented systems. It describes self-healing research from IBM on autonomic computing and self-adaptive systems. The key aspects of self-healing covered include the self-healing loop, requirements, states (normal, broken, degraded), failure classification, and policies for detection and recovery. The goal of self-healing is to maintain system health by detecting disruptions, diagnosing causes, and applying recovery strategies in a closed feedback loop.
S-CUBE LP: Analyzing and Adapting Business Processes based on Ecologically-aw...virtual-campus
The document describes a learning package on analyzing and adapting business processes based on ecologically-aware indicators. It discusses using green business process reengineering to optimize an auto finishing process to reduce its environmental impact by considering additional dimensions like water consumption and carbon emissions. A key part of green BPR is extending the traditional BPR architecture to include defining key ecological indicators, monitoring environmental impacts during process execution, and analyzing the data to identify opportunities for process adaptation and improvement.
S-CUBE LP: Preventing SLA Violations in Service Compositions Using Aspect-Bas...virtual-campus
This document discusses an approach to preventing violations of service level agreements (SLAs) in composite services using aspect-based fragment substitution. The approach defines checkpoints in the service composition and uses machine learning to generate predictions of SLA violations at checkpoints. If a violation is predicted, the service composition is adapted by substituting an alternative process fragment that is expected to prevent the predicted SLA violation. Background information is provided on related work in S-Cube on runtime prediction of SLA violations using machine learning on event logs, and on aspect-oriented programming concepts used in the fragment substitution approach.
S-CUBE LP: Analyzing Business Process Performance Using KPI Dependency Analysisvirtual-campus
This document describes a method for analyzing dependencies between Key Performance Indicators (KPIs) and lower-level metrics in business processes. It involves defining KPIs and metrics, monitoring process instances, and using classification algorithms like decision trees to learn relationships between metrics and KPI classes from historical data. The approach automates dependency analysis, is efficient compared to manual methods, and produces understandable decision tree models. Potential limitations include needing historical event logs to train models and ensuring all relevant data can be monitored.
S-CUBE LP: Service Level Agreement based Service infrastructures in the conte...virtual-campus
This document describes a learning package on SLA-aware service infrastructures that aim to 1) hide differences between service infrastructures, 2) support higher layers of service-based applications through SLA-constrained autonomous decisions, and 3) allow for SLA-oriented self-adaptation and violation propagation across layers through monitoring and adaptation mechanisms. The research focuses on autonomous behavior in service infrastructures while considering constraints from SLAs agreed to at higher composition and business process layers.
S-CUBE LP: Runtime Prediction of SLA Violations Based on Service Event Logsvirtual-campus
This document describes an approach for predicting violations of service level agreements (SLAs) based on analyzing event logs from a service composition runtime. It discusses defining checkpoints during service execution to collect monitoring data on factors that influence performance. Missing or future data can be estimated. Machine learning techniques are then used to generate predictions at checkpoints based on historical monitoring data. The accuracy of predictions is evaluated by comparing predictions to actual outcomes. Prediction error is found to decrease as execution progresses, showing the potential for early warning of possible SLA violations to allow corrective actions.
This document discusses proactive service level agreement (SLA) negotiation. It defines SLA and SLA negotiation, and describes two types of negotiation: reactive and proactive. It outlines scenarios that could trigger proactive SLA negotiation, and describes a two-phase proactive negotiation process involving identification of potential providers and pre-agreement/final agreement. The document also presents an architecture and process for proactive SLA negotiation and evaluates the approach through a case study.
S-CUBE LP: A Soft-Constraint Based Approach to QoS-Aware Service Selectionvirtual-campus
The document discusses service selection and quality of service (QoS) considerations. It proposes extending the soft constraint satisfaction problem (SCSP) approach to handle penalties. Specifically, it defines a soft service level agreement (SSLA) model that includes user preferences and penalties defined in terms of QoS variables. If a selected service fails, the approach aims to automatically switch to another service that fits the agreed upon QoS levels while applying any defined penalties. The key points are mapping the SSLA definitions to the SCSP framework and extending the SCSP constraints and operations to incorporate the defined penalties.
S-CUBE LP: Variability Modeling and QoS Analysis of Web Services Orchestrationsvirtual-campus
This document summarizes research on using pairwise testing to model variability and analyze quality of service (QoS) for web service orchestrations. Feature diagrams are used to explicitly represent variability in composite services, and pairwise testing is applied to select configurations covering all pairwise feature interactions. QoS distributions are computed for these configurations to predict overall orchestration QoS in a way that accounts for variability. The approach provides more realistic service level agreements than considering only worst-case scenarios.
S-CUBE LP: Run-time Verification for Preventive Adaptationvirtual-campus
The document describes an approach called SPADE for preventive adaptation of service-based applications using runtime verification. SPADE uses monitoring data from service executions, assumptions about service response times, and formalized requirements to predict if the application will violate requirements. If a violation is predicted, SPADE identifies the need for adaptation to prevent an actual failure. SPADE was designed as part of the S-Cube project to enable service-based applications to adapt preventively based on runtime monitoring and verification.
S-CUBE LP: Using Data Properties in Quality Predictionvirtual-campus
The document discusses using data properties in quality prediction for service compositions. It notes that the quality of service (QoS) of a composition depends on factors like the QoS of component services, composition structure, and data. An automotive scenario example is provided where a parts provider composition selects among multiple part makers. The computation cost of the provider composition depends on the number of parts and characteristics of the chosen maker. Data properties like the number of parts can thus impact QoS predictions for service compositions.
S-CUBE LP: Dynamic Privacy Model for Web Servicevirtual-campus
The document describes a learning package on a dynamic privacy model for web services, which proposes formalizing privacy agreements using a finite state machine to model the private data use flow and define events and negotiation actions to handle changes over time while ensuring user privacy is maintained according to the agreement terms. It provides an example of modeling a purchase service private data use as a state machine to demonstrate how the proposed model works.
The document discusses Service Level Agreements (SLAs) and SLA negotiation. It defines an SLA as a formal contract between a service provider and client specifying service quality guarantees and penalties. SLA negotiation is the process where providers and clients agree on desired service levels. There are two types: reactive after decisions are made or violations occur, and proactive prior to service binding or violations. The document outlines triggers for proactive negotiation, approaches to handling violations, a two-phase negotiation process, and an architecture and rules for proactive and reactive negotiation. It also describes a case study to evaluate proactive negotiation.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
AppSec PNW: Android and iOS Application Security with MobSF
S-CUBE LP: Formal Specifications for Services and Service Compositions
1. S-Cube Learning Package
Service Specifications:
Formal Specifications for Services
and Service Compositions
University of Crete (UoC),
Universidad Politecnica de Madrid (UPM)
George Baryannis and Dimitris Plexousakis, UoC
Manuel Carro, UPM
www.s-cube-network.eu
2. Learning Package Categorization
S-Cube
Formal Models and Languages for
QoS-Aware Service Compositions
Models and Mechanisms for
Coordinated Service Compositions
Formal Specifications for Services
and Service Compositions
3. Learning Package Overview
Problem Description
Addressing the Frame Problem in Service Specifications
Automatic Derivation of Composite Specifications
Discussion and Summary
4. The Need for Service Specifications (1)
Formal specifications promote and facilitate the reusability of services and service compositions
– Especially important in the case of legacy services, where it is crucial to have a formal representation of the service that can be reasoned with
– In the case of service compositions, specifications abstract away the composition method, so that all composite services are used in the same way regardless of whether they were composed using Java, C#, BPEL, MS Workflow, etc.
5. The Need for Service Specifications (2)
Service specifications also enable the following:
– construction of a service based on a set of requirements agreed upon by the parties involved
– checking that a service meets a set of requirements
– service verification techniques
– detection of inconsistencies, to decide whether a set of services is composable
Complete formal specifications are crucial for both
– service providers, to advertise the offered services more effectively
– service consumers, to make informed choices by knowing the exact way in which a service is expected to perform
6. Issues involving Service Specifications (1)
Service Specifications are usually based on logic-based
knowledge representation for the description of service
functionality (Inputs, Outputs, Preconditions and Effects,
collectively known as IOPEs) (e.g. in OWL-S or WSMO)
This makes them vulnerable to three well-known problems
– The Frame Problem: how to state in a succinct way that nothing else
changes, except when stated otherwise
– The Ramification Problem: how to represent and infer information
about the knock-on and indirect effects of an action or an event
– The Qualification Problem: how to list all preconditions that must be
satisfied for an action to have a desired effect and how to update them
when new statements become part of our knowledge
7. Issues involving Service Specifications (2)
A composite service should be delivered to consumers in the
same way as an atomic one
– Specifications for composite services should be available, based on
the specifications of the services that take part in the composition
No service description frameworks or service composition
approaches attempt to derive a complete specification of the
inputs, outputs, preconditions and effects (IOPEs) that should
be provided to a service consumer
Both issues (and, as will become obvious, the solutions we
propose) share a common basis: the use of a logic-based
representation to describe a Web service in a richer, more
flexible way that can be reasoned with
8. Learning Package Overview
Problem Description
Addressing the Frame Problem in Service Specifications
Automatic Derivation of Composite Specifications
Discussion and Summary
9. A Motivating Example (1)
Typical case of an online shop:
Let’s focus on the Execute Order task, which may
include the following subtasks:
10. A Motivating Example (2)
Three of the subtasks (all except package and delivery) can
be performed by Web Services
A Web Service that is tasked with the money withdrawal
should:
– Check before execution that the credit card is valid and that the
account has enough money
– Withdraw the money and check that the balance is still positive
– Ban the credit card if the daily withdrawal limit (DL) has been reached
– If we’re close to the limit (less than an amount of W), warn the client,
otherwise don’t
11. A Motivating Example (3)
Encoding the service description using the precondition/postcondition notation:
PRECONDITIONS:
Valid(creditCard, account) ∧ balance(account) ≥ A
POSTCONDITIONS:
balance′(account) = balance(account) − A
balance′(account) ≥ 0
withdrawalTotal′(day, account) = withdrawalTotal(day, account) + A
withdrawalTotal′(day, account) ≥ DL → ¬Valid′(creditCard, account)
(withdrawalTotal′(day, account) < DL ∧ DL − withdrawalTotal′(day, account) ≤ W) → Warn′(creditCard, account)
(withdrawalTotal′(day, account) < DL ∧ DL − withdrawalTotal′(day, account) > W) → ¬Warn′(creditCard, account)
Note the use of primed/unprimed predicates to denote when a
predicate is evaluated: balance is evaluated before execution,
while balance′ is evaluated afterwards.
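To make the semantics concrete, the withdrawal specification can be sketched as executable checks in Python. This is an illustrative sketch only: the service function, the state encoding, and the concrete values of DL and W are assumptions, not part of the slides.

```python
# Minimal executable sketch of the Money Withdrawal specification.
# DL, W, the state encoding, and the function itself are illustrative.

DL = 500   # daily withdrawal limit
W = 100    # warning threshold (distance from the limit)

def money_withdrawal(state, account, amount):
    """Apply the effects of the Money Withdrawal service to a state dict."""
    # Preconditions
    assert state["valid"][account], "credit card must be valid"
    assert state["balance"][account] >= amount, "insufficient funds"

    new = {k: dict(v) for k, v in state.items()}  # copy: the primed state
    new["balance"][account] -= amount
    new["withdrawal_total"][account] += amount

    total = new["withdrawal_total"][account]
    if total >= DL:                      # daily limit reached: ban the card
        new["valid"][account] = False
    # warn iff still under the limit but within W of it
    new["warn"][account] = total < DL and DL - total <= W

    # Postconditions (primed vs unprimed = new vs old state)
    assert new["balance"][account] == state["balance"][account] - amount
    assert new["balance"][account] >= 0
    return new

state = {
    "balance": {"acc1": 1000},
    "withdrawal_total": {"acc1": 350},
    "valid": {"acc1": True},
    "warn": {"acc1": False},
}
after = money_withdrawal(state, "acc1", 100)
print(after["warn"]["acc1"])   # total 450, within W=100 of DL=500 -> True
```

Note that the sketch copies the state rather than mutating it, mirroring the primed/unprimed distinction: `state` plays the role of the unprimed predicates and `after` the primed ones.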
12. A Motivating Example (4)
This specification is not complete:
– We can’t prove that no other accounts or credit cards are affected by
the service execution
We need to add frame axioms that explicitly state that nothing else
changes. The complete specification is shown on the next slide.
As it will become obvious, including frame axioms
– becomes rather challenging, especially when handling complex specifications
– results in an even more complicated and lengthy specification that would
make computing formal proofs based on it a challenging task
14. A Motivating Example: Service Composition (1)
The effects of the frame problem are aggravated in the case of composite
service specifications.
Suppose we have the following complete specifications for two services that
handle the wish list and recommendations update subtasks.
– The Completed predicate denotes that the item contained in that
particular order has been delivered to the buyer
– The Included predicate denotes that the second argument (the item) is
contained in the first argument (the wish list or the recommendations list)
PRE1: Completed(order, item) ∧ Included(buyersWishList, item)
POST1: ¬Included′(buyersWishList, item) ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)] ∧
∀x, y [(x ≠ buyersWishList ∨ y ≠ item) → (Included(x, y) ≡ Included′(x, y))]
PRE2: Completed(order, item) ∧
¬Included(buyersRecoms, associatedItem)
POST2: Included′(buyersRecoms, associatedItem) ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)] ∧
∀x, y [(x ≠ buyersRecoms ∨ y ≠ associatedItem) →
(Included(x, y) ≡ Included′(x, y))]
15. A Motivating Example: Service Composition (2)
Suppose now that we want a service that executes an order
for all items contained in it. We may compose the 3 services
we specified in the previous slides:
• Let’s describe the parallel composition of the wish list and
recommendations update services in terms of its
preconditions and postconditions
16. A Motivating Example: Service Composition (3)
Parallel Composition of Wish List and Recommendations
Update Services
PRE1: Completed(order, item) ∧
Included(buyersWishList, item)
PRE2: Completed(order, item) ∧
¬Included(buyersRecoms, associatedItem)
The preconditions of the composite service are derived from
the preconditions of the two services that participate in the
composition
17. A Motivating Example: Service Composition (4)
Parallel Composition of Wish List and Recommendations
Update Services
PRE: Completed(order, item) ∧
Included(buyersWishList, item) ∧
¬Included(buyersRecoms, associatedItem)
Postconditions must be derived similarly, by considering the
postconditions of the participating services (more on
deriving composite specifications in the following section)
18. A Motivating Example: Service Composition (5)
Parallel Composition of Wish List and Recommendations Update Services
PRE: Completed(order, item) ∧
Included(buyersWishList, item) ∧
¬Included(buyersRecoms, associatedItem)
POST1: ¬Included′(buyersWishList, item) ∧
∀x, y [(x ≠ buyersWishList ∨ y ≠ item) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)]
POST2: Included′(buyersRecoms, associatedItem) ∧
∀x, y [(x ≠ buyersRecoms ∨ y ≠ associatedItem) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)]
19. A Motivating Example: Service Composition (6)
Parallel Composition of Wish List and Recommendations Update Services
PRE: Completed(order, item) ∧
Included(buyersWishList, item) ∧
¬Included(buyersRecoms, associatedItem)
POST: ¬Included′(buyersWishList, item) ∧
Included′(buyersRecoms, associatedItem) ∧
∀x, y [(x ≠ buyersWishList ∨ y ≠ item) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [(x ≠ buyersRecoms ∨ y ≠ associatedItem) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)]
The effect postconditions and the frame axioms of the other
service are inconsistent: they cannot all be true at the same time.
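The inconsistency can be confirmed mechanically by enumerating every possible after-state for the two affected cells of Included. This is a minimal illustrative sketch; the Boolean encoding below is an assumption, not tooling from the slides.

```python
# Enumerate after-states for Included over the two relevant cells and
# show that no assignment satisfies the conjoined postconditions.
from itertools import product

# Before state (from PRE): item is on the wish list, not yet in recommendations
before = {"wl": True, "rec": False}

def satisfies_all(after):
    return (
        not after["wl"]                      # POST1: not Included'(wishList, item)
        and after["rec"]                     # POST2: Included'(recoms, assocItem)
        and after["rec"] == before["rec"]    # frame axiom of service 1
        and after["wl"] == before["wl"]      # frame axiom of service 2
    )

models = [a for a in ({"wl": p, "rec": q}
                      for p, q in product([False, True], repeat=2))
          if satisfies_all(a)]
print(models)   # [] -- no after-state satisfies all four conjuncts
```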
20. Addressing the Frame Problem (1)
The frame problem may indeed make Web Service
specifications
– lengthier, more complex
– inconsistent, in the case of Web Service composition
To solve it, we adopt the solution of Explanation Closure
axioms
– Has been applied to procedure specifications, conceptually close to
Web Service specifications
– Is expressed in first-order predicate logic, suitable for current Semantic
Web Service frameworks such as OWL-S
21. Addressing the Frame Problem (2)
Frame Axioms: Procedure-oriented perspective
– State what predicates or functions each procedure (service) does not
change
Explanation Closure Axioms: State-oriented perspective
– State which procedures (services) change each predicate or function
– Extend first-order predicate logic with:
- Special predicate Occur, of arity 1 and special variable α
- Occur(α) is true iff the service denoted by variable α has executed
successfully
– Explanation Closure Axioms are also known as Change Axioms
23. Expressing Change Axioms (2)
We need to provide a change axiom for each distinct
function/predicate, stating which service execution leads to it
changing
Some postconditions may be removed, since the knowledge they
encode is captured by the change axioms
POSTCONDITIONS:
balance′(account) = balance(account) − A ∧ balance′(account) ≥ 0
∀α ∀x [balance(x) ≠ balance′(x) → Occur(α) ∧
α = MoneyWithdrawal(x, A)]
∀α ∀x, y [Valid(x, y) ≢ Valid′(x, y) → Occur(α) ∧
α = MoneyWithdrawal(x, A) ∧
withdrawalTotal′(x) ≥ DL]
∀α ∀x, y [Warn(x, y) ≢ Warn′(x, y) → Occur(α) ∧
α = MoneyWithdrawal(x, A) ∧
DL − withdrawalTotal′(x) ≤ W]
MoneyWithdrawal(x, A) represents the execution of the Money
Withdrawal service, to withdraw an amount A from account x.
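A change axiom of this shape can be checked against a concrete state transition. The sketch below is hypothetical (state encoding, function name, and values are assumptions) and covers only the Valid axiom.

```python
# Minimal sketch: checking the Valid change axiom against one observed
# transition. The encoding is illustrative, not from the slides' tooling.

def valid_change_axiom_holds(before, after, occurred, total_after, DL=500):
    """Valid != Valid' -> Occur(MoneyWithdrawal) and withdrawalTotal' >= DL."""
    if before["valid"] != after["valid"]:        # Valid changed
        return occurred and total_after >= DL    # must have a qualifying cause
    return True                                  # unchanged: vacuously true

# A withdrawal that crosses the daily limit and bans the card satisfies it:
assert valid_change_axiom_holds(
    {"valid": True}, {"valid": False}, occurred=True, total_after=520)
# A change of Valid with no qualifying cause violates the axiom:
assert not valid_change_axiom_holds(
    {"valid": True}, {"valid": False}, occurred=False, total_after=100)
print("change axiom checks passed")
```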
24. Expressing Change Axioms (3)
Now let’s deal with the parallel composition specification from the
example:
PRE: Completed(order, item) ∧
Included(buyersWishList, item) ∧
¬Included(buyersRecoms, associatedItem)
POST: ¬Included′(buyersWishList, item) ∧
Included′(buyersRecoms, associatedItem) ∧
∀x, y [(x ≠ buyersWishList ∨ y ≠ item) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [(x ≠ buyersRecoms ∨ y ≠ associatedItem) →
(Included(x, y) ≡ Included′(x, y))] ∧
∀x, y [Completed(x, y) ≡ Completed′(x, y)]
25. Expressing Change Axioms (4)
A change axiom is provided for each predicate and redundant
postconditions are removed:
PRE: Completed(order, item) ∧
Included(buyersWishList, item) ∧
¬Included(buyersRecoms, associatedItem)
POST: ∀α ∀x, y [Included(x, y) ≢ Included′(x, y) → Occur(α) ∧
(α = WishListUpdate(x, y) ∨ α = RecomsUpdate(x, y))] ∧
∀α ∀x, y [Completed(x, y) ≢ Completed′(x, y) →
Occur(α) ∧ false]
26. Expressing Change Axioms (5)
In OWL-S, logic formulas and rules are expressed in the
Semantic Web Rule Language (SWRL)
– For change axioms, we use an extension of SWRL, SWRL-FOL
– SWRL-FOL provides constructs to express all kinds of first-order logic
formulas
- Occur can be expressed as a unary predicate which has the
meaning that its argument belongs to a certain OWL class.
- The variable α can be expressed as an individual variable.
27. Automatically Producing Change Axioms (1)
Given a service specification in OWL-S, we want to devise an
algorithm to produce the change axioms needed for the
specification to be complete
– The algorithm must handle both atomic and composite service
specifications
– A change axiom must be added for each predicate contained in the
specifications
– A predicate’s value should be considered changed by the service
execution, if
- It is negated in a precondition but not in a postcondition
- It is negated in a postcondition but not in a precondition
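The predicate-change heuristic above can be sketched in a few lines of Python. Encoding each specification as a map from predicate name to "appears negated?" is a simplification for illustration; the actual input of the algorithm is an OWL-S specification.

```python
# Sketch of the heuristic: a predicate's value is considered changed by the
# service if its sign (negated or not) differs between pre- and postconditions.
# The sign-map encoding of a specification is hypothetical.

def changed_predicates(pre, post):
    """pre/post: dicts mapping predicate name -> True if it appears negated."""
    changed = set()
    for p in set(pre) & set(post):
        if pre[p] != post[p]:       # negated on exactly one side
            changed.add(p)
    return changed

# Wish list / recommendations example: Included flips sign, Completed does not.
pre  = {"Included_wishList": False, "Included_recoms": True,  "Completed": False}
post = {"Included_wishList": True,  "Included_recoms": False, "Completed": False}
print(sorted(changed_predicates(pre, post)))
# ['Included_recoms', 'Included_wishList'] -- Completed needs no change axiom
```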
29. Learning Package Overview
Problem Description
Addressing the Frame Problem in Service Specifications
Automatic Derivation of Composite Specifications
Discussion and Summary
30. Another Motivating Example (1)
Based on the E-Government case study of S-Cube, we have
the following example: citizens submit applications to request
some government-related service, such as obtaining
government-issued documents.
The process below is followed:
The numbers denote the states before and after each
particular task in the process.
31. Another Motivating Example (2)
A possible specification for atomic services that implement the
tasks in the example process is:
Given these specifications and the description of the
composition schema, we want to derive a specification for the
composite service that implements the process
32. What constitutes a Specification for a
Composite Service?
The composite specification is directly linked to the
composition schema and the way the participating services
are orchestrated
– Which part of the participating services’ specifications should be
exposed?
- The complete specifications of all participating services?
- Only the preconditions of the services whose inputs are exposed
(and the postconditions of the services whose outputs are
exposed)?
The first choice may lead to over-specification, while the
second may lead to under-specification
We propose a derivation process that is based on structural
induction and attempts to construct the composite
specification using a bottom-up approach.
33. Calculating Pre/Postconditions for
Basic Control Constructs (1)
A first-order logic semantics for a service specification with
regard to its preconditions P and postconditions Q is:
where x and y are input and output variables, while si and so
denote the state before service execution and the state after a
successful execution, respectively
In order to be able to apply structural induction on any given
composition schema we need to express such specification
statements for all basic composition control constructs
34. Calculating Pre/Postconditions for
Basic Control Constructs (2)
Let’s consider the case of sequential composition
For the two services in the figure, the
following holds:
Sequence
This is equivalent to:
However, in this equation the internal variable z is exposed in the
precondition, which means it is not externally checkable; this is
not desirable
– We can eliminate z using the postcondition of A:
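The elimination of the internal variable can be illustrated on a toy sequence. The sketch assumes the standard sufficient precondition for sequencing, P_A ∧ ∀z (Q_A(x, z) → P_B(z)), and checks the quantifier by enumeration over a finite domain; QA and PB below are hypothetical toy conditions, not the services from the example.

```python
# Toy illustration of quantifying away the internal variable z in A;B.
# QA and PB are hypothetical stand-ins for A's postcondition / B's precondition.

def QA(x, z): return z == x + 1          # postcondition of A: z is x + 1
def PB(z):    return z > 0               # precondition of B: z must be positive

def derived_pre(x, domain):
    # forall z (QA(x, z) -> PB(z)), checked over a finite toy domain
    return all((not QA(x, z)) or PB(z) for z in domain)

domain = range(-10, 11)
ok = [x for x in range(-5, 6) if derived_pre(x, domain)]
print(ok)   # the composite precondition reduces to x >= 0: [0, 1, 2, 3, 4, 5]
```

Here the intermediate value z never appears in the derived precondition: the consumer only needs to check a condition on the external input x, as the slide requires.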
35. Calculating Pre/Postconditions for
Basic Control Constructs (3)
Now let’s consider the case of conditionals.
If C is true then A is executed,
otherwise B is executed. Hence:
Conditional
From this equation, we can deduce the following:
The next slide contains a table with pre/postconditions for all basic
control constructs. The Prover9 theorem prover was used to check
all necessary proofs.
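The conditional rule can likewise be validated by truth-table enumeration. The sketch below assumes the standard Hoare-logic form pre = (C → P_A) ∧ (¬C → P_B), which may differ in notation from the table referenced above.

```python
# Validate the conditional-composition precondition rule over all valuations.
# The rule used is the standard Hoare-logic form (an assumption of this sketch).
from itertools import product

def implies(p, q):
    return (not p) or q

for C, PA, PB in product([False, True], repeat=3):
    derived = implies(C, PA) and implies(not C, PB)   # (C->PA) and (not C->PB)
    branch_pre = PA if C else PB                      # precondition actually needed
    assert derived == branch_pre   # the derived precondition is exact here
print("conditional rule validated over all valuations")
```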
37. Deriving a Composite Specification (1)
Using the previous slide and the composition schema for the
process of the motivating example we can derive a
specification for the composite process
1. The parallel execution of CheckRequest and CheckPayment
can be specified as follows:
2. The sequence of Login and the above service leads to:
38. Deriving a Composite Specification (2)
3. Adding the Payment service to the sequence leads to:
4. The conditional execution of
CreateCertified and
CreateUncertified is specified
with the equation on the right:
39. Deriving a Composite Specification (3)
5. Combining the specifications from steps 3 and 4 to form the
complete sequence of the process yields the final composite
specification:
40. Learning Package Overview
Problem Description
Addressing the Frame Problem in Service Specifications
Automatic Derivation of Composite Specifications
Discussion and Summary
41. Discussion (1)
The approaches described in this presentation enable the
creation of service specifications that
– are free of the effects of the frame problem
– Effectively capture a composite process comprising basic control
constructs
Both approaches enrich service specifications for both atomic
and composite services and can, in principle, be combined in
a single Web service specification that possesses both of the
aforementioned characteristics.
42. Discussion (2)
There are several research directions that complement the
work presented. For instance, the closely associated
ramification and qualification problems pose issues such as:
– How do we include ramifications (knock-on and indirect effects) in a
specification?
– What if the solution to the frame problem precludes any indirect
effects?
– How is new knowledge assimilated in an existing specification? What
if it leads to an inconsistent specification?
43. Discussion (3)
As far as deriving composite specifications is concerned,
some issues worth exploring are:
– Simplifying the resulting specification by applying known equivalences
or by exploiting specific knowledge on the particular composite service
– Supporting more complex control constructs such as loops which may
involve approximating loop invariants for the case of Web services and
determining when the loop terminates
– Handling asynchronous calls, where the client invokes a service but
does not wait for its response, which may lead to differences in the
evaluation of postconditions, depending on when the response is
received.
44. Summary
The frame problem in service specifications can be addressed
using the approach of change axioms
Change axioms can be used to provide complete descriptions for
atomic and composite services containing all major composition
schemas and an algorithm for the automatic production of change
axioms was presented
As far as deriving specifications is concerned, the presented
approach attempts to construct the specification by using structural
induction based on derivation rules defined for most fundamental
control constructs
The resulting specification can be used to formally describe the
composite service in terms of its preconditions and postconditions
without requiring any knowledge of the internals of the composition,
allowing for an actual "black box" view of the whole process.