Here are the key elements extracted:
A = "line"
B = "File"
contains “line.ISIN”.
Rule Condition Structure CS1
Given order does not exist in System_A
And Is data correct? is equal to Yes
When System_A performs Create Order
Then order exists in System_A
CS1: If a requirement contains the keywords “Given”, “When”, “Then” followed by a condition/action/result, extract the condition, action, result.
R: Given order does not exist in System_A
And Is data correct? is equal to Yes
When System_A performs Create Order
Then order exists in System_A
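Rule CS1 amounts to a keyword scan over the requirement text. A minimal Python sketch of that scan follows; the function name and the returned dictionary shape are illustrative assumptions, not part of the rule definition:

```python
import re

def extract_cs1(requirement: str) -> dict:
    """Split a Given/When/Then requirement into condition, action, result (rule CS1)."""
    parts = {"condition": [], "action": [], "result": []}
    keyword_map = {"Given": "condition", "When": "action", "Then": "result"}
    current = None
    for line in requirement.splitlines():
        m = re.match(r"\s*(Given|When|Then|And)\s+(.*)", line)
        if not m:
            continue
        kw, rest = m.groups()
        if kw != "And":              # "And" continues the previous clause
            current = keyword_map[kw]
        if current:
            parts[current].append(rest)
    return parts

req = """Given order does not exist in System_A
And Is data correct? is equal to Yes
When System_A performs Create Order
Then order exists in System_A"""
print(extract_cs1(req))
```

Applied to the requirement above, this yields the same condition/action/result split shown in the R: lines.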
A Practical Approach to Building a Streaming Processing Pipeline for an Onlin... – Databricks
Yelp’s ad platform handles millions of ad requests every day. To generate ad metrics and analytics in real time, they built their ad event tracking and analysis pipeline on top of Spark Streaming. It allows Yelp to manage a large number of active ad campaigns and greatly reduce over-delivery. It also enables them to share ad metrics with advertisers in a more timely fashion.
This session will start with an overview of the entire pipeline and then focus on two specific challenges in the event consolidation part of the pipeline that Yelp had to solve. The first challenge concerns joining multiple data sources together to generate a single stream of ad events that feeds into various downstream systems; that involves solving several problems unique to real-time applications, such as windowed processing and handling of event delays. The second challenge concerns state management across code deployments and application restarts. Throughout the session, the speakers will share best practices for the design and development of large-scale Spark Streaming pipelines for production environments.
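The windowed join with event-delay handling described here can be illustrated with a toy in-memory consolidator. This is a sketch under assumed event shapes (`ad_id`, `ts` fields), not Yelp's Spark Streaming code:

```python
WINDOW = 60            # seconds an impression waits for its matching click
ALLOWED_LATENESS = 30  # extra grace period for clicks that arrive late

def consolidate(impressions, clicks):
    """Join click events to impressions on ad_id, tolerating bounded event delay."""
    by_ad = {i["ad_id"]: i for i in impressions}
    joined, dropped = [], []
    for c in clicks:
        imp = by_ad.get(c["ad_id"])
        if imp and c["ts"] - imp["ts"] <= WINDOW + ALLOWED_LATENESS:
            joined.append({"ad_id": c["ad_id"], "delay": c["ts"] - imp["ts"]})
        else:
            dropped.append(c)  # unmatched or too late: route to a correction path
    return joined, dropped
```

The allowed-lateness constant is the key design choice: it trades memory (how long join state is kept) against completeness (how many delayed events still match).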
In the cloud generation era, the constant activity around workloads and containers creates more vulnerabilities than an organization can keep up with. Using legacy security vendors doesn't set you up for success in the cloud. You’re likely spending undue hours chasing, triaging and patching a seemingly endless stream of cloud vulnerabilities with little prioritization.
Join us for this live webinar as we detail how to streamline host and container vulnerability workflows for your software teams wanting to build fast in the cloud. We'll be covering how to:
Get visibility into active packages and associated vulnerabilities
Reduce false positives by 98%
Reduce investigation time by 30%
Spot a legacy vendor looking to do some cloud washing
Ingesting streaming data for analysis in Apache Ignite (StreamSets theme) – Tom Diederich
Apache Ignite provides a distributed platform for a wide variety of workloads, but often the issue is simply in getting data into the database in the first place. The wide variety of data sources and formats presents a challenge to any data engineer; in addition, 'data drift', the constant and inevitable mutation of the incoming data's structure and semantics, can break even the most well-engineered integration.
This session, aimed at data architects, data engineers and developers, will explore how we can use the open source StreamSets Data Collector to build robust data pipelines. Attendees will learn how to collect data from cloud platforms such as Amazon and Salesforce, devices, relational databases and other sources, continuously stream it to Ignite, and then use features such as Ignite's continuous queries to perform streaming analysis.
We'll start by covering the basics of reading files from disk, move on to relational databases, then look at more challenging sources such as APIs and message queues. You will learn how to:
* Build data pipelines to ingest a wide variety of data into Apache Ignite
* Anticipate and manage data drift to ensure that data keeps flowing
* Perform simple and complex ad-hoc queries in Ignite via SQL
* Write applications using Ignite to run continuous queries, combining data from multiple sources
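The "data drift" problem above can be sketched as a normalization step that maps drifted field names onto an expected schema before loading. This is an illustrative Python stand-in, not the StreamSets or Ignite API; the schema and alias table are invented for the example:

```python
EXPECTED = {"id": int, "temp_c": float}                # target schema: field -> type
ALIASES = {"temperature": "temp_c", "temp": "temp_c"}  # names that drifted upstream

def normalize(record: dict) -> dict:
    """Map drifted field names onto the expected schema; fill missing fields with None."""
    renamed = {ALIASES.get(k, k): v for k, v in record.items()}
    return {field: cast(renamed[field]) if field in renamed else None
            for field, cast in EXPECTED.items()}
```

For example, `normalize({"id": "7", "temperature": "21.5"})` yields `{"id": 7, "temp_c": 21.5}`: the renamed field and the string-typed values are both repaired, so downstream consumers keep flowing despite the drift.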
Building and deploying microservices with event sourcing, CQRS and Docker (Me... – Chris Richardson
In this talk we share our experiences developing and deploying a microservices-based application. You will learn about the distributed data management challenges that arise in a microservices architecture. We will describe how we solved them using event sourcing to reliably publish events that drive eventually consistent workflows and update CQRS-based views. You will also learn how we build and deploy the application using a Jenkins-based deployment pipeline that creates Docker images that run on Amazon EC2.
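The core flow described here, persist the event first, then let a separate consumer update the CQRS view, can be sketched minimally. All names (`record`, `project`, the account/amount event shape) are illustrative assumptions, not the talk's code:

```python
events = []        # append-only event store
balance_view = {}  # CQRS read model; updated asynchronously in a real system

def record(event):
    """Reliably persist the domain event before anything else happens."""
    events.append(event)

def project(event):
    """Read-model consumer that keeps the view eventually consistent."""
    acct = event["account"]
    balance_view[acct] = balance_view.get(acct, 0) + event["amount"]

record({"account": "a1", "amount": 100})
record({"account": "a1", "amount": -30})
for e in events:   # in production this loop is a subscription, not a batch
    project(e)
print(balance_view)  # {'a1': 70}
```

The gap between `record` and `project` is exactly where eventual consistency lives: readers of the view may briefly see stale balances until the consumer catches up.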
Improve Security Visibility with AlienVault USM Correlation Directives – AlienVault
At the heart of SIEM is the ability to correlate events from one or many sources into actionable alarms based on your security policies. AlienVault USM provides over 2,100 correlation directives developed by the AlienVault Labs team, plus the ability to create your own custom rules.
Join us for this customer training session covering how to:
Ensure you are using the latest and greatest built-in correlation directives from AlienVault Labs
Write your own correlation directives based on events from one or more sources
Turn correlation information into actionable alarms
Use correlations to enforce your security policies
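A correlation directive of the kind described, several related events from one source escalating into an alarm, might be sketched as follows. The rule shape (failed-login threshold in a sliding window) is an assumption for illustration, not AlienVault's directive syntax:

```python
from collections import defaultdict

THRESHOLD = 5  # auth failures from one source IP before alarming
WINDOW = 300   # sliding window, in seconds

def correlate(events):
    """Return the set of source IPs that exceed THRESHOLD failures within WINDOW."""
    buckets = defaultdict(list)
    alarms = set()
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] != "auth_failure":
            continue
        ts_list = buckets[e["src_ip"]] + [e["ts"]]
        # keep only timestamps still inside the sliding window
        buckets[e["src_ip"]] = ts_list = [t for t in ts_list if e["ts"] - t <= WINDOW]
        if len(ts_list) >= THRESHOLD:
            alarms.add(e["src_ip"])
    return alarms
```

A real directive would also carry priority, reliability, and follow-on levels; this sketch shows only the windowed counting at its core.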
Kafka summit SF 2019 - the art of the event-streaming app – Neil Avery
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking and Domain-Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for:
– Scalable payment processing
– Run it on rails: Instrumentation and monitoring
– Control flow patterns (start, stop, pause)
Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave this talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
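The control-flow pattern (start, stop, pause) the abstract lists can be sketched as a tiny state machine wrapped around a per-record handler. This is illustrative only, not a Kafka Streams API; class and method names are assumptions:

```python
class StreamProcessor:
    """Minimal start/pause/stop control flow around a per-record handler."""

    def __init__(self, handler):
        self.handler = handler
        self.state = "stopped"

    def start(self):
        self.state = "running"

    def pause(self):
        self.state = "paused"

    def stop(self):
        self.state = "stopped"

    def process(self, record):
        if self.state == "running":
            return self.handler(record)
        if self.state == "paused":
            return None  # a real system would buffer, or leave offsets uncommitted
        raise RuntimeError("processor is stopped")
```

The design point is that control signals change only the state field, never the handler, so the processing logic stays oblivious to operational concerns.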
The art of the event streaming application: streams, stream processors and sc... – confluent
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database. In this talk I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking and Domain-Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for:
– Scalable payment processing
– Run it on rails: Instrumentation and monitoring
– Control flow patterns (start, stop, pause)
Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave this talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
Automating End-to-End Business Scenario Testing – TechWell
Allstate Insurance had a problem. While thoroughly testing each of their more than thirty business systems, they were still failing to provide good service to their clients, agents, and internal customers. The reason was simple. Implementing end-to-end business processes requires more than just running data through a set of separate systems. While focusing on automating unit, integration, and system testing, they had failed to consider the need for system-to-system integration tests: tests that would verify that their business systems passed data correctly, met interface expectations, and synchronized properly. Monika Mehrotra and Sandra Alequin describe how Allstate, with the assistance of Infosys, supplemented their existing test suites with a set of end-to-end tests that provided deeper test coverage, demonstrating proper system operation from beginning to end. In addition, Allstate implemented a test environment that more closely resembled their production environment, discovering defects that had previously escaped into daily operation. Learn the importance of end-to-end, not just piecemeal, testing.
The Art of The Event Streaming Application: Streams, Stream Processors and Sc... – confluent
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for:
– Scalable payment processing
– Run it on rails: Instrumentation and monitoring
– Control flow patterns
Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave this talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
Kafka summit london 2019 - the art of the event-streaming app – Neil Avery
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for:
– Scalable payment processing
– Run it on rails: Instrumentation and monitoring
– Control flow patterns
Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave this talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and, most importantly, how it all fits together at scale.
Adaptive Data Cleansing with StreamSets and Cassandra (Pat Patterson, StreamS... – DataStax
Cassandra is a perfect fit for consuming high volumes of time-series data directly from users, devices, and sensors. Sometimes, though, when we consume data from the real world, systematic and random errors creep in. In this session, we'll see how to use open source tools like RabbitMQ and StreamSets Data Collector with Cassandra features such as User Defined Aggregates to collect, cleanse and ingest variable quality data at scale. Discover how to combine the power of Cassandra with the flexibility of StreamSets to implement adaptive data cleansing.
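The "systematic and random errors" the abstract mentions can be filtered with a simple adaptive rule: reject readings far from a running mean. The sketch below is hedged Python, not a Cassandra User Defined Aggregate; the threshold and warm-up parameters are assumptions:

```python
import statistics

def cleanse(readings, k=3.0, warmup=5):
    """Drop readings more than k standard deviations from the running mean.

    The first `warmup` readings are accepted unconditionally so the
    statistics have something to adapt from.
    """
    kept = []
    for x in readings:
        if len(kept) >= warmup:
            mu = statistics.mean(kept)
            sigma = statistics.stdev(kept) or 1e-9  # guard against zero spread
            if abs(x - mu) > k * sigma:
                continue  # systematic/random outlier: reject (or quarantine)
        kept.append(x)
    return kept
```

Because the mean and deviation are recomputed from accepted data, the filter adapts as the sensor's true baseline drifts, which is the "adaptive" part of the talk's title.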
About the Speaker
Pat Patterson Community Champion, StreamSets
Pat Patterson has been working with Internet technologies since 1997, building software and working with communities at Sun Microsystems, Huawei, Salesforce and StreamSets. At Sun, Pat was the community lead for OpenSSO, while at Huawei he developed cloud storage infrastructure software. A developer evangelist at Salesforce, Pat focused on identity, integration and IoT. Now community champion at StreamSets, Pat is responsible for the care and feeding of the StreamSets open source community.
Adaptive Data Cleansing with StreamSets and Cassandra – Pat Patterson
Presented at Cassandra Summit 2016.
Cassandra is a perfect fit for consuming high volumes of time-series data directly from users, devices, and sensors. Sometimes, though, when we consume data from the real world, systematic and random errors creep in. In this session, we'll see how to use open source tools like RabbitMQ and StreamSets Data Collector with Cassandra features such as User Defined Aggregates to collect, cleanse and ingest variable quality data at scale. Discover how to combine the power of Cassandra with the flexibility of StreamSets to implement adaptive data cleansing.
ETSI NFV#13 NFV resiliency presentation - Ali Kafel - Stratus
This white paper makes the case for why resiliency management needs to be in the software infrastructure. It covers:
- Fault Management and Resiliency Management
- Seamless Protection for Faster and Simpler Development
- Multiple Levels of Availability
- Speed of Service Restoration & Redundancy Restoration
- State Management
- Demonstrating Carrier Grade Availability and Resiliency
CQRS and Event Sourcing are popular architectural patterns that allow you to build effective event-driven microservices.
The basic idea of these patterns is to record each event that changes the state of the domain model in an event store.
This approach allows you to reduce service latency for any data scale, as well as be able to restore the system without losing any data.
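The restore property claimed here, rebuilding state without data loss, is a fold over the recorded event log. A minimal sketch with invented event names:

```python
from functools import reduce

def apply_event(state, event):
    """Pure transition: each recorded event moves the domain model forward."""
    kind, sku = event
    if kind == "ItemAdded":
        return {**state, sku: state.get(sku, 0) + 1}
    if kind == "ItemRemoved":
        return {**state, sku: state.get(sku, 0) - 1}
    return state  # unknown events are ignored, keeping replay forward-compatible

def restore(event_store):
    """Replay the full event log to recover current state, e.g. after a crash."""
    return reduce(apply_event, event_store, {})

log = [("ItemAdded", "sku-1"), ("ItemAdded", "sku-1"), ("ItemRemoved", "sku-1")]
print(restore(log))  # {'sku-1': 1}
```

Because `apply_event` is pure and the log is append-only, replay is deterministic; real systems add periodic snapshots so they need not fold the entire history on every restart.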
Ingesting streaming data for analysis in apache ignite (stream sets theme)Tom Diederich
Apache Ignite provides a distributed platform for a wide variety of workloads, but often the issue is simply in getting data into the database in the first place. The wide variety of data sources and formats presents a challenge to any data engineer; in addition, 'data drift', the constant and inevitable mutation of the incoming data's structure and semantics, can break even the most well-engineered integration.
This session, aimed at data architects, data engineers and developers, will explore how we can use the open source StreamSets Data Collector to build robust data pipelines. Attendees will learn how to collect data from cloud platforms such as Amazon and Salesforce, devices, relational databases and other sources, continuously stream it to Ignite, and then use features such as Ignite's continuous queries to perform streaming analysis.
We'll start by covering the basics of reading files from disk, move on to relational databases, then look at more challenging sources such as APIs and message queues. You will learn how to:
* Build data pipelines to ingest a wide variety of data into Apache Ignite
* Anticipate and manage data drift to ensure that data keeps flowing
* Perform simple and complex ad-hoc queries in Ignite via SQL
* Write applications using Ignite to run continuous queries, combining data from multiple sources
Building and deploying microservices with event sourcing, CQRS and Docker (Me...Chris Richardson
In this talk we share our experiences developing and deploying a microservices-based application. You will learn about the distributed data management challenges that arise in a microservices architecture. We will describe how we solved them using event sourcing to reliably publish events that drive eventually consistent workflows and pdate CQRS-based views. You will also learn how we build and deploy the application using a Jenkins-based deployment pipeline that creates Docker images that run on Amazon EC2.
Improve Security Visibility with AlienVault USM Correlation DirectivesAlienVault
At the heart of SIEM is ability to correlate events from one or many sources into actionable alarms based on your security policies. AlienVault USM provides over 2100 correlation directives developed by the AlienVault Labs team, plus the ability to create your own custom rules.
Join us for this customer training session covering how to:
Ensure you are using the latest and greatest built-in correlation directives from AlienVault Labs
Write your own correlation directives based on events from one or more sources
Turn correlation information into actionable alarms
Use correlations to enforce your security policies
Kafka summit SF 2019 - the art of the event-streaming appNeil Avery
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed realtime database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for Scalable payment processing Run it on rails: Instrumentation and monitoring Control flow patterns (start, stop, pause) Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and most importantly, how it all fits together at scale.
The art of the event streaming application: streams, stream processors and sc...confluent
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed realtime database. In this talk I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS). Building upon this, I explain how to build common business functionality by stepping through patterns for Scalable payment processing Run it on rails: Instrumentation and monitoring Control flow patterns (start, stop, pause) Finally, all of these concepts are combined in a solution architecture that can be used at enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and most importantly, how it all fits together at scale.
Automating End-to-End Business Scenario TestingTechWell
Allstate Insurance had a problem. While thoroughly testing each of their more than thirty business systems, they were still failing to provide good service to their clients, agents, and internal customers. The reason was simple. Implementing end-to-end business processes requires more than just running data through a set of separate systems. While focusing on automating unit, integration, and system testing, they had failed to consider the need for system-to-system integration tests―tests that would verify that their business systems passed data correctly, met interface expectations, and synchronized properly. Monika Mehrotra and Sandra Alequin describe how Allstate, with the assistance of Infosys, supplemented their existing test suites with a set of end-to-end tests that provided deeper test coverage, demonstrating proper system operation from beginning to end. In addition, Allstate implemented a test environment that more closely resembled their production environment, discovering defects that had previously escaped into daily operation. Learn the importance of end-to-end, not just piecemeal testing.
The Art of The Event Streaming Application: Streams, Stream Processors and Sc...confluent
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for: – Scalable payment processing – Run it on rails: Instrumentation and monitoring – Control flow patterns Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and most importantly, how it all fits together at scale.
Kakfa summit london 2019 - the art of the event-streaming appNeil Avery
Have you ever imagined what it would be like to build a massively scalable streaming application on Kafka, the challenges, the patterns and the thought process involved? How much of the application can be reused? What patterns will you discover? How does it all fit together? Depending upon your use case and business, this can mean many things. Starting out with a data pipeline is one thing, but evolving into a company-wide real-time application that is business critical and entirely dependent upon a streaming platform is a giant leap. Large-scale streaming applications are also called event streaming applications. They are classically different from other data systems; event streaming applications are viewed as a series of interconnected streams that are topologically defined using stream processors; they hold state that models your use case as events. Almost like a deconstructed real-time database.
In this talk, I step through the origins of event streaming systems, understanding how they are developed from raw events to evolve into something that can be adopted at an organizational scale. I start with event-first thinking, Domain Driven Design to build data models that work with the fundamentals of Streams, Kafka Streams, KSQL and Serverless (FaaS).
Building upon this, I explain how to build common business functionality by stepping through the patterns for: – Scalable payment processing – Run it on rails: Instrumentation and monitoring – Control flow patterns Finally, all of these concepts are combined in a solution architecture that can be used at an enterprise scale. I will introduce enterprise patterns such as events-as-a-backbone, events as APIs and methods for governance and self-service. You will leave talk with an understanding of how to model events with event-first thinking, how to work towards reusable streaming patterns and most importantly, how it all fits together at scale.
Adaptive Data Cleansing with StreamSets and Cassandra (Pat Patterson, StreamS...DataStax
Cassandra is a perfect fit for consuming high volumes of time-series data directly from users, devices, and sensors. Sometimes, though, when we consume data from the real world, systematic and random errors creep in. In this session, we'll see how to use open source tools like RabbitMQ and StreamSets Data Collector with Cassandra features such as User Defined Aggregates to collect, cleanse and ingest variable quality data at scale. Discover how to combine the power of Cassandra with the flexibility of StreamSets to implement adaptive data cleansing.
About the Speaker
Pat Patterson Community Champion, StreamSets
Pat Patterson has been working with Internet technologies since 1997, building software and working with communities at Sun Microsystems, Huawei, Salesforce and StreamSets. At Sun, Pat was the community lead for OpenSSO, while at Huawei he developed cloud storage infrastructure software. A developer evangelist at Salesforce, Pat focused on identity, integration and IoT. Now community champion at StreamSets, Pat is responsible for the care and feeding of the StreamSets open source community.
Adaptive Data Cleansing with StreamSets and CassandraPat Patterson
Presented at Cassandra Summit 2016.
ETSI NFV#13 NFV resiliency presentation - ali kafel - stratusAli Kafel
This white paper makes the case for:
Why Resiliency Management Needs to be in the Software Infrastructure. It Covers:
- Fault Management and Resiliency Management
- Seamless Protection for Faster and Simpler Development
- Multiple Levels of Availability
- Speed of Service Restoration & Redundancy Restoration
- State Management
- Demonstrating Carrier Grade Availability and Resiliency
CQRS and Event Sourcing are popular architectural patterns that allow you to build effective event-driven microservices.
The basic idea of these patterns is to record each event that changes the state of the domain model in an event store.
This approach allows you to reduce service latency at any data scale, and to restore the system without losing any data.
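To make the idea concrete, here is a minimal, framework-free sketch of event sourcing in Python: every state change is recorded as an appended event, and the current state is rebuilt by replaying the log from the beginning. The `Order` events and the `EventStore` API are purely illustrative, not the interface of any particular CQRS library.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    kind: str
    data: dict

@dataclass
class EventStore:
    log: list = field(default_factory=list)

    def append(self, event: Event) -> None:
        self.log.append(event)          # events are only ever appended, never updated

    def replay(self, apply: Callable) -> dict:
        state: dict = {}
        for event in self.log:          # fold the whole log into the current state
            state = apply(state, event)
        return state

def apply_order_event(state: dict, event: Event) -> dict:
    # Illustrative domain logic: track each order's lifecycle state.
    if event.kind == "OrderCreated":
        state[event.data["order_id"]] = "created"
    elif event.kind == "OrderSettled":
        state[event.data["order_id"]] = "settled"
    return state

store = EventStore()
store.append(Event("OrderCreated", {"order_id": "A1"}))
store.append(Event("OrderSettled", {"order_id": "A1"}))
current = store.replay(apply_order_event)   # {"A1": "settled"}
```

Because the log is the source of truth, the same replay mechanism that serves reads also restores the system after a crash, which is where the "no data loss" property comes from.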
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined, on-demand data workflows capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
An Enterprise Resource Planning system includes various modules that reduce any business's workload. Additionally, it organizes workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge to organize and improve your code review process.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient...Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Leveraging Natural-language Requirements for Deriving Better Acceptance Criteria from Models
1. SVV: Software Verification & Validation Lab (svv.lu)
Leveraging Natural-language Requirements for Deriving Better Acceptance Criteria from Models
Alvaro Veizaga (a), Mauricio Alferez (a), Damiano Torre (a), Mehrdad Sabetzadeh (a, b), Lionel Briand (a, b)
(a) University of Luxembourg, Luxembourg
(b) University of Ottawa, Canada
October 22nd, 2020
Elene Pitskhelauri
Clearstream, Luxembourg
2. Context: Acceptance Testing in Industry
Business Analysts write natural-language (NL) requirements. During requirements analysis, the requirements are captured as requirements models; during acceptance testing, Test Engineers derive Acceptance Criteria (AC) from those models.
[Figure: example requirements model for a T2S settlement platform. A UML activity diagram (Send Settlement Instruction, Receive and Generate Instruction, Validate Instruction, Run Matching Process, Settle Instruction, Send/Receive Notification, Process Instruction Rejection), a domain model with the T2SInstructionState enumeration (ToValidate, Valid, Matched, Settled, Rejected), and the actors Participant and SettlementPlatform.]
3. Our Earlier Work on Requirements Specification
AGAC
• Supports the automated generation of AC in Gherkin
• AC automatically generated from models
Rimay
• A language for writing functional requirements
• Helps write more precise requirements
[Figure: T2S settlement requirements model (activity diagram, domain model, actors), repeated from slide 2.]
4. AGAC: Automatic Generation of Acceptance Criteria
Business Analysts write NL requirements; requirements analysis produces models; AGAC then generates the Acceptance Criteria (AC) that Test Engineers use in acceptance testing.
[Figure: T2S settlement requirements model (activity diagram, domain model, actors), repeated from slide 2.]
5. AGAC Example
Requirements model: an activity with a decision "Is data correct?"; on [Yes], Create Order; on [No], Create Alert, then Send Alert, then Receive Alert.
Acceptance criteria generated by AGAC:
AC1:
Given order does not exist in System_A
And Is data correct? is equal to Yes
When System_A performs Create Order
Then order exists in System_A
AC2:
Given alert does not exist in System_A
And Is data correct? is equal to No
When System_A performs Create Alert
Then alert exists in System_A
//Gherkin scenarios for the send and receive actions not shown
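Generated ACs of this kind are meant to be executable against the system under test. As a toy illustration (this is not the AGAC tool; `SystemA` below is a hypothetical stand-in for the real system), the Given/When/Then steps of AC1 can be checked directly in code:

```python
# Toy stand-in for the system under test; names mirror the AC1 example above.
class SystemA:
    def __init__(self):
        self.orders = set()

    def perform(self, action, data_correct=True):
        # Create Order only succeeds when the incoming data is correct.
        if action == "Create Order" and data_correct:
            self.orders.add("order")

def check_ac1(system):
    # Given: order does not exist in System_A
    assert "order" not in system.orders
    # And: Is data correct? is equal to Yes
    data_correct = True
    # When: System_A performs Create Order
    system.perform("Create Order", data_correct)
    # Then: order exists in System_A
    assert "order" in system.orders
    return True

ok = check_ac1(SystemA())
```

In practice the Gherkin text would be bound to step definitions by a BDD runner rather than hand-written assertions, but the mapping from AC lines to checks is the same.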
6. Requirements Written in Rimay
A Rimay requirement is composed of labeled segments: an optional SCOPE or CONDITION STRUCTURE, an ACTOR, and a SYSTEM RESPONSE.
SCOPE + ACTOR + SYSTEM RESPONSE:
For each "line of the File", System must check that Share_Class_Identifier.Value contains "line.ISIN".
CONDITION STRUCTURE + ACTOR + SYSTEM RESPONSE:
When Transfer_System receives a File, Transfer_System must forward the File to System.
7. NL Requirements to Generate AC
Natural-language requirement:
When System_A creates an alert, then System_A must set the priority of the alert to "high".
Acceptance criteria:
AC1:
Given order does not exist in System_A
And Is data correct? is equal to Yes
When System_A performs Create Order
Then order exists in System_A
AC2:
Given alert does not exist in System_A
And Is data correct? is equal to No
When System_A performs Create Alert
Then alert exists in System_A
And the property priority of alert is equal to high
//Gherkin scenarios for the send and receive actions not shown
8. Main Goal
Combine the two lines of work: Business Analysts write NL requirements in Rimay; requirements analysis produces requirements models; AGAC generates the Acceptance Criteria (AC) that Test Engineers use for acceptance testing.
[Figure: T2S settlement requirements model (activity diagram, domain model, actors), repeated from slide 2.]
9. Main Goal
Enrich models with information extracted from NL requirements in order to generate better AC:
• Define a set of 13 information extraction rules (RQ1)
• Propose a systematic method that generates recommendations (RQ2)
• Verify that the recommendations are relevant to AC (RQ3)
10. Our Approach
Input: a requirements specification consisting of NL requirements and a requirements model.
1. Extract Information
2. Identify Model Elements to Enrich
3. Create Recommendations
4. Enrich Model
5. Generate Acceptance Criteria
Steps 1 to 3 produce recommendations; step 4 applies them to yield an enriched model; step 5 generates the Acceptance Criteria from the enriched model.
[Figure: overview of the five-step approach, illustrated on the T2S settlement requirements model.]
11. Our Approach
[Figure: overview of the five-step approach, repeated.]
12. Extract Information
RQ1: How can we extract AC-related information from NL requirements?
13 rules to extract AC-relevant information content from NL requirements
• Derived from a manual analysis of overlaps between meta-model elements and the element types in Rimay

Category             | # of rules
Scope                | 1
Condition Structure  | 7
Actor                | 2
System Response      | 3
13. Rule Scope S1
S1: If a prepositional phrase starts with "for each", and further mentions the type A of the collection that will be iterated over and an item B in the collection, then extract A and B.
R: For each "line of the File", System must check that Share_Class_Identifier.Value contains "line.ISIN".
Here the type A is File and the item B is line.
[Figure: activity fragment with a Check ISIN action iterating over a File object; annotations mark the type A and the item B.]
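The trigger phrase of rule S1 can be roughed out mechanically. The sketch below uses a plain regular expression that only covers the quoted form of the example above; the actual approach analyzes Rimay requirements with NLP rather than regexes:

```python
import re

# Naive stand-in for rule S1: match  For each "item of the Type"  and
# extract the collection type A and the iterated item B.
S1 = re.compile(r'For each\s+["“]?(\w+) of the (\w+)["”]?', re.IGNORECASE)

def extract_scope(requirement: str):
    """Apply S1: return the collection type A and item B, or None."""
    m = S1.search(requirement)
    if m is None:
        return None
    item_b, type_a = m.groups()
    return {"type_A": type_a, "item_B": item_b}

req = ('For each "line of the File", System must check that '
       'Share_Class_Identifier.Value contains "line.ISIN".')
scope = extract_scope(req)   # {"type_A": "File", "item_B": "line"}
```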
14. Rule Condition Structure C1
C1: If the verb phrase A in a When structure does not match the name of any of the actions preceding the traced action, then extract A.
R: When Transfer_System receives a File, Transfer_System must forward the File to System.
[Figure: activity fragment with Receive File and Forward File actions operating on a File object f.]
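Rule C1 hinges on matching a verb phrase against existing action names. A hedged sketch, where a crude normalization (lowercase, strip determiners, naive verb stemming) stands in for the lemmatization a real implementation would use:

```python
STOPWORDS = {"a", "an", "the"}

def normalize(phrase: str) -> tuple:
    """Crude normalization: lowercase, drop determiners, de-pluralize the verb."""
    words = [w.lower() for w in phrase.split() if w.lower() not in STOPWORDS]
    if words and words[0].endswith("s"):       # "receives" -> "receive"
        words[0] = words[0][:-1]
    return tuple(words)

def extract_c1(verb_phrase: str, preceding_actions: list):
    """Rule C1 sketch: return the verb phrase only if no preceding action matches it."""
    targets = {normalize(a) for a in preceding_actions}
    return None if normalize(verb_phrase) in targets else verb_phrase

# "receives a File" matches the existing action "Receive File": nothing to extract.
unmatched = extract_c1("receives a File", ["Receive File"])
# "validates a File" (hypothetical phrase) has no counterpart: C1 extracts it.
extracted = extract_c1("validates a File", ["Receive File"])
```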
15. Rule Actor A1
A1: If an actor A in an NL requirement does not match the name of any UML actor linked to the activity partition of the traced action, then extract A.
R: Before "8:00 am", every "calendar day", if System does not receive the File, then System must create an "Alert".
[Figure: activity fragment with a Create Alert action in the System partition.]
16. Rule System Response SR1
SR1: If a system response creates data A (e.g., Report, Instruction, Alarm), then extract A.
R: Before "8:00 am", every "calendar day", if System does not receive the File, then System must create an "Alert".
[Figure: activity fragment with a Create Alert action producing an Alert object node.]
17. Our Approach
[Figure: overview of the five-step approach, repeated.]
18. Identify Model Elements to Enrich
Requirement. When the Order_Issuer (hereafter known as OI) creates an Order of type Subscription_Order, then the OI must set the settlement_method of the Order to "FOP".
Text sequences from the requirement are compared against the existing model elements:
• Activity diagrams: the Create Order action in "act Create subscription order"
• Class diagrams (domain model): the Subscription_Order class (settlement_date : date)
• Use case diagrams (actors): the actor Order_Issuer
19. Identify Model Elements to Enrich
Six model elements to enrich are identified in requirement R1:
R1. When the Order_Issuer (hereafter known as OI) creates an Order of type Subscription_Order, then the OI must set the settlement_method of the Order to "FOP".
• Actor (A1): Order_Issuer; actor alias (A2): OI
• Object name (SR1): Order; object type (SR2): Subscription_Order
• Property name (SR2): settlement_method; property value (SR3): "FOP"
[Figure: the enriched diagrams. The Create Order action gains an object node "Order : Subscription_Order" with settlement_method = "FOP"; the Subscription_Order class gains settlement_method : string; the use case diagram gains the actor Order_Issuer with alias "OI : Order_Issuer".]
20. Our Approach
20
Identify Model
Elements to Enrich
Enriched
Model
Acceptance
Criteria
2
3
Create
Recommendations
1 Extract Information
Enrich Model4
Generate
Acceptance Criteria
5
Recommendations
NL
Requirements
Model
Requirements
Specification
P:Participant T2S: Settlement Platform
InterruptibleActivityRegion1
Merge2
ActivityInitial
Sendsettlement
Instruction
Inx:T2SSettlementIns
State =ToValidate
Receive and
Generate InstructionpInx:Participant
SettlementIns
Validate Ins:Validate
Instruction
Inx:T2S
SettlementIns
Settle Instruction
Inx.SettlementDate >
T2S.CurrentDate
Inx.SettlementDate
starts
Merge1
Inx:T2SSettlementIns
State =Settled
Send
Notification
Inx:T2S
SettlementIns
RunMatchingProcess
X
days
passed
Inx:T2SSettlementIns
State =Matched
ProcessInstruction
Rejection
Inx:T2S
SettlementIns Inx:T2SSettlementIns
State =Valid
Inx:T2SSettlementIns
State =Rejected
«localPostcondition» Lp1:....
FlowFinal
Receive
notification
notif:
Participant
Notification
[No]
[Inx.State ==
Valid]
[Yes]
DomainModel(Classdiagram)
Participant Settlement Ins
T2SSettlement Ins
State:T2SInstruction State
Participant Notification
Reason:String [0..1]
Message:String
Settlement Instruction
SettlementDate:Date
«enumeratio...
T2SInstruction
State
ToValidate
Valid
Settled
Rejected
Matched
0..*Participant
Instruction
ucActors
Participant
«actor»
SettlementPlatform
isInitialised:Boolean
«Pre-condition»
{SettlementPlatform.allInstances()->forAll (t/t.isInitialised=true)}
[Figure: UML activity diagram with partitions P : Participant and T2S : SettlementPlatform. Actions: Send Settlement Instruction, Receive and Generate Instruction, Validate Instruction, Run Matching Process, Settle Instruction, Process Instruction Rejection, Send Notification, Receive Notification. Object nodes: Inx : T2SSettlementIns with State = ToValidate, Valid, Matched, Settled, or Rejected; pInx : ParticipantSettlementIns; notif : ParticipantNotification. Guards/events: [Inx.State == Valid] with [Yes]/[No] branches, Inx.SettlementDate > T2S.CurrentDate, "Inx.SettlementDate starts", "X days passed"; plus an interruptible activity region, merge nodes, a «localPostcondition», and a flow final node.
Domain model (class diagram): ParticipantSettlementIns; T2SSettlementIns (State : T2SInstructionState); ParticipantNotification (Reason : String [0..1], Message : String); SettlementInstruction (SettlementDate : Date); «enumeration» T2SInstructionState {ToValidate, Valid, Settled, Rejected, Matched}; a Participant holds 0..* instructions.
Actors (ucActors): Participant; «actor» SettlementPlatform (isInitialised : Boolean).
«Pre-condition» {SettlementPlatform.allInstances()->forAll(t | t.isInitialised = true)}]
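The diagram's OCL pre-condition requires every SettlementPlatform instance to be initialised before the activity may run. As a minimal sketch (the Python class and instance registry are assumptions for illustration, not part of the modeled system), the same invariant can be expressed as a check over all known instances:

```python
# Sketch: the OCL pre-condition
#   SettlementPlatform.allInstances()->forAll(t | t.isInitialised = true)
# expressed as a Python check. The class and its instance registry are
# hypothetical stand-ins for OCL's allInstances().

class SettlementPlatform:
    _instances = []  # stand-in for OCL allInstances()

    def __init__(self, is_initialised):
        self.is_initialised = is_initialised
        SettlementPlatform._instances.append(self)

    @classmethod
    def precondition_holds(cls):
        # forAll(t | t.isInitialised = true)
        return all(t.is_initialised for t in cls._instances)

t2s = SettlementPlatform(is_initialised=True)
print(SettlementPlatform.precondition_holds())  # → True
```

Adding a single uninitialised instance makes the check fail, mirroring OCL's universal quantification.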
21. Create Recommendations

Recommendations on how to enrich the model elements:

ID      Description                                              Rule
Rec. 4  Add the property “settlement method” to the object node  SR2
        of type “Subscription Order”
Rec. 5  Set the “settlement_method” property’s value to “FOP”    SR2
…       …                                                        …

[Figure: excerpts of the target artifacts — Activity Diagrams: “act Create subscription order” with action Create Order and object node Order : Subscription_Order (settlement_date : date, settlement_method : string = “FOP”); Class Diagrams (Domain Model): the Subscription_Order class; Use Case Diagrams (Actors): «actor» Order_Issuer (OI : Order_Issuer) and actors A1, A2. Elements trace to requirements SR1–SR3.]
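A recommendation pairs a model-editing operation with the requirement that motivates it. A minimal sketch of applying Rec. 4 and Rec. 5 to a toy model representation (the dict-based model and helper functions are assumptions for illustration, not the authors' implementation):

```python
# Toy model representation (hypothetical): an object node type and its
# properties. Rec. 4 adds a property; Rec. 5 sets its value.

model = {
    "Subscription_Order": {              # object node in the activity diagram
        "properties": {"settlement_date": None},
    }
}

def add_property(model, node_type, prop):
    """Rec. 4: add a property to the object node of the given type."""
    model[node_type]["properties"].setdefault(prop, None)

def set_property_value(model, node_type, prop, value):
    """Rec. 5: set the property's value."""
    model[node_type]["properties"][prop] = value

add_property(model, "Subscription_Order", "settlement_method")
set_property_value(model, "Subscription_Order", "settlement_method", "FOP")
print(model["Subscription_Order"]["properties"]["settlement_method"])  # → FOP
```

Keeping the rule ID (here SR2) attached to each operation is what preserves traceability from the enriched element back to the NL requirement.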
22. Our Approach

Inputs (Requirements Specification): NL Requirements and Model.

1. Extract Information
2. Identify Model Elements to Enrich
3. Create Recommendations → Recommendations
4. Enrich Model → Enriched Model
5. Generate Acceptance Criteria → Acceptance Criteria
23. Enrich Model

RQ2: How can we systematically enrich models with the (AC-related) information from NL requirements?

Enrich the model elements following the recommendations:

ID      Description                                              Rule
Rec. 4  Add the property “settlement method” to the object node  SR2
        of type “Subscription Order”
Rec. 5  Set the “settlement_method” property’s value to “FOP”    SR2
…       …                                                        …

[Figure: the enriched artifacts — Activity Diagrams: “act Create subscription order” with action Create Order and object node Order : Subscription_Order (settlement_date : date, settlement_method : string = “FOP”); Class Diagrams (Domain Model): the Subscription_Order class; Use Case Diagrams (Actors): «actor» Order_Issuer (OI : Order_Issuer) and actors A1, A2. Elements trace to requirements SR1–SR3.]
24. Our Approach

Inputs (Requirements Specification): NL Requirements and Model.

1. Extract Information
2. Identify Model Elements to Enrich
3. Create Recommendations → Recommendations
4. Enrich Model → Enriched Model
5. Generate Acceptance Criteria → Acceptance Criteria
25. Generate Acceptance Criteria

[Figure: the enriched activity diagram “act Create subscription order” (Create Order, Order : Subscription_Order with settlement_method = “FOP”, OI : Order_Issuer, actors A1/A2, requirements SR1–SR3) from which the AC are generated.]

Generation of AC:

@Intent Create
@Requirement_Id: R1
Scenario: Create an Order
Given an Order of type Subscription_Order does not exist in OI of type Order_Issuer
When OI Create Order,
Then Order exists in OI
And the property settlement_method of Order is equal to FOP
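The scenario above is a template instantiated from the enriched model: actor, object, action, and property values are read from the model elements. An illustrative sketch of that rendering step (the function and its parameters are assumptions, not the authors' tool):

```python
# Hypothetical renderer: fills the Gherkin template in the shape shown on
# the slide from an actor, an object node, an action, and the enriched
# property values.

def generate_ac(actor, actor_type, obj, obj_type, action, properties):
    lines = [
        f"Scenario: {action} an {obj}",
        f"Given an {obj} of type {obj_type} does not exist "
        f"in {actor} of type {actor_type}",
        f"When {actor} {action} {obj},",
        f"Then {obj} exists in {actor}",
    ]
    # Property values added by model enrichment become extra "And" steps.
    for prop, value in properties.items():
        lines.append(f"And the property {prop} of {obj} is equal to {value}")
    return "\n".join(lines)

ac = generate_ac("OI", "Order_Issuer", "Order", "Subscription_Order",
                 "Create", {"settlement_method": "FOP"})
print(ac)
```

Without the enrichment step, the `properties` dict would be empty and the final "And" step — the part the case study found most valuable — would be missing.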
26. Empirical Evaluation

RQ3: Are our recommendations for model enrichment useful in practice?

• Assess whether financial analysts benefit from our solution
• We conducted a case study in collaboration with Clearstream Luxembourg (Investment Fund Services)
27. Case Study Preparation

Q1) Are the recommendations to enrich the model useful to generate better AC?

Setup: starting from the requirements specification, 27 recommendations were produced; AC were generated from the original model and, after enriching the model, from the enriched model. Five domain experts compared the two sets of AC and answered Q1.
28. Results

Answer to Q1: the experts found 89% of the recommendations (24 out of 27) relevant for generating better Acceptance Criteria.

Question  Yes  No
Q1        24   3

TPs  FPs  FNs  Precision %  Recall %
24   3    0    89           100
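The precision and recall figures follow directly from the expert judgments, treating each relevant recommendation as a true positive:

```python
# Precision/recall behind the table: 24 recommendations judged relevant
# (TPs), 3 judged not relevant (FPs), none missed (FNs).
tp, fp, fn = 24, 3, 0
precision = tp / (tp + fp)   # 24/27 ≈ 0.889
recall = tp / (tp + fn)      # 24/24 = 1.0
print(round(precision * 100), round(recall * 100))  # → 89 100
```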
29. Results – Model Elements

Model Element        Original  Enriched  % Increase
Actions              22        24        9.1%
Events               1         3         200%
Objects              11        15        36.4%
Decision nodes       8         9         12.5%
Fork and join nodes  2         3         50%
Property values      0         75        N/A

The number of instances of all element types increases in the enriched model.
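The "% Increase" column is the relative growth (enriched − original) / original, which also explains the N/A for property values (the original count is zero):

```python
# Reproducing the "% Increase" column of the table above.
rows = {
    "Actions": (22, 24),
    "Events": (1, 3),
    "Objects": (11, 15),
    "Decision nodes": (8, 9),
    "Fork and join nodes": (2, 3),
}
for name, (orig, enriched) in rows.items():
    pct = (enriched - orig) / orig * 100   # undefined when orig == 0 → N/A
    print(f"{name}: {pct:.1f}%")
```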
30. Results – AC Details

The enriched model leads to the generation of more precise and complete AC.

AC Details         Original  Augmented  % Increase
Pre-conditions     432       535        22.1%
Post-conditions    325       2262       596%
Gherkin scenarios  156       191        22.4%
32. Conclusions

• Generating AC exclusively from models would miss critical information that is available only in NL requirements
• Our industry partner confirmed that the AC resulting from our approach are more precise and complete
• Take-home message: we need to consider both models and NL requirements simultaneously to generate good AC
33. Future Work

• Include semantic analysis for better tracing of NL requirements to models
• Investigate how our approach can be applied to other domains and information systems that are commonly modeled using UML
34. Software Verification & Validation (SVV) Lab

Leveraging Natural-language Requirements for Deriving Better Acceptance Criteria from Models

Alvaro Veizaga (a), Mauricio Alferez (a), Damiano Torre (a), Mehrdad Sabetzadeh (a,b), Lionel Briand (a,b)
(a) University of Luxembourg, Luxembourg
(b) University of Ottawa, Canada

Elene Pitskhelauri — Clearstream, Luxembourg

October 22nd, 2020