The document outlines Vincenzo Ferme's research on automating performance testing for continuous software development environments. It discusses the context of continuous development lifecycles and DevOps practices, and how performance testing is rarely applied in these processes. It then presents the state of the art in declarative performance engineering and the challenges of defining and executing performance tests. The document outlines the problem statement and research goals, which include how to specify performance tests and automate their execution in continuous software development lifecycles. The main contributions are summarized as developing an automation-oriented performance tests catalog, the BenchFlow declarative domain-specific language for specifying tests, and the BenchFlow model-driven framework for executing experiments.
A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments - Vincenzo Ferme
Software performance testing is an important activity to ensure quality in continuous software development environments. Current performance testing approaches are mostly based on scripting languages and frameworks where users implement, in a procedural way, the performance tests they want to issue to the system under test. However, existing solutions lack support for explicitly declaring the performance test goals and intents. Thus, while it is possible to express how to execute a performance test, its purpose and applicability context remain implicitly described. In this work, we propose a declarative domain-specific language (DSL) for software performance testing and a model-driven framework that can be programmed using the mentioned language and drive the end-to-end process of executing performance tests. Users of the DSL and the framework can specify their performance intents by relying on a powerful goal-oriented language, where standard (e.g., load tests) and more advanced (e.g., stability boundary detection and configuration tests) performance tests can be specified starting from templates. The DSL and the framework have been designed to be integrated into a continuous software development process and validated through extensive use cases that illustrate the expressiveness of the goal-oriented language, and the powerful control it enables on the end-to-end performance test execution to determine how to reach the declared intent.
My talk from The 9th ACM/SPEC International Conference on Performance Engineering (ICPE 2018). Cite us: https://dl.acm.org/citation.cfm?id=3184417
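To give a flavour of the goal-oriented specification style described in the abstract, the sketch below shows what a minimal declarative load-test definition could look like. The field names loosely follow the BenchFlow test model presented later in the deck (version, name, sut, goal, load function, workloads); the exact nesting and all concrete values are illustrative assumptions, not the normative DSL syntax.

    version: '1'
    name: checkout-load-test
    description: Verify the checkout service meets its latency target under the expected load
    sut:
      name: checkout-service            # system under test (hypothetical service name)
    configuration:
      goal:
        type: load                      # declared intent: a standard load test
      load_function:
        users: 100                      # simulated users
        ramp_up: 2m
        steady_state: 10m
        ramp_down: 1m
      quality_gates:
        response_time_p95: 300ms        # hypothetical gate: the test fails if exceeded
    workloads:
      browse_and_buy:
        popularity: 100%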
Declarative Performance Testing Automation - Automating Performance Testing for the DevOps Era
1. Università della Svizzera italiana, Software Institute
Declarative Performance Testing Automation
Vincenzo Ferme
Committee Members:
Internal: Prof. Walter Binder, Prof. Mauro Pezzè
External: Prof. Lionel Briand, Prof. Dr. Dr. h. c. Frank Leymann
Research Advisor:
Prof. Cesare Pautasso
Automating Performance Testing
for the DevOps Era
9. 2
Outline
‣ Context
‣ State of the Art & Declarative Performance Engineering
‣ Problem Statement & Research Goals
‣ Main Contributions
‣ Evaluations & Overview of Case Studies
‣ Open Challenges
‣ Career and Contributions
‣ Concluding Remarks and Highlights
11. 4
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: the C.S.D.L.: Developers, Testers, Architects → Repo → CI Server → CD Server → Production]
12. 5
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Charts from the CNCF Survey 2020: "How often do you check in code?" (the majority of respondents, 53%, check in code multiple times), "How often are your release cycles?", cumulative growth in commits by quarter (Q1 2015-Q4 2019)]
15. 6
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Containers: 92%
19. 7
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Time to Market
Fast feedback-loop
Scalability and Availability
Fewer Production Errors
22. 8
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
Scalability and Availability
3rd Party Performance
Match Performance Requirements
23. 9
Context
Continuous Software Development Lifecycles and DevOps ➤ Performance Engineering ➤ Performance Testing and DevOps
[Diagram: Developers, Testers, Architects → Repo → CI Server; Continuous Changes, Continuous Test Execution]
"Only conducting performance testing at the conclusion of system or functional testing is like conducting a diagnostic blood test on a patient who is already dead." (Scott Barber)
24. 10
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Performance Testing is Rarely Applied in DevOps Processes
[Bezemer et al., ICPE 2019]
Bezemer, C.-P., Eismann, S., Ferme, V., Grohmann, J., Heinrich, R., Jamshidi, P., Shang, W., van Hoorn, A., Villavicencio, M., Walter, J., and Willnecker, F. (2019). How is Performance Addressed in DevOps? In Proceedings of the 10th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 45-50.
27. 11
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
Complexity of Def. and Exec. [Streitz et al., 2018] [Leitner and Bezemer, 2017]
Slowness of Execution [Brunnert et al., 2015]
Lack of Native Support for CI/CD Tools [Leitner and Bezemer, 2017]
29. 12
Declarative Performance Engineering
"Enabling the performance analyst to declaratively specify what performance-relevant questions need to be answered without being concerned about how they should be answered." [Walter et al., 2016]
Developers, Testers, Architects, Performance Analyst … [Ferme and Pautasso, ICPE 2018]
[Walter et al., 2016] Jürgen Walter, André van Hoorn, Heiko Koziolek, Dusan Okanovic, and Samuel Kounev. Asking "What"?, Automating the "How"? - The Vision of Declarative Performance Engineering. In Proc. of ICPE 2016. 91-94.
[Ferme and Pautasso, ICPE 2018] Ferme, V. and Pautasso, C. (2018). A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments. In Proceedings of the 9th ACM/SPEC International Conference on Performance Engineering (ICPE), pages 261-272.
30. 13
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
DECLARE [Walter, 2018]
Proposes languages and tools for specifying performance concerns, and declaratively querying performance knowledge collected and modelled by different tools, with the objective of providing automated answers to the specified performance concerns.
32. 14
State of the Art
Perf. Testing and DevOps ➤ Declarative Perf. Engineering
ContinuITy [Schulz et al., 2020]
Focuses on dealing with the challenges of continuously updating performance tests, by leveraging performance knowledge of software systems collected and modelled from the software operating in production environments.
[Avritzer et al., 2020] [Okanovic et al., 2020] [Schulz et al., 2019]
34. 16
Problem Statement
To design new methods and techniques for the declarative specification of performance tests and their automation processes, and to provide models and frameworks enabling continuous and automated execution of performance tests, in particular referring to the target systems, target users and context of our work.
40. 19
Main Contributions
Overall Contribution ➤ Main Contributions Overview
A Declarative Approach for Performance Tests Execution Automation, enabling the continuous and automated execution of performance tests alongside the Continuous Software Development Lifecycle, and embracing DevOps goals by enabling the end-to-end execution of service-level performance tests, including S.U.T. lifecycle management.
43. 20
Main Contributions
Overall Contribution ➤ Main Contributions Overview
BenchFlow Declarative DSL
[Class diagram: BenchFlowTest (version: TestVersion, name: String, description: Option<String>, workloads: Map<String, Workload>, labels: Option<String>); «abstract» Workload (popularity: Option<Percent>); Sut (name: String, type: Option<SutType>, services_configuration: Option<Map<String, ServiceConfigurations>>); Goal (type: GoalType, stored_knowledge: Option<Boolean>); «enumeration» TestVersion (1, 1.1, 2, 3); BenchFlowTestConfiguration; DataCollection (only_declared: Boolean, services: Option<Map<String, ServerSideConfiguration>>, workloads: Option<Map<String, ClientSideConfiguration>>); LoadFunction (users: Option<Int>, ramp_up: Time, steady_state: Time, ramp_down: Time); TerminationCriteria (test: TestTerminationCriterion, experiment: ExperimentTerminationCriterion); QualityGates. Associations: configuration (1), sut (1), workload_name (1..N), goal (1), load_function (1), data_collection (0..1), termination_criteria (0..1), quality_gates (0..1)]
BenchFlow Model-driven Framework
[Diagram: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) → Goal Exploration → Experiment Generation → Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) → Experiment Execution → Result Analysis → Metrics / Failures; phases: Exploration, Execution, Analysis; outcomes: Success / Execution Errors]
Automation-oriented Performance Tests Catalog
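Read as a test definition, the model above could be instantiated roughly as follows. The keys mirror the classes and attributes in the diagram (configuration, goal, load_function, termination_criteria, data_collection, quality_gates, sut, workloads), while the nesting and the example values are assumptions made only for illustration.

    version: '1'                         # TestVersion
    name: wfms-load-test
    description: Load test of a workflow management system
    labels: ci-nightly
    sut:
      name: my-wfms                      # Sut.name (hypothetical); SutType and per-service configuration are optional
      type: wfms
    configuration:                       # BenchFlowTestConfiguration
      goal:
        type: load                       # GoalType (illustrative value)
        stored_knowledge: true           # reuse previously collected performance knowledge
      load_function:
        users: 50
        ramp_up: 1m
        steady_state: 15m
        ramp_down: 1m
      termination_criteria:
        test: max_time_30m               # TestTerminationCriterion (illustrative)
        experiment: first_failure        # ExperimentTerminationCriterion (illustrative)
      quality_gates:
        throughput_min: 200              # requests per second (illustrative)
      data_collection:
        only_declared: false
        services:
          my-wfms: { cpu: true, memory: true }   # ServerSideConfiguration (illustrative)
    workloads:
      invoice-approval:
        popularity: 100%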
45. 21
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
Fill a gap identified in the performance testing literature by contributing an automation-oriented performance test catalog, providing a comprehensive reference for properly identifying different kinds of performance tests and their automation requirements.
52. 22
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
- Assumptions on the S.U.T. maturity
- Expectations on the execution environment conditions
- Workload input parameters
- Required execution process
- Checks to be performed on the S.U.T.
- Measurements to be collected and metrics to be calculated
- Preliminary performance tests to be already executed
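As an illustration of how such a template can be filled in, a single catalog entry for a load test might be captured in the following structured form; the key names map onto the template dimensions above, and the entry content is a hedged sketch rather than the catalog's actual wording.

    test: Load Test
    sut_maturity: deployable release candidate with stable functional behaviour
    environment_expectations: production-like, isolated testbed
    workload_input_parameters: [simulated_users, ramp_up, steady_state, ramp_down, operation_mix]
    execution_process: apply the expected load for a fixed steady state and observe the S.U.T.
    sut_checks: [no functional errors, resources below saturation]
    measurements_and_metrics: [response_time, throughput, cpu, memory]
    preliminary_tests: [smoke_test, baseline_performance_test]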
59. 24
Automation-oriented Performance Tests Catalog
Why a Catalog? ➤ Catalog Template ➤ Included Performance Tests ➤ Examples
1. Baseline Performance Test
2. Unit Performance Test
3. Smoke Test
4. Performance Regression Test
5. Sanity Test
6. Load Test
7. Scalability Test
8. Elasticity Test
9. Stress Test
10. Peak Load Test
11. Spike Test
12. Throttle Test
13. Soak or Stability Test
14. Exploratory Test
15. Configuration Test
16. Benchmark Performance Test
17. Acceptance Test
18. Capacity or Endurance Test
19. Chaos Test
20. Live-traffic or Canary Test
21. Breakpoints Perf. Test
22. Failover or Recovery Test
23. Resiliency or Reliability Test
24. Snapshot-load Test
25. Volume or Flood Test
66. 26
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Model-driven Framework
[Diagram: Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files) → Goal Exploration → Experiment Generation → Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files) → Experiment Execution → Result Analysis → Metrics / Failures; phases: Exploration, Execution, Analysis; outcomes: Success / Execution Errors]
BenchFlow Declarative DSL
[Class diagram: BenchFlowTest, Workload, Sut, Goal, TestVersion, BenchFlowTestConfiguration, LoadFunction, DataCollection, TerminationCriteria, QualityGates, as in the Main Contributions overview]
[Ferme and Pautasso, ICPE 2018] [Ferme et al., BPM 2015] [Ferme and Pautasso, ICPE 2016]
73. 27
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL
Load Functions, Workloads, Simulated Users, Test Data, TestBed Management, Performance Data Analysis, Definition of Configuration Tests
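For the last element in the list, a configuration test could be declared by adding an exploration dimension over the S.U.T. configuration, as in the hedged sketch below; keys such as exploration_space and the candidate values are assumptions, not the documented syntax.

    configuration:
      goal:
        type: configuration              # hypothetical goal type for a configuration test
        exploration_space:               # hypothetical key: candidate S.U.T. configurations to compare
          my-wfms:
            JAVA_OPTS: ['-Xmx2g', '-Xmx4g']
            DB_POOL_SIZE: [10, 50]
      load_function:
        users: 50
        ramp_up: 1m
        steady_state: 10m
        ramp_down: 1m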
76. 28
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Declarative DSL
Integration in CSDL
Goal-Driven Performance Testing
SUT-awareness ("I know you")
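"Integration in CSDL" means the declared test can be triggered from the delivery pipeline like any other quality stage. The snippet below sketches this with a generic YAML CI job; the stage layout and the benchflow command line are hypothetical and only illustrate where the test bundle would enter the pipeline.

    # hypothetical CI job: run the declared performance test after build and functional tests
    performance-test:
      stage: test
      script:
        - benchflow run --test benchflow-test.yml    # hypothetical CLI invocation
      artifacts:
        paths:
          - performance-results/                     # metrics, failures and quality-gate outcomes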
82. 29
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Model-driven Framework
- Test Scheduling
- Deployment Infra.
- Manage S.U.T.
- Issue Workload
- Collect Data
- Analyse Data
83. 30
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Experiment Execution
- Exploration: starting from the Test Bundle (Test Suite / Test YAML + SUT Deployment Descriptor YAML + Files), Goal Exploration and Experiment Generation produce an Experiment Bundle (Experiment YAML + SUT Deployment Descriptor YAML + Files)
- Execution: the Experiment Bundle is executed, completing with Success or Errors
- Analysis: Result Analysis of the collected Metrics and Failures feeds back into Goal Exploration
86. 31
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Test model (class diagram):
- BenchFlowTest: version: TestVersion; name: String; description: Option<String>; workloads: Map<String, Workload>; labels: Option<String>
- «abstract» Workload (workload_name, 1..N): popularity: Option<Percent>
- BenchFlowTestConfiguration (configuration, 1)
- Sut (sut, 1): name: String; type: Option<SutType>; services_configuration: Option<Map<String, ServiceConfigurations>>
- Goal (goal, 1): type: GoalType; stored_knowledge: Option<Boolean>
- DataCollection (data_collection, 0..1): only_declared: Boolean; services: Option<Map<String, ServerSideConfiguration>>; workloads: Option<Map<String, ClientSideConfiguration>>
- LoadFunction (load_function, 1): users: Option<Int>; ramp_up: Time; steady_state: Time; ramp_down: Time
- TerminationCriteria (termination_criteria, 0..1): test: TestTerminationCriterion; experiment: ExperimentTerminationCriterion
- QualityGates (quality_gates, 0..1)
- «enumeration» TestVersion: 1, 1.1, 2, 3
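To connect the model to the concrete DSL, the following is a minimal sketch of a test specification in YAML. The top-level keys mirror the classes above; all names and values are illustrative placeholders, and since the transcribed examples lose indentation, the placement of sut, workloads, and data_collection relative to configuration is an assumption.

version: "3"
name: "example-load-test"              # BenchFlowTest.name
description: "illustrative skeleton"
labels: "example"
configuration:                         # BenchFlowTestConfiguration
  goal:                                # Goal (1)
    type: "load_test"
  load_function:                       # LoadFunction (1)
    users: 100
    ramp_up: 2m
    steady_state: 10m
    ramp_down: 2m
  termination_criteria: {}             # optional (0..1)
  quality_gates: {}                    # optional (0..1)
sut:                                   # Sut (1)
  name: "example-service"
workloads:                             # Map<String, Workload> (1..N)
  main_workload: {}
data_collection: {}                    # optional (0..1)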
92. 32
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Goal and Observe model (class diagram):
- Goal: type: GoalType; stored_knowledge: Option<Boolean>
- «enumeration» GoalType: LOAD, SMOKE, SANITY, CONFIGURATION, SCALABILITY, SPIKE, EXHAUSTIVE_EXPLORATION, STABILITY_BOUNDARY, CAPACITY_CONSTRAINTS, REGRESSION_COMPLETE, REGRESSION_INTERSECTION, ACCEPTANCE
- Observe (observe, 1)
- Exploration (exploration, 0..1)
- ServiceObserve (services, 0..N): service_name: List<ServiceMetric>
- WorkloadObserve (workloads, 0..N): workload_name: Option<List<WorkloadMetric>>; operation_name: Option<List<WorkloadMetric>>
- «enumeration» ServiceMetric: AVG_RAM, AVG_CPU, RESOURCE_COST, ...
- «enumeration» WorkloadMetric: AVG_RESPONSE_TIME, THROUGHPUT, AVG_LATENCY, ...
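The observe section is elided in the examples shown later, so here is a sketch of what it could look like, derived from the Observe model above: services map a service name to the ServiceMetric values to observe, and workloads map a workload (or operation) name to WorkloadMetric values. The service and workload names are made up, and the lowercase spelling of the metric literals is an assumption mirroring the "load_test" style used for goal types.

observe:
  services:
    service_a: ["avg_cpu", "avg_ram"]                    # ServiceObserve: service_name -> List<ServiceMetric>
    dbms_a: ["resource_cost"]
  workloads:
    main_workload: ["throughput", "avg_response_time"]   # WorkloadObserve: workload_name -> List<WorkloadMetric>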
96. 33
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Exploration model (class diagram):
- Exploration
- ExplorationSpace (exploration_space, 1): services: Option<Map<String, ServiceExplorationSpace>>
- LoadFunctionExplorationSpace (load_function, 0..1): users: Option<List<Int>>; users_range: Option<[Int,Int]>; users_step: Option<StepFunction[Int]>
- ServiceExplorationSpace (service_name, 0..*): resources: Option<Map<Resource, String>>; configuration: Option<Map<String, List<String>>>
- «abstract» Resource (resources, 0..2)
- Memory: values: Option<List<Bytes>>; range: Option<[Bytes,Bytes]>; step: Option<StepFunction[Bytes]>
- Cpu: values: Option<List<Millicores>>; range: Option<[Millicores,Millicores]>; step: Option<StepFunction[Millicores]>
- StepFunction[T]: operator: StepFunctionOperator; value: T
- «enumeration» StepFunctionOperator: PLUS, MINUS, MULTIPLY, DIVIDE, POWER
- ExplorationStrategy (exploration_strategy, 1): selection: SelectionStrategyType; validation: Option<ValidationStrategyType>; regression: Option<RegressionStrategyType>
- StabilityCriteria (stability_criteria, 0..1): services: Option<Map<String, ServiceStabilityCriterion>>; workloads: Option<Map<String, WorkloadStabilityCriterion>>
- ServiceStabilityCriterion (service_name, 0..*): avg_cpu: Option<StabilityCriterionSetting[Percent]>; avg_memory: Option<StabilityCriterionSetting[Percent]>
- WorkloadStabilityCriterion (workload_name, 0..*): max_mix_deviation: Percent
- StabilityCriterionSetting[T]: operator: StabilityCriterionCondition; value: T
- «enumeration» StabilityCriterionCondition: GREATHER_THAN, LESS_THAN, GREATHER_OR_EQUAL_THEN, LESS_OR_EQUAL_THEN, EQUAL
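The configuration-test example later in the deck elides stability criteria, so here is a sketch of a stability_criteria block inferred from the model above. The service and workload names are invented, and the exact YAML layout (beyond the field names in the model) is an assumption.

stability_criteria:
  services:
    service_a:
      avg_cpu:                       # ServiceStabilityCriterion, a StabilityCriterionSetting
        operator: "less_than"        # StabilityCriterionCondition
        value: "80%"
  workloads:
    main_workload:
      max_mix_deviation: "5%"        # WorkloadStabilityCriterion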
103. 35
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Quality Gates model (class diagram):
- QualityGate: services: Option<Map<String, ServiceQualityGate>>; workloads: Option<Map<String, WorkloadQualityGate>>; mean_absolute_error: Option<Percent>
- ServiceQualityGate (service_name, 0..*): gate_metric: ServiceMetric; condition: GateCondition; gate_threshold_target: String OR ServiceMetric; gate_threshold_minimum: Option<String OR ServiceMetric>
- WorkloadQualityGate (workload_name, 0..*): max_mix_deviation: Option<Percent>; max_think_time_deviation: Option<Percent>; gate_metric: Option<WorkloadMetric>; condition: Option<GateCondition>; gate_threshold_target: Option<String OR WorkloadMetric>; gate_threshold_minimum: Option<String OR WorkloadMetric>
- RegressionQualityGate (regression, 0..1): service: Option<String>; workload: Option<String>; gate_metric: ServiceMetric OR WorkloadMetric; regression_delta_absolute: Option<Time>; regression_delta_percent: Option<Percent>
- «enumeration» GateCondition: GREATHER_THAN, LESS_THAN, GREATHER_OR_EQUAL_THEN, LESS_OR_EQUAL_THEN, EQUAL, PERCENT_MORE, PERCENT_LESS
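Since the quality_gates section is also elided in the later examples, here is a sketch built from the model above. Service and workload names, metrics, and thresholds are illustrative, and the lowercase spelling of metric and condition literals is an assumption.

quality_gates:
  services:
    service_a:                           # ServiceQualityGate
      gate_metric: "avg_cpu"
      condition: "less_than"
      gate_threshold_target: "70%"
  workloads:
    main_workload:                       # WorkloadQualityGate
      max_mix_deviation: "5%"
      gate_metric: "avg_response_time"
      condition: "less_than"
      gate_threshold_target: "1s"
  regression:                            # RegressionQualityGate (0..1)
    workload: "main_workload"
    gate_metric: "avg_response_time"
    regression_delta_percent: "10%"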
109. 37
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Termination Criteria model (class diagram):
- TerminationCriteria
- TestTerminationCriterion (test, 0..1): max_time: Time; max_number_of_experiments: Option<Int>; max_failed_experiments: Option<Percent>
- TerminationCriterion (experiment, 0..1): max_number_of_trials: Int; max_failed_trials: Option<Percent>; services: Option<Map<String, ServiceTerminationCriterion>>; workloads: Option<Map<String, WorkloadTerminationCriterion>>
- ServiceTerminationCriterion (service_name, 0..*): confidence_interval_metric: ServiceMetric; confidence_interval_value: Float; confidence_interval_precision: Percent
- WorkloadTerminationCriterion (workload_name, 0..*): confidence_interval_metric: WorkloadMetric; confidence_interval_value: Float; confidence_interval_precision: Percent
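As with the other elided sections, here is a sketch of termination_criteria derived from the model above (values and names are illustrative, and the YAML nesting is an assumption):

termination_criteria:
  test:                                     # TestTerminationCriterion
    max_time: 2h
    max_number_of_experiments: 20
    max_failed_experiments: "20%"
  experiment:                               # experiment-level TerminationCriterion
    max_number_of_trials: 5
    max_failed_trials: "20%"
    workloads:
      main_workload:                        # WorkloadTerminationCriterion
        confidence_interval_metric: "avg_response_time"
        confidence_interval_value: 0.95
        confidence_interval_precision: "5%"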
111. 39
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
BenchFlow Experiment model (class diagram):
- BenchFlowExperiment: version: ExperimentVersion; name: String; description: Option<String>; workloads: Map<String, Workload>; labels: Option<String>
- BenchFlowExperimentConfiguration (configuration, 1)
- «abstract» Workload (workload, 1): popularity: Option<Percent>
- Sut (sut, 1): name: String; type: Option<SutType>; services_configuration: Option<Map<String, ServiceConfigurations>>
- SutVersion (version, 1)
- DataCollection (data_collection, 1): only_declared: Boolean; services: Option<Map<String, ServerSideConfiguration>>; workloads: Option<Map<String, ClientSideConfiguration>>
- LoadFunction (load_function, 1): users: Int; ramp_up: Time; steady_state: Time; ramp_down: Time
- ExperimentTerminationCriteria (termination_criteria, 1): max_time: Time
- TerminationCriterion (experiment, 0..1): max_number_of_trials: Int; max_failed_trials: Option<Percent>; services: Option<Map<String, ServiceTerminationCriterion>>; workloads: Option<Map<String, WorkloadTerminationCriterion>>
- «enumeration» ExperimentVersion: 1, 1.1, 1.2, 1.3, 1.4, 2, 2.1, 2.2, 3
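Experiment definitions are normally generated by the framework from the test definition, but a hand-written sketch makes the differences from the test model visible: no goal and no exploration, a fully resolved load function (users is a concrete Int), and a mandatory termination criterion. All names and values below are illustrative, and the placement of sut, workloads, and data_collection relative to configuration is an assumption.

version: "3"
name: "example-load-test-experiment-1"
description: "generated from example-load-test"
configuration:                        # BenchFlowExperimentConfiguration
  load_function:
    users: 100
    ramp_up: 2m
    steady_state: 10m
    ramp_down: 2m
  termination_criteria:               # ExperimentTerminationCriteria
    max_time: 1h
sut:
  name: "example-service"
workloads:
  main_workload: {}
data_collection: {}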
112. 40
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Continuous delivery pipeline: Checkout ➤ Build ➤ Unit Tests ➤ Integration Tests ➤ E2e Tests ➤ Smoke Tests ➤ Load Tests ➤ Acceptance Tests ➤ Regression Tests ➤ Deploy in Production, with the stages grouped into FUNCTIONAL TESTS and PERFORMANCE TESTS.
113. 41
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Test Suite model (class diagram):
- TestSuite: version: TestSuiteVersion; name: String; description: Option<String>
- Suite (suite, 1)
- Environment (environments, 0..N): name: String; skip_deploy: Option<Boolean>
- Test (tests, 1): include_labels: Option<List<Regex>>; paths: Option<List<String>>
- Trigger (triggers, 0..1): scheduled: Option<Boolean>
- Event (on, 0..N)
- Push: branches: Option<List<Regex>>
- PullRequest: contexts: Option<ContextType>; source_branches: Option<List<Regex>>; target_branches: Option<List<Regex>>
- Release: types: List<String>
- Deployment: names: List<String>
- QualityGate (quality_gates, 1): criterion: CriterionType; exclude: Option<List<String>>
- «enumeration» CriterionType: ALL_SUCCESS, AT_LEAST_ONE_SUCCESS
- «enumeration» TestSuiteVersion: 1, 1.1
- «enumeration» ContextType: HEAD, MERGE, ALL
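As a sketch of how a suite could be bound to repository events, based on the model above (branch patterns, labels, and environment names are illustrative, and the concrete YAML layout is an assumption):

version: "1.1"
name: "performance-suite"
suite:
  environments:
    - name: "staging"
      skip_deploy: false
  tests:
    include_labels: ["load_test", "configuration"]
  triggers:
    scheduled: false
    on:
      - push:
          branches: ["master", "release/.*"]
      - pull_request:
          contexts: "merge"
          target_branches: ["master"]
  quality_gates:
    criterion: "all_success"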
117. 42
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Test
119. 43
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
1 version: "3"
2 name: "Load Test"
3 description: "Example of Load Test"
4 labels: "load_test"
5 configuration:
6 goal:
7 type: "load_test"
8 # stored_knowledge: "false"
9 observe:
10 ...
11 load_function:
12 users: 1000
13 ramp_up: 5m
14 steady_state: 20m
15 ramp_down: 5m
16 termination_criteria:
17 ...
18 quality_gates:
19 ...
20 sut:
21 ...
22 workloads:
23 ...
24 data_collection:
25 # AUTOMATICALLY attached based on the observe section IF NOT specified
26 services:
27 ...
Load Test
(SUT architecture figure: Users interact with a Workflow Engine deployed on an Application Server; its components include the Core Engine, Process Navigator, Job Executor, Task Dispatcher, Service Invoker, Persistent Manager, and Transaction Manager. The engine executes a deployed process (A, B, C, D), invokes external Web Services, and persists state in the Instance Database on a DBMS.)
[Skouradaki et al., ICPE 2015]
[Ferme et al., BPM 2015]
[Ferme et al., CLOSER 2016]
[Skouradaki et al., BPM 2016]
[Ferme et al., BPM 2016]
[Ivanchikj et al., BPM 2017]
[Rosinosky et al., OTM 2018]
122. 45
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
1 version: "3"
2 name: "Configuration Test"
3 description: "Example of Configuration Test"
4 labels: "configuration"
5 configuration:
6 goal:
7 type: "configuration"
8 stored_knowledge: "true"
9 observe:
10 ...
11 exploration:
12 exploration_space:
13 services:
14 service_a:
15 resources:
16 cpu:
17 range: [100m, 1000m]
18 step: "*4"
19 memory:
20 range: [256Mi, 1024Mi]
21 step: "+768Mi"
22 configuration:
23 NUM_SERVICE_THREAD: [12, 24]
24 dbms_a:
25 resources:
26 cpu:
27 range: [100m, 1000m]
28 step: "*10"
29 memory:
30 range: [256Mi, 1024Mi]
31 step: "+768Mi"
32 configuration:
33 QUERY_CACHE_SIZE: 48Mi
34 exploration_strategy:
35 selection: "one_at_a_time"
36 load_function:
37 ...
38 termination_criteria:
39 ...
40 quality_gates:
41 ...
Configuration Test
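Reading the exploration space in the example: each step entry is a StepFunction applied within the declared range, so the resources presumably enumerate as sketched below (how the range endpoint is handled when a step overshoots it is not spelled out on the slide):

# service_a exploration dimensions (illustrative expansion)
# cpu:    100m, 400m          (the next step, 400m * 4 = 1600m, exceeds 1000m)
# memory: 256Mi, 1024Mi       (256Mi + 768Mi = 1024Mi)
# NUM_SERVICE_THREAD: 12, 24

These dimensions, together with those of dbms_a, define the candidate configurations visited according to the declared one_at_a_time selection strategy.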
(Figure: domain-metric-based continuous performance assessment, combining ContinuITy, BenchFlow, and Faban. Steps: (1) collection of operational data; (2) analysis of operational data into an operational profile and an empirical distribution of workload situations (relative mass over the number of users, with sampled workload situations up to 300 users); (3) experiment generation from a load test template and the architectural configurations; (4) experiment execution; (5) domain metric calculation, yielding baseline and test results per architectural configuration with pass/fail verdicts on a domain metric dashboard.)
[Avritzer et al., JSS 2020]
[Avritzer et al., ICPE 2019]
[Avritzer et al., ECSA 2018]
125. 47
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Goal exploration life cycle (state machine):
- States: Ready ➤ Running ➤ Terminating ➤ Terminated, with final outcomes Goal Reached, Partially Complete, or Completed with Failure; a Waiting state is entered when user input is needed and left when it is received; the test terminates when the user terminates it or [execution time > max_time], and can be user paused.
- Activities while Running: Add Stored Knowledge; Determine Exploration Strategy; Determine and Execute Initial Validation Set (Experiment Life Cycle); Determine and Execute Experiments (Experiment Life Cycle); Handle Experiment Result (when experiment results are available); Derive Prediction Function; Validate Prediction Function; Remove Non Reachable Experiments (also for the Validation Set); Validate Termination Criteria; Check Quality Gates.
- Decision guards: [regression model] / [no regression model]; [can reach goal] / [cannot reach goal]; [Acceptable Prediction Error] / [Not Acceptable Prediction Error]; [experiments remaining] / [all experiments executed]; [validation set complete] / [validation set NOT complete]; [# executed experiments >= max_number_of_experiments]; [quality gates pass] / [failed quality gates].
127. 49
Declarative Approach for Performance Tests Execution Automation
Elements & Requirements ➤ Approach Overview ➤ Test & Experiment Model ➤ Test Suite Model ➤ Defining a Test ➤ Executing a Test
Specification processing: Entity YAML Specification ➤ Parse to YAML Object (Parse + Syntactic Validation) ➤ Semantic Validation ➤ Entity Representation, raising an Exception if parsing or validation fails.
167. 61
Evaluations
Overview ➤ Expert Review ➤ Summative Evaluation ➤ Iterative Review, Case Studies ➤ Comparative Evaluation
Structure
- Introduction
- Background check
- Overview of the approach
- Multiple-choice tasks related to the Research Questions
- Questions on the overall approach
- Questions for additional feedback
- Conclusion