Software performance testing is an important activity for ensuring quality in continuous software development environments. Current performance testing approaches are mostly based on scripting languages and frameworks in which users implement, in a procedural way, the performance tests they want to issue against the system under test. However, existing solutions lack support for explicitly declaring the goals and intents of a performance test. Thus, while it is possible to express how to execute a performance test, its purpose and applicability context remain implicit. In this work, we propose a declarative domain-specific language (DSL) for software performance testing and a model-driven framework that is programmed using this language and drives the end-to-end process of executing performance tests. Users of the DSL and the framework specify their performance intents in an expressive goal-oriented language, in which both standard performance tests (e.g., load tests) and more advanced ones (e.g., stability boundary detection and configuration tests) can be specified starting from templates. The DSL and the framework are designed to integrate into a continuous software development process, and we validate them through extensive use cases that illustrate the expressiveness of the goal-oriented language and the fine-grained control it provides over the end-to-end performance test execution needed to reach the declared intent.
My talk from the 9th ACM/SPEC International Conference on Performance Engineering (ICPE 2018). Cite us: https://dl.acm.org/citation.cfm?id=3184417
A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments
1. Università della Svizzera italiana. A Declarative Approach for Performance Tests Execution in Continuous Software Development Environments. Vincenzo Ferme, Software Institute, Faculty of Informatics, USI Lugano, Switzerland; Cesare Pautasso.
7-8. Continuous Software Development Environments: Developers, Testers, and Architects push Continuous Changes to a Repo, and a CI Server drives Continuous Test Execution (Load Tests, Spike Tests, Capacity Tests, Configuration Tests).
11. State of Practice: performance test execution is script/UI based and integrated in CI pipelines; execution automation is provided by tools such as Google Vizier, AutoPerf, DataMill, and CloudPerf; end-to-end tests run against REST APIs.
13-18. Define a Configuration Test. The test definition specifies the configuration parameters to change and the simulated users; the test targets the System Under Test and relies on 3rd-party tools for test execution, data collection, and metrics computation. Limitations of this state of practice: test goals are not explicit in the definition; there is no semantic validation, so errors only surface at runtime; and there is scarce visibility and control on the test execution lifecycle.
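For contrast, the declarative definition introduced later in this deck makes the goal explicit. A minimal sketch of a configuration-test intent in the talk's YAML DSL; the keys (configuration, goal, type, observe, exploration) come from the slides, while the values, comments, and nesting are hypothetical reconstructions:

    configuration:
      goal:
        type: configuration       # the purpose of the test is declared, not implied
        observe: throughput       # hypothetical: metric to compare across configurations
        exploration: exhaustive   # hypothetical: visit every candidate configuration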
19-20. Declarative Performance Engineering (DPE): "Enabling the performance analyst to declaratively specify what performance-relevant questions need to be answered without being concerned about how they should be answered." [Walter et al.]
Jürgen Walter, André van Hoorn, Heiko Koziolek, Dusan Okanovic, and Samuel Kounev. Asking "What"?, Automating the "How"? - The Vision of Declarative Performance Engineering. In Proc. of ICPE 2016, 91-94.
In continuous development environments, the performance analyst role extends to Developers, Testers, and Architects.
21. DPE: Approaches for Performance Testing. Existing approaches that follow the DPE vision [Walter et al.]:
[Omar et al.] Towards an automated approach to use expert systems in the performance testing of distributed systems.
[Westermann] Deriving Goal-oriented Performance Models by Systematic Experimentation.
[Scheuner et al.] Cloud Work Bench - Infrastructure-as-Code Based Cloud Benchmarking.
57-59. Approach: Summary. The approach provides explicit test goals in the definition, static syntactic and semantic validation, visibility and control on the test execution lifecycle, and extensibility for different test goals.
[Figure: test execution lifecycle state machine.] In the Running state, the framework determines the exploration strategy and then determines and executes the experiments, each following its own experiment lifecycle. As experiment results become available, it handles each result and checks the quality gates; while quality gates pass, the goal can still be reached, and experiments remain, execution continues until all experiments are executed. When the user terminates the run or the max_time guard fires, the run enters Terminating, where the termination criteria (including number_of_experiments) are validated. The run ends in the Terminated state as Goal Reached, Partially Complete, or, on failed quality gates or when the goal cannot be reached, Completed with Failure.
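The max_time and number_of_experiments guards and the quality-gate check in this lifecycle map back to fields of the declarative test definition. A hypothetical sketch: the key names termination_criteria and quality_gates appear in the DSL skeleton, and max_time and number_of_experiments appear as lifecycle guards, but the concrete syntax and values below are assumptions:

    termination_criteria:
      max_time: 2h                    # assumed syntax: wall-clock budget for the whole test
      number_of_experiments: 20       # assumed syntax: cap on executed experiments
    quality_gates:
      avg_response_time_ms: "<= 250"  # assumed syntax: gate each experiment must pass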
61-63. Future Work.
Apply to real-world use cases:
- Apply the approach in real-world contexts
Add more goals:
- Regression
- Assess different microservices deployment alternatives
Leverage performance test execution data:
- Prioritise performance test execution
- Skip execution of tests
Integrate with other frameworks:
- ContinuITy framework, to leverage production data
64. Highlights.
State of Practice (recap): script/UI-based performance tests integrated in CI pipelines; execution automation tools (Google Vizier, AutoPerf, DataMill, CloudPerf); end-to-end tests against REST APIs.
Declarative DSL, with a declarative test definition skeleton:
configuration:
goal:
load_function:
quality_gates:
termination_criteria:
data_collection:
workload:
sut:
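Read as a checklist, the skeleton covers the test intent (goal), the load to apply (load_function, workload), pass/fail conditions (quality_gates), stopping rules (termination_criteria), what to record (data_collection), and the target (sut). A filled-in load-test sketch; only the eight key names above come from the slides, and every value, comment, and nesting choice is a hypothetical reconstruction:

    configuration:
      goal:
        type: load                      # one of the available goals: LOAD, CONFIGURATION, STABILITY_BOUNDARY
      load_function:
        ramp_up: 60s                    # hypothetical: time to reach the target load
        steady_state: 10m               # hypothetical: duration at the target load
      quality_gates:
        avg_response_time_ms: "<= 250"  # hypothetical gate
      termination_criteria:
        max_time: 1h                    # hypothetical overall budget
      data_collection:
        - cpu                           # hypothetical metric names
        - memory
      workload:
        users: 200                      # hypothetical number of simulated users
      sut:
        name: my-service                # hypothetical system under test
        deployment: docker              # hypothetical; the framework deploys SUTs with Docker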
Model-driven framework: BenchFlow, integrated with the Repo and the CI Server. [Figure: BenchFlow framework components across the continuous development environment.]
Extending the approach. Available goals: LOAD, CONFIGURATION, STABILITY_BOUNDARY. New goals plug into the goal section of the definition:
configuration:
  goal:
    type: configuration
    observe:
    exploration:
    …
configuration:
  goal:
    type: stability_boundary
    observe:
    exploration:
    …
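As a further illustration, a stability-boundary intent might fill observe and exploration like this; again, only the key names come from the slides and everything else is assumed:

    configuration:
      goal:
        type: stability_boundary
        observe: response_time       # hypothetical: metric watched for instability
        exploration:
          strategy: binary_search    # hypothetical: how the load space is searched
          max_users: 1000            # hypothetical upper bound for the search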