Development and deployment contexts have changed considerably over the last decade. The discipline of performance testing has had difficulty keeping up with modern testing principles and software development and deployment processes.
Most people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the "accuracy" of simulation and environment considered critical to the value of the data the test provides.
But what can we do to provide actionable and timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment?
Eric deconstructs “realism” in performance simulation, talks about how to test performance more cheaply so we can test more often, and suggests strategies and techniques to get there. He shares findings from WOPR22, where performance testers from around the world came together in May 2014 to discuss this theme in a peer workshop.
Eric Proegler Oredev Performance Testing in New Contexts – Eric Proegler
Virtualization, Cloud Deployments, and Cloud-Based Tools have challenged and changed performance testing practices. Today’s performance tester can summon tens of thousands of virtual users from the cloud in a few minutes, at a cost far lower than the expensive on-premise installations of yesteryear.
Meanwhile, systems under test have changed more. Updated software stacks have increased the complexity of scripting and performance measurement, but the biggest changes are in the nature and quantities of resources powering the systems. Interpreting resource usage when resources are shared on a private virtualization platform is exceedingly difficult. Understanding resources when they live in a large public cloud is impossible.
You’ve worked hard to define, develop and execute a performance test on a new application to determine its behavior under load. You have barrels full of numbers. What’s next? The answer is definitely not to generate and send a canned report from your testing tool. Results interpretation and reporting is where a performance tester earns their stripes.
In the first half of this workshop we’ll start by looking at some results from actual projects and together puzzle out the essential message in each. This will be a highly interactive session where we will display a graph, provide a little context, and ask “what do you see here?” We will form hypotheses, draw tentative conclusions, determine what further information we need to confirm them, and identify key target graphs that give us the best insight on system performance and bottlenecks.
In the second half of this session, we’ll try to codify the analytic steps we went through in the first session, and consider a CAVIAR approach for collecting and evaluating test results: Collecting, Aggregating, Visualizing, Interpreting, Analyzing, And Reporting.
These are the slides I used to introduce students in my Testing Project course (http://adam.goucher.ca/?page_id=306) to Performance Testing and the JMeter (http://jakarta.apache.org) tool. Of course I cannot upload the hour-long walkthrough of the tool as we created a Test Plan for the project, but the slides are better than nothing.
Performance testing interview questions and answers – Garuda Trainings
In software engineering, performance testing is in general testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
It's a very basic introduction to LoadRunner for beginners; I explored it on my own, prepared slides, and shared them with my colleagues.
What LoadRunner is, why we need performance testing, etc.
Enjoy :)
Performance testing with 100,000 concurrent users in AWS – Matthias Matook
M-Square built an easily scalable performance test solution on AWS, using open source tools and CI servers, to allow cost-effective testing at scale. The solution is suitable for any type of organisation, from startup to enterprise.
The talk covers VPC, EC2, S3, ELBs, AWS API scripting, automation, and interesting performance issues when running massive workloads on AWS.
Using JMeter and Google Analytics for Software Performance Testing – XBOSoft
Ed Curran, VP of Engineering at XBOSoft, shares some of his hands-on experience working with JMeter for load and performance testing. In the webinar, he explained different types of performance testing, how you can use Google Analytics to understand what users are really doing on your web apps, and how to leverage JMeter and analyze the results to improve your app's performance.
Load and Performance Testing for J2EE - Testing, monitoring and reporting usi... – Alexandru Ersenie
A presentation of how load and performance testing can be done in the J2EE world using open source tools
You will find things like Performance Basics (scope, metrics, factors on performance, generating load, performance reports), Monitoring (Monitoring types, active and reactive monitoring, CPU, Garbage Collection monitoring, Heap and other monitoring) and Tools (open source tools for monitoring, reporting and analysing)
It gives you a basic overview to get started with JMeter. This slide deck encourages you to start with basic terminology in the performance testing field, covers the different subcategories of performance testing, and focuses on connecting performance testing with JMeter.
Using JMeter for Performance Testing Live Streaming Applications – BlazeMeter
With live video increasingly used to watch sporting events, popular TV shows, and more, load and performance testing live streaming applications has become a must to ensure they can withstand heavy traffic.
Our Sep 6, 2017 webinar looked at using Apache JMeter™ for testing streaming applications. Until now, JMeter supported the load testing of HTTP Live Streaming (HLS) applications, the leading protocol, with a few different elements. But now, a new HLS plugin for JMeter makes the process much simpler and more efficient than before. The webinar covered:
-An overview of the HLS protocol including its key components
-An introduction to the new JMeter HLS plugin
-How to learn more and get involved with this open-source project
Software testing tools are evolving. More testing frameworks are emerging through the open source community and commercial vendors. In addition, we’re starting to see the rise of machine-learning (ML) and artificial intelligence (AI) in testing solutions.
Given this evolution, it is important to map the tools that match both the practitioners’ skills and their testing types. When referring to the testing practitioners, we mainly look at three different personas:
-The business tester
-The software developer in test (SDET)
-The software developer
These practitioners are tasked with creating, maintaining, and executing unit tests, build acceptance tests, integration tests, regression tests, and nonfunctional tests.
In this webinar led by Perfecto’s Chief Evangelist, Eran Kinsbruner, you will learn the following:
-How should testing types be dispersed among the three personas and throughout the DevOps pipeline?
-What tools should each of these three personas use for the creation and execution of tests?
-What are the key benefits to continuous testing when mapped correctly?
TLC2018 Thomas Haver: The Automation Firehose - Be Strategic and Tactical – Anna Royzman
Thomas Haver teaches how to automate both strategically and tactically to maximize the benefits of automation - at Test Leadership Congress 2018.
http://testleadershipcongress-ny.com
The Automation Firehose: Be Strategic & Tactical With Your Mobile & Web Testing – Perfecto by Perforce
The widespread adoption of test automation has created many challenges — for everything from development lifecycle integration to scripting strategy.
One pitfall of automation is that teams often rush to automate everything they can. This is the automation firehose.
However, just because a scenario CAN be automated does not mean it SHOULD be automated. For scenarios that should be automated, teams must adopt implementation plans to ensure tests are reliable and deliver value.
Join this webinar led by Perfecto’s Chief Evangelist, Eran Kinsbruner, along with Thomas Haver, Manager of Automation & Delivery. In this session, the audience will:
-Understand which test scenarios to automate.
-Learn how to maximize the benefits of automation.
-Receive a checklist to determine automation feasibility and ROI.
A workshop to demonstrate how we can apply agile and continuous delivery principles to continuously deliver value in machine learning and data science projects.
Code: https://github.com/davified/ci-workshop-app
Building an Open Source AppSec PipelineMatt Tesauro
Take the concepts of DevOps and apply them to AppSec and you have an AppSec Pipeline. Allow automation, orchestration, and some ChatOps to expand the flow of your AppSec team, since it's not likely to get any bigger.
This was a workshop given at UTN University for the software engineering students. The idea is to give a brief explanation of TDD and how to use it.
Building an Open Source AppSec Pipeline - 2015 Texas Linux Fest – Matt Tesauro
Take the ideas of DevOps and the notion of a delivery pipeline and combine them for an AppSec Pipeline. This talk covers the open source components used to create an AppSec Pipeline and the benefits we received from its implementation.
Albert Witteveen - With Cloud Computing Who Needs Performance Testing – TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on With Cloud Computing Who Needs Performance Testing by Albert Witteveen.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
An Introduction to Prometheus (GrafanaCon 2016) – Brian Brazil
Often what you monitor and get alerted on is defined by your tools rather than by what makes the most sense to you and your organisation. Alerts fire on metrics such as CPU usage, which are noisy and rarely spot real problems, while outages go undetected. Monitoring systems can also be challenging to maintain, and overall provide a poor return on investment.
In the past few years, several new monitoring systems have appeared that offer more powerful semantics and are easier to run, providing a way to vastly improve how your organisation operates and to prepare you for a Cloud Native environment. Prometheus is one such system. This talk will look at the monitoring ideal and how whitebox monitoring with a time series database, multi-dimensional labels, and a powerful querying/alerting language can free you from midnight pages.
Building and Scaling High Performing Technology Organizations by Jez Humble a... – Agile India
High performing organizations don't trade off quality, throughput, and reliability: they work to improve all of these and use their software delivery capability to drive organizational performance. In this talk, Jez presents the results from DevOps Research and Assessment's five-year research program, including how continuous delivery and good architecture produce higher software delivery performance, and how to measure culture and its impact on IT and organizational performance. They explain the importance of knowing how (and what) to measure so you focus on what’s important and communicate progress to peers, leaders, and stakeholders. Great outcomes don’t realize themselves, after all, and having the right metrics gives us the data we need to keep getting better at building, delivering, and operating software systems.
More details:
https://confengine.com/agile-india-2019/proposal/8524/building-and-scaling-high-performing-technology-organizations
Conference link: https://2019.agileindia.org
Innovate Better Through Machine Data Analytics – Hal Rottenberg
This talk was presented at IP Expo Manchester in May 2016. The themes discussed are:
- how does machine data relate to devops?
- how can tracking this data lead to better outcomes?
- what types of data are important to track?
Similar to Eric Proegler Early Performance Testing from CAST2014
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generating a custom Ruby SDK for your web service or Rails API using Smithy – g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
2. *Eric Proegler, @ericproegler, testingthoughts.com/ericproegler
*12 years in performance, 19 in software
*WOPR Organizer (Workshop on Performance and Reliability, performance-workshop.org)
*This subject was discussed at WOPR22.
*Questions about Peer Workshops? Get at me.
3. WOPR22 was held May 21-23, 2014, in Malmö, Sweden, on the topic of “Early Performance Testing.” Participants in the workshop included: Fredrik Fristedt, Andy Hohenner, Paul Holland, Martin Hynie, Emil Johansson, Maria Kedemo, John Meza, Eric Proegler, Bob Sklar, Paul Stapleton, Andy Still, Neil Taitt, and Mais Tawfik Ashkar.
4. Testing Iterations?
*No Process-teering/Scrumbaggery here
*Testing in Agile is about providing quick feedback. How can we make performance and scalability feedback happen faster – or at all, with incomplete/unfinished systems, on non-production hardware?
5. Ideas
*What are the risks we are testing for?
*What is “Realistic”?
*Testing Techniques in Iterative Projects
*Reporting Early Performance Results
*Performance Testing Incomplete Systems
6. Performance Risks: Scalability
Scalability: Expensive operations mean that systems won’t scale well
*Ops Problem? Solve with hardware?
*Tall Stacks -> Wirth’s Law: “Software is getting slower more rapidly than hardware gets faster”
*Subject to use patterns and user models
What Does a Problem Look Like?
*Longer response times are a clue
*“High” CPU/Memory/Storage/Network Utilization
7. Performance Risks: Capacity
Capacity: System can’t support the expected load structurally/as engineered
What Does a Problem Look Like?
*Response time very sensitive to load
*Growing Queues
*Hard or Soft Resource Limitations
*High CPU Limitation
*Increasing I/O Latency
*Run out of database threads (see the sketch below)
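One way to reason about these capacity symptoms is Little’s Law: average requests in flight = arrival rate × response time. A minimal illustrative calculation in Python, with made-up numbers (the 40-thread pool is a hypothetical limit, not from the talk):

    # Little's Law: in-flight requests L = arrival rate (lambda) x response time (W).
    arrival_rate = 100.0   # requests per second (illustrative)
    response_time = 0.5    # seconds per request under load (illustrative)

    in_flight = arrival_rate * response_time   # = 50 requests in flight on average
    thread_pool = 40                           # hypothetical database thread limit

    if in_flight > thread_pool:
        # Work arrives faster than workers finish it: queues grow, and response
        # time becomes very sensitive to load -- the capacity signature above.
        print(f"Queues will grow: ~{in_flight:.0f} in flight vs {thread_pool} threads")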
8. Performance Risks: Concurrency
Concurrency: Operations that contend and collide (race conditions, database locks, contention points)
What Does a Problem Look Like?
*Infrequent functional issues that seem to only occur under load (Heisenbugs?)
*Process crashes
*Not easily reproducible
9. Performance Risks: Reliability
Reliability: Degradation over time; system becomes slower, less predictable, or eventually fails.
What Does a Problem Look Like?
*Memory or Object Leaks
*More frequent Garbage Collection
*Decaying response times
10. Ideas
*What are the risks we are testing for?
*What is “Realistic”?
*Testing Techniques in Iterative Projects
*Reporting Early Performance Results
*Performance Testing Incomplete Systems
11. Illusion of Realism: Experiment
Will the completed, deployed system support:
(a, b…) users
performing (e, f…) activities
at (j, k…) rates
on (m, n…) configuration
under (r, s…) external conditions,
meeting (x, y…) response time goals?
(Simulation Test)
12. Illusion of Realism: Models
“All Models are wrong. Some are useful.”
*Guesses at activities, and frequencies
*Organic loads and arrival rates – limitations imposed by load testing tools
*Session abandonment, other human behaviors
*Simulating every activity in the system
*Data densities (row counts, cardinality)
*Warmed caching
*Loads evolving over time
13. Illusion of Realism: Environment
“The environment is identical.”
*Shared resources: Virtualization, SANs, Databases, Networks, Authentication, etc.
*Execution environment versions and patching
*Software and hardware component changes, versions and patching
*Variable network conditions, especially last-mile
*Background processing and other activities against overlapping systems and resources
14. Ideas
*What are the risks we are testing for?
*What is “Realistic”?
*Testing Techniques in Iterative Projects
*Reporting Early Performance Results
*Performance Testing Incomplete Systems
15. Simulation Tests
*Simulation Testing occurs at the end of the project, before go-live. If we find problems then, they are either bad enough to delay the whole project…
…or they are “Deferred to Phase 2”
*Simulation Tests are Expensive to Create, Maintain, and Analyze
16. Simulation Tests
How does “Realistic” translate to agile?
*Maybe the answer is more control?
*Horizontal Scalability makes assumptions – let’s use them
*Test subsets: single servers, single components, cheaply and repeatedly
*Calibration tests
17. Cheap and Fast Performance Tests
Build easily repeatable, reliable, rapid tests (see the sketch below):
*Login (measure session overhead/footprint)
*Simple workflows; avoid data caching effects
*Control variables
*Be ready to repeat/troubleshoot when you find anomalies
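A minimal sketch of such a cheap login test in Python, assuming a hypothetical /login endpoint on the system under test and the third-party requests library; the point is a small, controlled, repeatable measurement, not a realistic simulation:

    import time
    import requests  # third-party: pip install requests

    BASE_URL = "http://test-server.example"  # hypothetical system under test

    def timed_login(session):
        """Time one login to measure session-creation overhead."""
        start = time.perf_counter()
        resp = session.post(BASE_URL + "/login",
                            data={"user": "test01", "password": "secret"})
        resp.raise_for_status()
        return time.perf_counter() - start

    # Repeat the same simple workflow so runs stay comparable over time.
    timings = []
    for _ in range(100):
        with requests.Session() as s:
            timings.append(timed_login(s))

    print(f"login: min={min(timings):.3f}s mean={sum(timings)/len(timings):.3f}s")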
18. Cheap and Fast Performance Tests
Who needs “Real”? Let’s find problems.
Examples (sketched below):
*Burst loads – Ready, Set, GO!!!
*10 threads, 10 iterations each
*1 thread, 100 iterations
*1 or 10 threads, running for hours/days
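These burst patterns are easy to express with plain threads. A sketch against a hypothetical search URL; varying the thread and iteration counts gives the 10x10, 1x100, and hours-long variants listed above:

    import threading
    import time
    import requests  # third-party: pip install requests

    URL = "http://test-server.example/search?q=calibration"  # hypothetical

    results, lock = [], threading.Lock()

    def worker(iterations):
        for _ in range(iterations):
            start = time.perf_counter()
            requests.get(URL)
            with lock:
                results.append(time.perf_counter() - start)

    # Burst load -- Ready, Set, GO!!!: 10 threads x 10 iterations each.
    threads = [threading.Thread(target=worker, args=(10,)) for _ in range(10)]
    for t in threads:
        t.start()   # all threads start at (nearly) the same moment
    for t in threads:
        t.join()

    print(f"{len(results)} samples, worst={max(results):.3f}s")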
19. Cheap and Fast Performance Tests
Why must we have a “comparable” environment?
*Leverage horizontal scalability – one server is enough
*Who says we have to have systems distributed the same way? Why can’t my test database be on this system for calibration purposes?
*Isolation is more important than “Real”
20. Cheap and Fast Performance Tests
*Check Your Instruments!
*Calibrate, and recalibrate as necessary between environments, builds, days/times… any time you want to be sure you are comparing two variables accurately
21. Cheap and Fast Performance Tests
Now that the cost of a test is low, let’s run lots (see the sketch below):
*Run five tests and average the results
*Run tests every day
*Run tests in Continuous Integration?
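Once a single test is cheap, aggregating repeated runs takes only a few lines. A sketch, assuming a hypothetical run_test() that executes one calibration test and returns its mean response time in seconds:

    def average_of_runs(run_test, runs=5):
        """Run the same cheap test several times and average the results."""
        means = [run_test() for _ in range(runs)]
        return sum(means) / len(means)

    # A daily or CI-triggered check might fail the build on a budget breach:
    # assert average_of_runs(run_test) < 0.200, "login exceeded its 200 ms budget"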
22. Performance Testing in CI
*It’s being done, though it’s not easy to automate all of the moving parts (code deployment, system under test including data state, consumable data, etc.)
*SOASTA and others can show you where they are using it
*Anyone here doing this?
24. Extending Automation
*Use/extend automation – add timers, log key transactions, build response time records (see the sketch below)
*Watch for trends; be ready to drill down. Automation extends your senses, but doesn’t replace them
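A sketch of one way to add timers to existing automation: a small context manager that appends each key transaction’s timing to a CSV, building a response time record across runs (all names here are illustrative):

    import csv
    import time
    from contextlib import contextmanager
    from datetime import date

    @contextmanager
    def timed(transaction, log_path="response_times.csv"):
        """Wrap a key transaction in existing automation and log its timing."""
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - start
            with open(log_path, "a", newline="") as f:
                csv.writer(f).writerow([date.today(), transaction, f"{elapsed:.3f}"])

    # Usage inside an existing functional test (hypothetical automation call):
    # with timed("web_login"):
    #     page.login("test01", "secret")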
25. Ideas
*What are the risks we are testing for?
*What is “Realistic”?
*Testing Techniques in Iterative Projects
*Reporting Early Performance Results
*Performance Testing Incomplete Systems
26. Results Lead to Action
*Data + Analysis = Information
*Your value as a tester is the information you provide – and the action(s) it inspires
*Be willing to sound an alarm – or approach a key engineer: “Can I show you something?”
27. Communicate Outwards
Start your Campaign: Make results visible
*Emails
*Intranet page
*Whiteboard
*Status reports
*Gossip: “How is performance these days?”
Recruit Consumers! Become an important project status indicator. Help people who can do something understand and care.
28. Reporting Example
Date    Build       Min     Mean    Max     90%
2/1     9.3.0.29    14.1    14.37   14.7    14.6
3/1     9.3.0.47    37.03   38.46   39.56   39.18
8/2     9.5.0.34    16.02   16.61   17      16.83
9/9     10.0.0.15   17.02   17.81   18.08   18.02
10/12   10.1.0.3    16.86   17.59   18.03   18
11/30   10.1.0.38   18.05   18.57   18.89   18.81
1/4     10.1.0.67   18.87   19.28   19.48   19.46
2/2     10.1.0.82   18.35   18.96   19.31   19.24
Calibration Results for Web Login, Search, View. Burst 10 threads iterating 10 times. (A sketch of computing these statistics follows.)
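The Min/Mean/Max/90% columns are simple to compute from raw samples. A sketch using the nearest-rank 90th percentile (tools differ slightly in how they interpolate; the sample values are made up):

    import math
    import statistics

    def summarize(samples):
        """Produce one Min/Mean/Max/90% row from raw response times."""
        ordered = sorted(samples)
        rank = math.ceil(0.9 * len(ordered))   # nearest-rank 90th percentile
        return {
            "min": ordered[0],
            "mean": round(statistics.mean(ordered), 2),
            "max": ordered[-1],
            "p90": ordered[rank - 1],
        }

    print(summarize([14.1, 14.2, 14.3, 14.4, 14.5, 14.5, 14.6, 14.6, 14.6, 14.7]))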
29. Ideas
*What are the risks we are testing for?
*What is “Realistic”?
*Testing Techniques in Iterative Projects
*Reporting Early Performance Results
*Performance Testing Incomplete Systems
30. Testing Components – Missing Pieces
*APIs – abstraction points for developers, less brittle than interfaces
*Mocking/Stubs/Auto-Responders – built-in abstraction points to simulate other components that might not be there yet. Will help Devs test, too.
31. Testing Components – Individual Pieces
*Find and re-purpose code for harnesses. How do devs check their work?
*Mocking/Stubs/Auto-Responders – built-in abstraction points to simulate other components that might not be there yet. Will help Devs test, too. (See the stub sketch below.)
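A stub/auto-responder can be only a few lines. A sketch using just the Python standard library, returning a canned JSON reply in place of a component that is not there yet (the payload and port are made up):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = b'{"status": "ok", "result": "stubbed"}'  # made-up downstream reply

    class StubResponder(BaseHTTPRequestHandler):
        """Stands in for a missing component so the rest can be load tested."""
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)             # drain the request body
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(CANNED)))
            self.end_headers()
            self.wfile.write(CANNED)

        def log_message(self, *args):
            pass  # keep the console quiet during load tests

    HTTPServer(("127.0.0.1", 8080), StubResponder).serve_forever()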
32. Testing Components
Test what is there:
*Session models: login/logoff are expensive, and matter for everyone
*Search Functions
*Think about user activities like road systems – where are the highways?
*What will be meaningful to calibrate?
35. Incomplete Systems: Business Logic
*Frequently the scaling bottleneck in this architecture pattern. Web services are big buckets of functionality usually running a lot of interpreted code
*Abstract the presentation layer, borrow code from it, test the rest of the system down
*Script load tests that address web services directly
37. Incomplete Systems: Authentication
Create a test that just creates sessions:
*Response time to create a session?
*How reliably can I create sessions?
*How many sessions can I create?
*What arrival rate can I support?
39. Incomplete Systems: Message Bus
Find or Make a Test Harness (see the sketch below). How do Developers Test?
*Response time to push a message?
*Response time to pull a message?
*Is this sensitive to load?
*At what rate can I create messages? With one publisher? With multiple publishers?
*At what rate can I consume messages? What if I add message servers/dispatchers?
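A sketch of the push/pull timing questions, using Python’s standard-library queue as a stand-in for a real bus client (substitute your broker’s API; none is assumed here):

    import queue
    import time

    bus = queue.Queue()  # stand-in for the real message bus client

    def push_rate(n):
        """Messages per second the harness can publish."""
        start = time.perf_counter()
        for i in range(n):
            bus.put(f"message-{i}")
        return n / (time.perf_counter() - start)

    def pull_rate(n):
        """Messages per second the harness can consume."""
        start = time.perf_counter()
        for _ in range(n):
            bus.get()
        return n / (time.perf_counter() - start)

    print(f"push: {push_rate(10_000):,.0f} msg/s, pull: {pull_rate(10_000):,.0f} msg/s")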
41. Incomplete Systems: App Server
Identify inputs – polling file? Job creation?
*Processing time for asynchronous processes – per item, startup time
*Throughputs
*How many processes can I queue up? What happens if I send a lot of synchronous stuff all at once?
42. Some Conclusions?
*What are the risks you are testing for?
*Unpack “Realistic”: what does it mean, and what (who) is it for?
*Consider the value of performance tests that are cheap and easily repeatable
*Broadcast Results
*Test What You Have
*Where are the highways?