Workshop: Performance Modelling and JMeter
Users vs. requests
• How much load can our platform handle?
SCENARIOS: Identify the scenarios that are most commonly executed or most resource-intensive.
WORKLOAD MODEL: Average User Session duration. It is important to define the load levels that will translate into concurrent usage, overlapping users, or user sessions per second (see the sketch after this list).
USER SCENARIO: Navigational path, including intermediate steps or activities, taken by the user to complete a task. We will call it a User Session from now on.
THINK TIMES: Pauses between pages during a User Session, depending on the User Type.
USER TYPES: Identify the user: new, returning, or both.
PERFORMANCE ACCEPTANCE CRITERIA: Response time, system load, throughput ...
METRICS: Only well-selected metrics that are analyzed correctly and contextually provide valuable information.
DESIGN TEST: Using your scenarios, key metrics, and workload analysis ...
RUN TEST: The load simulation must reflect the test design.
ANALYZE RESULTS: Find bottlenecks, memory leaks, CPU hogs, bad software design ...
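To make the workload-model arithmetic concrete, here is a minimal sketch (invented numbers, not from the deck) of Little's Law, N = λ × W, which converts a session arrival rate and an average session duration into concurrent users:

```python
def concurrent_users(sessions_per_second: float, avg_session_duration_s: float) -> float:
    """Little's Law: N = arrival rate x time in system.

    If 2 user sessions start per second and each lasts 180 s on average,
    the platform holds ~360 concurrent users in steady state.
    """
    return sessions_per_second * avg_session_duration_s

print(concurrent_users(2.0, 180.0))  # -> 360.0
```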
The Maths behind “my average response time is fine.”
(the example below shows why averages alone can mislead)
• Percentiles
• Mode
• Mean
• Median
• Standard Deviation
• https://msdn.microsoft.com/en-us/library/bb924370.aspx
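A short illustration, reusing the 99-fast-plus-one-slow example from the speaker notes, of why the mean alone can hide what a percentile exposes:

```python
import statistics

# 99 requests at 1 s plus one outlier at 100 s (the example from the notes).
response_times = [1.0] * 99 + [100.0]

mean = statistics.mean(response_times)      # 1.99 s: looks "fine"
median = statistics.median(response_times)  # 1.0 s
p90 = sorted(response_times)[int(0.90 * len(response_times)) - 1]  # 1.0 s
stdev = statistics.stdev(response_times)    # ~9.9 s: the outlier shows up here

print(f"mean={mean:.2f}s median={median}s p90={p90}s stdev={stdev:.2f}s")
```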
Mathematics and Common Sense
http://www.raosoft.com/samplesize.html
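The Raosoft link above is a sample-size calculator; for reference, a minimal version of the standard formula behind such calculators (Cochran's formula with finite-population correction; the function name and defaults are mine):

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                confidence_z: float = 1.96, proportion: float = 0.5) -> int:
    """Cochran's sample-size formula with finite-population correction.

    confidence_z=1.96 corresponds to 95% confidence; proportion=0.5 is the
    worst-case (most conservative) response distribution.
    """
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(20000))  # -> 377 sessions for a population of 20,000
```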
JMeter
Modelling and JMeter
Modelling application usage
Identify the objective of the test
• Traffic volume?
• Scale?
• Load peaks?
• Robustness?
JMeter Test Plan
User Defined Variables
Functional Test Mode
User Defined Variables at Test Plan level, to be set from Jenkins (see the sketch below)
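One common way to wire this up (a sketch under assumed property names, not necessarily the deck's exact setup): the Thread Group reads JMeter properties via the __P function, e.g. ${__P(threads,10)}, and the Jenkins job overrides them on the command line with -J flags:

```python
import subprocess

# Hypothetical values a Jenkins job parameter block might inject.
threads, ramp_up, duration = 50, 60, 600

# Inside the .jmx, the Thread Group would read ${__P(threads,10)},
# ${__P(rampup,30)} and ${__P(duration,300)}, so the -J flags below
# override those defaults. -n = non-GUI, -t = test plan, -l = results file.
# Assumes `jmeter` is on the PATH and testplan.jmx exists.
subprocess.run(
    ["jmeter", "-n", "-t", "testplan.jmx", "-l", "results.jtl",
     f"-Jthreads={threads}", f"-Jrampup={ramp_up}", f"-Jduration={duration}"],
    check=True,
)
```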
User Behaviour in JMeter
Once Only Controllers
Cache Management
Cookie Management
Header Manager
Think Times
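Think times and pacing follow directly from the workload model. A rough pacing calculator (standard reasoning, in the spirit of the pacing article linked in the Resources below; the numbers are invented):

```python
def required_pacing(users: int, target_sessions_per_sec: float) -> float:
    """For a pool of `users` to generate `target_sessions_per_sec`,
    each user must start a new session every users/target seconds."""
    return users / target_sessions_per_sec

pacing = required_pacing(users=100, target_sessions_per_sec=2.0)
print(f"each user starts a session every {pacing:.0f} s")  # -> 50 s
# If one pass through the session (responses + think times) takes ~35 s,
# the remaining ~15 s is the gap a pacing/constant timer has to add.
```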
Resources
• http://www.raosoft.com/samplesize.html
• http://analyze.websiteoptimization.com/wso
• http://mobitest.akamai.com/m/index.cgi
• http://stevesouders.com/mobileperf/mobileperfbkm.php
• https://msdn.microsoft.com/en-us/library/bb924370.aspx
• http://www.quotium.com/performance/load-testing-calculating-pacing-time/
Editor's Notes

  1. Identify the scenarios that are most commonly executed or most resource-intensive; these will be the key scenarios used for load testing. For example, in an e-commerce application, browsing a catalog may be the most commonly executed scenario, whereas placing an order may be the most resource-intensive scenario because it accesses the database. The most commonly executed scenarios for an existing Web application can be determined by examining the log files; for a new Web application, they can be obtained from market research, historical data, market trends, and so on. Resource-intensive scenarios can be identified by using design documents or the actual code implementation. The primary resources are: processor, memory, disk I/O, and network I/O.

Identify your application scenarios that are important from a performance perspective. If you have documented use cases or user stories, use them to help you define your scenarios. Key scenarios include the following:
• Critical scenarios. These are the scenarios that have specific performance expectations or requirements. Examples include scenarios covered by SLAs or those that have specific performance objectives.
• Significant scenarios. Significant scenarios do not have specific performance objectives such as a response time goal, but they may impact other critical scenarios. To help identify them, look for scenarios that run in parallel to a performance-critical scenario, scenarios that are frequently executed, scenarios that account for a high percentage of system use, and scenarios that consume significant system resources.

Do not ignore your significant scenarios: they can influence whether your critical scenarios meet their performance objectives. Also, do not forget to consider how your system will behave if different significant or critical scenarios are being run concurrently by different users. This "parallel integration" often drives key decisions about your application's units of work. For example, to keep search response brisk, you might need to commit orders one line item at a time.
  2. A user scenario is defined as a navigational path, including intermediate steps or activities, taken by the user to complete a task. This can also be thought of as a user session. A user will typically pause between pages during a session; this is known as user delay or think time. A session will have an average duration when viewed across multiple users, and it is important to account for this when defining the load levels that will translate into concurrent usage, overlapping users, or user sessions per unit of time. Not all scenarios can be performed by a new user, a returning user, or either; know who you expect your primary users to be and test accordingly.

Step 2 – Identify Workload. Workload is usually derived from marketing data. It includes: total users, concurrently active users, data volumes, and transaction volumes and transaction mix. For performance modeling, you need to identify how this workload applies to an individual scenario. Example requirements: you might need to support 100 concurrent users browsing, and 10 concurrent users placing orders.

Note: concurrent users are those users that hit a Web site at exactly the same moment. Simultaneous users are those users who have active connections to the same site.
  3. Step 5 – Identify Processing Steps. Itemize your scenarios and divide them into separate processing steps. If you are familiar with UML, use cases and sequence diagrams can be used as input; similarly, Extreme Programming user stories can provide useful input to this step. Example processing steps for an order (Table 2.3):
1. An order is submitted by the client.
2. The client authentication token is validated.
3. Order input is validated.
4. Business rules validate the order.
5. The order is sent to a database server.
6. The order is processed.
7. A response is sent to the client.

An added benefit of identifying processing steps is that they help you identify those points within your application where you should consider adding custom instrumentation. Instrumentation helps you to provide actual costs and timings when you begin testing your application.
  4. Step 4 – Identify Budget. Budgets are your constraints: for example, the longest acceptable amount of time that an operation should take to complete, beyond which your application fails to meet its performance objectives. Your budget is usually specified in terms of:
• Execution time. Your execution time constraints determine the maximum amount of time that particular operations can take.
• Resource utilization. Resource utilization requirements define the threshold utilization levels for available resources. For example, you might have a peak processor utilization limit of 75 percent, and your memory consumption must not exceed 50 MB. Common resources to consider: CPU, memory, network I/O, and disk I/O.

Additional considerations. Execution time and resource utilization are helpful in the context of your performance objectives, but budget has several other dimensions you may be subject to:
• Network. Network considerations include bandwidth.
• Hardware. Hardware considerations include items such as servers, memory, and CPUs.
• Resource dependencies. These include items such as the number of available database connections and Web service connections.
• Shared resources. These include items such as the amount of bandwidth you have, the amount of CPU you get if you share a server with other applications, and the amount of memory you get.
• Project resources. From a project perspective, budget is also a constraint, such as time and cost.
  5. Identifying performance acceptance criteria is most valuable when initiated early in the application's development life cycle. It is frequently valuable to record the acceptance criteria for your application and store them in a place and format that is available to the entire team for review and comment. Criteria are typically determined by balancing your business, industry, technology, competitive, and user requirements. Test objectives frequently include the following:
• Response time. For example, the product catalog must be displayed in less than 3 seconds.
• Throughput. For example, the system must support 100 transactions per second.
• Resource utilization. A frequently overlooked aspect is the amount of resources your application is consuming, in terms of processor, memory, disk I/O, and network I/O.
• Maximum user load. This test objective determines how many users can run on a specific hardware configuration.
• Business-related metrics. This objective is mapped to business volume at normal and peak values; for example, the number of orders or Help desk calls handled at a given time.

Step 3 – Identify Performance Objectives. For each scenario identified in Step 1, write down the performance objectives. These are determined by your business requirements and usually include response time, throughput, and resource utilization, as described above. Consider the following when establishing them: workload requirements, service level agreements, response times, projected growth, and the lifetime of your application. For projected growth, consider whether your design will meet your needs in six months' time, or one year from now. If the application has a lifetime of only six months, are you prepared to trade some extensibility for performance? If your application is likely to have a long lifetime, what performance are you willing to trade for maintainability?
  6. There are three possible points of view: User, System, and Operator.
• Define questions related to your application performance that can be easily tested. For example, what is the checkout response time when placing an order? How many orders are placed in a minute? These questions have definite answers.
• With the answers to these questions, determine quality goals for comparison against external expectations. For example, checkout response time should be 30 seconds, and a maximum of 10 orders should be placed in a minute. The answers are based on market research, historical data, market trends, and so on.
• Identify the metrics. Using your list of performance-related questions and answers, identify the metrics that provide information related to those questions and answers.
• Identify supporting metrics. Using the same approach, you can identify lower-level metrics that focus on measuring the performance and identifying the bottlenecks in the system. When identifying low-level metrics, most teams find it valuable to determine a baseline for those metrics under single-user and/or normal load conditions. This helps you determine the acceptable load levels for your application. Baseline values help you analyze your application performance at varying load levels and serve as a starting point for trend analysis across builds or releases.
• Reevaluate the metrics to be collected regularly. Goals, priorities, risks, and current issues are bound to change over the course of a project. With each of these changes, different metrics may provide more value than the ones that have previously been identified.

Additionally, to evaluate the performance of your application in more detail and to identify potential bottlenecks, it is frequently useful to monitor metrics in the following categories:
• Network-specific metrics. This set of metrics provides information about the overall health and efficiency of your network, including routers, switches, and gateways.
• System-related metrics. This set helps you identify the resource utilization on your server: processor, memory, disk I/O, and network I/O.
• Platform-specific metrics. These are related to software that is used to host your application, such as the Microsoft .NET Framework common language runtime (CLR) and ASP.NET-related metrics.
• Application-specific metrics. These include custom performance counters inserted in your application code to monitor application health and identify performance issues. You might use custom counters to determine the number of concurrent threads waiting to acquire a particular lock, or the number of requests queued to make an outbound call to a Web service.
• Service-level metrics. These can help to measure overall application throughput and latency, or they might be tied to specific business scenarios.
• Business metrics. These are indicators of business-related information, such as the number of orders placed in a given timeframe.

Step 6 – Allocate Budget. Spread your budget (determined in Step 4, "Identify Budget") across your processing steps (determined in Step 5, "Identify Processing Steps") to meet your performance objectives. You need to consider execution time and resource utilization. Some of the budget may apply to only one processing step, some may apply to the scenario, and some may apply across scenarios.
Assigning Execution Time to Steps. When assigning time to processing steps, if you do not know how much time to assign, simply divide the total time equally between the steps. At this point it is not important for the values to be precise, because the budget will be reassessed after measuring actual time, but it is important to have an idea of the values. Do not insist on perfection, but aim for a reasonable degree of confidence that you are on track. You do not want to get stuck, but, at the same time, you do not want to wait until your application is built and instrumented to get real numbers. Where you do not know execution times, try spreading the time evenly and see where there might be problems or where there is tension. If dividing the budget shows that each step has ample time, there is no need to examine these further. However, for the ones that look risky, conduct some experiments (for example, with prototypes) to verify that what you will need to do is possible, and then proceed.

Note that one or more of your steps may have a fixed time. For example, you may make a database call that you know will not complete in less than 3 seconds. Other times are variable. The fixed and variable costs must be less than or equal to the allocated budget for the scenario.

Assigning Resource Utilization Requirements. When assigning resources to processing steps, consider the following:
• Know the cost of your materials. For example, what does technology X cost in comparison to technology Y?
• Know the budget allocated for hardware. This defines the total resources available at your disposal.
• Know the hardware systems already in place.
• Know your application functionality. For example, heavy XML document processing may require more CPU, chatty database access or Web service communication may require more network bandwidth, and large file uploads may require more disk I/O.
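A toy illustration (invented numbers) of spreading an execution-time budget across the processing steps from note 3, with the database call held at its fixed cost:

```python
# Hypothetical: 8 s total budget for the order scenario, 7 steps,
# the database call is fixed at 3 s; spread the rest evenly.
total_budget_s = 8.0
steps = ["submit", "auth", "validate input", "business rules",
         "db call", "process", "respond"]
fixed = {"db call": 3.0}

remaining = total_budget_s - sum(fixed.values())
variable_share = remaining / (len(steps) - len(fixed))
budget = {s: fixed.get(s, round(variable_share, 2)) for s in steps}
print(budget)  # db call: 3.0 s, every other step: ~0.83 s
```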
  7. Do not change your test design because the design is difficult to implement in your tool. If you cannot implement your test as designed, ensure that you record the details pertaining to the test that you do implement. Keep the following in mind:
• Ensure that the model contains all of the supplementary data needed to create the actual test.
• Consider including invalid data in your performance tests. For example, include some users who mistype their password on the first attempt but get it correct on a second try.
• First-time users usually spend significantly more time on each page or activity than experienced users.
• The best possible test data is test data collected from a production database or log file.
• Think about nonhuman system users and batch processes as well as end users. For example, there might be a batch process that runs to update the status of orders while users are performing activities on the site. In this situation, you would need to account for those processes because they might be consuming resources.
• Do not get overly caught up in striving for perfection, and do not fall into the trap of oversimplification. In general, it is a good idea to start executing tests when you have a reasonable test designed, and then enhance the design incrementally while collecting results.

Step 7 – Evaluate. Evaluate the feasibility and effectiveness of the budget before time and effort is spent on prototyping and testing. Review the performance objectives and consider the following questions:
• Does the budget meet the objectives?
• Is the budget realistic? It is during the first evaluation that you identify new experiments you should do to get more accurate budget numbers.
• Does the model identify a resource hot spot?
• Are there more efficient alternatives? Can the design or features be reduced or modified to meet the objectives? Can you improve efficiency in terms of resource consumption or time?
• Would an alternative pattern, design, or deployment topology provide a better solution?
• What are you trading off? Are you trading productivity, scalability, maintainability, or security for performance?

Consider the following actions: modify your design, reevaluate requirements, or change the way you allocate budget.
  8. Poor load simulations can render all of the work in the previous activities useless. To understand the data collected from a test execution, the load simulation must reflect the test design. When the simulation does not reflect the test design, the results are prone to misinterpretation. Consider the following steps when preparing to simulate load:
• Configure the test environment in such a way that it mirrors your production environment as closely as possible, noting and accounting for all differences between the two.
• Ensure that performance counters relevant for identified metrics and resource utilization are being measured and are not interfering with the accuracy of the simulation.
• Use appropriate load-generation tools to create a load with the characteristics specified in your test design.
• Using the load-generation tool(s), execute tests by first building up to the target load specified in your test design, in order to validate the correctness of the simulation.

Some things to consider during test execution:
• Begin load testing with a small number of users distributed against your user profile, and then incrementally increase the load. It is important to allow time for the system to stabilize between increases in load while evaluating the correctness of the simulation.
• Consider continuing to increase the load and record the behavior until you reach the threshold for the resources identified in your performance objectives, even if that load is beyond the target load specified in the test design. Information about when the system crosses identified thresholds is just as important as the value of the metrics at the target load of the test.
• Similarly, it is frequently valuable to continue to increase the number of users until you run up against the service-level limits beyond which you would be violating your SLAs for throughput, response time, and resource utilization.

Note: make sure that the client computers (agents) you use to generate load are not overly stressed. Resource utilization such as processor and memory must remain well below the utilization threshold values to ensure accurate test results.
  9. You can analyze the test results to find performance bottlenecks between each test run or after all testing has been completed. Analyzing the results correctly requires training and experience with graphing correlated response time and system data. The steps for analyzing the data:
• Analyze the captured data and compare the results against the metric's accepted level to determine whether the performance of the application being tested shows a trend toward or away from the performance objectives.
• Analyze the measured metrics to diagnose potential bottlenecks.
• Based on the analysis, if required, capture additional metrics in subsequent test cycles. For example, suppose that during the first iteration of load tests the process shows a marked increase in memory consumption, indicating a possible memory leak. In the subsequent iterations, additional memory counters related to generations can be captured to study the memory allocation pattern for the application.

Step 8 – Validate. Validate your model and estimates. Continue to create prototypes and measure the performance of the use cases by capturing metrics. This is an ongoing activity that includes prototyping and measuring; continue to perform validation checks until your performance goals are met. The further you are in your project's life cycle, the greater the accuracy of the validation. Early on, validation is based on available benchmarks and prototype code, or just proof-of-concept code. Later, you can measure the actual code as your application develops.

More information: see "Managed Code and CLR Performance" in Chapter 13, "Code Review: .NET Application Performance"; "How Measuring Applies to Life Cycle" in Chapter 15, "Measuring .NET Application Performance"; and "Performance Tuning Process" in Chapter 17, "Tuning .NET Application Performance."
  10. https://msdn.microsoft.com/en-us/library/bb924370.aspx
  11. Percentiles: In a set of observations, the value below which a given percentage of observations fall is called a percentile. E.g., the 90th percentile is the value below which 90% of the observations in the dataset can be found. It is an essential metric in scenarios where outliers (excessively high response-time values, in the case of performance results) are present and impact the average response time of the transaction. E.g., if for a transaction the response time is 1 sec for 99 transactions and 100 sec for the 100th transaction, then the average response time is 2 sec, but the 90th-percentile response time is 1 sec. Thus the percentile neglects the effect of outliers that is observed in the case of averages. That said, percentile statistics can stand alone only when used to represent data that is uniformly or normally distributed and has an acceptable number of outliers.

Mode: The mode is the value in the dataset that repeats the most often; it can also be defined as the number in the set of data that occurs most frequently.

Average: The average is simply the mean of all the numbers in the observation set: add together all the data points and divide the total by the number of data points in the set. E.g., the average of 3, 4 and 5 is 4.

Median: Median means middle value. When all of the numbers in the list are arranged in ascending/descending order, the number that occurs in the middle is the median of the observation set. In the case of an even number of values, the average of the middle two numbers is the median.

Standard deviation: Standard deviation helps in understanding the dispersion or variation of the data items from the average of the dataset. A higher standard deviation means that the data points are widely dispersed; a lower standard deviation indicates that they are spaced close to the average. For a finite set of numbers, the standard deviation is found by taking the square root of the average of the squared differences of the values from their average value. E.g., for a dataset with the following 8 values, SD can be calculated as –
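For reference, the population standard deviation just described, written out:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}$$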
  12. Data set A: normal distribution. Also called a bell curve, a data set whose member data are weighted toward the center (or median value) is a normal distribution. When graphed, the shape of the "bell" of normally distributed data can vary from tall and narrow to short and squat, depending on the standard deviation of the data set; the smaller the standard deviation, the taller and more narrow the bell. Quantifiable human activities often result in normally distributed data. Normally distributed data is also common for response time data.

Data set C: uniform distribution. Uniform distribution is a term that represents a collection of data roughly equivalent to a set of random numbers that are evenly distributed between the upper and lower bounds of the data set. The key is that every number in the data set is represented approximately the same number of times. Uniform distributions are frequently used when modeling user delays, but aren't particularly common results in actual response-time data. I'd go so far as to say that uniformly distributed results in response-time data are a pretty good indicator that someone should probably double-check the test or take a hard look at the application.

Statistical significance. Mathematically calculating statistical significance, also known as reliability, based on sample size is not only beyond the scope of this column, it's just plain complicated.
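In JMeter terms, these two shapes map naturally onto the built-in Uniform Random Timer and Gaussian Random Timer. A quick sketch of what each delay model produces (the parameters are illustrative):

```python
import random

# Uniform delays: every value between min and max is equally likely
# (the shape JMeter's Uniform Random Timer produces).
uniform_think = [random.uniform(2.0, 8.0) for _ in range(5)]

# Bell-curve delays around a mean with some deviation
# (the shape JMeter's Gaussian Random Timer produces); clamp at zero.
normal_think = [max(0.0, random.gauss(5.0, 1.5)) for _ in range(5)]

print(uniform_think)
print(normal_think)
```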
  13. Typically, it is fairly easy to add iterations to performance tests to increase the total number of measurements collected; the best way to ensure statistical significance is simply to collect additional data if there is any doubt about whether or not the collected data represents reality. Whenever possible, ensure that you obtain a sample size of at least 100 measurements from at least two independent tests.

There is no strict rule about how to decide which results are statistically similar without complex equations that call for huge volumes of data, which commercially driven software projects rarely have the time or resources to collect. However, the following is a reasonable approach to apply if there is doubt about the significance or reliability of data after evaluating two test executions where the data was expected to be similar. Compare results from at least five test executions and apply the rules of thumb below to determine whether or not test results are similar enough to be considered reliable:
• If more than 20 percent (or one out of five) of the test-execution results appear not to be similar to the others, something is generally wrong with the test environment, the application, or the test itself.
• If a 90th percentile value for any test execution is greater than the maximum or less than the minimum value for any of the other test executions, that data set is probably not statistically similar.
• If measurements from a test are noticeably higher or lower, when charted side-by-side, than the results of the other test executions, it is probably not statistically similar.

Confidence intervals. Because determining levels of confidence in data is even more complex and time-consuming than determining statistical significance or the existence of outliers, it is extremely rare to make such a determination during commercial software projects. A confidence interval for a specific statistic is the range of values around the statistic where the "true" statistic is likely to be located within a given level of certainty.
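The 90th-percentile rule of thumb above is easy to automate; a sketch (helper names are mine):

```python
def p90(samples: list[float]) -> float:
    """90th percentile by the nearest-rank method."""
    s = sorted(samples)
    return s[max(0, int(0.9 * len(s)) - 1)]

def suspect_runs(runs: list[list[float]]) -> list[int]:
    """Flag any run whose 90th percentile falls outside the
    [min, max] range of some other run (the rule of thumb above)."""
    flagged = []
    for i, run in enumerate(runs):
        v = p90(run)
        if any(v > max(other) or v < min(other)
               for j, other in enumerate(runs) if j != i):
            flagged.append(i)
    return flagged

# runs = [run1_times, run2_times, ...]; compare at least five executions.
```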
  14. The process of identifying one or more composite application usage profiles for use in performance testing is known as workload modeling. Workload modeling can be accomplished in any number of ways, but to varying degrees the following activities are conducted, either explicitly or implicitly, during virtually all performance-testing projects that are successful in predicting or estimating performance characteristics in a production environment:
• Identify the objectives. For example: ensure that one or more models represent the peak expected load of X orders being processed per hour; that one or more models represent the difference between "quarterly close-out" period usage patterns and "typical business day" usage patterns; and that one or more models represent business/marketing projections for up to one year into the future.
• Identify key usage scenarios.
• Determine navigation paths for key scenarios.
• Determine individual user data and variances.
• Determine the relative distribution of scenarios.
• Identify target load levels.
• Prepare to implement the model.
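"Determine the relative distribution of scenarios" and "identify target load levels" combine into a simple allocation; a toy example with an invented mix:

```python
# Hypothetical scenario mix for a target of 500 concurrent users.
mix = {"browse catalog": 0.70, "search": 0.20, "place order": 0.10}
target_users = 500

allocation = {scenario: round(target_users * weight)
              for scenario, weight in mix.items()}
print(allocation)  # {'browse catalog': 350, 'search': 100, 'place order': 50}
```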