Devopsdays barcelona

  • Performance work is reactive and is handled only by the system administrators, remaining a dark side of the deployment and operation of the platform.
  • Changes are delayed into production, and performance tweaks sometimes never make it back to the development or QA environments. Those tweaks are applied directly to the production environment with zero testing.
  • Manual testing with no automation means no concurrent testing, and even less performance and load testing.
  • Requirements such as how long the home page takes to load are treated as irrelevant and not taken into consideration. The systems are over-dimensioned to cover for all the bad software design.
  • As there is no monitoring of performance or of the real user experience, the only information available is scattered around marketing KPIs and business revenues.
  • Automated tests for continuous functional testing, regression tests, and smoke tests in each build, plus unit testing.
  • Builds are faster and they get to LIVE even faster.
  • Success criteria:
    – Validate that the application can handle X users per hour.
    – Validate that users will experience response times of Y seconds or less 95 percent of the time.
    – Validate that performance tests predict production performance within a +/- 10-percent range of variance.
    – Investigate hardware and software as it becomes available, to detect significant performance issues early.
    – The performance team, developers, and administrators work together with minimal supervision to tune and determine the capacity of the architecture.
    – Conduct performance testing within the existing project duration and budget.
    – Determine the most likely failure modes for the application under higher-than-expected load conditions.
    – Determine appropriate system configuration settings for desired performance characteristics.
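Once response times are collected from a load test, criteria like these can be checked mechanically. A minimal sketch in Python; the sample data, threshold values, and function names are illustrative assumptions, not from the talk:

```python
# Sketch: check the "Y seconds or less 95 percent of the time" and
# "X users per hour" style criteria against measured response times.
# Thresholds and sample data are made up for illustration.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_goals(response_times_s, duration_s, max_p95_s, min_req_per_hour):
    """True if both the latency and throughput criteria hold."""
    p95 = percentile(response_times_s, 95)
    req_per_hour = len(response_times_s) / duration_s * 3600
    return p95 <= max_p95_s and req_per_hour >= min_req_per_hour

times = [0.4, 0.6, 0.5, 2.1, 0.7, 0.5, 0.8, 0.6, 0.9, 0.5]
print(percentile(times, 95))  # 2.1
print(meets_goals(times, duration_s=10,
                  max_p95_s=1.5, min_req_per_hour=1000))  # False: p95 too high
```

A real project would feed these checks from the load tool's result files rather than a hard-coded list.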
  • Performance testing is indispensable for managing certain significant business risks. For example, if your Web site cannot handle the volume of traffic it receives, your customers will shop somewhere else. Beyond identifying the obvious risks, performance testing can be a useful way of detecting many other potential problems. While performance testing does not replace other types of testing, it can reveal information relevant to usability, functionality, security, and corporate image that is difficult to obtain in other ways. Many businesses and performance testers find it valuable to think of the risks that performance testing can address in terms of three categories: speed, scalability, and stability.
    Summary matrix of risks addressed by performance-test types:
    – Capacity: Is system capacity meeting business volume under both normal and peak load conditions?
    – Component: Is this component meeting expectations? Is it reasonably well optimized? Is the observed performance issue caused by this component?
    – Endurance: Will performance be consistent over time? Are there slowly growing problems that have not yet been detected? Is there external interference that was not accounted for?
    – Investigation: Which way is performance trending over time? To what should I compare future tests?
    – Load: How many users can the application handle before undesirable behavior occurs under a particular workload? How much data can my database/file server handle? Are the network components adequate?
    – Smoke: Is this build/configuration ready for additional performance testing? What type of performance testing should I conduct next? Does this build exhibit better or worse performance than the last one?
    – Spike: What happens if the production load exceeds the anticipated peak load? What kinds of failures should we plan for? What indicators should we look for?
    – Stress: What happens if the production load exceeds the anticipated load? What kinds of failures should we plan for? What indicators should we look for in order to intervene prior to failure?
    – Unit: Is this segment of code reasonably efficient? Did I stay within my performance budgets? Is this code performing as anticipated under load?
    – Validation: Does the application meet the goals and requirements? Is this version faster or slower than the last one? Will I be in violation of my contract/Service Level Agreement (SLA) if I release?
  • Preproduction is not necessarily a copy of the production environment; that would be the ideal world. Usually we deal with a down-scaled copy that (and this is important) keeps the ratio between the different elements of the complex system.
  • Validation criteria such as machine resources and scalability of the system are key to delivering a good product. Three validation criteria: error rate, response time, and system resources.
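Those three criteria can be expressed as a simple gate over a run's results. A sketch; the field names and threshold values are assumptions for illustration:

```python
# Sketch: gate a load-test run on the three validation criteria named
# above: error rate, response time, and system resources. Field names
# and thresholds are hypothetical.

def validate_run(run, max_error_rate=0.01, max_p95_s=2.0, max_cpu=0.80):
    """Return a list of human-readable budget violations (empty = pass)."""
    failures = []
    if run["error_rate"] > max_error_rate:
        failures.append("error rate %.2f%% over budget" % (run["error_rate"] * 100))
    if run["p95_s"] > max_p95_s:
        failures.append("p95 response time %.2fs over budget" % run["p95_s"])
    if run["peak_cpu"] > max_cpu:
        failures.append("peak CPU %.0f%% over budget" % (run["peak_cpu"] * 100))
    return failures

run = {"error_rate": 0.002, "p95_s": 1.4, "peak_cpu": 0.91}
print(validate_run(run))  # ['peak CPU 91% over budget']
```

Returning a list of violations rather than a bare pass/fail keeps the report readable for the whole team.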
  • From the developers, who use decorators or annotations in Java and tools in .NET to measure elapsed execution and compilation time, to the deployment team and ops, who take care of performance once the product is live: everyone is entitled to participate in performance.
  • For the vast majority of the development life cycle, performance testing is about collecting useful information to enhance performance through design, architecture, and development as it happens. Comparisons against the end-user-focused requirements and goals only have meaning for customer review releases or production release candidates. The rest of the time, you are looking for trends and obvious problems, not pass/fail validation. Make use of existing unit-testing code for performance testing at the component level. Doing so is quick and easy, helps the developers detect trends in performance, and can make a powerful smoke test. Performance testing is one of the single biggest catalysts for significant changes in architecture, code, hardware, and environments. Use this to your advantage by making observed performance issues highly visible across the entire team. Simply reporting on performance every day or two is not enough: the team needs to read, understand, and react to the reports, or else the performance testing loses much of its value.
  • If the site is not available or is slow, users will shop at another e-commerce site. There is a direct relation between abandonment rates and performance problems in e-commerce.
  • Three possible points of view: user, system, and operator.
    Define questions related to your application's performance that can be easily tested. For example: what is the checkout response time when placing an order? How many orders are placed in a minute? These questions have definite answers.
    With the answers to these questions, determine quality goals for comparison against external expectations. For example: checkout response time should be 30 seconds, and a maximum of 10 orders should be placed in a minute. The answers are based on market research, historical data, market trends, and so on.
    Identify the metrics. Using your list of performance-related questions and answers, identify the metrics that provide information related to those questions and answers.
    Identify supporting metrics. Using the same approach, you can identify lower-level metrics that focus on measuring performance and identifying bottlenecks in the system. When identifying low-level metrics, most teams find it valuable to determine a baseline for those metrics under single-user and/or normal load conditions. This helps you determine the acceptable load levels for your application. Baseline values help you analyze your application's performance at varying load levels and serve as a starting point for trend analysis across builds or releases.
    Reevaluate the metrics to be collected regularly. Goals, priorities, risks, and current issues are bound to change over the course of a project. With each of these changes, different metrics may provide more value than the ones previously identified.
    Additionally, to evaluate the performance of your application in more detail and to identify potential bottlenecks, it is frequently useful to monitor metrics in the following categories:
    – Network-specific metrics: the overall health and efficiency of your network, including routers, switches, and gateways.
    – System-related metrics: resource utilization on your server: processor, memory, disk I/O, and network I/O.
    – Platform-specific metrics: related to the software that hosts your application, such as the Microsoft .NET Framework common language runtime (CLR) and ASP.NET-related metrics.
    – Application-specific metrics: custom performance counters inserted in your application code to monitor application health and identify performance issues. You might use custom counters to determine the number of concurrent threads waiting to acquire a particular lock, or the number of requests queued to make an outbound call to a Web service.
    – Service-level metrics: overall application throughput and latency, possibly tied to specific business scenarios.
    – Business metrics: indicators of business-related information, such as the number of orders placed in a given timeframe.
    Step 6 – Allocate Budget. Spread your budget (determined in Step 4, "Identify Budget") across your processing steps (determined in Step 5, "Identify Processing Steps") to meet your performance objectives. You need to consider execution time and resource utilization. Some of the budget may apply to only one processing step, some may apply to the scenario, and some may apply across scenarios.
    Assigning execution time to steps: if you do not know how much time to assign, simply divide the total time equally between the steps. At this point it is not important for the values to be precise, because the budget will be reassessed after measuring actual times, but it is important to have an idea of the values. Do not insist on perfection, but aim for a reasonable degree of confidence that you are on track. You do not want to get stuck, but, at the same time, you do not want to wait until your application is built and instrumented to get real numbers. Where you do not know execution times, try spreading the time evenly and see where there might be problems or tension. If dividing the budget shows that each step has ample time, there is no need to examine it further. For the steps that look risky, conduct some experiments (for example, with prototypes) to verify that what you need to do is possible, and then proceed. Note that one or more of your steps may have a fixed time. For example, you may make a database call that you know will not complete in less than 3 seconds. Other times are variable. The fixed and variable costs must be less than or equal to the allocated budget for the scenario.
    Assigning resource utilization requirements: know the cost of your materials (for example, what technology x costs in comparison to technology y). Know the budget allocated for hardware; this defines the total resources at your disposal. Know the hardware systems already in place. Know your application functionality: for example, heavy XML document processing may require more CPU, chatty database access or Web service communication may require more network bandwidth, and large file uploads may require more disk I/O.
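The "divide the total time equally, keep fixed costs fixed" rule above is easy to sketch; the step names and times below are hypothetical:

```python
# Sketch: spread a scenario's response-time budget across processing
# steps. Steps with known fixed costs keep them; the remainder is
# split evenly among the rest. Step names and times are made up.

def allocate_budget(total_s, steps, fixed=None):
    fixed = fixed or {}
    remaining = total_s - sum(fixed.values())
    if remaining < 0:
        raise ValueError("fixed costs alone exceed the scenario budget")
    variable = [s for s in steps if s not in fixed]
    share = remaining / len(variable) if variable else 0.0
    return {s: fixed.get(s, share) for s in steps}

steps = ["validate input", "query database", "render page"]
print(allocate_budget(8.0, steps, fixed={"query database": 3.0}))
# {'validate input': 2.5, 'query database': 3.0, 'render page': 2.5}
```

The even split is only a starting point; as the guidance says, the numbers get reassessed once real measurements exist.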
  • The test team's initial understanding of the system under test, the project environment, the motivation behind the project, and the performance build schedule can often be completed during a work session involving the performance specialist, the lead developer, and the project manager (if you also have a tentative project schedule).
    Project vision: since the features, implementation, architecture, timeline, and environments are likely to change over time, you should revisit the vision regularly, as it has the potential to change as well.
    Project context: the factors that are, or may become, relevant to achieving the project vision, such as client expectations, budget, timeline, and project environment.
    Understanding the system: becoming familiar with the system's intent, what is currently known or assumed about its hardware and software architecture, and the available information about the completed system's customer or user.
    Understanding the project environment: it is most important to understand the team's organization, operation, and communication techniques. Agile teams tend not to use long-lasting documents and briefings as their management and communication methods; instead, they opt for daily stand-ups, story cards, and interactive discussions. Failing to understand these methods at the beginning of a project can put performance testing behind before it even begins.
    Understanding the performance build schedule: at this stage, the project schedule makes its entrance. Someone or something must communicate the anticipated sequence of deliveries, features, and/or hardware implementations that relate to the performance success criteria. Because you are not creating a performance test plan at this time, it is not important to concern yourself with dates or resources. Instead, attend to the anticipated sequencing of performance builds, a rough estimate of their contents, and an estimate of how much time to expect between performance builds. The performance builds that will most likely interest you relate to hardware components, supporting software, and application functionality becoming available for investigation.
  • Success criteria:
    – Validate that the application can handle X users per hour.
    – Validate that users will experience response times of Y seconds or less 95 percent of the time.
    – Validate that performance tests predict production performance within a +/- 10-percent range of variance.
    – Investigate hardware and software as it becomes available, to detect significant performance issues early.
    – The performance team, developers, and administrators work together with minimal supervision to tune and determine the capacity of the architecture.
    – Conduct performance testing within the existing project duration and budget.
    – Determine the most likely failure modes for the application under higher-than-expected load conditions.
    – Determine appropriate system configuration settings for desired performance characteristics.
  • PC client, set-top boxes, smart TVs, and HTML5 portals for Android and iOS.
  • Transcript

    • 1. Telefónica Digital, Barcelona, October 11th 2013
    • 2. Telefónica Digital Product Development & Innovation. About me: for the last 8 years I have been working as a performance test engineer with different tools and environments.
    • 3. Architecture Design: Web Performance Optimization
    • 4. No instruments – Users review – Late or no performance testing – No real user monitoring – Reactive performance tuning
    • 5. No tools, no performance dashboard; performance is for sysadmins and operators
    • 6. Releases are costly and may take several months of work
    • 7. Manual testing of each release after code freeze
    • 8. Non-functional requirements are most likely ignored
    • 9. In production there is no monitoring of the traffic and how it affects the business
    • 10. User feedback is usually negative and there is no interaction with developers and designers
    • 11. The application's performance directly affects the market's performance
    • 12. Continuous Integration – Functional testing – Automation – Monitoring
    • 13. Continuous Integration for functional testing is already working in nightly builds
    • 14. Automation reduces time to market for the applications
    • 15. Monitoring the real user behaviour and not just the healthcheck of servers
    • 16. Error and risk management
    • 17. Tuning and bugfixing
    • 18. Listening to user feedback
    • 19. The Future • Continuous Performance Integration – Performance tests integrated in Jenkins – Automation of the trend reports – Real User Monitoring -> Real-time feedback
    • 20. Telefónica Digital Product Development & Innovation: SCRUM and PERFORMANCE
    • 21. Testing: Proactive performance testing for each release. Load tests will discover the flaws and bottlenecks the application or the system may have in the production environment.
    • 22. Availability: Knowing the production scenario in order to make a good decision about how to orient the performance tests is the most successful starting hypothesis. High availability of the application and the system is the goal of a ready-for-service status. The application and the systems must be stable, efficient, and dimensioned according to the usage.
    • 23. Velocity: Not only the response time is important; an intelligent use of the resources is needed to grow in the future. Efficiency, understood as the capacity to dispose of the system resources to achieve an objective, in our case response time and uptime.
    • 24. Scalability: Being able to grow depending on the necessities of the market, users, and new technologies is one of the questions a performance engineer will have to answer.
    • 25. Scenarios: "A performance test is easy. It is easy to design non-realistic scenarios. It is easy to collect irrelevant data. Even with a good scenario and appropriate data, it is easy to use an incorrect statistical method to analyze the results." - Alberto Savoia
    • 26. PreProduction: One of the most important parts of a good performance test design is to have an appropriate load test environment, as similar as possible to production at all levels: networking, systems, and application architecture.
    • 27. Monitoring: Knowing the production environment is key to making good decisions about how to design a performance test plan. Designing a plan according to the real traffic and usage of the platform is key to creating validation criteria.
    • 28. Performance Teams: Developers, DBAs, QAs, DevOps, product owners ... the whole team is part of performance.
    • 29. Tools: There are many tools available in the market for load testing and monitoring. An effort in evaluating these tools will benefit the execution of the tests in the long term. However, the most important part is how the reports are generated and who is going to interpret them.
    • 30. Real User Monitoring: Not only unique users or session times are important. How the users work with the application, and their psychology, are key to understanding the results and how they affect the business.
    • 31. Best Practices: Keep it simple, use cache wisely, invest in testing and monitoring, and create a culture of performance across the whole organization.
    • 32. Innovation: Technology develops at high speed. To bring out the best of our product, business and technology need to evolve hand in hand. Investing in performance research is crucial to keep up with other internet competitors.
    • 33. Telefónica Digital Product Development & Innovation: Understand the Project Vision and Context – Project Vision, Project Context, Understand the System, Understand the Project Environment, Understand the Performance Build Schedule
    • 34. Telefónica Digital Product Development & Innovation: Improved way of working – Improve performance unit testing by pairing performance testers with developers. Assess and configure new hardware by pairing performance testers with administrators. Evaluate algorithm efficiency. Monitor resource usage trends. Measure response times. Collect data for scalability and capacity planning.
    • 35. Telefónica Digital Product Development & Innovation: Configure the Test Environment – Set up an isolated networking environment. Procure hardware as similar as possible to production, or at least keep the ratio amongst all elements. Coordinate a bank of IPs for IP spoofing. Use monitoring tools and operating systems like production. Use load generation tools or develop your own.
    • 36. Telefónica Digital Product Development & Innovation: Identify and Coordinate Tasks – Work item execution method. Specifically what data will be collected. Specifically how that data will be collected. Who will assist, how, and when. Sequence of work items by priority.
    • 37. Telefónica Digital Product Development & Innovation: Execute Task(s) – Keys to conducting a performance-testing task: • Analyze results immediately and revise the plan accordingly. • Work closely with the team or sub-team that is most relevant to the task. • Communicate frequently and openly across the team. • Record results and significant findings. • Record other data needed to repeat the test later. • Revisit performance-testing priorities after no more than two days.
    • 38. Telefónica Digital Product Development & Innovation: Analyze Results and Report – Pause periodically to consolidate results, conduct trend analysis, create stakeholder reports, and pair with developers, architects, and administrators to analyze results.
    • 39. But … what are you actually doing day by day?
    • 40. Telefónica Digital Product Development & Innovation: Case Study
    • 41. HTML5 trends using YSlow and Firebug
    • 42. Branches comparison
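The "automation of the trend reports" idea from slide 19, and the build/branch comparisons above, can be sketched as a small script that flags builds whose p95 latency regresses past a tolerance; the build IDs, numbers, and the 10% tolerance are invented for illustration:

```python
# Sketch: flag performance regressions across a series of builds by
# comparing each build's p95 latency to the previous one. Build data
# and the tolerance are illustrative.

def find_regressions(history, tolerance=0.10):
    """history: list of (build_id, p95_seconds) tuples, oldest first."""
    regressions = []
    for (_, prev), (cur_id, cur) in zip(history, history[1:]):
        if prev > 0 and (cur - prev) / prev > tolerance:
            regressions.append((cur_id, prev, cur))
    return regressions

history = [("build-101", 1.00), ("build-102", 1.04), ("build-103", 1.30)]
print(find_regressions(history))  # [('build-103', 1.04, 1.3)]
```

Wired into a CI job such as Jenkins, a check like this can fail the build or annotate the nightly trend report.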
