Performance Continuous Integration



  1. Telefónica Digital, Barcelona, March 21st, 2013
  2. About me: for the last 8 years I have been working as a performance test engineer with different tools and environments. (Telefónica Digital, Product Development & Innovation)
  3. Architecture Design · Web Performance Optimization
  4. No instruments · user reviews · late or no performance testing · no real user monitoring · reactive performance tuning
  5. No tools, no performance dashboard; performance is left to sysadmins and operators
  6. Releases are costly and may take several months of work
  7. Manual testing of each release after code freeze
  8. Non-functional requirements are most likely ignored
  9. In production there is no monitoring of traffic or of how it affects the business
  10. User feedback is usually negative and there is no interaction with developers and designers
  11. An application's performance directly affects its market performance
  12. Continuous Integration: functional testing, automation, monitoring
  13. Continuous Integration for functional testing is already working in nightly builds
  14. Automation reduces time to market for the applications
  15. Monitor real user behaviour, not just server health checks
  16. Error and risk management
  17. Tuning and bug fixing
  18. Listening to user feedback
  19. The Future: Continuous Performance Integration – performance tests integrated in Jenkins, automated trend reports, Real User Monitoring with real-time feedback
  20. SCRUM and Performance
  21. Testing: proactive performance testing for each release. Load tests will discover the flaws and bottlenecks the application or the system may have in the production environment
  22. Availability: high availability of the application and the system is the goal of a ready-for-service status. The application and the systems must be stable, efficient and dimensioned according to their usage
  23. Velocity: not only response time is important; intelligent use of resources is needed to grow in the future. Efficiency, understood as the capacity to use the system's resources to achieve an objective – in our case, response time and uptime
  24. Scalability: being able to grow with the needs of the market, users and new technologies is one of the focuses of a performance engineer
  25. Scenarios: "A performance test is easy. It is easy to design unrealistic scenarios. It is easy to collect irrelevant data. Even with a good scenario and appropriate data, it is easy to use an incorrect statistical method to analyze the results." – Alberto Savoia
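Savoia's warning about incorrect statistics can be made concrete. The sketch below is not from the deck – class name and sample data are invented for illustration – but it shows how the arithmetic mean hides a tail latency that a percentile exposes, which is exactly the analysis mistake the slide warns about.

```java
import java.util.Arrays;

// Illustrative only: why the mean can mislead when analyzing load-test results.
public class LatencyStats {

    // Arithmetic mean of response times in milliseconds.
    static double mean(long[] samples) {
        double sum = 0;
        for (long s : samples) sum += s;
        return sum / samples.length;
    }

    // Nearest-rank percentile (p in [0, 100]) of response times.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // Nine fast responses and one slow outlier: a typical skewed sample.
        long[] ms = {100, 105, 98, 102, 99, 101, 103, 97, 100, 2500};
        // The mean (~340 ms) suggests a slow system; the median (100 ms)
        // shows most users are fine, and p95 (2500 ms) isolates the outlier.
        System.out.printf("mean=%.1f ms, p50=%d ms, p95=%d ms%n",
                mean(ms), percentile(ms, 50), percentile(ms, 95));
    }
}
```

Reporting median and high percentiles side by side, rather than a single mean, is the usual defence against this class of error.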
  26. PreProduction: one of the most important parts of a good performance test design is an appropriate load test environment, as similar as possible to production at all levels – networking, systems and application architecture
  27. Monitoring: knowing the production environment is key to making good decisions about how to design a performance test plan. Designing the plan according to the real traffic and usage of the platform is key to creating the validation criteria
  28. Performance Teams: developers, DBAs, QAs, DevOps, product owners... the whole team is part of performance
  29. Tools: there are many tools available on the market for load testing and monitoring. An effort in evaluating these tools will benefit the execution of the tests in the long term. However, the most important part is how the reports are generated and who is going to interpret them
  30. Real User Monitoring: not only unique users or session times are important. How users work with the application, and their psychology, are key to understanding the results and how they affect the business
  31. Best Practices: keep it simple, use caching wisely, invest in testing and monitoring, create a culture of performance across the whole organization
  32. Tuning & Innovation: technology develops at high speed. To bring out the best of our product, business and technology need to evolve hand in hand. Investing in performance research is crucial to keep up with other internet competitors
  33. Understand the Project Vision and Context: project vision • project context • understand the system • understand the project environment • understand the performance build schedule
  34. Identify Reasons for Testing Performance – success criteria: • application performance requirements and goals • performance-related targets and thresholds • exit criteria (how to know when you are done) • key areas of investigation • key data to be collected
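Targets, thresholds and exit criteria only gate a build if they are executable. The sketch below – an assumption of how the slide's criteria could be encoded, with invented names and thresholds – turns each one into a pass/fail check, the kind of gate a CI job can run automatically.

```java
// Illustrative sketch: encoding performance success criteria as checks.
public class SuccessCriteria {

    // One performance target, e.g. "p95 response time under 800 ms".
    record Criterion(String name, double measured, double threshold) {
        boolean passes() { return measured <= threshold; }
    }

    // The run passes only if every criterion holds; stops at the first failure.
    static boolean evaluate(Criterion[] criteria) {
        for (Criterion c : criteria) {
            System.out.printf("%-28s %s%n", c.name(), c.passes() ? "PASS" : "FAIL");
            if (!c.passes()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Example thresholds, invented for illustration.
        Criterion[] run = {
            new Criterion("p95 response time (ms)", 740, 800),
            new Criterion("error rate (%)", 0.4, 1.0),
            new Criterion("CPU utilisation (%)", 92, 85),
        };
        System.out.println(evaluate(run) ? "Build accepted" : "Build rejected");
    }
}
```

In a CI setup the boolean result would set the job's exit status, so a threshold breach fails the nightly build rather than hiding in a report.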
  35. Identify the Value Performance Testing Adds to the Project. In general, the types of information that may be valuable to discuss with the team when preparing a performance-testing strategy for a performance build include: › the reason for performance testing this delivery › prerequisites for strategy execution › tools and scripts required › external resources required › risks to accomplishing the strategy › data of special interest › areas of concern › pass/fail criteria › completion criteria › planned variants on tests › load range › tasks to accomplish the strategy
  36. Configure the Test Environment: set up an isolated network environment • procure hardware as similar as possible to production • coordinate a bank of IPs for IP spoofing • monitoring tools and operating systems like production • load generation tools, or develop your own
  37. Identify and Coordinate Tasks: work item execution method • specifically what data will be collected • specifically how that data will be collected • who will assist, how, and when • sequence of work items by priority
  38. Execute Task(s) – keys to conducting a performance-testing task: • analyze results immediately and revise the plan accordingly • work closely with the team or sub-team most relevant to the task • communicate frequently and openly across the team • record results and significant findings • record other data needed to repeat the test later • revisit performance-testing priorities after no more than two days
  39. Analyze Results and Report: pause periodically to consolidate results, conduct trend analysis, create stakeholder reports, and pair with developers, architects and administrators to analyze results
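The trend analysis mentioned here (and the automated trend reports of slide 19) can be reduced to a small check: compare the latest build's metric with the previous one and flag a regression beyond a tolerance. This is a sketch under assumed names and data, not the deck's actual implementation.

```java
// Illustrative sketch of an automated trend check across nightly builds.
public class TrendReport {

    // Percentage change from the previous value to the latest one.
    static double changePercent(double previous, double latest) {
        return (latest - previous) / previous * 100.0;
    }

    // The latest build regresses if the metric (e.g. p95 latency) worsens
    // by more than tolerancePercent relative to the previous build.
    static boolean isRegression(double[] metricPerBuild, double tolerancePercent) {
        int n = metricPerBuild.length;
        if (n < 2) return false;  // nothing to compare against
        return changePercent(metricPerBuild[n - 2], metricPerBuild[n - 1])
                > tolerancePercent;
    }

    public static void main(String[] args) {
        // p95 response time per nightly build in ms, oldest first (invented data).
        double[] p95PerNightly = {410, 405, 420, 415, 560};
        System.out.println(isRegression(p95PerNightly, 10.0)
                ? "Regression detected: alert the team" : "Trend stable");
    }
}
```

Comparing only adjacent builds keeps the check cheap; a fuller report would also plot the whole series so slow drift is visible, which is what a Jenkins trend graph provides.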
  40. But… what are you actually doing day by day?
  41. Case Study
  42. Selenium Framework
  43. Selenium Framework:
      // Set default NetExport preferences
      profile.setPreference(domain + "netexport.alwaysEnableAutoExport", true);
      profile.setPreference(domain + "netexport.showPreview", false);
      profile.setPreference(domain + "netexport.beaconServerURL", "http://localhost/har-db");
      profile.setPreference(domain + "netexport.autoExportToFile", false);
      profile.setPreference(domain + "netexport.autoExportToServer", true);
      profile.setPreference(domain + "netexport.sendToConfirmation", false);
      // Alternatively, export .har files to a local ShowSlow directory:
      // profile.setPreference(domain + "netexport.autoExportToFile", true);
      // profile.setPreference(domain + "netexport.defaultLogDir", "C:\\showslow\\har");
  44. Branch comparison
  45. Performance trends