An investigation of cloud platform performance, discussing the performance tests used to measure it as well as potential pitfalls when performance testing and when hosting test systems in cloud environments.
Implementing a Performance Centre of Excellence – Richard Bishop
An old presentation, but just as relevant today as it was when I presented this at the British Computer Society in 2006.
This presentation showed how building a performance test team using a shared knowledge base with shared code libraries and best practice techniques made the performance test team a valuable part of the project team at a large UK bank.
Making test results and reports accessible to the entire project team and acting as an intermediary between the development teams and business users made the test team vital to the success of many projects at HBoS.
Presentation by Richard Bishop and Gordon Appleby at HP Discover 2014 in Barcelona. In the presentation, Richard and Gordon described their experiences in cloud-based performance testing. They discussed the increased adoption of the cloud as an application-testing platform as well as the evolution of HP’s cloud-based testing products including LoadRunner, Performance Center and StormRunner.
Using dynaTrace to optimise application performance – Richard Bishop
I delivered this presentation as a webcast for Compuware in July 2012. The presentation describes my use of dynaTrace over the previous 12 months or so to investigate application performance and suggest performance improvements for one of Intechnica's clients.
You can register to view the webcast recording (including the audio feed) at this URL.
http://offers.compuware.com/register?cid=70170000000h8W6
In this presentation which was delivered to testers in Manchester, I help would-be performance testers to get started in performance testing. Drawing on my experiences as a performance tester and test manager, I explain the principles of performance testing and highlight some of the pitfalls.
Presentation by Haroon Meer and Marco Slaviero at BlackHat USA in 2007.
This presentation is about timing attacks against web applications. Squeeza, a SQL injection tool developed by Marco Slaviero that returns data through various channels (DNS, timing, HTTP error messages), is introduced. An attack called cross-site request timing is also discussed.
TALK | Learn how to tap into what your employer sees using Postman + osquery, an open-source tool for asking questions about devices like laptops, servers, and Docker containers.
StarWest 2013: Performance is not an afterthought – make it a part of your Agi... – Andreas Grabner
This presentation was given at StarWest 2013 in Anaheim, CA, and was also broadcast through the Virtual Conference.
It shows how important it is to focus on performance throughout continuous delivery in order to avoid the most common performance problem patterns that still cause applications to crash and leave engineers spending their weekends and nights in firefighting/war-room situations.
Similar to BCS SIGiST - How Fast is the Cloud?
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA Connect – Kari Kakkonen
These are my and Rik Marselis' slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
17. vs.
[Bar chart: average response time in seconds (scale 0–2.5) across the four platforms: AWS out of the box (IaaS 1), Azure ported to the optimised PaaS database, physical "tin" out of the box, and VMware out of the box (IaaS 2).]
25. “Cloud computing performance varies more than you might think….”
“….. the price of consistency likely exceeds what you want to spend”
http://www.infoworld.com/d/cloud-computing/face-the-facts-cloud-performance-isnt-always-stable-170066
26. Realism: but how will it really perform? Repeatability: did my change make a difference?
Changed my time in IT and working as a tester after checking with LinkedIn – added three years to my experience!! At HBoS I presented here 6.5 years ago… on 9th December 2005! As someone involved in testing and quality, I should get my facts right!
The agenda is split into two parts. Part one: introduction, reasons for conducting our research, choice of platform, methodology etc. Part two: test results, what the results mean, cloud futures (in and out of the test lab), and an opportunity to discuss.
Definition from the US National Institute of Standards and Technology. Key points: flexibility; on-demand provisioning. Not necessarily about cost reduction, unless used sensibly. Not simplification…
Simplification is one misconception about cloud, but there are others. Cloud doesn't promise to make life easier. Cloud doesn't reduce system complexity. It won't reduce costs unless implemented sensibly and well managed. These key points are backed up by recent articles, e.g. a Sunday Telegraph supplement. Generally speaking, cloud is NOT cheaper, NOT less complex and NOT faster than what you already have. People who make these assumptions run the risk of falling into the trough of the hype cycle.
Well… Gartner suggests that all new technologies go through a "Hype Cycle", which describes the maturity, adoption and application of new technology. So why bother with cloud? There's lots of negative press, and it's easy to dismiss positive articles as hype. Like all technologies, cloud has its limitations. The key is to develop a strategy which exploits the benefits and reduces the impact of the disadvantages. Everybody is wary of the "trough of disillusionment". Handouts include hidden slides giving details of the hype cycle…
This is Gartner's Hype Cycle image for 2010. Cloud Computing and Private Cloud Computing are towards the top of the "hype curve". Early adopters are starting to implement cloud; there is some negative publicity and prices are starting to fall. Both are predicted to be mainstream within 2-5 years.
Gartner's Hype Cycle image for July 2011 (a new one is due soon). Cloud Computing and Private Cloud Computing are still at the top of the "hype curve", past the peak and predicted to be mainstream within 2-5 years. Cloud/Web Platforms (IaaS) is entering the "trough" – e.g. the Sunday Telegraph article: the technology fails to meet inflated expectations, the press loses interest, and it becomes less fashionable. The key to success is avoiding the trough and planning now for the long-term benefits that cloud can bring. The latest Hype Cycle is due out from Gartner in July 2012.
Cloud is a very broad term and it makes sense to subdivide it further. In traditional IT, you manage the entire stack: apps, data, middleware, OS, physical hardware etc. IaaS: responsibility for raw block storage, networking and hardware is outsourced; the crossover point is at the OS level. It's a utility computing model – pay for what you use. PaaS: responsibility for support and upgrades of the operating system is also passed to a third party, with a similar pay-for-what-you-use model. SaaS: the complete application and data, plus responsibility for management, maintenance etc., are passed to a third party.
There are many concerns about cloud, as relevant for testers as for other cloud users. Unproven tech: reliability – can you trust your core business to the cloud? All your eggs are in one basket. Service/support: the model is immature and the learning curve hinders adoption. Lock-in: proprietary platforms mean lock-in, though some migration tools are available. Costs: it should be cheap, but costs are difficult to quantify – consider bandwidth, uptime and database costs. Security: the biggest concern, but the same as hosting your own platforms – same problem, different perspective. Performance: a key concern, especially against a background of increasing application complexity. I won't address all of these issues, but they all need to be considered before adopting cloud.
Why conduct our research? High-profile failures are widely reported and dent people's confidence in cloud, while successes go unnoticed. These sites have one thing in common: both were hosted on Amazon EC2. Police UK failed. Why? GZIP compression was off, caching was off (no cache headers returned by the landing page), and there were multiple CSS/JS files. The developers relied on EC2's inherent scalability and disregarded best practice. 18m hits/hr ≈ 5k/sec; decent caches and cache warming/pre-population would certainly have helped. Wikileaks published the "Iraq war logs". Despite a hacking group called "Operation Payback" directing a 10Gbps DDoS at Amazon when the leaks were published, the site remained available throughout the attack (until Amazon turned it off for breaching its terms of contract). This suggests the platform isn't to blame! Worthy of investigation: how do these platforms perform in "real world" scenarios?
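The Police UK failings described above (compression off, no cache headers) are easy to check for on any landing page. A minimal sketch of such a check, assuming a dict of response headers as returned by any HTTP client; the function name and message strings are illustrative:

```python
def check_front_end_basics(headers):
    """Flag the two misconfigurations described above: a landing page
    served without compression and without any cache headers."""
    problems = []
    # Case-insensitive header lookup
    h = {k.lower(): v for k, v in headers.items()}
    if "gzip" not in h.get("content-encoding", ""):
        problems.append("response is not gzip-compressed")
    if "cache-control" not in h and "expires" not in h:
        problems.append("no cache headers - every hit reaches the origin")
    return problems

# Headers resembling the failure mode described above, vs. a sane setup
bad = {"Content-Type": "text/html"}
good = {"Content-Encoding": "gzip", "Cache-Control": "max-age=300"}
print(check_front_end_basics(bad))   # both problems flagged
print(check_front_end_basics(good))  # []
```

In practice the headers dict would come from a real request (e.g. `urllib.request` or a load-test tool's response object), but the check itself is the same.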
IaaS (AWS and VMware), PaaS (Azure) and physical hardware. We needed an app suitable for each, and a familiar test tool; we used a physical server.
After identifying target platforms, we looked for an application to test. nopCommerce is a two-tier ecommerce application with a .NET/SQL architecture, pre-populated with sample data and – importantly – already ported to Azure, so we were up and running on 4 platforms quickly. We installed the same application and test data on all 4 platforms, attempting to choose similarly sized/priced options.
The demo database contains fewer than 100 products by default – fine for functional tests, but not for performance. We increased this to 7,500 products and associated multiple images with each product. Text for product names and product descriptions came from a copy of "War and Peace" downloaded from the Project Gutenberg archive. Products were created in bulk using an SQL script which created products and product descriptions, assigned them to random product categories and associated pictures with the products.
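The bulk-load approach above can be sketched as follows. The study's actual SQL script isn't in the deck, so the table and column names here are hypothetical; the point is simply slicing a public-domain text into product names and descriptions and assigning random categories:

```python
import random

def make_product_inserts(corpus, n_products, n_categories=20, seed=42):
    """Generate INSERT statements for bulk product data, taking product
    names and descriptions from a text corpus (the study used 'War and
    Peace'). Table and column names are illustrative only."""
    rng = random.Random(seed)
    words = corpus.split()
    inserts = []
    for i in range(n_products):
        name = " ".join(words[(i * 3) % len(words):][:3]) or "Product"
        desc = " ".join(words[(i * 7) % len(words):][:30])
        category = rng.randint(1, n_categories)  # random category, as described
        # A real script would escape quotes in the text values.
        inserts.append(
            "INSERT INTO Product (Name, Description, CategoryId) "
            f"VALUES ('{name}', '{desc}', {category});"
        )
    return inserts

sample = ("Well Prince so Genoa and Lucca are now just family estates "
          "of the Buonapartes")
stmts = make_product_inserts(sample, 5)
print(len(stmts))  # 5
```

Generating the statements offline and running them as one script, as the study did, keeps the load repeatable across all four platforms.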
We developed scripts using Forecast. The scripts and scenario were a mixture of business processes simulating real user actions; as a retail site, 85% of users were browsing/searching and the remainder placing orders. Load profile: ramp up to peak load over a 5-minute period, maintain load for 30 minutes, then ramp down. A load equivalent to 46 users pausing for between 5 and 10 seconds was simulated – a user load equivalent to 20,000 pages per hour.
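The load-profile figures above are self-consistent, as a quick back-of-the-envelope check shows (assuming each of the 46 virtual users requests one page per think-time-plus-response-time interval):

```python
pages_per_hour = 20_000
users = 46

pages_per_sec = pages_per_hour / 3600            # ~5.6 pages/s across all users
secs_per_page_per_user = users / pages_per_sec   # ~8.3 s between pages per user

# 8.3 s per page fits a 5-10 s think time plus a response time of a
# second or two, matching the load profile described above.
print(round(pages_per_sec, 1), round(secs_per_page_per_user, 1))
```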
During our tests we used standard Windows PERFMON monitoring for each platform, plus in-house Intechnica tools (MetricsWizard and KPIManager) to compare results and write reports. These tools produce reports comparing response times and infrastructure performance. They were not break tests, just baseline comparisons at a constant, consistent load for each platform. Standard tests: same data (text from War and Peace, random images), same volume of users for each test (we just changed the target IP in the scripts), and same workload per user (pacing between user actions).
Having given a flavour of our tests and platform choice, it's now time for the results. The big question is: how do they perform? Before I show the results – who thinks cloud is faster? Show of hands. The answer to "how fast is the cloud compared to physical hardware?" is: faster… and… slower!
This study highlights the benefits of developing applications specifically for a platform. The port of nopCommerce to Azure specifically exploited the performance benefits of the highly optimised PaaS (SQL Azure back-end database) and produced results that are "faster than tin". Simply placing the application on a normal instance of SQL (IaaS, PaaS or physical) doesn't give the same performance as a tuned back-end database such as Azure or Amazon RDS, because they aren't as highly optimised for database performance.
Analysis: the first step is to look at the initial TrafficSpike report. Check the transaction count, check the error rate, and ensure the tests are comparable. We need a like-for-like comparison: tests with very different error rates can't be compared, as that skews the results. Once we know the tests are valid, we use our in-house analysis tools to perform more detailed analysis.
Metrics Wizard compares response times, transaction counts etc. between tests to identify any changes. It is usually used for before/after comparison, but here we used it to compare results from different platforms. The chart shows a comparison of physical hardware with the VMware platform; red/green is used to show deteriorations or improvements in performance between tests. In this case, although we see red and green, it's apparent that the difference between the platforms was negligible: physical hardware averaged 0.712s response time, VMware 0.690s. This doesn't tell the whole story – it would be good to perform break tests to establish headroom. For this study we looked at CPU utilisation to see if we could identify differences between platforms.
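The red/green comparison described above boils down to a percentage delta per metric. A sketch using the two averages quoted; the 5% "negligible" band is an assumption for illustration, not MetricsWizard's actual rule:

```python
def classify(before, after, threshold=0.05):
    """Green = meaningfully faster, red = meaningfully slower,
    'negligible' if within the threshold. The 5% band is an
    illustrative assumption."""
    delta = (after - before) / before
    if delta <= -threshold:
        return "green"
    if delta >= threshold:
        return "red"
    return "negligible"

# Averages from the study: physical 0.712 s vs VMware 0.690 s
print(classify(0.712, 0.690))  # 'negligible' - only a ~3% difference
```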
KPI Manager takes Windows PERFMON logs and compares observed performance with Microsoft best practice, e.g. for database server performance. The IaaS server exceeded 50% CPU utilisation during the test (50.10% observed). Key observation: we can see from the counters below that the database is inefficient. Calculate full table scans divided by index searches: we see lots of full table scans, which are inefficient compared to index searches. This may be an inherent feature of nopCommerce, or may be due to the artificial data that we used. There are only three columns because Azure stats aren't available – Azure is presented as PaaS rather than IaaS.
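The "full table scans divided by index searches" calculation above reads straight off the SQL Server PERFMON counters (Access Methods: Full Scans/sec and Index Searches/sec). A sketch with made-up counter values:

```python
def scan_to_seek_ratio(full_scans_per_sec, index_searches_per_sec):
    """Ratio of full table scans to index searches. High values mean the
    engine is reading whole tables rather than using indexes - the
    inefficiency described above."""
    if index_searches_per_sec == 0:
        return float("inf")
    return full_scans_per_sec / index_searches_per_sec

# Hypothetical counter averages for one test run
ratio = scan_to_seek_ratio(120.0, 800.0)
print(round(ratio, 2))  # 0.15 - far more scanning than a well-indexed workload
```

One commonly cited SQL Server rule of thumb looks for index searches to outnumber full scans by around 1000:1, so a ratio anywhere near 0.15 would flag exactly the problem the notes describe.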
We knew our database was struggling. PERFMON showed the application doing a large number of page lookups/sec, which we could reduce by moving images out of the database onto the file system and with better caching on the webserver. Too many table scans: analogous to looking through the entire phonebook line by line to find a person's name, rather than using the index to start your search closer to the name/number you're looking for. Latency between the EC2 back end and the web layer is unknown and may vary depending on the location of the webserver and dbserver. We hear of people spinning up multiple servers and pairing those with "closer" IP addresses in the hope of improving performance. This can be done, but Amazon's advice is to use their optimised database. There is significantly less latency in physical hardware/VMware. What we do know: for this application, with this test data, Azure is the best performing. With different applications, test data or use cases, others may perform better.
We weren't satisfied with the original IaaS out-of-the-box performance and felt that it misrepresented the platform. The relatively poor response times initially shown by IaaS demonstrated that simply moving an application to the cloud isn't likely to result in acceptable performance. This underlines the fact that applications need to be developed specifically for the environment in which they will be used, or optimised for the target platform, e.g. the Azure nopCommerce implementation. Further analysis of the application identified an inefficient stored procedure in the nopCommerce SQL database which was used repeatedly. Optimising this single most frequently used stored procedure resulted in a 57% improvement to average response times (2.011s to 1.276s). We spent only a few hours on this; it is likely that further improvements would be possible by optimising other code.
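The 57% figure above is worth unpacking, since the same pair of numbers supports two readings: 2.011s down to 1.276s is a 36.5% reduction in response time, but a roughly 1.58x speedup, i.e. about 57% faster measured relative to the optimised time. Presumably the latter is the reading meant:

```python
before, after = 2.011, 1.276  # average response times from the study (seconds)

reduction = (before - after) / before  # fraction of response time removed
speedup = before / after - 1           # how much faster, relative to the new time

# ~36.5% lower response time, or ~57.6% faster
print(round(reduction * 100, 1), round(speedup * 100, 1))
```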
The previous slides covered platform choice for applications. As well as being aware that test platforms etc. are moving to the cloud, how does the cloud affect us as testers?
A Google search this morning returned 153m hits on "cloud testing", with paid ads from Microsoft, SOASTA, Blazemeter and IBM. What future for cloud testing? The slide answers this: test companies are increasing their cloud offerings. Who's heard of these tools? Is anybody using them? How do they work "in the cloud"? Either traditional (put load generators in the cloud), new wave (cloud-based platforms), or hybrid (a bit of both, or multiple offerings).
David Linthicum, writing in InfoWorld (@DavidLinthicum). True for testers as well as application developers. Pros: scalability, low cost, on demand. Cons: uncertain costs, variable performance. Work around these cons: manage costs, automate downtime, measure performance, and consider non-cloud tests too, or some load from a conventional source.
As testers, we need to balance realism and repeatability. Realism is needed to answer the "but how will it really perform?" question; repeatability is needed to answer the "did my change make a difference?" question. Different types of tests require different approaches, e.g. a break test vs a comparative performance test. Consider the time of the test, the duration of the test, and running multiple tests (without changing anything). Do more statistical work: t-tests, histogram plots, response time distributions.
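The statistical work suggested above – deciding whether a response-time difference is real or just cloud variability – can start with something as small as a Welch t statistic over two samples of response times. A stdlib-only sketch with hypothetical run data (turning the statistic into a p-value needs the t distribution, e.g. from scipy):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances - suited to noisy cloud response times."""
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / (va / na + vb / nb) ** 0.5

# Two hypothetical runs of the same test (response times in seconds)
run1 = [0.70, 0.72, 0.69, 0.75, 0.71, 0.73]
run2 = [0.74, 0.78, 0.76, 0.80, 0.75, 0.77]
t = welch_t(run1, run2)
print(round(t, 2))  # a large |t| suggests a real difference, not just noise
```

Running the same unchanged test several times, as the notes suggest, gives exactly the kind of samples this statistic needs.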
"Revalidating" tests in real environments, or repeating tests, does increase costs. You need to understand your infrastructure and plan ahead, and recognise that it won't always work first time – we are on a learning curve. Used carefully, cloud's benefits outweigh the disadvantages (most of the time). For example, we have clients who do regular small tests for application tuning and infrequent break tests where we use large numbers of AWS servers; the cost savings are significant.
Choosing a test tool: this list is valid for cloud or conventional environments. You need to add a weighting to each feature and determine whether it is worth paying for. There are big differences between cloud and conventional test tools that need to be considered.
Read through the slide first… Pros and cons need to be balanced. Consider which parts of the infrastructure can move to the cloud: test tool, test environment, neither or both. Look at the test tool's capability to operate in a cloud environment and its ease of implementation, and at requirements for additional monitoring when moving to the cloud, including possible requirements for deep-dive analysis or monitoring. Give it a try, you might like it. Questions?