Shunra app cloud_whitepaper



WAN. Web. Mobile. Cloud. Confidence in Application Performance™

Deploying Your Application in the Cloud: Strategies to Proactively Mitigate Performance Risk

A Shunra Software Best Practices White Paper
By Marty Brandwin
A Shunra Software White Paper

Corporations worldwide are shifting technology resources and infrastructure to the Cloud. These businesses expect to realize gains in operational efficiency and scalability as a result of the Cloud's elasticity, and they expect to reduce capital expenditures on IT infrastructure as they migrate to an operational pay-as-you-go expense and offload typical infrastructure management responsibilities (and costs) to the Cloud provider.

Today, organizations recognize the value and significant gains that Cloud computing offers. They are also knowledgeable enough to recognize the risks involved with Cloud deployments, such as the potential bottlenecks and points of failure that are introduced as application topology and dependencies now include extra hops to the Cloud. Other risks include network latency, data security, bandwidth limitations, reliance on third-party content delivery networks, and potential development costs if application architecture or components require refactoring. The end result of all of these possible impairments is reduced application performance and a poor user experience.

Cloud computing, therefore, is not an instant "win". It is critical to analyze the potential tradeoffs that may be necessary when moving an application, or some of its components, to the Cloud. It is also vital to be proactive in determining the impact these changes will have on application performance and, most importantly, user experience.

Is my application Cloud-ready?

When analyzing an existing application for its Cloud-readiness, it is imperative to break down the application into its core dependencies, components and functionality. With each "piece" of the application, organizations must weigh the unique benefits and risks to determine whether the Cloud paradigm is the best option – whether each component will function as expected in the Cloud, whether it is scalable, what costs will be incurred to maintain the component in the Cloud, and how end users will experience it.

Typically, preparing an application for the Cloud requires one of two application development efforts: re-architecting all application components with a SaaS-like infrastructure, or building new components and applications that leverage Cloud APIs for design, process and workflow. Both situations introduce costs and performance risk to the application.

[Figure: Additional latency introduced by extra hops to the Cloud has an additive effect that can impair end user experience. A session sequence (session initiation, login request/reply, login page download, teardown) that completes in 3 seconds at 1 msec latency for a local user takes 30 seconds at 50 msec latency for a remote user.]

© 2011 Shunra Software Ltd. All rights reserved. Shunra is a registered trademark of Shunra Software.
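The additive effect in the figure can be approximated with a simple back-of-the-envelope model: each sequential application "turn" (request/response pair) costs one full round trip, so total transaction time grows linearly with latency. The sketch below is illustrative only; the 250-turn count and the 0.5 s of fixed processing time are assumptions chosen for the example, not figures from the paper.

```python
def transaction_time(turns: int, one_way_latency_ms: float,
                     server_time_s: float = 0.5) -> float:
    """Estimate wall-clock time for a chatty transaction.

    Each application 'turn' is a sequential request/response pair,
    so it costs one full round trip (2 x one-way latency).
    server_time_s lumps together processing and transfer time,
    which latency does not change.
    """
    rtt_s = 2 * one_way_latency_ms / 1000.0
    return server_time_s + turns * rtt_s

# A chatty transaction with 250 sequential calls:
local = transaction_time(250, 1)    # ~1.0 s in the datacenter
remote = transaction_time(250, 50)  # ~25.5 s over a Cloud link
```

The point of the model is the slope: adding 49 ms of one-way latency is harmless for one call, but multiplied by hundreds of sequential calls it dominates the user's wait.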
The introduction of minimal additional latency can create significant performance bottlenecks when a large number of application calls are occurring.

Cloud infrastructure changes mean existing investments in architecture, data structure and performance engineering may not be leverageable. Re-architecting the middleware and back-end tiers of an application to leverage Cloud APIs can be a significant undertaking. Application development and management platforms must be capable of supporting the Cloud model throughout all stages of the application development lifecycle. Without appropriate planning for the development, refactoring and management of applications deployed to the Cloud, organizations may be forced to seek out ad hoc solutions that represent additional costs and corporate investment, offsetting at least some of the expected gains from a Cloud migration.

Most importantly, all of these changes put a burden on the QA/Testing team. Not only does application functionality in the Cloud need to be validated; so do performance and adherence to service level objectives (SLOs). Even if the application performs well in the traditional datacenter, the variability of hosting it in the Cloud introduces new performance risk.

Complicating the migration, and critical to accurately assessing application topology changes, is the requirement to have a thorough understanding of the services and architecture offered by the Cloud provider and of the role of third-party vendors that may be working with the provider (content delivery networks, for example). Service level guarantees and other performance metrics are increasingly easy to establish and monitor, though it is much more difficult to anticipate unplanned outages, and the resulting application behavior, in the Cloud than in the traditional data center.

Moving from the traditional datacenter into the Cloud paradigm necessitates a hand-off of control – control of data, control of centralized IT functionality. Best practices, therefore, dictate a well-choreographed and thorough performance assessment of the application in advance of deployment to the Cloud. While management and maintenance control is largely relinquished, preparedness and validation of application performance provide the assurance IT organizations need to confidently deploy to the Cloud.

Proactively testing (and validating) end user experience

Now that you have thoroughly assessed Cloud provider capabilities and applied that knowledge to your application development and hosting plans, there is one more requirement to complete your proactive strategy: validate and ensure end user experience.

The best-laid plans cannot fully anticipate and account for the performance and experience risks associated with deploying applications in the Cloud. In fact, application issues within the Cloud environment can not only resurface, as they did in the datacenter, but also be magnified. Take, for example, the latency implications of a chatty application – the introduction of minimal additional latency can create significant performance bottlenecks when a large number of application calls are occurring. In addition, multi-tenancy and shared Cloud resources mean that some applications can be negatively impacted by the high load and resource requirements of other applications.

Pre-deployment performance testing is essential

The current Cloud performance testing paradigm requires a pre-deployment migration of application components and data to a Cloud-based staging area in order to test functionality, establish benchmarks and set expectations. Copying virtual machines and other components from the datacenter to the Cloud introduces its own performance and resiliency risks that need to be understood.

To optimize pre-deployment testing, organizations must be able to:

  • Collect real-world Cloud network information over time, including latency, jitter, packet loss, and bandwidth constraints
  • Replay these real-world impairments in a test lab
  • Understand datacenter location and end user location(s)
  • Automatically recreate multiple network scenarios, including best- and worst-case conditions

This approach to pre-deployment testing empowers organizations to proactively plan for and successfully deploy applications to the Cloud.

Once application components or a reference system are deployed, which can be time-intensive, additional testing code may be required and the application may be placed in a debug state. From there, the application or its components can be stress tested and the interaction of the Cloud-based and datacenter-based components can be analyzed. What-if scenarios, times of peak load, scalability, and similar conditions can then all be tested. While this high-level view of testing is consistent with what QA and Performance Engineers have come to expect in traditional datacenters, the pay-as-you-go model of the Cloud makes it a costly proposition.

Instead, pre-deployment testing in the datacenter, with realistic Cloud-based simulation, is a more cost-effective and flexible means of testing applications. By precisely emulating Cloud conditions and services prior to deployment, organizations are able to test more scenarios at less cost and be certain of end user experience.
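The collect-and-replay workflow listed above can be sketched in miniature. The following Python model is illustrative only (it is not Shunra's tooling): it replays recorded (latency, jitter, loss) samples against simulated requests, charging a retransmission timeout for each lost packet, so best- and worst-case scenarios can be compared offline. The sample values and the 2-second timeout are assumptions for the example.

```python
import random

def replay_impairments(recordings, n_requests,
                       base_service_time_s=0.05, timeout_s=2.0, seed=0):
    """Estimate per-request response times under recorded network conditions.

    `recordings` is a list of (latency_ms, jitter_ms, loss_rate) samples,
    e.g. captured from the target Cloud over time. Each simulated request
    draws a round-trip delay from one sample; every lost packet costs a
    full retransmission timeout before the retry succeeds.
    """
    rng = random.Random(seed)  # fixed seed keeps scenarios reproducible
    times = []
    for i in range(n_requests):
        latency_ms, jitter_ms, loss = recordings[i % len(recordings)]
        elapsed = base_service_time_s
        while rng.random() < loss:          # each loss adds a timeout + retry
            elapsed += timeout_s
        rtt_s = 2 * (latency_ms + rng.uniform(-jitter_ms, jitter_ms)) / 1000.0
        elapsed += max(rtt_s, 0.0)
        times.append(elapsed)
    return times

# Best-case vs worst-case scenarios drawn from a hypothetical capture:
good = replay_impairments([(40, 2, 0.0)], 100)    # quiet network, no loss
bad = replay_impairments([(120, 30, 0.05)], 100)  # congested network, 5% loss
```

Comparing the two distributions in the lab, rather than in a metered Cloud staging area, is exactly the cost advantage the text describes.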
In addition, emulating Cloud conditions and simulating real-world usage scenarios, like outages and peak loads, early in the Cloud deployment/development lifecycle allows organizations to better anticipate and plan for capacity and resource requirements. Analysis of application behavior in the datacenter under Cloud conditions and what-if scenarios can also help organizations determine which application components are best suited for, or are even capable of being deployed to, the Cloud.

A Practical Example with Shunra's PerformanceSuite

To realize value and the fastest return on your Cloud migration investment, best practices dictate proactive pre-deployment testing with solutions like Shunra's PerformanceSuite. As the leading application performance engineering provider, Shunra has helped thousands of companies worldwide build performance into their applications, whether WAN, Web, Mobile or Cloud.

When a multinational entertainment company decided to migrate its online communities and social media properties to a private IBM-hosted Cloud, it turned to Shunra to proactively determine and validate its migration strategy. The company had several load generation tools available and functionality testing experience in the lab, but it recognized the potential impact of the move on its end users and wanted to ensure optimal application performance under real network conditions.

[Figure: NetworkCatcher enables capture and playback of real-world network behavior.]

The company knew that latency would be introduced to the online applications based on the physics alone of a geographic move. However, it also needed to understand how additional gateways, network queues, and conditions requiring packets to be re-sent could multiply this delay.

To test the impact of latency and other real-world network constraints, Shunra's NetworkCatcher was deployed to the private Cloud to capture real-life latency, jitter and packet loss values. This data was then replayed in a test lab using Shunra's PerformanceSuite and its seamless integration with HP LoadRunner and Performance Center. The data was played in sequential order, and again in random order, with various factors imposed to change parameters in order to test performance and scalability under the breadth of real-life conditions.

The company was able to precisely recreate the conditions of the private Cloud and accurately simulate multiple test scenarios in its on-site lab. As a result of an extensive and thorough pre-deployment performance test, Shunra helped the company validate the performance and associated requirements of the online communities prior to deployment. This was of utmost importance, as the company operates one of the most popular family-focused communities on the Web and user experience could not be compromised. Shunra was also able to quantify the potential gains in efficiency, providing a cost justification for the migration.

As a result of supporting this migration project, the company now employs Shunra for performance validation and needs analysis on dozens of online application releases annually.

Key Impairments and Risks

As we mentioned, network impairments that are experienced in the data center can be magnified within a Cloud architecture. Assessing performance under varying Cloud network conditions is essential. Impairments to consider include:

Latency

Latency is the amount of time required for a packet to reach its destination across a given physical link. It is also, more often than not, a primary source of performance problems. One way to think about latency is through a simple analogy: the driving distance between two points. How long a car takes to get from point A to point B depends on factors like distance, speed limits, and traffic congestion. If points A and B are close in proximity, then latency is negligible. As the distance becomes greater, however, as it does when you introduce a Cloud topology and the multiple gateways that must be traversed in a typical transaction, greater performance risk is introduced. Factors contributing to latency include:

  • Geographic distance – increasing the distance between links introduces a delay based on the physics of sending data packets from one location to another; this delay is magnified by the potential need for additional "turns" or the need to re-send packets when they become corrupt or fragmented; a vicious cycle can result, as the increased distance also increases the risk of packet corruption or loss.

  • Network queues – when traversing a path consisting of multiple intermediate networks, packets tend to "queue up" at busy routers, much as traffic accumulates at busy intersections; overloading these routes increases latency, and if packets need to be re-sent, additional traffic, and thus latency, is created.

Before migrating an application to the Cloud, it is essential to understand the combined impact of real-world network latencies and application "turns" on the performance of critical business services to the end user.
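The geographic-distance factor has a hard physical floor that is easy to estimate: light in optical fiber travels at roughly two-thirds of c, about 200 km per millisecond one way. A quick sketch, where the 4,500 km path is a hypothetical figure for illustration:

```python
# Speed of light in optical fiber is roughly 2/3 of c, so distance
# alone sets a hard floor on latency, before queuing or routing hops.
FIBER_KM_PER_MS = 200.0  # ~200 km of fiber per millisecond, one way

def min_rtt_ms(fiber_km: float, turns: int = 1) -> float:
    """Lower bound on round-trip time over a fiber path of the given
    length, ignoring queuing, routing and retransmissions. A transaction
    needing several sequential turns pays the round trip each time."""
    one_way_ms = fiber_km / FIBER_KM_PER_MS
    return 2 * one_way_ms * turns

# A hypothetical 4,500 km fiber path to a distant Cloud region:
floor = min_rtt_ms(4500)        # 45.0 ms minimum RTT from physics alone
chatty = min_rtt_ms(4500, 40)   # 1800.0 ms for 40 sequential turns
```

No amount of tuning removes this floor; only reducing distance or reducing turns does, which is why datacenter and end-user locations belong in the test plan.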
Jitter

Jitter is a measure of the variability of latency. It describes the variation in time (or delay) experienced between sending and receiving data packets. The result of jitter can be packet loss or re-ordering, which can have a dramatic impact on the performance of video or audio streams.

Bandwidth Availability

Bandwidth describes the speed at which information travels on a link per unit of time. Data cannot be sent or received faster than the underlying media allows. Bandwidth considerations, however, are more complicated than just the speed at which data can be transmitted, known as theoretical bandwidth. When considering bandwidth and its impact on performance, we must also consider other factors that affect how much of the available bandwidth can actually be used:

  • Bottlenecks – a network is only as fast as its slowest link; if users connect to a 1.5 Mbps WAN through a 56 Kbps dial-up link, real bandwidth is 56 Kbps.

  • Utilization – as with any channel, the more traffic there is (think about cars on the highway), the slower the speed.

  • Protocol overhead (bandwidth allocation) – different protocols impose different bandwidth penalties, i.e., the percentage of the data stream allocated to addressing and other control functions; for example, ATM has an overhead of roughly 10% (5 bytes for every 53-byte ATM cell), effectively lowering the network bandwidth allocated for data transfer by 10%.

  • Quality of Service (QoS) – many network providers allocate bandwidth based on the type of traffic or destination; for example, video may get a higher priority than email because of video's greater potential performance problems; similarly, traffic going to a corporate customer may be prioritized over traffic to a residential customer.

  • Asymmetric bandwidth – another complication occurs when downloaded data is received much faster than uploaded data is sent, as with a Digital Subscriber Line (DSL) network; DSL is typically used in residential settings, and when it is used in a business environment, even a small upload can temporarily slow or stop other data traffic.

In Cloud environments, the impact of network connections and the amount of data that can be carried is an essential consideration, especially since bandwidth is subject to contention by multiple applications. In a public Cloud environment in particular, the performance of any given application is subject to the volume of traffic generated by all the other applications utilizing the same infrastructure.

Packet Loss

In general, when data carried across a network is lost or corrupted, the affected packets must be resent. As discussed, this can compound network impairments like latency and jitter, causing significant performance degradation. The degradation is due not so much to the packet loss itself as to the time it takes applications to respond to it. The most significant effect of packet loss comes from application timeouts: the length of time a network host is programmed to wait for a reply before resending the latest information. Each time a packet must be resent, the resulting timeouts can severely reduce the quality of the end user experience.

Packet loss can occur for several reasons:

  • Hardware or software bugs – packets can be assembled or disassembled incorrectly due to infrastructure or software defects.

  • Electrical problems – high-power lines, inadequate noise isolation, air conditioners and other electrical sources can disrupt data transmission.

  • Network loads – when traffic coming to a router exceeds the router's ability to process it, an overflow condition results; this may be handled automatically by the router, which proactively drops packets to avoid the overflow.

  • IP header corruption – when packet header information is corrupted, a router may misinterpret the packet as invalid and drop it; header corruption typically occurs because of errors at the physical network layer that cause data bits to toggle.

  • Fragmentation – when a data packet exceeds the maximum size allowed to traverse the network, it may be broken down into smaller packets before being sent on its way; this fragmentation takes time, increases the aggregate processing required (because there are more packets to process), and raises the risk of lost packets.

Networks are imperfect. Network conditions change. With a huge number of data packets flying in many different directions, across complex network infrastructures that incorporate multiple technologies from multiple vendors, not every 0 and 1 will travel from endpoint to endpoint exactly as expected.

Cloud migrations introduce performance risk that can and must be mitigated to maintain user satisfaction, productivity and revenue streams. A proactive approach to performance engineering empowers organizations to see how their code will behave under variable and worst-case conditions. By incorporating the realities of the network environment into the test cycle, organizations gain valuable insight into the vulnerabilities that can adversely affect application performance. And they are best equipped to resolve issues before end users are affected – saving considerable time and money.
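The timeout effect described under Packet Loss can be quantified with a small expected-value model. This is an illustrative sketch, assuming independent loss per attempt and a fixed 1-second retransmission timeout; real transport timers back off adaptively, so the numbers are indicative only.

```python
def expected_request_time(rtt_s: float, loss_rate: float,
                          timeout_s: float = 1.0, max_retries: int = 5) -> float:
    """Expected completion time for one request/response exchange when a
    lost packet is only detected by a retransmission timeout.

    With independent loss probability p per attempt, attempt k succeeds
    with probability (1 - p) * p**k after k timeouts have elapsed.
    """
    expected = 0.0
    for k in range(max_retries + 1):
        p_success_here = (1 - loss_rate) * loss_rate ** k
        expected += p_success_here * (k * timeout_s + rtt_s)
    return expected

clean = expected_request_time(rtt_s=0.1, loss_rate=0.0)   # 0.1 s
lossy = expected_request_time(rtt_s=0.1, loss_rate=0.05)  # ~0.15 s
```

Even a modest 5% loss rate adds roughly half again to a 100 ms exchange under these assumptions, which is why the text attributes the damage to timeouts rather than to the lost bits themselves.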
About Shunra

When deploying applications across WAN, Web, Mobile or Cloud-based networks, risk mitigation and cost avoidance are paramount. Today, 80% of the costs associated with application development occur in remediating failed or underperforming applications after deployment, where the ineffective application has already had a negative impact on the end user or customer experience. Shunra offers a proactive approach to application performance engineering (APE). When implemented at the policy level and as a best practice across the application lifecycle, the Shunra PerformanceSuite™ builds real-world application performance testing (latency, packet loss, bandwidth optimization, jitter) into all business- and mission-critical applications, all prior to deployment. The Shunra solution discovers, predicts, emulates and analyzes the performance of applications over real-world networks – all within an offline, pre-production, test lab or COE environment. The results? Shunra provides customized performance results, enabling pre-production remediation and optimization, and confidence in application performance prior to deployment.

Shunra is the industry-recognized leader in Application Performance Engineering (APE), offering over a decade of experience with some of the most complex and sophisticated networks in the world. Customers include WalMart, McDonalds, Bank of America, Apple Computer, Cisco, Verizon, FedEx, GE, Walt Disney, TJX, Best Buy, eBay, Siemens, Motorola, Marriott, Merrill Lynch, ATT, ADP, ING Direct, Citibank, Thomson Reuters, MasterCard, IBM, Boeing, HP, Pfizer, Intel, and the Federal Reserve Bank.

Shunra is based in Philadelphia, PA and is privately held. For more information, call 1.877.474.8672 or visit www.shunra.com.

Ask Shunra about our proactive strategies for deploying your application in the Cloud today! Visit www.shunra.com and request to be contacted, or contact Shunra directly at 1.877.474.8672 or 1.215.564.4046 (worldwide offices listed below).

WAN. Web. Mobile. Cloud. Confidence in Application Performance™
Application Performance Engineering
www.shunra.com

North America, Headquarters: 1800 J.F. Kennedy Blvd., Ste 601, Philadelphia, PA, USA | Tel: 215 564 4046 | Toll Free: 1 877 474 8672 | Fax: 215 564 4047 | info@shunra.com
Israel Office: 6B Hanagar Street, Neve Neeman B, Hod Hasharon 45240, Israel | Tel: +972 9 764 3743 | Fax: +972 9 764 3754 | info@shunra.com
European Office: 73 Watling Street, London EC4M 9BJ | Tel: +44 207 153 9835 | Fax: +44 207 285 6816 | saleseurope@shunra.com

For a complete list of our channel partners, please visit our website: www.shunra.com
