This document discusses performance and scalability in cloud computing. Poor application performance hurts businesses by reducing customer retention, employee productivity, and revenue, so when moving applications to the cloud, businesses must ensure performance is optimized. To address performance issues, companies should isolate factors such as network access times and application architecture. The document also covers the concepts of performance, scalability, and throughput; horizontal and vertical scaling; and approaches such as application development practices and Joyent's solutions for improving performance and scaling in cloud environments.
Performance and Scale in Cloud Computing
A Joyent White Paper
Executive Summary

Poor application performance causes companies to lose customers, reduce employee productivity, and reduce bottom-line revenue. Because application performance can vary significantly based on delivery environment, businesses must make certain that application performance is optimized when written for deployment on the cloud or moved from a data center to a cloud computing infrastructure.

Applications can be tested in cloud and non-cloud environments for base-level performance comparisons. Aspects of an application, such as disk I/O and RAM access, may cause intermittent spikes in performance. However, as with traditional software architectures, overall traffic patterns and peaks in system use account for the majority of performance issues in cloud computing.

Capacity planning and expansion based on multiples of past acceptable performance solves many performance issues when companies grow their cloud environments. However, planning cannot always cover sudden spikes in traffic, and manual provisioning might be required. A more cost-effective pursuit of greater scalability and performance is the use of more efficient application development; this technique breaks code execution into silos serviced by more easily scaled and provisioned resources.

In response to the need for greater performance and scalability in cloud computing environments, Joyent Smart Technologies offer scalability features and options that aid application performance, including lightweight virtualization, flexible resource provisioning, dynamic load balancing and storage caching, and CPU bursting. The Joyent SmartPlatform development environment allows businesses to develop more efficient applications that are easily ported to virtually any open-standards environment.
Contents

Introduction
Why Worry about Cloud Computing Performance?
Isolating the Causes of Performance Problems
Misperceptions Concerning Cloud Computing Performance
Understanding Performance, Scale, and Throughput
Horizontal and Vertical Scalability
Administrative and Geographical Scalability
Practical and Theoretical Limits of Scale
Addressing Application Scalability
Application Development to Improve Scalability
Joyent Solutions to Performance and Scaling
Conclusion
References
Introduction

As companies move computing resources from premises-based data centers to private and public cloud computing facilities, they should make certain their applications and data make a safe and smooth transition to the cloud. In particular, businesses should ensure that cloud-based facilities will deliver necessary application and transaction performance, now and in the future. Much depends on this migration and on preparation for the transition and final cutover. Rather than simply moving applications from traditional data center servers to a cloud computing environment and flicking the "on" switch, companies should examine performance issues, potential reprogramming of applications, and capacity planning for the new cloud target to completely optimize application performance.

Applications that performed one way in the data center may not perform identically on a cloud platform. Companies need to isolate the areas of an application or its deployment that may cause performance changes and address each separately to guarantee an optimal transition. In many cases, however, the underlying infrastructure of the cloud platform may directly affect application performance.

Businesses should also thoroughly test applications developed and deployed specifically for cloud computing platforms. Ideally, businesses should test the scalability of the application under a variety of network and application conditions to make sure the new application not only handles current business demands but can also seamlessly scale to handle planned or unplanned spikes in demand.
Why Worry about Cloud Computing Performance?

Sluggish access to data, applications, and Web pages frustrates employees and customers alike, and some performance problems and bottlenecks can even cause application crashes and data losses. In these instances, performance, or the lack of it, is a direct reflection of the company's competency and reliability. Customers are unlikely to put their trust in a company whose applications crash, and they are reluctant to return to business sites that are frustrating to use due to poor performance and sluggish response times.

Intranet cloud-based applications should also maintain peak performance. Employee productivity relies on solid and reliable application performance to complete work accurately and quickly. Application crashes due to poor performance cost money and hurt morale. Poor performance hampers business expansion as well: if applications cannot adequately perform during an increase in traffic, businesses lose customers and revenue. Adequate current performance does not guarantee future behavior. An application that adequately serves 100 customers per hour may suddenly nose-dive in responsiveness when attempting to serve 125 users. Capacity planning based on predicted traffic, together with system stress testing, can help businesses make informed decisions about cost-effective and optimal provisioning of their cloud platforms. In some cases, businesses intentionally over-provision their cloud systems to ensure that no outages or slowdowns occur. Regardless, companies should have plans in place for addressing increased demand on their systems to guarantee they do not lose customers, decrease employee productivity, or diminish their business reputation. [1]
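The capacity-planning trade-off described above can be sketched as a simple headroom calculation. This is an illustrative sizing helper, not a Joyent tool; the 30 percent safety margin and the per-instance capacity figures are assumptions made for the example.

```python
import math

def required_capacity(predicted_peak, headroom=0.30):
    """Capacity target: predicted peak traffic plus a safety margin.

    headroom=0.30 plans for 30 percent more than the predicted peak;
    the figure is illustrative, not a recommendation.
    """
    return predicted_peak * (1.0 + headroom)

def instances_needed(predicted_peak, per_instance_capacity, headroom=0.30):
    """Smallest instance count whose combined capacity covers the target."""
    target = required_capacity(predicted_peak, headroom)
    return math.ceil(target / per_instance_capacity)

# An instance that comfortably serves 100 users/hour, facing a predicted
# peak of 125 users/hour, needs a second instance once headroom is
# accounted for: ceil(125 * 1.3 / 100) = 2.
```

Stress testing is what supplies the per-instance capacity figure: the load level at which response times are still acceptable.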
Isolating the Causes of Performance Problems

Before companies can address performance issues on their cloud infrastructure, they should rule out non-cloud factors in the performance equation. First, and most obvious, is the access time from user to application over the Internet or WAN. Simple Internet speed-testing programs, run from a number of geographically dispersed locations, or even network ping measurements when testing WAN connections, can give companies the average network access overhead.
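A first-pass measurement of that access overhead can be scripted. The sketch below times TCP connection setup as a stand-in for ping (which requires raw-socket privileges) from whatever vantage point it runs on; the host name in the example is a placeholder, not a real endpoint.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """One TCP-connect round trip to host:port, in milliseconds.

    Connection setup approximates network round-trip time without
    needing the raw-socket privileges that ICMP ping requires.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def access_overhead(samples_ms):
    """Average and worst-case network overhead from a list of RTT samples."""
    return {"avg_ms": statistics.mean(samples_ms), "max_ms": max(samples_ms)}

def survey(hosts, probes=5):
    """Probe each endpoint several times and summarize the overhead."""
    return {h: access_overhead([tcp_rtt_ms(h) for _ in range(probes)])
            for h in hosts}

# Example (host names are placeholders for your own endpoints):
#   print(survey(["app.example.com", "eu.example.com"]))
```

Running the same survey from several geographic locations, as the paper suggests, separates the network's contribution from the platform's.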
Next, businesses can measure the performance of the application on a specific platform configuration to calculate how much the platform affects overall performance. Comparing the application's speed on its "native" server in the data center with its response on the cloud, for example, provides an indication of any significant platform differential, but may not necessarily isolate exactly which platform factor is causing the difference. [2] If an application's performance suffers once it is moved to a cloud infrastructure and network access time is not the culprit, Joyent has found from experience that the application's architecture is likely to be at fault. Inefficiently written applications can perform poorly across different platforms because the code is not optimized for any particular architecture.
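The native-versus-cloud comparison above can be made concrete with a small timing harness. This is a hypothetical sketch, not a Joyent utility: `time_workload` would wrap whatever representative operation the application performs, run once in the data center and once on the cloud.

```python
import statistics
import time

def time_workload(fn, runs=20):
    """Median wall-clock time (seconds) of fn over several runs.

    The median damps the intermittent spikes (disk I/O, RAM access)
    noted earlier, giving a steadier base-level comparison.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def platform_differential(native_s, cloud_s):
    """Relative slowdown of the cloud run versus the native baseline;
    0.25 means the cloud run is 25 percent slower."""
    return (cloud_s - native_s) / native_s

# Example: time the same representative transaction in each environment
# and compare the medians.
#   native = time_workload(run_representative_transaction)  # in the DC
#   cloud  = time_workload(run_representative_transaction)  # on the cloud
#   print(f"differential: {platform_differential(native, cloud):+.0%}")
```

A large positive differential, with network access time already ruled out, points at the platform or, as the paper argues, at the application's architecture.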
Optimizing application code for use on cloud platforms is discussed later in this paper, because it directly relates to the more complex topic of enabling cloud computing scalability. Optimum cloud computing performance is not a simple problem to solve, nor is it addressed satisfactorily by many current cloud computing vendors.
Misperceptions Concerning Cloud Computing
Performance
In a typical corporate network data center, servers, storage, and
network switches perform together to deliver data and applications to
network users. Under this IT scenario, applications must have adequate
CPU and memory to perform, data must have sufficient disk space, and
users must have appropriate bandwidth to access the data and
applications.
Whenever IT administrators experience performance issues under this
scenario, they usually resolve issues in the following way:
6
7. • Poor application performance or application hang-ups. Usually
the application is starved for RAM or CPU cycles, and faster
processors or more RAM is added.
• Slow access to applications and data. Bandwidth is usually the
cause, and the most common solution is to add faster network
connections to the mix, increasing desktops from 10 Mbps NICs to
100 Mbps NICs, for example, or faster disk drives, such as SCSI over
fibre channel.
While these solutions may solve data center performance issues, they
may do nothing for cloud-based application optimization. Furthermore,
even under data center application performance enhancement, adding
CPU and memory may be an expensive over-provisioning to handle
simple bursts in demand. The bottom line is that cloud computing,
based on virtual provisioning of resources on an Internet-based
platform, does not conform to standard brute-force data center
solutions to performance. When companies or cloud vendors take the
simplistic “more hardware solves the problem” approach to cloud
performance, they waste money and do not completely resolve all
application issues. Unfortunately, that is precisely how many cloud
vendors attempt to solve performance issues. While they may not
always add more physical servers or memory, they frequently provision
more virtual machines. Adding virtual machines may be a short-term
solution to the problem, but adding machines is a manual task. If a
company experiences a sudden spike in traffic, how quickly will the
vendor notice the spike and assign a technician to provision more
resources to the account? How much does this cost the customer?
Besides the misperception that more hardware, either physical or virtual,
effectively solves all performance issues, companies may have a
fundamental misunderstanding of how performance, scalability, and
network throughput are interdependent yet affect applications and data
access in separate ways.
Understanding Performance, Scale, and
Throughput
Because the terms performance, scale, and throughput are used in a
variety of ways when discussing computing, it is useful to examine their
typical meanings in the context of cloud computing infrastructures.
Performance. Performance is generally tied to an application’s
capabilities within the cloud infrastructure itself. Limited bandwidth, disk
space, memory, CPU cycles, and network connections can all cause
poor performance. Often a combination of resource shortages causes
poor application performance. Sometimes poor performance is the
result of an application architecture that does not properly distribute its
processes across available cloud resources.
Throughput. The effective rate at which data is transferred from point
A to point B on the cloud is throughput. In other words, throughput is a
measurement of raw speed. While faster movement or processing of data
can certainly improve system performance, a system is only as fast as
its slowest element. A system that deploys 10-gigabit Ethernet but whose
server storage can deliver data at only one gigabit is, in effect, a
one-gigabit system.
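The bottleneck principle above can be sketched in a few lines of Python; the helper name and the component figures are illustrative only:

```python
# Effective end-to-end throughput of a serial data path is bounded by its
# slowest element (the paper's example: 10 Gb Ethernet over 1 Gb storage).
def effective_throughput_gbps(components: dict) -> float:
    """Return the bottleneck throughput of a serial data path, in Gb/s."""
    return min(components.values())

path = {
    "network": 10.0,  # 10-gigabit Ethernet
    "storage": 1.0,   # storage accessed at one gigabit
}
print(effective_throughput_gbps(path))  # -> 1.0
```

Upgrading any component other than the current bottleneck leaves the minimum, and thus the effective throughput, unchanged.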
Scalability. The search for continually improving system performance
through hardware and software throughput gains is defeated when a
system is swamped by multiple, simultaneous demands. That 10-gigabit
pipe slows considerably when it serves hundreds of requests
rather than a dozen. The only way to restore higher effective throughput
(and performance) in such a “swamped resources” scenario is to scale:
add more of the resource that is overloaded.
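As an idealized fair-share sketch (ignoring protocol overhead and queuing effects, and using a hypothetical helper), the per-request share of that pipe falls linearly with the number of simultaneous requests:

```python
def per_request_gbps(pipe_gbps: float, concurrent_requests: int) -> float:
    """Ideal fair share of a link when many requests contend for it."""
    return pipe_gbps / concurrent_requests

# A 10-gigabit pipe serving a dozen requests vs. hundreds:
print(per_request_gbps(10.0, 12))   # ~0.83 Gb/s each
print(per_request_gbps(10.0, 400))  # 0.025 Gb/s each
```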
For this reason, the ability of a system to easily scale when under stress
in a cloud environment is vastly more useful than the overall throughput
or aggregate performance of individual components. In cloud
environments, this scalability is usually handled through either horizontal
or vertical scaling.
Horizontal and Vertical Scalability
When increasing resources on the cloud to restore or improve
application performance, administrators can scale either horizontally
(out) or vertically (up), depending on the nature of the resource
constraint. Vertical scaling (up) entails adding more resources to the
same computing pool—for example, adding more RAM, disk, or virtual
CPU to handle an increased application load. Horizontal scaling (out)
requires the addition of more machines or devices to the computing
platform to handle the increased demand. This is represented in the
transition from Figure 1 to Figure 2, below.
Figure 1: Basic, single-silo, n-tier architecture (networking; load balancing and caching; web tier; database tier)
Figure 2: Horizontally scaled load balancing and web tier; vertically scaled database tier
Vertical scaling can handle most sudden, temporary peaks in application
demand on cloud infrastructures, since such bursts are typically not
CPU-intensive. Sustained increases in demand, however, require
horizontal scaling and load balancing to restore and maintain peak
performance. Horizontal scaling is also manually intensive and
time-consuming, requiring a technician to add machinery to the customer’s
cloud configuration. Manually scaling to meet a sudden peak in traffic
may not be productive—traffic may settle to its pre-peak levels before
new provisioning can come on line.
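That division of labor (vertical bursting for short spikes, horizontal scaling for sustained growth) can be sketched as a toy scaling policy. The thresholds and the `decide` helper are hypothetical, not any vendor’s actual autoscaler:

```python
from dataclasses import dataclass

@dataclass
class ScalingDecision:
    action: str   # "burst_up" | "scale_out" | "none"
    reason: str

SPIKE_MINUTES = 15  # hypothetical: shorter overloads are treated as bursts

def decide(load_pct: float, minutes_above_threshold: int,
           threshold_pct: float = 80.0) -> ScalingDecision:
    """Vertical bursting for short spikes, horizontal scaling for sustained growth."""
    if load_pct < threshold_pct:
        return ScalingDecision("none", "load within capacity")
    if minutes_above_threshold < SPIKE_MINUTES:
        return ScalingDecision("burst_up", "short spike: add RAM/CPU to the existing pool")
    return ScalingDecision("scale_out", "sustained demand: add machines behind the balancer")

print(decide(95.0, 5).action)    # burst_up
print(decide(95.0, 120).action)  # scale_out
```

A real policy would also have to account for provisioning lag: by the time a manually provisioned machine comes on line, a short spike has usually passed.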
SmartMachines provide bursting to handle short-term variable load; SmartDataCenter provides horizontal scaling to deal with long-term growth.
Businesses may also find themselves experiencing more gradual
increases in traffic. Here, provisioning extra resources provides only
temporary relief as resource demands continue to rise and exceed the
newly provisioned resources.
Administrative and Geographical Scalability
While adding computing components or virtual resources is a logical
means to scale and improve performance, few companies realize that
the increase in resources may also necessitate an increase in
administration, particularly when deploying horizontal scaling. In
essence, a scaled increase in hard or virtual resources often requires a
corresponding increase in administrative time and expenses. This
administrative increase may not be a one-time configuration demand as
more resources require continual monitoring, backup, and maintenance.
Companies with critical cloud applications may also consider
geographical scaling as a means to more widely distribute application
load demands or as a way to move application access closer to
dispersed communities of users or customers. Geographical scaling of
resources in conjunction with synchronous replication of data pools is
another means of adding fault tolerance and disaster recovery to cloud-
based data and applications. Geographical scaling may also be
necessary in environments where it is impractical to host all data or
applications in one central location.
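One simple way to route users under geographical scaling is to send each user to the nearest region. The sketch below picks a region by great-circle (haversine) distance; the region names and coordinates are hypothetical:

```python
import math

# Hypothetical region coordinates (latitude, longitude); illustrative only.
REGIONS = {
    "us-east": (39.0, -77.5),
    "eu-west": (53.3, -6.3),
    "ap-southeast": (1.3, 103.8),
}

def nearest_region(user_lat: float, user_lon: float) -> str:
    """Pick the geographically closest region by great-circle distance."""
    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return min(REGIONS, key=lambda r: haversine_km(user_lat, user_lon, *REGIONS[r]))

print(nearest_region(48.9, 2.4))  # a user near Paris -> "eu-west"
```

Production systems typically route on measured latency rather than raw distance, but distance is a reasonable first approximation.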
Practical and Theoretical Limits of Scale
While scalability is the most effective strategy for solving performance
issues in cloud infrastructures, practical and theoretical limits prevent it
from ever becoming an exponential, infinite solution. Practically
speaking, most companies cannot commit an infinite amount of money,
people, or time to improving performance. Cloud vendors also may
have a limited amount of experience, personnel, or bandwidth to
address customer application performance. Every computing
infrastructure is also bound by practical limits on power, administration,
and bandwidth, which at sufficient scale necessitate geographical
dispersal.
Addressing Application Scalability
For a cloud computing platform to effectively host business data and
applications, it must accommodate a wide range of
performance characteristics and network demands. Storage, CPU,
memory, and network bandwidth all come into play at various times
during typical application use. Application switching, for example,
places demands on the CPU as one application is closed, flushed from
the registers, and another application is loaded. If these applications are
large and complex, they put a greater demand on the CPU.
Serving files from the cloud to connected users stresses a number of
resources, including disk drives, drive controllers, and network
connections when transferring the data from the cloud to the user. File
storage itself consumes resources not only in the form of physical disk
space, but also in directory and file-system metadata that consumes
RAM and CPU cycles when users access or upload files into the
storage system.
As these examples illustrate, applications can benefit from both
horizontal and vertical scaling of resources on demand, yet truly
dynamic scaling is not possible on most cloud computing
infrastructures. Therefore, one of the most common and costly
responses to scaling issues by vendors is to over-provision customer
installations to accommodate a wide range of performance issues.
Application Development to Improve Scalability
One practical means of addressing application scalability and reducing
performance bottlenecks is to segment applications into separate silos.
Web-based applications are theoretically stateless, and therefore
theoretically easy to scale—all that is needed is more memory, CPU,
storage, and bandwidth to accommodate them, as was depicted in
Figure 2. In practice, however, Web-based applications are not
stateless. They are accessed through network connections that require
fixed, and therefore stateful, IP addresses, and they connect to data
storage (disk or database) that maintains logical state and requires
hardware resources of its own. Balancing the
interaction between stateless and stateful elements of a Web application
requires careful architectural consideration and the use of tiers and silos
to allow some form of horizontal resource scaling. To leverage the most
from resources, application developers can break applications into
discrete tiers of stateful or stateless processes that are executed in
various resource silos. Figure 3 depicts breaking an application into two
silos identified by their DNS names. By segregating stateful and stateless
operations and provisioning accordingly, applications and systems can
run more efficiently and with higher resource utilization than under a
more common scenario.
Figure 3: Multi-silo, n-tier, scaled architecture, with silos addressed as http://app1.joyent.com and http://app2.joyent.com
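The multi-silo split can be sketched as a routing table: stateless web-tier work is spread freely across a silo’s workers, while each silo’s stateful database stays pinned to it. The hostnames, worker names, and `route` helper below are illustrative, not part of any Joyent API:

```python
# Hypothetical mapping from silo DNS name to its provisioned tiers. Stateless
# web workers can be added (scaled horizontally) at will; the stateful
# database tier is pinned to its silo so data and connections stay consistent.
SILOS = {
    "app1.joyent.com": {"web_workers": ["web-1a", "web-1b"], "db": "db-1"},
    "app2.joyent.com": {"web_workers": ["web-2a"],           "db": "db-2"},
}

def route(host: str, request_id: int) -> tuple:
    """Spread stateless work across a silo's web workers; always use its own DB."""
    silo = SILOS[host]
    worker = silo["web_workers"][request_id % len(silo["web_workers"])]
    return worker, silo["db"]

print(route("app1.joyent.com", 7))  # -> ('web-1b', 'db-1')
```

Scaling a silo horizontally is then just a matter of appending workers to its `web_workers` list; the stateful tier is untouched.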
Joyent Solutions to Performance and Scaling
Joyent has applied its extensive experience in providing cloud
computing infrastructure and services to develop its Smart
Technologies range of products: the SmartOS operating system,
SmartMachine virtualization technology, SmartDataCenter infrastructure
management system, and SmartPlatform development environment.
This Joyent architecture provides a highly elastic cloud infrastructure
that accommodates bursts in traffic that other cloud infrastructures
cannot. The Joyent Smart Technologies architecture has the following
key performance and scaling advantages over traditional cloud
computing infrastructures:
SmartMachine lightweight virtualization. Joyent SmartMachines
have been designed to provide best possible performance with limited
overhead. The SmartOS operating system combines the operating
system and virtualization to eliminate redundancy and maximize
available RAM for applications.
SmartCache. Joyent makes use of the unused DDR3 memory in the
cloud by providing a large ARC cache pool that delivers exceptional
disk I/O. Both reads and writes improve greatly, as content that
would traditionally be served from disk is cached in high-speed
memory without any customer interaction.
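The effect of serving disk content from memory can be sketched with a minimal LRU read cache. (ZFS’s ARC is considerably more sophisticated than an LRU; this only illustrates why repeated reads stop touching disk.)

```python
from collections import OrderedDict

class LRUReadCache:
    """Minimal LRU read cache: repeated reads are served from RAM, not disk."""

    def __init__(self, capacity: int, disk_read):
        self.capacity = capacity
        self.disk_read = disk_read       # fallback used on cache misses
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block: str) -> bytes:
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most recently used
            return self.cache[block]
        self.misses += 1
        data = self.disk_read(block)        # slow path: go to disk
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

cache = LRUReadCache(capacity=2, disk_read=lambda b: f"<data:{b}>".encode())
for block in ["a", "b", "a", "a"]:
    cache.read(block)
print(cache.hits, cache.misses)  # -> 2 2
```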
CPU bursting. The Joyent implementation of its CPU engine allows
on-demand processing cycles from a resource pool of available CPUs,
enabling instantaneous vertical scaling to meet bursts of application
demand without costly and time-consuming manual provisioning of
resources.
Choice of virtualization. While SmartMachines provide ideal
performance, Joyent recognizes that many applications require legacy
operating systems and development environments. Joyent SmartOS
therefore provides XVM virtualization technology as an integral
component of the OS, allowing it to host other operating systems such
as Windows and Linux. These guest operating systems can still take
advantage of SmartOS capabilities such as SmartCache for improved
performance, as well as management by SmartDataCenter.
Build clouds on architecture, not rent-a-machine. The Joyent
SmartDataCenter architecture is built with the performance and scale of
applications in mind, rather than the simplistic concept of adding more
and more virtual machines to solve application performance issues.
Joyent understands that application architecture is supported by several
tiers of servers that need low-latency interconnects. Our patent-pending
Honeycomb design ensures that servers (Web, App, DB, Cache), while
completely distributed and fully redundant, are provisioned in the
highest-performance, lowest-latency manner possible. Rent-a-machine
cloud solutions merely move physical data center inefficiencies to virtual,
cloud-based inefficiencies.
Joyent’s Smart Technology infrastructure provides additional
performance and scalability enhancements over traditional cloud
computing platforms. SmartDataCenter network provisioning includes
automatic caching of distributed file systems and load balancing of
network traffic, so the network can dynamically conform to demands
made by applications with no administrator input required; in many
cases this eliminates the need to add physical or virtual resources.
Joyent’s SmartPlatform is an application development environment that
allows businesses to write more efficient application code for cloud
computing environments. SmartPlatform is flexible and supports a
number of standard programming tools and languages, making
SmartPlatform applications portable to nearly any cloud computing
environment.
In addition to these core technology advancements, Joyent professional
services provide a complete methodology that helps build applications
with architectures that will operate at peak performance, scale to handle
demand, and maximize return on investment. Joyent’s multiphase
methodology, the Joyent Smart Architecture Method, helps deploy the
right architecture for every application. This approach is suitable for
existing applications that are transitioning to the cloud as well as new
applications that are being developed from scratch.
Conclusion
Scalability is the best solution to increasing and maintaining application
performance in cloud computing environments. Cloud computing
vendors often resort to brute-force horizontal scaling by adding more
physical or virtual machines, but this approach may not only waste
resources but also leave performance issues unresolved, especially
those related to disk and network I/O. In addition, customers and
vendors alike have practical limits on their ability to scale, primarily
constrained by costs and human resources. Smart application
development can alleviate performance issues in many cases by
isolating resource-intensive processes and assigning the appropriate
assets to handle the load. However, for the most part scaling to meet
performance demands remains a manual process and requires vigilant
monitoring and real-time response.
Joyent’s Smart Technologies address many issues of scalability and
performance in cloud computing, including dynamic vertical scalability,
more efficient allocation of virtual resources, and efficient I/O load
balancing.
References
1. Mouline, Imad. “Why Assumptions About Cloud Performance Can Be Dangerous.”
Cloud Computing Journal, May 2009.
www.cloudcomputing.sys-con.com/node/957492
2. Nolle, Tom. “Meeting performance standards and SLAs in the cloud.”
SearchCloudComputing, April 2010.
http://searchcloudcomputing.techtarget.com/tip/0,289483,sid201_gci1357087_mem1,00.html