This document presents a framework for estimating the value of cloud computing compared to conventional IT solutions. The framework consists of four main steps: (1) defining the business scenario including objectives, demand behavior, and technical requirements; (2) calculating the costs of fulfilling the scenario using cloud computing; (3) calculating the costs of using a reference IT solution; and (4) comparing the total costs of both approaches to estimate the value of cloud computing for that scenario. The document also provides two examples that illustrate how real-world use cases can be analyzed using this framework.
Cloud computing is becoming an increasingly popular enterprise model in which computing resources are made available on demand to the user as needed. The unique value proposition of cloud computing creates new opportunities to align IT and business goals. Cloud computing uses Internet technologies to deliver IT-enabled capabilities "as a service" to any users who need them; that is, through cloud computing we can access whatever we need from anywhere, on any computer, without worrying about storage, cost, or administration. In this paper, I give a comprehensive survey of the motivating factors for adopting cloud computing and review the various cloud deployment and service models. I also examine advantages of cloud computing over the traditional IT service environment: scalability, flexibility, reduced capital expenditure, and higher resource utilization are considered drivers of adoption. I further identify security, privacy, Internet dependence, and availability as concerns, and vertical scalability as a technical challenge in the cloud environment.
Cloud Adoption in Capital Markets: A Perspective (Cognizant)
For the financial services industry, the adoption of cloud services has become a viable business directive. As firms work to recoup their losses from the recent financial crisis, pay-as-you-go cloud services allow them to focus more on strategic, innovative and revenue-generating endeavors and less on managing routine IT activities and the supporting infrastructure.
IT leaders are adopting hybrid cloud services at a rapid pace to increase business agility and cost containment. According to a February 2016 IDG study on hybrid cloud computing, up to 83 percent of C-level respondents use or plan to use a hybrid cloud. Transforming IT service delivery to a hybrid cloud consumption model is the clear path for the vast majority of organizations. The bigger issue is how quickly an organization can change.
While building your own hybrid cloud solution may seem attractive, you should seriously evaluate the challenges such an option presents to already overextended IT staff resources. As detailed in this paper, organizations must be prepared for the likelihood of higher costs, longer deployments, and greater risks than they may initially expect. In addition, the IT community as a whole is rapidly moving away from integrating components and delivering services manually to a more strategic focus on providing high-value services to businesses.
The race to deliver cloud-native applications and services to the business demands that traditional IT services and applications either evolve to a hybrid cloud consumption model or risk placing the business at a competitive disadvantage. An engineered solution not only saves time, money, and resources but also allows your IT staff to focus on innovation and delivering IT services that increase business value and align with the evolving marketplace of IT services for enterprise-level organizations.
Organizations should seize the opportunity to achieve the transformational efficiencies the hybrid cloud can deliver. When planning your journey, consider buying rather than building your own solution to speed time to value, minimize expenditures, and reduce risk.
In almost every organisation, irrespective of its size and industry sector, IT infrastructure managers today are grappling with the same challenges: how to transform IT efficiency; increase agility and flexibility; and lower overheads in such a way that performance, availability, resilience, data security and compliance remain under tight control.
This short paper discusses technical and financial advantages of server virtualization. The concepts in the paper are illustrated by two case studies which analyze the net present value of virtualization projects based on potential power savings.
The National Registration Department of Malaysia (Jabatan Pendaftaran Negara Malaysia, or JPN) is the government agency responsible for registering important demographic events. To expedite statistical queries and requests for data extraction from governmental entities, JPN custom-built an application referred to as the Statistics and Data Extraction Request Management System (SAL). Although the application was designed to automatically manage, calculate and distribute responses to data requests, the underlying hardware platform was woefully inadequate for the task and frequently bogged down.
Presentation from Chesapeake Regional Tech Council's TechFocus Seminar on Cloud Security; presented by Scott C. Sadler, Business Development Executive, Cloud Computing, IBM US East Mid-Market & Channels, on Thursday, October 27, 2011. http://www.chesapeaketech.org
Cloud computing has become a buzzword in the IT industry and promises a new horizon for the computing world. It is a style of computing in which dynamically scalable, virtualized resources are provided as a service over the Internet.
Communications of the ACM, April 2010, Vol. 53, No. 4 (practice)

A View of Cloud Computing
DOI: 10.1145/1721654.1721672
Clearing the clouds away from the true potential and obstacles posed by this computing capability.
By Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia

Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT.
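The cost symmetry behind this elasticity claim is simple pay-per-use arithmetic; a minimal sketch, where the hourly rate is illustrative and not an actual provider price:

```python
# Pay-per-use pricing: total cost depends only on server-hours consumed.
RATE_PER_SERVER_HOUR = 0.10  # USD; illustrative, not a real provider price

def usage_cost(servers: int, hours: float) -> float:
    """Cost of running `servers` machines for `hours` each."""
    return servers * hours * RATE_PER_SERVER_HOUR

# 1,000 servers for 1 hour costs the same as 1 server for 1,000 hours,
# but the batch job finishes 1,000 times sooner.
assert usage_cost(1000, 1) == usage_cost(1, 1000)
```

Under flat per-hour pricing the two schedules are cost-equivalent; only the wall-clock time to results differs.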
As a result, cloud computing is a popular topic for blogging and white papers and has been featured in the title of workshops, conferences, and even magazines. Nevertheless, confusion remains about exactly what it is and when it's useful, causing Oracle's CEO Larry Ellison to vent his frustration: "The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do…. I don't understand what we would do differently in the light of cloud computing other than change the wording of some of our ads."

Our goal in this article is to reduce that confusion by clarifying terms, providing simple figures to quantify comparisons between cloud and conventional computing, and identifying the top technical and non-technical obstacles and opportunities of cloud computing. (Armbrust et al. [4] is a more detailed version of this article.)
Defining Cloud Computing
Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). Some vendors use terms such as IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) to describe their products, but we eschew these because accepted definitions for them still vary widely. The line between "low-level" infrastructure and a higher-level "platform" is not crisp.
Free Gartner Report: Aligning Supply and Demand for IT Services
Cloud computing is transforming how IT manages costs and standards, but its impact extends into how IT itself is managed as a business. Public cloud computing puts pressure on the entire IT cost structure to become wiser and more efficient about balancing the supply and demand for IT services.
While cloud commoditization is driving down prices, IT is forced to manage the resulting increases in consumption. The report recommends steps CIOs should take to improve the maturity of their approach to IT service management:
• Install benchmarking and chargeback to manage demand for cloud services
• Expand strategic vendor management and IT procurement practices
• Become a broker of services, including external cloud computing
Consider using IT cost transparency improvement as a cultural change agent to transform the IT organization from a focus on “speed and quality” to one of “IT cost and business value”.
For more cloud management insights visit http://vmware-erdos.com
Cloud IT Economics: What you don't know about TCO can hurt you (Al Brodie)
An e-book that explores factors beyond basic price that impact the true cost of ownership for cloud implementations by comparing and contrasting prevalent approaches, using common enterprise workloads, from three prominent cloud service providers.
It’s important to establish and measure parameters for return on investment (ROI) on Cloud Computing initiatives. By analyzing the advantages of Cloud Computing from the outset, IT personnel can show potential returns to top executives and win their buy-in for cloud programs. For more info visit http://www.intelligentia.co.in/services-cloud-computing/.
Cloud computing - the impact on revenue recognition (PwC)
By 2017 global cloud service providers are expected to generate approximately $235 billion of revenue from cloud computing services.
Telecom companies are searching for ways to save money, limit fixed costs and improve efficiencies. The use of cloud computing can help. However, challenges may arise, specifically in revenue recognition patterns and associated costs, when accounting for revenue generated from cloud services.
To prosper in this new environment, insurance companies can look to the cloud, in conjunction with other technologies, to help drive reinvention of their business model to offer new services and create direct, multi-channel relationships with customers.
A Proposed Model for Improving Performance and Reducing Costs of IT Through C... (ijccsa)
Information technologies affect today's large business enterprises, from data processing and transactions to achieving goals efficiently and effectively; they create new business opportunities and new competitive advantages, so services must keep pace with recent IT trends such as cloud computing. Cloud computing technology provides a full range of IT services and therefore offers an adaptable alternative to the current technology model: it reduces costs (both fixed and ongoing) by renting rather than acquiring resources over high-speed Internet connections, delivering cheaper yet powerful computing and effective performance. Public and private clouds are characterized by flexibility and operational efficiency that reduce costs and improve performance. Cloud computing also fosters business creativity and innovation arising from users' collaborative ideas; provides cloud infrastructure and services; opens new markets; offers security in public and private clouds; and reduces environmental impact through green energy technology. This paper concentrates on these aspects of cloud computing.
The paper aims to provide a means of understanding the model and exploring options available for complementing your technology and infrastructure needs.
As banks adapt to market changes and new technology landscapes, cloud computing is playing a major role, providing alternative ways to access core banking technology.
Do Clouds Compute?
A Framework for Estimating the Value of Cloud Computing
Markus Klems, Jens Nimis, Stefan Tai
FZI Forschungszentrum Informatik, Karlsruhe, Germany
{klems, nimis, tai}@fzi.de
1 Introduction

On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: building on compute and storage virtualization technologies, consumers are able to rent infrastructure "in the Cloud" as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis.

In addition to the technological challenges of Cloud Computing, there is a need for an appropriate, competitive pricing model for infrastructure-as-a-service. The acceptance of Cloud Computing depends on the ability to implement a model for value co-creation. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and comparing these costs to those of conventional IT solutions.
2 Objective

The main purpose of our paper is to present a basic framework for estimating the value and determining the benefits of Cloud Computing as an alternative to conventional IT infrastructure, such as privately owned and managed IT hardware. Our effort is motivated by the rise of Cloud Computing providers and the question of when it is profitable for a business to use hardware resources "in the Cloud". More and more companies already embrace Cloud Computing services as part of their IT infrastructure [1]. However, there is no guide to tell when outsourcing into the Cloud is the way to go and in which cases it does not make sense to do so. With our work we want to give an overview of the economic and technical aspects that a valuation approach to Cloud Computing must take into consideration.

Valuation is an economic discipline concerned with estimating the value of projects and enterprises [2]. Corporate management relies on valuation methods in order to make reasonable investment decisions. Although the basic methods are rather simple, like Discounted Cash Flow (DCF) analysis, the difficulties lie in their appropriate application to real-world cases.

Within the scope of our paper we are not going to cover specific valuation methods. Instead, we present a generic framework that serves for cost comparison analysis between hardware resources "in the Cloud" and a reference model, such as purchasing and installing IT hardware. The result of such a comparison shows the value of Cloud Computing associated with a specific project, measured in terms of opportunity costs. In later work the framework must be fleshed out with metrics, such as project free cash flows, EBITDA, or other suitable economic indicators. Existing cost models, such as Gartner's TCO, seem promising candidates for the design of a reference model [3].
3 Approach
A systematic, dedicated approach to Cloud Computing valuation is urgently
needed. Previous work from related fields, like Grid Computing, does not con-
sider all aspects relevant to Cloud Computing and can thus not be directly
applied. Previous approaches tend to mix business objectives with technological
requirements. Moreover, the role of demand behavior and the consequences
it poses for IT requirements need to be evaluated in a new light. Most
importantly, it is only possible to value the benefit from Cloud Computing if it is compared
to alternative solutions. We believe that a structured framework will be helpful
to clarify which general business scenarios Cloud Computing addresses.
Figure 1 illustrates our framework for estimating the value of Cloud Com-
puting. In the following, we describe in more detail the valuation steps suggested
with the framework.
3.1 Business Scenario
Cloud Computing offers three basic types of resources as services over the
Internet: storage capacity, processing power, and data transfer volume. Since
Cloud Computing is based on the idea of Internet-centric computing, access to
remotely located storage and processors must be accompanied by sufficient data
transfer capacity.
The business scenario must specify the business domain (internal processes,
B2B, B2C, or other), key business objectives (cost efficiency, no SLA viola-
tions, short time to market, etc.), the demand behavior (seasonal, temporary
spikes, etc.) and technical requirements that follow from business objectives and
demand behavior (scalability, high availability, reliability, ubiquitous access, se-
curity, short deployment cycles, etc.).
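The scenario components listed above can be captured in a small data structure. The following sketch is purely illustrative; all field names and example values are our own, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessScenario:
    """The four scenario components named above (illustrative field names)."""
    domain: str                  # "internal", "B2B", "B2C", ...
    objectives: list             # e.g. "cost efficiency", "no SLA violations"
    demand_behavior: str         # "seasonal", "temporary spike", "batch", "unexpected"
    technical_requirements: list = field(default_factory=list)

# An online retail store with seasonal demand spikes:
retail = BusinessScenario(
    domain="B2C",
    objectives=["cost efficiency", "no SLA violations"],
    demand_behavior="seasonal",
    technical_requirements=["scalability", "high availability"],
)
```

Making the scenario explicit in this form forces a decision maker to state demand behavior and requirements before any costs are calculated.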
Figure 1: A framework for estimating the value of Cloud Computing
3.1.1 Business Domain
IT resources are not ends in themselves but serve specific business objectives.
Organizations can benefit from Grid Computing and Cloud Computing in different
domains: internal business processes, collaboration with business partners,
and customer-facing services (cf. [14]).
3.1.2 Business Objectives
On a high level the typical business benefits mentioned in the context of Cloud
Computing are high responsiveness to varying, unpredictable demand behavior
and shorter time to market. The IBM High Performance on Demand Solu-
tions group has identified Cloud Computing as an infrastructure for fostering
company-internal innovation processes [4]. The U.S. Defense Information Sys-
tems Agency explores Cloud Computing with the focus on rapid deployment
processes, and as a provisionable and scalable standard environment [5].
3.1.3 Demand Behavior
Services and applications in the Web can be divided into two disjoint
categories: services that deal with somewhat predictable demand behavior and
those that must handle unexpected demand volumes. Services from the first
category must be built on top of a scalable infrastructure in order to adapt to
changing demand volumes. The second category is even more challenging, since
increases and decreases in demand cannot be forecast at all and sometimes
occur within minutes or even seconds.
Traditionally, the IT operations department of an organization must master
the difficulties involved in scaling corporate infrastructure up or down. In
practice it is impossible to constantly fully utilize available server
capacities, which is why there is always a tradeoff between resource
over-utilization, resulting in noticeable usability degradation and possible
SLA violations, and under-utilization, leading to negative financial
performance [6]. The IT department dimensions the infrastructure according to
expected demand volumes and such that enough headroom for business growth is
left. Moreover, emergency situations, such as server outages and demand
spikes, must be addressed and dealt with. Associated with under- and
over-utilization is the notion of opportunity costs. The opportunity costs of
under-utilization are measured in units of wasted compute resources, such as
idle running servers. The opportunity costs of over-utilization are the costs
of losing customers or being sued as a consequence of a temporary server
outage.
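The under- vs. over-utilization tradeoff can be made concrete with a toy calculation. The capacity units and cost rates below are arbitrary assumptions for illustration, not figures from any real deployment:

```python
def opportunity_costs(provisioned, demand, idle_cost, shortfall_cost):
    """Sum idle capacity (under-utilization) and unmet demand
    (over-utilization) over discrete periods, priced at the given rates."""
    idle = sum(max(provisioned - d, 0) for d in demand)
    unmet = sum(max(d - provisioned, 0) for d in demand)
    return idle * idle_cost, unmet * shortfall_cost

# Fixed capacity of 100 units against monthly demand with a December spike:
monthly_demand = [40, 45, 50, 55, 60, 60, 55, 50, 55, 70, 90, 140]
idle_cost, lost_revenue = opportunity_costs(
    provisioned=100, demand=monthly_demand,
    idle_cost=1.0,       # cost per wasted capacity unit (idle servers)
    shortfall_cost=5.0,  # cost per unserved demand unit (lost customers)
)
```

In this toy example the capacity sized for the December peak sits mostly idle the rest of the year, while any smaller capacity would trade idle costs for (more expensive) lost demand during the spike.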
Expected Demand: Seasonal Demand
An online retail store is a typical service that suffers from seasonal demand
spikes. During Christmas the retail store usually faces much higher demand vol-
umes than over the rest of the year. The IT infrastructure must be dimensioned
such that it can handle even the highest demand peaks in December.
Expected Demand: Temporary Effect
Some services and applications are short-lived and targeted at single or
infrequent events, such as the Websites for the Olympic Games 2008 in Beijing. As seen
with seasonal demand spikes, the increase and decrease of demand volume is
somewhat predictable. However, the service only exists for a comparatively
short period of time, during which it experiences heavy traffic loads. After
the event, demand decreases to a constantly low level and the service is
eventually shut down.
Expected Demand: Batch Processing
The third category of expected demand scenarios is batch processing jobs.
In this case the demand volume is usually known beforehand and does not need
to be estimated.
Unexpected Demand: Temporary Effect
This scenario is similar to the “expected temporary effect”, except for one
major difference: the demand behavior cannot be predicted at all, or only a
short time in advance. A typical example of this scenario is a Web start-up
company that becomes popular overnight because it was featured on a news
network. Many people simultaneously rush to the Website of the start-up
company, causing significant traffic load and eventually bringing down the
servers. Named after two famous news sharing Websites, this phenomenon is
known as the “Slashdot effect” or “Digg effect”.
3.1.4 Technical Requirements
Business objectives are put into practice with IT support and thus translate
into specific IT requirements. For example, unpredictable demand behavior
translates to the need for scalability and high availability even in the face of
significant traffic spikes; time to market is directly correlated with deployment
times.
3.2 Costs of Cloud Computing
After the business scenario has been modeled and the demand volumes estimated,
the next step is to calculate the costs of a Cloud Computing setting that can
fulfill the scenario’s requirements, such as scalability and high availability.
A central point besides the scenario properties mentioned in section 3.1.3
is the question: how much storage capacity and processing power is needed in
order to cope with demand and how much data transfer will be used? The
numbers might either be fixed and already known beforehand, or unknown and in
need of estimation.
In the next step, a Utility Computing model needs to define compute units and
thus provide a metric to convert and compare computing resources between
the Cloud and alternative infrastructure services. Usually the Cloud Computing
provider defines the Utility Computing model together with an associated
pricing scheme, such as Amazon EC2 Compute Units (ECU). The vendor-specific
model can be converted into a more generic Utility Computing unit, such as
FLOPS, I/O operations, and the like. This might be necessary when comparing Cloud
Computing offers of different vendors. Since Cloud Computing providers charge
money for their services based on the Utility Computing model, these pricing
schemes can be used in order to determine the direct costs of the Cloud
Computing scenario. Indirect costs comprise soft factors, such as learning to
use tools and gaining experience with Cloud Computing technology.
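As a sketch, the direct costs of a Cloud scenario follow from multiplying the estimated resource usage by the provider's pricing scheme. The hourly and transfer prices below are the July 2008 Amazon figures cited in the footnote of this paper; the storage price is an assumption added for illustration:

```python
def cloud_direct_costs(instance_hours, gb_transferred, gb_months_stored,
                       price_per_hour=0.10,       # EC2 small instance, July 2008
                       price_per_gb=0.17,         # outgoing traffic, first 10 TB/month
                       price_per_gb_month=0.15):  # storage (assumed rate)
    """Direct costs of a Cloud scenario under a pay-per-use pricing scheme."""
    return (instance_hours * price_per_hour
            + gb_transferred * price_per_gb
            + gb_months_stored * price_per_gb_month)

# 100 instance-hours plus 10 GB of outgoing traffic, no persistent storage:
cost = cloud_direct_costs(100, 10, 0)  # ~11.7 (10.0 compute + 1.7 transfer)
```

Indirect costs such as learning effort are not captured by such a formula and must be estimated separately.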
3.3 Costs of the Reference IT Infrastructure Service
The valuation of Cloud Computing services must take into account its costs as
well as the cash flows resulting from the underlying business model. Within
the context of our valuation approach we focus on a cost comparison between
infrastructure in the Cloud and a reference infrastructure service. Reaching or
failing to reach business objectives has an impact on cash flows and can therefore
be measured in terms of monetary opportunity costs.
The reference IT infrastructure service might be conventional IT infrastruc-
ture (SME or big business), a hosted service, a Grid Computing service, or
something else. This reference model can be arbitrarily complex and detailed,
as long as it computes the estimated resource usage in a similar manner as in
the Cloud Computing scenario of section 3.2. The resource usage will not in
all cases be the same as in the Cloud Computing scenario. Some tasks might,
for example, be computed locally, thus saving data transfer. Other differences
could result from a totally different approach that must be taken in order to
fulfill the business objectives defined in the business scenario.
In the case of privately owned IT infrastructure, cost models, such as Gart-
ner’s TCO [3], provide a good tool for calculations [8]. The cost model should
comprise direct costs, such as Capital Expenditures for the facility, energy and
cooling infrastructure, cables, servers, and so on. Moreover, there are Opera-
tional Expenditures which must be taken into account, such as energy, network
fees and IT employees. Indirect costs comprise costs from failing to meet busi-
ness objectives, e.g. time to market, customer satisfaction or Quality of Service
related Service Level Agreements. There is no easy way to measure such
indirect costs, and the approach will vary from case to case. More
sophisticated TCO models must be developed to mitigate this shortcoming. One
approach might be to compare the cash flow streams that result from failing to
deliver certain business objectives, such as short time to market. If the
introduction of a service offering is delayed due to slow deployment
processes, the resulting deficit can be calculated as a discounted cash flow.
When all direct and indirect costs have been taken into account, the total
costs of the reference IT infrastructure service can be calculated by summing
them up. Finally, the costs of the Cloud Computing scenario and the reference
model scenario can be compared.
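A minimal sketch of this final comparison step follows. The cost figures are invented for illustration, and a real model would discount multi-year cash flows rather than sum them:

```python
def reference_total_costs(capex, opex_per_year, indirect_per_year, years):
    """Simplified total costs of a privately owned reference infrastructure:
    up-front Capital Expenditures plus yearly Operational Expenditures and
    indirect costs, summed without discounting."""
    return capex + years * (opex_per_year + indirect_per_year)

def value_of_cloud(reference_total, cloud_total):
    """Final framework step: a positive result is the opportunity cost
    avoided by choosing the Cloud over the reference solution."""
    return reference_total - cloud_total

ref = reference_total_costs(capex=50_000, opex_per_year=20_000,
                            indirect_per_year=5_000, years=3)  # 125_000
advantage = value_of_cloud(ref, cloud_total=90_000)            # 35_000
```

The sign of the result, not its absolute size, answers the basic question of whether outsourcing into the Cloud makes sense for the given scenario.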
4 Evaluation and Discussion
Early adopters of Cloud Computing technologies are IT engineers who work on
Web-scale projects, such as the New York Times TimesMachine [9]. Start-ups
with high scalability requirements turn to Cloud Computing providers, such
as Amazon EC2, in order to roll out Web-scale services with comparatively low
entry costs [7]. These and other examples show that scalability, low market
barriers and rapid deployment are among the most important drivers of Cloud
Computing.
4.1 New York Times TimesMachine
In autumn 2007 New York Times senior software engineer Derek Gottfrid worked
on a project named TimesMachine. The service was to provide access to every
New York Times issue since 1851, a total of 11 million articles which had to
be served in the form of PDF files. Previously, Gottfrid and his colleagues
had implemented a solution that generated the PDF files dynamically from
already scanned TIFF images of the New York Times articles. This approach
worked well, but with traffic volumes about to increase significantly, it
would be better to serve pre-generated static PDF files.
Faced with the challenge of converting 4 TB of source data into PDF,
Derek Gottfrid decided to make use of Amazon’s Web Services Elastic Compute
Cloud (EC2) and Simple Storage Service (S3). He uploaded the source data to
S3 and started a Hadoop cluster of customized EC2 Amazon Machine Images
(AMIs). With 100 EC2 AMIs running in parallel he could complete the task of
reading the source data from S3, converting it to PDF and storing it back to S3
within 36 hours.
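A back-of-the-envelope estimate of the compute bill, using the July 2008 small-instance price quoted in the footnote of this paper (storage and transfer fees excluded; the actual invoice is not reported here):

```python
instances = 100        # EC2 AMIs running in parallel
hours = 36             # wall-clock duration of the conversion job
price_per_hour = 0.10  # USD per small-instance hour (July 2008 pricing)

compute_cost = instances * hours * price_per_hour  # roughly 360 USD
```

Even as a rough figure, this is orders of magnitude below the cost of purchasing equivalent hardware for a one-time job.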
How does this use case fit into our framework?
Gottfrid’s approach was motivated by the simplicity with which the one-time
task could be accomplished if performed “in the Cloud”. No up-front costs
were involved, except for insignificant expenditures when experimenting to see
whether the endeavor was feasible at all. Due to the simplicity of the
approach and the low costs involved, his superiors agreed without imposing
bureaucratic obstacles. Another key driver was to shorten deployment times and
thereby time to market. The alternative to Amazon EC2 and S3 would have been
to ask for permission to purchase commodity hardware, install it and finally
run the tasks - a process that very likely would have taken several weeks or
even months. After process execution, the extra hardware would have had to be
sold or used in another context.
This use case is a good example of a one-time batch-processing job that can
be performed in a Grid Computing or Cloud Computing environment. From the
backend engineer’s point of view it is favorable to be able to get started
without much configuration overhead, as only the task result is relevant. The
data storage and processing volume is known beforehand and no measures have
to be taken to guarantee scalability, availability, or the like.
In a comparative study, researchers from the CERN-based EGEE project
argue that Clouds differ from Grids in that they serve different usage
patterns: while Grids are mostly used for short-term job executions, Clouds
usually support long-lived services [10]. We agree that usage patterns are an
important differentiator between Clouds and Grids; however, the TimesMachine
use case shows that this is not a question of service lifetime. Clouds are
well-suited to serve short-lived usage scenarios, such as batch processes or
situational Mash-up services.
4.2 Major League Baseball
MLB Advanced Media is the company that develops and maintains the Major
League Baseball Web sites. During the 2007 season, director of operations
Ryan Nelson received the request to implement a chat product as an additional
service to the Web site [11]. He was told that the chat had to go online as soon
as possible. However, the company’s data center in Manhattan did not have
much free storage capacity or processing power left.
Since there was no time to order and install new machines, Nelson decided to
call the Cloud Computing provider Joyent. He arranged for 10 virtual machines
in a development cluster and another 20 machines for production mode. Nelson’s
team developed and tested the chat for about 2 months and then launched the
new product. When the playoffs and World Series started, more resources were
needed. Another 15 virtual machines and additional RAM solved the problem.
Ryan Nelson points out two major advantages of this approach. First, the
company gains the flexibility to try out new products quickly and turn them
off if they are not a success. In this context, the ability to scale down
proves to be just as important as the ability to scale up. Furthermore,
Nelson’s team can better respond to the seasonal demand spikes which are
typical for Web sites about sports events.
5 Related Work
Various economic aspects of outsourcing storage capacities and processing power
have been covered by previous work in distributed computing and Grid
Computing [12], [13], [14], [15]. However, the methods and business models
introduced for Grid Computing do not consider all the economic drivers which
we identified as relevant for Cloud Computing, such as pushing for short time
to market in the context of organizational inertia, or low entry barriers for
start-up companies.
With a rule-of-thumb calculation, Jim Gray points to the opportunity costs of
distributed computing in the Internet as opposed to local computations, i.e.
in LAN clusters [12]. In his scenario, $1 equals 1 GB sent over the WAN or,
alternatively, eight hours of CPU processing time. Gray reasons that, except
for highly processing-intensive applications, outsourcing computing tasks into
a distributed environment does not pay off, because network traffic fees
outweigh the savings in processing power. Calculating the tradeoff between
basic computing services can be useful to get a general idea of the economies
involved. This method can
easily be applied to the pricing schemes of Cloud Computing providers. For $1,
the Web Service Amazon EC2 offers around 6 GB of data transfer or 10 hours of
CPU processing.1 However, this sort of calculation only makes sense if placed
in a broader context. Whether or not computing services can be performed
locally depends on the underlying business objective. It might for example be
necessary to process data in a distributed environment in order to enable online
collaboration.
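Gray's rule of thumb can be formalized as a breakeven ratio: shipping a task out pays off roughly when the CPU time it needs per GB moved exceeds the cost-equivalence ratio of the two resources. This is our own rendering of the argument, with the prices taken from Gray [12] and the July 2008 EC2 figures above:

```python
def breakeven_cpu_hours_per_gb(price_per_gb, price_per_cpu_hour):
    """CPU-hours of processing per GB transferred at which network fees and
    compute savings cancel out; above this ratio, outsourcing pays off."""
    return price_per_gb / price_per_cpu_hour

# Gray's 2003 figures: $1 per GB over the WAN, $1 per 8 CPU-hours:
gray_2003 = breakeven_cpu_hours_per_gb(1.0, 1.0 / 8)  # 8.0 h/GB
# Amazon EC2, July 2008: $0.17 per GB, $0.10 per CPU-hour:
ec2_2008 = breakeven_cpu_hours_per_gb(0.17, 0.10)     # ~1.7 h/GB
```

The much lower 2008 ratio suggests that Cloud pricing had already shifted the tradeoff in favor of outsourcing, so that even moderately processing-intensive tasks clear the bar that only "highly processing-intensive" applications cleared in Gray's scenario.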
George Thanos et al. evaluate the adoption of Grid Computing technology
for business purposes in a more comprehensive way [14]. The authors shed
light on general business objectives and economic issues associated with Grid
Computing, such as economies of scale and scope, network externalities, market
barriers, etc. In particular, the explanations regarding the economic rationale
behind complementing privately owned IT infrastructure with utility comput-
ing services point out important aspects that are also valid for our valuation
model. Cloud Computing is heavily based on the notion of Utility Computing
where large-scale data centers play the role of a utility that delivers computing
services on a pay-per-use basis. The business scenarios described by Thanos
et al. only partially apply to those we can observe in Cloud Computing. Important
benefits associated with Cloud Computing, such as shorter time to market and
responsiveness to highly varying demand, are not covered. These business objec-
tives bring technological challenges that Cloud Computing explicitly addresses,
such as scalability and high availability in the face of unpredictable short-term
demand peaks.
6 Conclusion and Future Work
Cloud Computing is an emerging trend of provisioning scalable and reliable
services over the Internet as computing utilities. Early adopters of Cloud Com-
puting services, such as start-up companies engaged in Web-scale projects, intu-
itively embrace the opportunity to rely on massively scalable IT infrastructure
from providers like Amazon. However, there is no systematic, dedicated ap-
proach to measure the benefit from Cloud Computing that could serve as a
guide for decision makers to tell when outsourcing IT resources into the Cloud
makes sense.
We have addressed this problem and developed a valuation framework that
serves as a starting point for future work. Our framework provides a step-by-
step guide to determine the benefits from Cloud Computing, from describing
a business scenario to comparing Cloud Computing services with a reference
IT solution. We identify key components: business domain, objectives, demand
behavior and technical requirements. Based on business objectives and technical
requirements, the costs of a Cloud Computing service, as well as the costs of a
reference IT solution, can be calculated and compared. Well-known use cases of
1 According to the Amazon Web Service pricing in July 2008, one GB of outgoing
traffic costs $0.17 for the first 10 TB per month. Running a small AMI instance
with the compute capacity of a 1.0-1.2 GHz 2007 Xeon or Opteron processor for
one hour costs $0.10.
Cloud Computing adopters serve as a means to discuss and evaluate the validity
of our framework.
In future work, we will identify and analyze concrete valuation methods that
can be applied within the context of our framework. Furthermore, it is
necessary to evaluate cost models that might serve as a template for
estimating direct and indirect costs, a key challenge that we have only
touched upon here.
References
[1] Amazon Web Services: Customer Case Studies, http://www.amazon.com/
Success-Stories-AWS-home-page/b/ref=sc_fe_l_1?ie=UTF8&node=
182241011&no=3440661
[2] Titman, S., Martin, J.: Valuation: The Art & Science of Corporate Invest-
ment Decisions, Addison-Wesley (2007)
[3] Gartner TCO, http://amt.gartner.com/TCO/index.htm
[4] Chiu, W.: From Cloud Computing to the New Enterprise Data Center,
IBM High Performance On Demand Solutions (2008)
[5] Pentagon’s IT Unit Seeks to Adopt Cloud Comput-
ing, New York Times, http://www.nytimes.com/idg/IDG_
852573C400693880002574890080F9EF.html?ref=technology
[6] Schlossnagle, T.: Scalable Internet Architectures, Sams Publishing (2006)
[7] PowerSet Use Case, http://www.amazon.com/b?ie=UTF8&node=
331766011&me=A36L942TSJ2AJA
[8] Koomey, J.: A Simple Model for Determining True Total Cost of Ownership
for Data Centers, Uptime Institute (2007)
[9] New York Times TimesMachine use case, http://open.nytimes.com/
2007/11/01/self-service-prorated-super-computing-fun/
[10] Bégin, M.: An EGEE Comparative Study: Grids and Clouds - Evolution
or Revolution?, CERN Enabling Grids for E-Science (2008)
[11] Major League Baseball use case, http://www.networkworld.com/news/
2007/121007-your-take-mlb.html
[12] Gray, J.: Distributed Computing Economics. Microsoft Research Technical
Report MSR-TR-2003-24, Microsoft Research (2003)
[13] Buyya, R., Stockinger, H., Giddy, J., Abramson, D.: Economic Models for
Management of Resources in Grid Computing, ITCom (2001)
[14] Thanos, G., Courcoubetis, C., Stamoulis, G.: Adopting the Grid for
Business Purposes: The Main Objectives and the Associated Economic Issues,
Grid Economics and Business Models: 4th International Workshop, GECON (2007)
[15] Hwang, J., Park, J.: Decision Factors of Enterprises for Adopting Grid
Computing, Grid Economics and Business Models: 4th International Workshop,
GECON (2007)