The document provides an overview of cloud computing by discussing key topics across six parts:
Part one discusses how cloud computing has disrupted IT by democratizing access. Part two covers the history of virtualization. Part three explains how cloud stacks separate concerns between physical infrastructure, virtual platforms, and applications. Part four frames clouds as an on-demand business model compared to traditional IT. Part five outlines the major types of cloud services, including IaaS and PaaS, and the differences between them. Finally, part six notes that in reality, most cloud offerings blend aspects of infrastructure, platforms and software as a service.
The Total Cost of Ownership (TCO) of Web Applications in the AWS Cloud - Jine...Amazon Web Services
Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. In practice, it is not as simple as just measuring potential hardware expense alongside utility pricing for compute and storage resources. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare direct and indirect costs of a product or a service. Given the large differences between the two models, it is challenging to perform accurate apples-to-apples cost comparisons between on-premises data centers and cloud infrastructure that is offered as a service. In this presentation, we explain the economic benefits of deploying a web application in the Amazon Web Services (AWS) cloud over deploying an equivalent web application hosted in an on-premises data center, and highlight five things not to forget when calculating TCO.
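As a purely illustrative sketch of the point above (every figure and cost category below is hypothetical, not taken from the whitepaper), a TCO comparison has to fold indirect costs into the on-premises side rather than comparing hardware price to instance price alone:

```python
# Hypothetical 3-year TCO comparison. All numbers are invented for
# illustration and should be replaced with real quotes.
years = 3

# On-premises: direct hardware plus the indirect costs that are easy to forget.
servers = 50_000                    # upfront purchase price
power_cooling = 8_000 * years       # facility utilities
floor_space = 6_000 * years         # data center real estate
admin_staff = 30_000 * years        # operations headcount share
on_prem_tco = servers + power_cooling + floor_space + admin_staff

# Cloud: pay-as-you-go compute and storage, no facility overhead.
compute = 2_500 * 12 * years        # monthly instance spend
storage = 400 * 12 * years          # monthly storage spend
cloud_tco = compute + storage

print(f"On-premises 3-year TCO: ${on_prem_tco:,}")
print(f"Cloud 3-year TCO:       ${cloud_tco:,}")
```

The structure, not the numbers, is the point: the on-premises total only becomes comparable once power, space, and staff are counted alongside the hardware line item.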
Whitepaper: http://bit.ly/aws-tco-webapps
The second in our 'Journey' series of webinars, this complimentary presentation discusses the use of AWS as a development and test environment. The flexible, pay-as-you-go nature of AWS makes it perfect for compute environments that need to be spun up quickly and disposed of when not needed, and placing this power at the fingertips of developers means you can make step changes in productivity as you progress applications through the dev/test cycle.
Offre Cloud IBM Software [Rational] - Atelier - Forum SaaS et Cloud IBM - Clu...Club Alliances
Presentation prepared by Michel Speranski [IBM Software - Rational] for the IBM SaaS and Cloud Forum [5 February 2010], organized by the Club Alliances
Introduction to Cloud Computing - CCGRID 2009James Broberg
Cloud computing has recently emerged as an exciting new trend in the ICT industry. Several IT vendors are promising to offer on-demand storage, application and computational hosting services, and provide coverage in several continents, offering Service-Level Agreements (SLA) backed performance and uptime promises for their services. While these ‘clouds’ are the natural evolution of traditional clusters and data centres, they are distinguished by following a ‘utility’ pricing model where customers are charged based on their utilisation of computational resources, storage and transfer of data. Whilst these emerging services have reduced the cost of computation, application hosting and content storage and delivery by several orders of magnitude, there is significant complexity involved in ensuring applications, services and data can scale when needed to ensure consistent and reliable operation under peak loads.
This tutorial endeavors to familiarise the audience with the new cloud computing paradigm, whilst comparing and contrasting it with existing approaches to scaling out computing resources such as cluster and grid computing. Case studies of numerous existing compute, storage and application cloud services will be given, familiarising the audience with the capabilities and limitations of current providers of cloud computing services. The hands-on interaction with these services during this tutorial will allow the audience to understand the mechanisms needed to harness cloud computing in their own respective endeavors. Finally, many open research problems that have arisen from the rapid uptake of cloud computing will be detailed, which will hopefully motivate the audience to address these in their own future research and development.
With today’s evolving technological landscape, chances are you’ve heard of the term “cloud.” You may have even wondered why anyone would be talking about the weather along with computing infrastructures and data centers. Although the popularity of the cloud has gradually risen, there are many people who still don’t know exactly what it is.
The cloud is the general term for cloud computing, which refers to the use of the Internet to access hosted services or your own applications, storage, servers, and data that are hosted in a remote location. Saving your files to the cloud allows you to access them from anywhere at any time, as long as you have an Internet connection. You have probably been using cloud services or resources for a while without even knowing it. The most common cloud services include Gmail, Dropbox, Google Docs, and even Twitter.
Cloud computing is continuing to gain momentum among businesses and consumers alike. It offers consumers a more convenient way to store and share their files as well as access their data at any time, from anywhere. Businesses are virtualizing corporate desktops in the cloud and have considerably reduced the costs of maintaining their physical IT infrastructures in the process. Cloud services allow for more flexible working practices as well as greater mobility. Accessing your data on any device is safe and secure, and actually gives companies a better way to manage and secure data.
Cloud computing focuses on shared resources within the infrastructure. This means that customers share physical resources while maintaining security by reinforcing their “piece of the pie” with firewalls and virtual security. By focusing on this shared resource methodology, cloud companies can offer incredibly cost-effective rates compared to traditional computing due to the positive economies of scale. The more tenants that occupy a cloud, the more cost effective a solution becomes. The best Cloud providers use this method to offer solutions for a fraction of what they would cost should an organization attempt to do them in-house.
The cloud is quickly becoming commonplace in the world of technology. Its accessibility and convenience are undeniable, and more often than not, businesses and service providers have considered implementing or currently utilize cloud computing.
Cloud Computing Without The Hype An Executive Guide (1.00 Slideshare)Lustratus REPAMA
Author: Steve Craggs - Lustratus Research Limited.
Defining Cloud Computing and identifying the current players
This document offers a high-level summary of Cloud Computing, targeted at Executives who find themselves bombarded with Cloud Computing and need to cut through the hype to get a clear understanding of what cloud is all about.
Cloud is defined in simple terms and the main categories of cloud are identified. A high level segmentation of the cloud marketplace is also offered, and includes a reasonably comprehensive index of suppliers in the Cloud Computing marketplace and the Cloud segments in which they operate.
Danny Quilton from Capacitas presented a paper, ‘Capacity Management and the Cloud’. The presentation made the case for capacity management of cloud-based services, highlighting the critical role of capacity management in controlling cloud cost. The presentation referenced a number of client engagement case studies to debunk some of the myths surrounding cloud:
Capacity can be turned up instantaneously
Capacity planning discipline is no longer required
Cloud capacity is cheap
Bottlenecks can be alleviated by expanding cloud capacity
Capacity management can be delegated to the cloud provider
Performance is guaranteed by the cloud provider
The changing nature of healthcare and patients’ desire for convenience have given rise to nontraditional care formats such as stand-alone emergency rooms and “micro-hospitals,” and now “bedless hospitals” are joining the push. Such hospitals still have standard hospital features, including infusion suites, emergency rooms, helipads and operating areas, but no overnight space. In addition to patients’ desire for speedier, more convenient care models, the growth of such facilities is also due to growth in outpatient care within the industry. While some experts say increased use of outpatient services offsets the cost of pricier inpatient care, others question whether the increase in bedless facilities means fewer resources for patients with complex treatment needs that require beds and overnight stays.
Helen Wilson, Managing Director, Ipsos Loyalty - the Customer Experience Specialists - presented at Mayfair Capital’s excellent Investment Seminar this week. For more visit http://www.mayfaircapital.co.uk/. Our Perils of Perception research is at https://www.ipsos-mori.com/_assets/sri/perils/. For more on our research into Generations, visit www.ipsos-mori-generations.com.
End-to-End Integrated Management with System Center 2012wwwally
Want to see hands-on how all the Microsoft System Center 2012 products work together to manage the common Windows platform? In this session full of demonstrations, Walter Eikenboom, shows you System Center 2012 for the Optimized Datacenter. See Service Manager 2012, Virtual Machine Manager 2012, Operations Manager 2012, Data Protection Manager 2012 and Orchestrator 2012!
Introduction to the SQL and Windows Azure PlatformEduardo Castro
This presentation is an introduction to the Windows and SQL Azure Cloud Computing Platform.
Regards,
Dr. Eduardo Castro Martinez
http://comunidadwindows.org
http://ecastrom.blogspot.com
20-minute speed-run presentation on what metrics and web analytics information startups need to collect. Focuses on companies with a lean methodology, and the kinds of data that will actually help them achieve product/market fit before the money runs out.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
30. For much of its history, AT&T and its Bell System functioned as a legally sanctioned, regulated monopoly. The US accepted this principle, initially in a 1913 agreement known as the Kingsbury Commitment. An anti-trust suit filed in 1949 led in 1956 to a consent decree whereby AT&T agreed to restrict its activities to the regulated business of the national telephone system and government work. Changes in telecommunications led to a U.S. government antitrust suit in 1974, and in 1982 AT&T agreed to divest itself of the wholly owned Bell operating companies that provided local exchange service. In 1984 Bell was dead. In its place was a new AT&T and seven regional Bell operating companies (collectively, the RBOCs).
http://www.corp.att.com/history/history3.html
63–65. Virtualization divorces the app from the machine. [Diagrams: one virtual machine spanning many physical machines (“one on many”), or many virtual machines hosted on a single physical machine (“many on one”).]
100. At 60 seconds per page:

               Desktop    EC2
Pages           17,481    17,481
Minutes/page         1         1
# of machines        1       200
Total hours      291.4      26.0
Total days        12.1       1.1

(Desktop total: 17,481 minutes. EC2: 200 machine instances, 1,407 hours of virtual machine time, searchable database available 26 hours later, $144.62 total cost.)
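The arithmetic behind the table can be sketched as follows (a minimal illustration; note the slide's 26-hour EC2 figure is wall-clock time including setup and coordination overhead, not just the raw compute divided by 200):

```python
# Back-of-the-envelope parallelization math using the slide's numbers.
pages = 17_481
minutes_per_page = 1

# Single desktop machine: strictly serial.
desktop_minutes = pages * minutes_per_page          # 17,481 minutes
desktop_hours = desktop_minutes / 60                # ~291.4 hours
desktop_days = desktop_hours / 24                   # ~12.1 days

# EC2: the same work spread across 200 instances.
instances = 200
ideal_parallel_hours = desktop_minutes / instances / 60

print(f"Desktop: {desktop_minutes} min = {desktop_hours:.1f} h = {desktop_days:.1f} days")
print(f"EC2 ideal compute time: {ideal_parallel_hours:.2f} h across {instances} instances")
```

The ideal parallel figure comes out well under the reported 26 hours, which is the gap between pure division of work and a real pipeline that must launch instances, distribute pages, and collect results.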
107–116. [Diagram sequence: growing a web application out of machine images and machine instances. A web server is launched from a machine image as a machine instance (107); an app server instance is added (108–109); a DB server instance is added (110); storage is attached to the DB server (111–112); the DB server is scaled up to a bigger machine instance (113–114); the web, app, and DB tiers are duplicated to scale out (115); and a load balancer instance is placed in front (116).]
117. Platform as a Service
Google App Engine, Salesforce Force.com,
Rackspace Cloud Sites, Joyent Smart Platform,
(and nearly every enterprise mainframe.)
122. Shared components
[Diagram: your code running on a shared processing platform alongside others' code, with a data storage API.]
123. [Diagram: a user database and authentication API added.]
124. [Diagram: image functions and an image API added.]
125. [Diagram: a big-objects blob API added.]
126. [Diagram: a governor added to cap any one program's resource use.]
127. [Diagram: a console added for management.]
128. [Diagram: a scheduler added, completing the platform: data storage, auth, image, and blob APIs, plus governor, console, and scheduler.]
135. IaaS and PaaS differences
IaaS: Any operating system you want. PaaS: Use only selected languages and built-in APIs.
IaaS: Limited by capacity of virtual machine. PaaS: Limited by governors to avoid overloading.
IaaS: Scale by adding more machines. PaaS: Scaling is automatic.
IaaS: Many storage options (file system, object, key-value). PaaS: Use built-in storage (Bigtable, etc.).
136. App Engine quotas (Quota / Limit)
Governor (usage cap):
  Apps per developer            10
  Time per request              30s
  Blobstore (total file size)   1 GB
  Maximum HTTP response size    10 MB
  Datastore item size           1 MB
  Application code size         150 MB
Daily cap (free quota):
  Emails per day                1,500
  Bandwidth in per day          1 GB
  Bandwidth out per day         1 GB
  CPU time per day              6.5 h
  HTTP requests per day         1,300,000
  Datastore API calls per day   10,000,000
  URLFetch API calls per day    657,084
http://en.wikipedia.org/wiki/Google_App_Engine
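Conceptually, a governor like this is just a usage counter checked before each operation. A minimal illustrative sketch; the quota names mirror the table above, but the class itself is a hypothetical stand-in, not App Engine's actual implementation:

```python
class QuotaGovernor:
    """Toy usage-cap enforcer in the spirit of a PaaS governor."""

    def __init__(self, daily_caps):
        self.caps = daily_caps                      # e.g. {"emails": 1500}
        self.used = {name: 0 for name in daily_caps}

    def charge(self, resource, amount=1):
        """Record usage; refuse if the daily cap would be exceeded."""
        if self.used[resource] + amount > self.caps[resource]:
            raise RuntimeError(f"over quota for {resource}")
        self.used[resource] += amount

gov = QuotaGovernor({"emails": 1500, "http_requests": 1_300_000})
gov.charge("emails", 1499)   # fine
gov.charge("emails")         # hits exactly 1,500: still allowed
```

The real platform resets these counters daily and meters many more resources, but the check-before-spend shape is the same.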
147. Service: What it does
Elastic Compute Cloud: Virtual machines, by the hour
Elastic MapReduce: Massively parallel data processing
Virtual Private Cloud: On-demand machines within internal IT
Elastic Load Balancing: Traffic distribution
CloudFront: Content delivery acceleration
Flexible Payments Service: Funds transfer & payments
SimpleDB: Realtime structured data queries
Simple Storage Service: Eleven nines redundant storage
Relational Database Service: On-demand RDBMS
Elastic Block Store: Block-level storage (file system)
Fulfillment Web Service: Merchant delivery system
Simple Queue Service: On-demand message bus
Simple Notification Service: System for sending mass notifications
CloudWatch: Monitoring of cloud resources
Mechanical Turk: Humans as an API
148. Service: What it does
App Engine: Executing Python or Java code
Bigtable datastore: Store data for very fast retrieval
Calendar Data API: Create and modify events
Inbox feed API: Read a Gmail inbox
Contact data API: Interact with someone's Gmail contacts
Documents list API: Manage a user's Google Docs
OpenID single sign-on: Use Google authentication to sign in
Secure data connector: Link Google Apps to enterprise apps
Memcache: Fast front-end for data
Image manipulation: Resize, rotate, crop & flip images
Task queue: Queue and dispatch tasks to code
Blobstore: Serve large objects to visitors
193. Expense reports can no longer enforce IT policy.
Wiley GAAP 2010: Interpretation and Application of Generally Accepted Accounting Principles (Barry J. Epstein, Ralph Nach, Steven M. Bragg)
194. [Slide: an expense report mixing travel and cloud line items: Airfare, DNS, Cloud, Public transit, Important research, Hotel.]
223. Always on premise:
Private; Compliance-enforced; Need to track and audit; Legislative; Data near local computation
224. Can be done anywhere (added):
Testing; Training; Prototyping; Batch processing; Seasonal load
225. Always in cloud (added):
Partner access; Proximity to cloud services (storage, CDN, etc.); Massively grid/parallel (genomic, modelling)
226. [Adds a workload that straddles the columns: a load/pricing engine.]
227. [Adds another straddling workload: a policy engine.]
228. Virtual machine (infrastructure cloud)
[The same three-column chart, framed as where virtual machines can run.]
229. Compute task (service cloud)
[The same chart, framed as where compute tasks can run.]
Cloud computing is an approach to computing that’s more flexible and lets organizations focus on their core business by insulating them from much of the underlying IT work.
At its most basic, it’s computing as a utility – pay for what you need, when you need it, rather than paying for it all up front.
This is what Nicholas Carr talked about in his book The Big Switch.
But clouds can be confusing. Part of the reason is that they’re a big deal, which means everyone wants to be a part of them – even companies who have nothing to do with clouds.
I’m going to try and clear some of this up for you.
First, let’s talk about disruption.
Once, IT was a monopoly.
Today, it’s a free market. The line of business has tremendous choice in what it owns, runs, and uses.
The boardroom loves this: instead of managing machines, they manage services.
But enterprise IT doesn’t like it much, because it forces them to compete, and puts them side-by-side with organizations that spend their entire day doing detailed usage and billing.
It’s not all bad, though. There’s a lot to be learned from a transition from monopoly to a free market.
There were a couple of reasons IT was a monopoly for so long.
First, the machines were expensive. That meant they were a scarce resource, and someone had to control what we could do with them.
Second, they were complicated. It took a very strange sect of experts to understand them.
AVIDAC, Argonne's first digital computer, began operation in January 1953. It was built by the Physics Division for $250,000. Pictured is pioneer Argonne computer scientist Jean F. Hall.
AVIDAC stands for "Argonne Version of the Institute's Digital Automatic Computer" and was based on the IAS architecture developed by John von Neumann.
This was also a result of scarcity. When computers and humans interact, they need to meet each other halfway. But it takes a lot of computing power to make something that’s easy to use;
in the early days of computing, humans were cheap and machines weren’t
So we used punched cards,
and switches,
and esoteric programming languages like assembler.
Think about what a monopoly means.
A monopoly was once awarded for a big project beyond the scope of any one organization, but needed for the public good.
Sometimes, nobody wants the monopoly—like building the roads.
For the most part, governments have a monopoly on roadwork, because it’s something we need, but the benefits are hard to quantify or charge back for.
(IT’s been handed many of these thankless tasks over the years, and the business has never complained.)
The only time we can charge back for roads is when the resource is specific and billable: a toll highway, a bridge.
Sometimes, we form a company with a monopoly, or allow one to operate, in order to build something or allow an inventor to recoup investment. This is how we got the telephone system, or railways.
When monopolies are created with a specific purpose, that’s good. But when they start to stagnate and restrict competition, we break them apart.
In fact, there’s a lot of antitrust regulation that prevents companies from controlling too much of something because they can stifle innovation and charge whatever they want. That’s one of the things the DOJ does.
In other words, early on monopolies are good because they let us undertake hugely beneficial, but largely unbillable, tasks.
Later, however, they’re bad because they reduce the level of creativity and experimentation.
Today, computing is cheap. We can buy many times the compute power of the Apollo missions with a swipe of a credit card.
It’s also not complicated. Everyone can use a computer. Because today the computer is cheap and the human’s expensive, we spend so much time on user interfaces, from GUIs to augmented reality to touchscreens to voice control to geopresence.
What used to take a long time to procure, configure, and deploy is now a mouseclick.
The way data centers are designed must reflect this shift from IT-as-a-monopoly to IT-as-an-enabler.
That means building a set of platforms that can adapt and adjust:
From rack-and-stack servers to click-and-drag deployment
From underused bare metal to on-demand virtual machines
From procurement and process to self-service and quick decommissioning.
The lesson of monopolies is an important one. When a monopoly set out to build a railroad, it didn’t spend a lot of time asking potential travelers what they wanted.
When you’re building something huge and expensive, you build what you want, and expect people to be grateful for it.
But today’s IT user is driving IT requirements.
They can shop around—choosing SaaS, clouds, and internal IT according to their business requirements.
They’re increasingly able to build the applications themselves, but expect IT to deliver smooth, fast platforms on which to experiment.
As the line of business looks more and more like a consumer in a competitive market—and less and less like a grateful customer of a monopoly—IT has to change its offerings.
It’s an inversion of the traditional IT “pyramid”, where the hardware dictates the platforms, which in turn dictate the apps, which dictate what users can do.
Today, what users want to do drives the apps they use, which drives the platforms and the hardware.
We’ve had big changes since that time. The first was client-server computing: the idea that not everything lived in a mainframe, and some things worked well on the desktop. Software like VisiCalc—the first spreadsheet—was useful for businesses, even those that couldn’t afford a mainframe.
A second big change was the Web. This browser-based model made computing accessible to the masses. As a result, it became part of society, and everyone knew how to work it. These days, you don’t have to teach a new hire how to use a web browser: they know what links do; what the back button is; and so on.
A third change is the move to mobility. This has been bigger overseas, where the mobile phone is the dominant way of accessing the Internet, but it’s still a shift to the always-connected, always-on lifestyles we lead today.
And now there’s cloud computing. Clouds are as big a shift as client-server, or the web browser, or mobility.
The step-function nature of dedicated machines doesn’t distribute workload very efficiently.
Virtualization lets us put many workloads on a single machine
Once workloads are virtualized, several things happen. First, they’re portable
Second, they’re ephemeral. That is, they’re short-lived: Once people realize that they don’t have to hoard machines, they spin them up and down a lot more.
Which inevitably leads to automation and scripting: We need to spin up and down machines, and move them from place to place. This is hard, error-prone work for humans, but perfect for automation now that rack-and-stack has been replaced by point-and-click
Automation, once in place, can have a front end put on it. That leads to self service.
These are the foundations on which new IT is being built. Taken together, they’re a big part of the movement towards cloud computing, whether that’s in house or on-demand.
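To make the automation point concrete, here is a toy sketch of the spin-up/spin-down scripting involved; the `ToyCloud` API is invented for illustration, standing in for whatever a real provisioning system exposes:

```python
import itertools

class ToyCloud:
    """In-memory stand-in for a self-service provisioning API."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}               # id -> state

    def spin_up(self):
        vm_id = next(self._ids)
        self.instances[vm_id] = "running"
        return vm_id

    def spin_down(self, vm_id):
        # Ephemeral: the machine simply vanishes when released.
        del self.instances[vm_id]

cloud = ToyCloud()
batch = [cloud.spin_up() for _ in range(10)]   # burst of 10 workers
for vm_id in batch:
    cloud.spin_down(vm_id)                     # and they're gone
```

The point is the shape of the loop: once machines are API calls rather than purchase orders, a burst of ten workers is a one-liner, and so is releasing them.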
Okay, so these things mean we have applications that run “virtually” – that is, they’re divorced from the underlying hardware. One machine can do ten things; ten machines can do one thing.
This is the “technical” definition of cloud computing: virtualized, automated, self-service computing resources. Some people call this a “private cloud”; others think it’s just IT-done-right. Whatever the case, data centers are furiously retooling themselves, much to the enjoyment of companies like VMware and Citrix.
Part three: Stacks and the separation of concerns
At its simplest, this is all about a “stack” of services. Stacks are a common idea in computing and networking. Basically, they’re a separation of different tasks.
We’re familiar with the idea of a stack. There’s a stack in the postal service.
You worry about the address, and the stamp. The postal service handles the rest—it doesn’t care what’s inside your envelope; and you don’t care what route your letter takes to its destination, as long as it gets there.
But wait -- there’s more! There’s another way to look at cloud computing.
Notice that so far, nothing I’ve said about clouds implies you can’t just run your own. Up until now, they’ve been DIY.
This is the clouds-as-a-business-model definition. In this, cloud computing is a third-party service.
All of the things we’ve seen about cloud technology make it possible to deliver computing as a utility -- computing on tap.
The virtualization provides a blood/brain barrier between the application the user is running, and the machines on which it runs.
That means you can focus on the thing your business does that makes you special
And stop worrying about many of the tasks you really didn’t want to do anyway.
Sharing and economies of scale keep costs down. Cloud providers are poised to make the most of these economies of scale. Consider that in July 2008, Microsoft revealed that it had 96,000 servers at the Quincy facility, consuming "about 11 megawatts"
More than 80% dedicated to Microsoft's Live Search and the remaining for Hotmail
In August, a really good discovery was posted to a blog called "istartedsomething.com":  a screen shot of a software dashboard that illustrates power consumption and server count at each of Microsoft's fifteen data centers, caught in a Microsoft video posted to their web site.
The move towards the cloud business model has a lot to do with the economies of scale that exist when you can concentrate infrastructure, and put it near dams. (There’s a good—if hotly debated argument—that clouds-as-a-business-model are inevitable, because of the economics.)
Cloud providers are thinking at a scale that nearly every enterprise can’t compete with. That’s because operating efficiency, and accounting for everything, are core to their business; whereas making widgets is core to yours.
Self-service means customers can deploy and destroy their own machines.
So while you can build an automated, self-service, on-demand private cloud, there are also many public options (is that a bad word in DC?)
Most of the time, when you hear someone say they’re concerned about the security of cloud computing, they’re talking about public clouds, and the issues that come with putting your data somewhere virtually but not knowing where it is physically.
So far, while I’ve told you a lot about clouds, I haven’t really told you what they are. That’s partly because there are many kinds of cloud computing. We can separate clouds into three distinct groups.
The first is called Infrastructure as a Service, because you’re renting pieces of (virtual) infrastructure.
This is what IT people think of when you say “clouds” – virtual machines I can use for just an hour. Here’s Amazon’s “menu” of machines.
A great example of these clouds in action is what the Washington Post did with Hillary Clinton’s diaries during her campaign. They needed to get all 17,481 pages of Hillary Clinton’s White House schedule scanned and searchable quickly. Using 200 machines, the Post was able to get the data to reporters in only 26 hours. In fact, the experiment is even more compelling: Desktop OCR took about 30 minutes per page to properly scan, read, resize, and format each page – which means that it would have taken nearly a year, and cost $123 in power, to do the work on a single machine.
In an IaaS model, you’re getting computers as a utility. The unit of the transaction is a virtual machine. It’s still up to you to install an operating system, and software, or at least to choose it from a list. You don’t really have a machine -- you have an image of one, and when you stop the machine, it vanishes.
Most applications consist of several machines -- web, app, and database, for example. Each is created from an image, and some, like databases, may use other services from the cloud to store and retrieve data from a disk
If you run out of capacity, you can upgrade to a bigger machine (which is called “scaling vertically.”)
Or you can create several machines at each tier, and use a load balancer to share traffic between them. These kinds of scalable, redundant architectures are common -- nay, recommended -- in a cloud computing world where everything is uncertain.
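At this level, a load balancer's core job is just spreading requests across the duplicated tier. A minimal round-robin sketch; the backend names are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests across a pool of identical backends."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend in rotation and hand it the request.
        backend = next(self._cycle)
        return backend, request

lb = RoundRobinBalancer(["web-1", "web-2"])
targets = [lb.route(f"req-{i}")[0] for i in range(4)]
# Alternates: web-1, web-2, web-1, web-2
```

Real balancers add health checks and weighting, but the rotation above is the essential idea behind sharing traffic between duplicated machines.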
The second kind of cloud is called Platform as a Service. In this model, you don’t think about the individual machines—instead, you just copy your code to a cloud, and run it. You never see the machines. In a PaaS cloud, things are very different.
- You write your code; often it needs some customization.
- That code runs on a shared processing platform
- Along with other people’s code
- The code calls certain functions to do things like authenticate a user, handle a payment, store an object, or move something to a CDN
- To keep everything running smoothly (and bill you) the platform has a scheduler (figuring out what to do next) and a governor (ensuring one program doesn’t use up all the resources) as well as a console.
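The shape of PaaS code, then, is a handler plus calls into platform services. A schematic sketch; the `FakePlatform` object and its methods are invented stand-ins for whatever APIs a real PaaS exposes:

```python
class FakePlatform:
    """Stand-in for a PaaS runtime's built-in services."""

    def __init__(self):
        self._store = {}

    def authenticate(self, token):
        # Real platforms handle auth for you; here any non-empty token passes.
        return bool(token)

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

def handle_request(platform, token, key, value):
    """Your code: everything else is the platform's problem."""
    if not platform.authenticate(token):
        return "403 Forbidden"
    platform.put(key, value)
    return f"200 OK stored {key}"

platform = FakePlatform()
status = handle_request(platform, "user-token", "greeting", "hello")
```

Notice that the handler never touches a machine, a disk, or a socket; it only calls platform functions, which is exactly what makes it portable and governable.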
Here’s a shot of some code running in Google App Engine. I only know that I’m paying by CPU-hour, or for units like bandwidth, email, or storage. This could be one machine whose CPU was used 8%, or a hundred, or a thousand. I don’t know.
I can see the logs for my application. But these aren’t for a single machine -- they’re for the application itself, everywhere.
I can even find out what parts of my code are consuming the most CPU, across all machines.
And even their latency when served to people.
It’s a true, pure utility because you pay for what you use.
This is a very different model from IaaS. On the one hand, it’s more liberating, because you don’t have to worry about managing the machines. On the other hand, it’s more restrictive, because you can only do what the PaaS lets you.
In the case of Google’s App Engine, you have to use their functions and store things in the way they want you to. You get great performance from doing so, but it probably means rewriting your code a bit.
PaaS platforms impose usage caps and billing tiers. Here’s Google App Engine’s set of quotas and free caps.
In the case of Salesforce’s Force.com, you have to use an entirely new programming language, called Apex.
The third kind of cloud is called Software as a Service, or SaaS. Some people argue that this isn’t a cloud at all, just a new way of delivering software. But it’s also what the masses—the non-technologists—think cloud computing means.
(Personally, I think this makes the term “cloud” synonymous with “web” or “Internet”, and therefore a bit useless.)
SaaS and PaaS are blurring, too, with the advent of scripting languages. Nobody would dispute that Google Apps is a SaaS offering; but now that you can write code for it -- as in this example of a script that sends custom driving directions to everyone in a spreadsheet -- the distinction is less and less clear.
But the business model of SaaS is the same as PaaS and IaaS: Sell IT on demand, rather than as software or machines.
It’s the form of cloud computing that gets the most lip service in areas like government, particularly with Google Apps.
This division between PaaS and IaaS is a bit of a fiction. In fact, virtual machines -- a service called EC2 -- are just one of around twenty “cloud services” Amazon offers.
The same is true of App Engine - though these are functions called from code, rather than services you pay for separately, they’re still more than just the code.
This is a really important concept: Clouds aren’t just virtual machines. Clouds are on-demand computing services.
To understand this, we need to talk for a minute about “composed designs.”
When IT architects want to build something, they have a set of proven designs for doing so. A database is an example of this—it’s a combination of storage (disk) and a particular way of arranging things (tables and indexes) and language (structured query language, or SQL). We’ve learned that a database is a good prefab building block, so we use it. The alternative is to build it all, from scratch, writing to the disk itself.
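The database example above is easy to see in code: Python ships SQLite in its standard library, so the storage, the table arrangement, and the SQL language arrive as one prefab component rather than something you build from raw disk writes.

```python
# The "prefab building block" in practice: one composed component
# gives you storage + tables/indexes + a query language.
import sqlite3

conn = sqlite3.connect(":memory:")  # storage (here, in memory)
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")  # arrangement: tables
conn.execute("INSERT INTO users VALUES (?, ?)", ("ada", "admin"))
row = conn.execute(  # language: SQL
    "SELECT role FROM users WHERE name = ?", ("ada",)
).fetchone()
print(row[0])  # admin
```

The alternative the talk mentions -- writing to the disk itself and inventing your own layout and query code -- is exactly what this four-line block lets you skip.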
There are other examples of “composed designs” in IT, many of them made from several components. For instance, consider the “message bus.” This is a thing you put messages into, and anyone who wants them can grab a copy of the message. Stock exchanges use publish-and-subscribe message buses to move data around.
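A toy sketch of the publish-and-subscribe pattern just described: publishers put messages on a topic, and every subscriber to that topic gets its own copy.

```python
# Minimal publish-and-subscribe message bus sketch.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to receive copies of messages on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscriber to the topic gets its own copy of the message.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("quotes", received.append)
bus.subscribe("quotes", lambda m: received.append(m.upper()))
bus.publish("quotes", "acme 42.10")
print(received)  # both subscribers saw the message
```

Real busses add durability, ordering, and delivery guarantees; the sketch only shows the decoupling that makes the pattern a reusable building block.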
A third example is called a key-value data store. In this case, I put in a key (say, ”username”) and a value (say, “Palin”). Then it’s stored for me. It’s much less fancy than a database, but also much faster and more scalable, and can be backed up more easily so it’s more reliable.
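The key-value store interface is small enough to sketch in full; that smallness is precisely why it is easier to scale, replicate, and back up than a relational database.

```python
# Minimal key-value store sketch: put a key, get a value back.
# No schemas, no query language -- just keys and values.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("username", "Palin")
print(store.get("username"))  # Palin
```

Production stores (memcached, Amazon’s SimpleDB-era services, and their successors) wrap this same two-operation interface in networking and replication.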
When architects want to build an application today, they don’t do so by building everything from scratch. Today’s applications are built on the shoulders of giants—message busses, data stores, authentication systems, payment tools, content delivery networks, and so on.
As a result, cloud providers offer a variety of these services. Rackspace has a storage product called Jungledisk; Amazon has S3. The machines that Rackspace or Amazon offer “chew” on data from these storage services.
If you equate cloud computing with just virtual machines, you’re missing the real point. Cloud applications are built from composed designs, and one of the components happens to be virtual machines.
So let’s put this in perspective: There are public and private cloud models. Private ones are about the technology; public ones are about the business of outsourcing at scale. And there are Infrastructure, Platform, and Software offerings—IaaS, PaaS, and SaaS.
If someone wants to have a conversation with me about clouds, they need to pick a tier, and a private or public model. Then we can compare facts.
Just knowing these two dimensions makes you smarter than nearly everyone in IT right now. And when you’re discussing IT, insist that others are specific about what they mean. Discussions around privacy and security are vital to public clouds, but largely unchanged for private ones. Similarly, lock-in is a real concern in PaaS but negligible in IaaS.
Lots of people want to move into this space. Some are e-commerce giants (like Amazon) who know how to run many machines well.
Some are software companies with legions of developers (like Microsoft) who want to move from software licenses to recurring revenues.
Some are managed hosting companies (like Rackspace, Terremark, and Gogrid) who want to sell computing by the hour instead of by the month, and want to have more standardized offerings.
Some are giant service companies (like Google) who want people to create millions of applications and keep people using the Web.
Some are big systems integrators (like IBM) who want to design and run IT for enterprises.
Some are hardware vendors (like Dell) who want to stay in the computing business as it shifts.
Some are telecom providers (like AT&T and Verizon) who want to do more than move packets around, and want to make the best use of their existing data centers.
Some are even government organizations aiming to build infrastructure for the use of the government itself.
This isn’t a comfy place to be right now. Cloud computing has what I call a “roofrack” problem.
Cloud computing isn’t something you can easily ignore.
For some applications, particularly those that are bursty or seasonal, the economics are overwhelmingly in its favor.
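The burst-versus-steady economics can be sketched with back-of-envelope arithmetic. All of the prices below are made-up illustrative assumptions, not real AWS or hardware figures.

```python
# Back-of-envelope sketch of why bursty workloads favor on-demand pricing.
# Both prices are illustrative assumptions, not real quotes.

server_cost_per_year = 6000.0  # assumed amortized annual cost of an owned server
on_demand_rate = 1.00          # assumed $/hour for an equivalent cloud instance

def cheaper_option(busy_hours_per_year: float) -> str:
    """Compare renting by the hour against owning outright for a given load."""
    cloud_cost = busy_hours_per_year * on_demand_rate
    return "cloud" if cloud_cost < server_cost_per_year else "in-house"

print(cheaper_option(72))    # a 3-day annual burst: rent it
print(cheaper_option(8760))  # runs around the clock: own it
```

The crossover point is simply where hourly spend equals the amortized cost of ownership; seasonal workloads sit far below it, constant ones above it.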
Cloud providers keep making their stuff better. Amazon introduced roughly 40 new features last year; and in a single month they upgraded their network in New York twice.
And clouds make organizations more agile, because they take procurement from weeks to minutes.
They also remove the false sense of security that came from expense limits.
These days, supercomputing is easier (and cheaper) than booking a flight.
Because there’s no investment, the concept of an ROI doesn’t really make sense.
Even if you’re only going to run a private cloud, you’re dealing with expectations set by the public Internet. Consider an ATM – once, we didn’t mind taking all of lunch to get money out; today, we worry when the bank machine fails to give us our money back in 10 minutes. That’s a bad thing for organizations that don’t handle IT automatically; humans simply can’t move that fast. Efficiency isn’t about how fast you do things; it’s about how many things you don’t have to do because they’re automated.
The Internet has a way of routing around obstacles, so if you try to block people from using clouds, you’ll likely send your stakeholders underground.
The best thing to do is offer people an alternative. Set up self-service computing internally and see what happens.
It also means surrounding them with composed services like storage and message queues. Fortunately, there is a wide variety of offerings to help with this. Hadoop, Cassandra, CouchDB, Hypertable and others are all tools that handle storage, scaling, and parallel tasks, and that you can deploy internally for your users.
It also means setting up platforms (such as a web server that can handle PHP code, a Drupal platform for creating social sites, a Status.net instance for microblogging, or a Wordpress instance for blogs).
Finally, it means working with SaaS providers when appropriate, but integrating their applications with your internal data and processes.
For IT, and governments, cloud computing is a trigger. It means it’s time to rebalance your computing decisions.
With clouds, there’s a spectrum of IT options. Different applications live in different places in this new world.
Here’s a five-step plan for embracing clouds.
First, you need to assess your existing applications. Make a list of everything you’ve got, or plan to have. You should also baseline usage, performance, and other “before” metrics so you can compare them to the results of your efforts after you’ve moved.
Then, you need to rebalance your applications. Evaluate each application along two dimensions: how suitable is the application for migration, and what’s the payoff.
Some applications, like legacy ERPs or old mainframe tools, won’t migrate easily. They’re not well suited to a virtualized, on-demand model where users can spin up resources as needed.
Others, like web front-ends or parallel data-processing tasks such as analytics that can be split up, work really well in clouds.
At the same time, some applications won’t benefit much from a cloud model. Something that runs constantly may be more affordable to run in-house.
Other applications may have a massive budget savings when they move to the cloud. Something that happens once a year but needs tremendous computing for the three days it runs is a candidate for clouds. So, too, is something that users are constantly requesting, and that your IT team spends a lot of time managing. Automate it!
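The two-dimensional rebalancing exercise can be sketched as a simple scoring pass: rate each application for migration suitability and payoff, then rank. The applications and scores below are made-up examples for illustration.

```python
# Sketch of the rebalancing step: score apps on two dimensions
# (migration suitability, payoff) and rank them. Scores are illustrative.

apps = [
    {"name": "legacy ERP",       "suitability": 1, "payoff": 2},
    {"name": "web front-end",    "suitability": 5, "payoff": 4},
    {"name": "annual analytics", "suitability": 4, "payoff": 5},
    {"name": "steady batch job", "suitability": 4, "payoff": 1},
]

def migration_order(apps):
    """High-payoff, easy-to-move applications first."""
    return sorted(apps, key=lambda a: a["suitability"] * a["payoff"], reverse=True)

for app in migration_order(apps):
    print(app["name"])
```

A real assessment would use the baseline metrics gathered in step one rather than gut-feel scores, but the ordering logic is the same: migrate the top of the list first.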
Going forward, we’ll see hybrid on-premises/on-demand clouds that can intelligently move processing tasks between private and public infrastructure according to performance requirements, pricing policies, and security restrictions.
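A hypothetical placement policy for such a hybrid cloud might look like the sketch below; the thresholds and rules are invented to illustrate the three inputs the talk names (security, capacity/performance, and price), not drawn from any real scheduler.

```python
# Hypothetical hybrid-cloud placement policy sketch.
# Thresholds and field names are illustrative assumptions.

def place_task(task: dict, private_load: float, public_price: float,
               price_ceiling: float = 1.0) -> str:
    if task.get("sensitive"):          # security restriction: keep it in-house
        return "private"
    if private_load < 0.8:             # private capacity still available
        return "private"
    if public_price <= price_ceiling:  # pricing policy allows bursting out
        return "public"
    return "queue"                     # otherwise defer until prices or load drop

print(place_task({"sensitive": True},  private_load=0.9,  public_price=0.5))
print(place_task({"sensitive": False}, private_load=0.95, public_price=0.5))
```

The interesting part is the ordering of the rules: security restrictions are absolute, while capacity and price are traded off against each other.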
Third step: You have to migrate things to the new environments. This means moving stuff around—hopefully the high-payoff, easy-to-move stuff first. There’s no magic here: you’ll need to make your applications portable, which means virtualizing them; and you may need to modify some code.
Step four is to optimize things. In their new homes, some applications won’t perform as well. You’ll need to compare how they’re doing now to how they were doing before, and tweak things to ensure equivalent performance, uptime, security, and scalability.
Finally, in step five you need to operate things differently. Cloud computing is as much a cultural shift in IT as a technical one: you’re operating a self-service business.
You’re not doing the IT work any more; you’re managing the scripts and systems that let users do the IT work themselves. You have a very different relationship with your end users.
You’re providing the environment for them to innovate, giving them turnkey sets of services with which to work. Where they come from is immaterial.
You’re ensuring that the systems you’ve built are functioning properly however end users want to use them, rather than running the applications or data within those systems.
Your end users aren’t necessarily technical, but they’re able to build applications easily, and they want the tools to experiment.
At the same time, you’re seeing what tools and processes are getting adopted -- what’s working? what’s popular? -- and doubling down on those things.
You’re giving your users places to experiment.
To some extent, you’re “paving the cowpaths.”
This is an old civil engineering trick: Watch where people walk, then put paths there.
One of the fundamentals of a cloud is the separation of the provider from the user at some layer in the stack.
Where that separation happens determines economics, responsibilities, risk, and lock-in