Cloud computing skepticism - But I'm sure

Slide notes:
  • Details may vary somewhat by site or by moment in time.
  • Video download – external traffic; Search application – internal traffic
  • Requests from the Internet are IP (layer 3) routed through border and access routers to a layer 2 domain based on the destination VIP address. A single layer 2 domain contains 4,000 servers; the layer 2 domain is divided into subnets using VLANs configured on the layer 2 switches.
  • Target markets for the service
Transcript:

    1. Abhishek Verma, Saurabh Nangia
    2.
       - Cloud computing hype
       - Cynicism
       - MapReduce vs. Parallel DBMS
       - Cost of a cloud
       - Discussion
    3.
       - Google App Engine (April 2008)
       - Microsoft Azure (October 2008)
       - Facebook Platform (May 2007)
       - Amazon EC2 (August 2006)
       - Amazon S3 (March 2006)
       - Salesforce AppExchange (March 2006)
    4. (image-only slide)
    5. Cloud Computing on the hype cycle (* from http://en.wikipedia.org/wiki/Hype_cycle)
    6. (image-only slide)
    7. "Cloud computing is simply a buzzword used to repackage grid computing and utility computing, both of which have existed for decades."
       – whatis.com definition of cloud computing
    8. "The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. [...] The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"
       – Larry Ellison, during Oracle's Analyst Day (from http://blogs.wsj.com/biztech/2008/09/25/larry-ellisons-brilliant-anti-cloud-computing-rant/)
    9. (cartoon) From http://geekandpoke.typepad.com
    10.
        - Many enterprises (necessarily or unnecessarily) set their SLA uptimes at 99.99% or higher, which cloud providers have not yet been prepared to match
        - It is not clear that all applications require such high service levels
        - IT shops do not always deliver on their SLAs, but their failures are less public and customers can't switch easily
        - Amazon's cloud outages receive a lot of exposure:
          - July 20, 2008: failure due to stranded zombies, lasts 5 hours
          - Feb 15, 2008: authentication overload leads to a two-hour service outage
          - October 2007: service failure lasts two days
          - October 2006: security breach where users could see other users' data
        - ... and their current SLAs don't match those of enterprises*: Amazon EC2 99.95%, Amazon S3 99.9% (made concrete in the sketch below)
        * SLAs expressed in monthly uptime percentages; source: McKinsey & Company
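
        To make the uptime percentages concrete, here is a minimal Python sketch that converts a monthly uptime SLA into the downtime it permits (the 30-day month is an assumption; the percentages are the ones quoted on the slide):

            # Translate a monthly uptime SLA into allowed downtime.
            MINUTES_PER_MONTH = 30 * 24 * 60  # assume a 30-day month

            def allowed_downtime_minutes(uptime_pct: float) -> float:
                """Minutes of downtime per month permitted by an uptime percentage."""
                return MINUTES_PER_MONTH * (1 - uptime_pct / 100.0)

            for name, pct in [("Enterprise SLA", 99.99),
                              ("Amazon EC2", 99.95),
                              ("Amazon S3", 99.9)]:
                print(f"{name}: {pct}% -> {allowed_downtime_minutes(pct):.1f} min/month")
            # Enterprise SLA: 99.99% -> 4.3 min/month
            # Amazon EC2: 99.95% -> 21.6 min/month
            # Amazon S3: 99.9% -> 43.2 min/month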
    11. Andrew Pavlo, Erik Paulson, Alexander Rasin, Daniel J. Abadi, David J. DeWitt, Samuel Madden, Michael Stonebraker. To appear in SIGMOD '09.
        * Basic ideas from "MapReduce - a major step backwards", D. DeWitt and M. Stonebraker
    12.
        - A giant step backward
          - No schemas, Codasyl instead of Relational
        - A sub-optimal implementation
          - Uses brute-force sequential search instead of indexing
          - Materializes O(m·r) intermediate files
          - Does not incorporate data skew
        - Not novel at all
          - Represents a specific implementation of well-known techniques developed nearly 25 years ago
        - Missing most of the common current DBMS features
          - Bulk loader, indexing, updates, transactions, integrity constraints, referential integrity, views
        - Incompatible with DBMS tools
          - Report writers, business intelligence tools, data mining tools, replication tools, database design tools
    13.
        Architectural Element | Parallel Databases                         | MapReduce
        Schema Support        | Structured                                 | Unstructured
        Indexing              | B-trees or hash based                      | None
        Programming Model     | Relational                                 | Codasyl
        Data Distribution     | Projections before aggregation             | Logic moved to data, but no optimizations
        Execution Strategy    | Push                                       | Pull
        Flexibility           | No, but Ruby on Rails, LINQ                | Yes
        Fault Tolerance       | Transactions restarted in event of failure | Yes: replication, speculative execution
    14.
        - MapReduce didn't kill our dog, steal our car, or try and date our daughters
        - MapReduce is not a database system, so don't judge it as one
          - Both analyze and perform computations on huge datasets
        - MapReduce has excellent scalability; the proof is Google's use
          - Does it scale linearly? No scientific evidence
        - MapReduce is cheap and databases are expensive
        - We are the old guard trying to defend our turf/legacy from the young turks
          - Propagation of ideas between sub-disciplines is very slow and sketchy
          - Very little information is passed from generation to generation
        * http://www.databasecolumn.com/2008/01/mapreduce-continued.html
    15.
        - Hadoop
          - 0.19 on Java 1.6, 256MB block size, JVM reuse
          - Rack-awareness enabled
        - DBMS-X (unnamed)
          - Parallel DBMS from a "major relational database vendor"
          - Row based, compression enabled
        - Vertica (co-founded by Stonebraker)
          - Column oriented
        - Hardware configuration: 100 nodes
          - 2.4 GHz Intel Core 2 Duo
          - 4GB RAM, two 250GB SATA hard disks
          - GigE ports, 128 Gbps switching fabric
    16.
        - Hadoop
          - Command-line utility
        - DBMS-X
          - LOAD SQL command
          - Administrative command to reorganize the data
        - Grep dataset
          - Record = 10-byte key + 90-byte random value
          - 5.6 million records = 535MB/node
          - Another set = 1TB/cluster
    17. SELECT * FROM Data WHERE field LIKE '%XYZ%';
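
        For comparison, the grep query above can be expressed as a map-only job. Below is a minimal Hadoop Streaming mapper sketch in Python; the 10-byte-key/90-byte-value record layout and the 'XYZ' pattern come from the preceding slides, while using Streaming (rather than the paper's native Java MapReduce program) is an assumption for illustration:

            #!/usr/bin/env python
            # Map-only grep: emit each record whose value field contains the pattern,
            # mirroring SELECT * FROM Data WHERE field LIKE '%XYZ%'.
            import sys

            PATTERN = "XYZ"

            for line in sys.stdin:
                record = line.rstrip("\n")
                key, value = record[:10], record[10:]   # 10-byte key, 90-byte value
                if PATTERN in value:
                    print(f"{key}\t{value}")            # identity output, tab-separated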
    18. SELECT pageURL, pageRank FROM Rankings WHERE pageRank > X;
    19. SELECT INTO Temp sourceIP,
               AVG(pageRank) AS avgPageRank,
               SUM(adRevenue) AS totalRevenue
        FROM Rankings AS R, UserVisits AS UV
        WHERE R.pageURL = UV.destURL
          AND UV.visitDate BETWEEN Date('2000-01-15') AND Date('2000-01-22')
        GROUP BY UV.sourceIP;

        SELECT sourceIP, totalRevenue, avgPageRank
        FROM Temp
        ORDER BY totalRevenue DESC
        LIMIT 1;
    20.
        - DBMS-X is 3.2x faster and Vertica 2.3x faster than Hadoop
        - Parallel DBMSs win because of:
          - B-tree indices that speed up selection operations
          - novel storage mechanisms (e.g., column orientation)
          - aggressive compression techniques with the ability to operate directly on compressed data
          - sophisticated parallel algorithms for querying large amounts of relational data
        - Ease of installation and use
        - Fault tolerance?
        - Loading data?
    21. Albert Greenberg, James Hamilton, David A. Maltz, Parveen Patel (MSR Redmond). Presented by: Saurabh Nangia
    22.
        - Cost of cloud service
        - Improving low utilization
          - Network agility
          - Incentives for resource consumption
          - Geo-distributed network of DCs
    23. Where does the cost go in today's cloud service data centers?
    24. Amortized costs (one-time purchases amortized over reasonable lifetimes, assuming 5% cost of money): 45% / 25% / 15% / 15% (chart; the categories, per the following slides, are servers, infrastructure, power, and networking)
    25. Can existing solutions for the enterprise data center work for cloud service data centers?
    26.
        - In the enterprise
          - Leading cost: operational staff
          - Automation is partial
          - IT staff : servers = 1:100
        - In the cloud
          - Staff costs under 5%
          - Automation is mandatory
          - IT staff : servers = 1:1000
    27.
        - Large economies of scale
          - Cloud DCs leverage economies of scale
          - But up-front costs are high
        - Scale out
          - Enterprise DCs "scale up"
          - Cloud DCs "scale out"
    28. Mega data centers
        - Tens of thousands (or more) of servers
        - Drawing tens of megawatts of power (at peak)
        - Massive data analysis applications
          - Huge RAM, massive CPU cycles, disk I/O operations
        - Advantages
          - Cloud services applications build on one another
          - Eases system design
          - Lowers the cost of communication needs
    29. Micro data centers
        - Thousands of servers
        - Drawing power peaking in the hundreds of kilowatts
        - Highly interactive applications
          - Query/response, office productivity
        - Advantages
          - Used as nodes in a content distribution network
          - Minimize speed-of-light latency
          - Minimize network transit costs to the user
    31. Example:
        - 50,000 servers
        - $3,000 per server
        - 5% cost of money
        - 3-year amortization
        - Amortized cost = 50,000 * 3,000 * 1.05 / 3 = $52.5 million per year!! (see the sketch below)
        - Utilization is remarkably low, ~10%
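
        The same back-of-the-envelope arithmetic as a minimal Python sketch (it reproduces the slide's simplified formula: purchase price marked up by the 5% cost of money and spread evenly over the amortization period, rather than a full annuity calculation):

            def amortized_cost_per_year(units, unit_cost, cost_of_money, years):
                # Simplified straight-line amortization with a flat cost-of-money markup.
                return units * unit_cost * (1 + cost_of_money) / years

            servers = amortized_cost_per_year(50_000, 3_000, 0.05, 3)
            print(f"Servers: ${servers / 1e6:.1f}M per year")   # Servers: $52.5M per year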
    32.
        - Uneven application fit
        - Uncertainty in demand forecasts
        - Long provisioning time scales
        - Risk management
        - Hoarding
        - Virtualization shortfalls
    33.
        - Solution: agility
          - to dynamically grow and shrink resources to meet demand, and
          - to draw those resources from the most optimal location
        - Barrier: the network
          - Increases fragmentation of resources
          - Therefore, low server utilization
    34.
        - Infrastructure is an overhead of the cloud DC
        - Facilities dedicated to
          - Consistent power delivery
          - Evacuating heat
        - Large-scale generators, transformers, UPS
        - Amortized cost: $18.4 million per year!!
          - Infrastructure cost: $200M
          - 5% cost of money
          - 15-year amortization
    35.
        - Reason for the high cost: the requirement to deliver consistent power
        - Relaxing that requirement implies scaling out
        - Deploy larger numbers of smaller data centers
          - Resilience at the data center level
          - Layers of redundancy within a data center can be stripped out (no UPS and generators)
        - Geo-diverse deployment of micro data centers
    36.
        - Power Usage Efficiency (PUE) = (Total Facility Power) / (IT Equipment Power)
        - Typically PUE ~ 1.7
          - Inefficient facilities: PUE of 2.0 to 3.0
          - Leading facilities: PUE of 1.2
        - Amortized cost = $9.3 million per year!! (sanity-checked in the sketch below)
          - PUE: 1.7
          - $0.07 per kWh
          - 50,000 servers, each drawing 180W on average
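
        The slide's power bill can be sanity-checked with a short sketch: the total facility draw is the IT load scaled by PUE, billed at $0.07 per kWh over a year (the 8,760-hour year is an assumption; the sketch lands near $9.4M, which the slide quotes as $9.3M):

            HOURS_PER_YEAR = 365 * 24

            def yearly_power_cost(servers, watts_per_server, pue, dollars_per_kwh):
                it_load_kw = servers * watts_per_server / 1000.0   # IT equipment power
                facility_kw = it_load_kw * pue                     # total facility power
                return facility_kw * HOURS_PER_YEAR * dollars_per_kwh

            cost = yearly_power_cost(50_000, 180, 1.7, 0.07)
            print(f"Power: ${cost / 1e6:.1f}M per year")   # Power: $9.4M per year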
    37.
        - Decreasing power cost also decreases the need for infrastructure spending
        - Goal: energy proportionality (illustrated below)
          - a server running at N% load should consume N% power
        - Hardware innovation
          - High-efficiency power supplies
          - Voltage regulation modules
        - Reduce the amount of cooling for the data center
          - Equipment failure rates increase with temperature
          - Make the network more mesh-like and resilient
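
        A small sketch of what energy proportionality means in practice; the linear idle-plus-dynamic model and the idle-power figure are illustrative assumptions, not numbers from the slides:

            PEAK_W = 180.0   # peak draw, matching the earlier power-cost slide
            IDLE_W = 100.0   # assumed idle draw of a non-proportional server

            def proportional_power(load):
                # Energy-proportional goal: N% load -> N% of peak power.
                return PEAK_W * load

            def typical_power(load):
                # Common simplification: fixed idle draw plus a load-proportional part.
                return IDLE_W + (PEAK_W - IDLE_W) * load

            for load in (0.1, 0.5, 1.0):
                print(f"load {load:4.0%}: proportional {proportional_power(load):5.1f} W, "
                      f"typical {typical_power(load):5.1f} W")
            # At the ~10% utilization cited earlier, the typical server still draws
            # well over half of its peak power.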
    38.
        - Capital cost of networking gear
          - Switches, routers and load balancers
        - Wide-area networking
          - Peering: traffic handed off to ISPs for end users
          - Inter-data-center links between geo-distributed DCs
          - Regional facilities (backhaul, metro-area connectivity, co-location space) to reach interconnection sites
        - Back-of-the-envelope calculations are difficult
    39.
        - Sensitive to site selection and industry dynamics
        - Solutions:
          - Clever design of peering and transit strategies
          - Optimal placement of micro and mega DCs
          - Better design of services (partitioning state)
          - Better data partitioning and replication
    40.
        - On is better than off
          - A server should be engaged in revenue production
          - Challenge: agility
        - Build in resilience at the systems level
          - Strip out layers of redundancy inside each DC, and instead use other DCs to mask a DC failure
          - Challenge: systems software and network research
    41. * http://perspectives.mvdirona.com/2008/11/28/CostOfPowerInLargeScaleDataCenters.aspx
    43.
        - Increasing network agility
        - Appropriate incentives to shape resource consumption
        - Joint optimization of network and DC resources
        - New mechanisms for geo-distributing state
    44.
        - Any server can be dynamically assigned to any service anywhere in the DC
        - Conventional DCs
          - Fragment network and server capacity
          - Limit dynamic growth and shrinking of server pools
    45.
        - The DC network carries two types of traffic
          - Between external end systems and internal servers
          - Between internal servers
        - Load balancer
        - Virtual IP address (VIP)
        - Direct IP address (DIP) (see the sketch below for how the three fit together)
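
        A minimal sketch of how these pieces relate: external requests target a service's VIP, and a load balancer spreads them across the DIPs of the servers in that service's pool. The addresses and the hash-on-client-IP policy below are illustrative assumptions:

            import hashlib

            # Each service's VIP maps to a pool of server DIPs (hypothetical addresses).
            VIP_POOLS = {
                "10.0.0.1": ["192.168.1.10", "192.168.1.11", "192.168.1.12"],
                "10.0.0.2": ["192.168.2.10", "192.168.2.11"],
            }

            def pick_dip(vip, client_ip):
                # Pick one DIP from the VIP's pool, keeping a client on the same server.
                pool = VIP_POOLS[vip]
                digest = hashlib.md5(client_ip.encode()).hexdigest()
                return pool[int(digest, 16) % len(pool)]

            print(pick_dip("10.0.0.1", "203.0.113.7"))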
    47.
        - Static network assignment
          - Individual applications are mapped to specific physical switches and routers
          - Advantage: performance and security isolation
          - Disadvantage: works against agility
            - Policy-overloaded (traffic, security, performance)
            - VLAN spanning concentrates traffic on links high in the tree
    48.
        - Load balancing techniques
          - Destination NAT
            - Requires all DIPs in a VIP's pool to be in the same layer 2 domain
            - Under-utilization and fragmentation
          - Source NAT
            - Servers can be spread across layer 2 domains
            - But the server never sees the client IP
              - The client IP is required for data mining and response customization
    49.
        - Poor server-to-server connectivity
          - Connections between servers in different layer 2 domains must go through layer 3
          - Links are oversubscribed
            - Capacity of links between access routers and border routers < output capacity of the servers connected to an access router
          - Must ensure no saturation in any of the network links!
    50.
        - Proprietary hardware scales up, not out
          - Load balancers are used in pairs
          - Replaced when the load becomes too much
    51.
        - Location-independent addressing
          - Decouple a server's location in the DC from its address
        - Uniform bandwidth and latency
          - Servers can be distributed arbitrarily in the DC without fear of running into bandwidth choke points
        - Security and performance isolation
          - One service should not affect another's performance
          - DoS attacks
    52.
        - Yield management
          - Sell the right resources to the right customer at the right time for the right price
        - Trough filling (illustrated in the sketch below)
          - Cost is determined by the height of the peaks, not the area
          - Bin-packing opportunities
            - Leasing committed capacity with a fixed minimum cost
            - Prices varying with resource availability
            - Differentiating demands by urgency of execution
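
        A small sketch of the "cost follows the peak, not the area" point: provisioned capacity is set by peak demand, so delay-tolerant work scheduled into the troughs raises utilization without raising cost. The demand numbers are made up for illustration:

            interactive = [20, 15, 10, 12, 30, 55, 70, 60, 40, 25, 22, 21]  # servers needed per slot
            peak = max(interactive)             # provisioned capacity is set by the peak
            batch_work = 120                    # server-slots of delay-tolerant work to place

            schedule, remaining = [], batch_work
            for demand in interactive:
                fill = min(peak - demand, remaining)   # fill each trough up to the peak
                schedule.append(fill)
                remaining -= fill

            before = sum(interactive) / (peak * len(interactive))
            after = (sum(interactive) + sum(schedule)) / (peak * len(interactive))
            print(f"peak capacity: {peak} servers, leftover batch work: {remaining}")
            print(f"utilization: {before:.0%} -> {after:.0%}")   # 45% -> 60%, same peak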
    53.
        - Server allocation
          - Large unfragmented server pools and agility
            - Fewer requests for servers
          - Eliminating hoarding of servers
            - A cost for having a server
          - Seasonal peaks
            - Internal auctions may be fairest
            - But how to design them?
    54.
        - Speed and latency matter
          - Google: 20% revenue loss for a 500ms delay!!
          - Amazon: 1% sales decrease for a 100ms delay!!
        - Challenges
          - Where to place data centers
          - How big to make them
          - Using them as a source of redundancy to improve availability
    55.
        - Importance of geographical diversity
          - Decreasing latency between user and DC
          - Redundancy (earthquakes, riots, outages, etc.)
        - Size of data center
          - Mega DC
            - Extracting maximum benefit from economies of scale
            - Local factors like tax, power concessions, etc.
          - Micro DC
            - Enough servers to provide statistical multiplexing gains
            - Given a fixed budget, place close to each desired population
    56.
        - Network cost
          - Performance vs. cost
          - Latency vs. Internet peering and dedicated lines between data centers
        - Optimization should also consider
          - Dependencies of the services offered
            - Email -> buddy-list maintenance, authentication, etc.
          - Front end: micro data centers (low latency)
          - Back end: mega data centers (greater resources)
    57.
        - Turning geo-diversity into geo-redundancy
          - Distribute critical state across sites
          - Facebook
            - Single master data center replicating data
          - Yahoo! Mail
            - Partitions data across DCs based on user
          - Different solutions for different data
            - Buddy status: replicated, with weak consistency assurance
            - Email: mailboxes partitioned by user id, strong consistency
    58.
        - Tradeoffs
          - Load distribution vs. service performance
            - e.g., Facebook's single master coordinates replication
            - Speeds up lookups but puts load on the master
          - Communication cost vs. service performance
            - Data replication means more inter-data-center communication
            - Longer latency
            - Higher cost of messages over inter-DC links
    59.
        - Data center costs
          - Servers, infrastructure, power, networking
        - Improving efficiency
          - Network agility
          - Resource consumption shaping
          - Geo-diversifying DCs
    61.
        - Richard Stallman, GNU founder
          - Cloud computing is a trap
          - "... cloud computing was simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time."
          - "It's stupidity. It's worse than stupidity: it's a marketing hype campaign."
    62.
        - Open Cloud Manifesto
          - A document put together by IBM, Cisco, AT&T, Sun Microsystems and over 50 others to promote interoperability
          - "Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers."
          - Failed? Google, Amazon, Salesforce and Microsoft, four very big players in the area, are notably absent from the list of supporters
    63.
        - Larry Ellison, Oracle founder
          - "fashion-driven" and "complete gibberish"
          - "What is it? What is it? ... Is it - 'Oh, I am going to access data on a server on the Internet.' That is cloud computing?"
          - "Then there is a definition: What is cloud computing? It is using a computer that is out there. That is one of the definitions: 'That is out there.' These people who are writing this crap are out there. They are insane. I mean it is the stupidest."
    64.
        - Sam Johnston, strategic consultant specializing in cloud computing
          - Oracle would be out badmouthing cloud computing, as it has the potential to disrupt their entire business
          - "Who needs a database server when you can buy cloud storage like electricity and let someone else worry about the details? Not me, that's for sure - unless I happen to be one of a dozen or so big providers who are probably using open source tech anyway."
    65.
        - Marc Benioff, head of salesforce.com
          - "Cloud computing isn't just candyfloss thinking – it's the future. If it isn't, I don't know what is. We're in it. You're going to see this model dominate our industry."
          - Is data really safe in the cloud? "All complex systems have planned and unplanned downtime. The reality is we are able to provide higher levels of reliability and availability than most companies could provide on their own," says Benioff
    66.
        - John Chambers, Cisco Systems' CEO
          - "a security nightmare"
          - "cloud computing was inevitable, but that it would shake up the way that networks are secured ..."
    67.
        - James Hamilton, VP, Amazon Web Services
          - "any company not fully understanding cloud computing economics and not having cloud computing as a tool to deploy where it makes sense is giving up a very valuable competitive edge"
          - "No matter how large the IT group, if I led the team, I would be experimenting with cloud computing and deploying where it makes sense"
    69.
        - "Clearing the air on cloud computing", McKinsey & Company
        - http://geekandpoke.typepad.com/
        - "Clearing the Air - Adobe Air, Google Gears and Microsoft Mesh", Farhad Javidi
        - http://en.wikipedia.org/wiki/Hype_cycle
        - "A Comparison of Approaches to Large-Scale Data Analysis", Pavlo et al.
        - "MapReduce - a major step backwards", D. DeWitt and M. Stonebraker
