Google Cloud Computing on Google Developer 2008 Day


  1. Cloud Computing. Ping Yeh, June 14, 2008
  2-5. Evolution of Computing with the Network (built up over four slides)
     - Network Computing: the network is the computer (client-server); separation of functionalities.
     - Cluster Computing: tightly coupled computing resources (CPU, storage, data, etc.), usually connected within a LAN and managed as a single resource; commodity hardware, open source.
     - Grid Computing: resource sharing across administrative domains; decentralized, open standards, non-trivial service; global resource sharing.
     - Utility Computing: don't buy computers, lease computing power; upload, run, download; a new ownership model.
     Cluster and grid images are from Fermilab and CERN, respectively.
  6-7. The Next Step: Cloud Computing. Services and data are in the cloud, accessible with any device connected to the cloud with a browser. A key technical issue for developers: scalability.
  8-9. Applications on the Web: your user, the cloud, and your coolest web application.
     Internet splat map: http://flickr.com/photos/jurvetson/916142/ (CC-BY 2.0); baby picture: http://flickr.com/photos/cdharrison/280252512/ (CC-BY-SA 2.0).
  10. 「松下問童子，言師採藥去，只在此山中，雲深不知處。」賈島《尋隱者不遇》
     I asked the kid under the pine tree, "Where might your master be?" "He is picking herbs in the mountain," he said, "the cloud is too deep to know where." (Jia Dao, "Didn't Meet the Master," written around 800 AD.)
     Picture: http://flickr.com/photos/soylentgreen23/313880255/ (CC-BY 2.0)
  11-12. How many users do you want to have? (Your coolest web application in the cloud.)
  13. Google Growth. Nov. '98: 10,000 queries on 25 computers. Apr. '99: 500,000 queries on 300 computers. Sep. '99: 3,000,000 queries on 2,100 computers.
  14. Scalability matters.
  15-16. Counting the numbers. Personal Computer: one-to-one. Client/Server: one-to-many. Cloud Computing: many-to-many; a developer transition.
  17. What Powers Cloud Computing?
     - Commodity hardware. Performance: a single machine is not interesting. Reliability: even the most reliable hardware will still fail, so fault-tolerant software is needed; fault-tolerant software in turn enables the use of commodity components. Standardization: standardized machines run all kinds of applications.
     - Infrastructure software. Distributed storage: Google File System (GFS). Distributed semi-structured data system: BigTable. Distributed data processing system: MapReduce.
  18. google.stanford.edu (circa 1997)
  19. google.com (1999), "cork boards"
  20. Google Data Center (circa 2000)
  21. google.com (new data center, 2001)
  22. google.com (3 days later)
  23. Current Design: in-house rack design; PC-class motherboards; low-end storage and networking hardware; Linux plus in-house software.
  24-25. How to develop a web application that scales? Google's solutions and their replacements:
     - Storage: Google File System (GFS). Published paper; open source implementation in Hadoop.
     - Database: BigTable. Published paper; open source implementation in Hadoop.
     - Data processing: MapReduce. Published paper; open source implementation in Hadoop.
     - Serving: Google AppEngine. Opened to the public on 2008/5/28.
  26. Google File System (GFS)
     - Files are broken into chunks (typically 64 MB).
     - Chunks are triplicated across three machines for safety (tunable).
     - The master manages metadata (file namespace, e.g. /foo/bar -> chunk 2ef7, ...).
     - Data transfers happen directly between clients and chunkservers.
     (Diagram: applications talk through the GFS client library to the GFS master for metadata and directly to chunkservers for data; chunks C0, C1, C2, C5 are replicated across chunkservers, and the master itself has replicas.)
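     A minimal sketch of the chunking and placement idea, assuming a 64 MB chunk size and a simple round-robin placement policy (both illustrative; this is not Google's implementation):

       CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB, the typical GFS chunk size
       REPLICAS = 3                     # default replication factor (tunable)

       def split_into_chunks(file_size):
           """Return the number of chunks needed to hold file_size bytes."""
           return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

       def place_chunks(num_chunks, chunkservers):
           """Assign each chunk to REPLICAS distinct chunkservers, round-robin."""
           placement = {}
           for chunk_index in range(num_chunks):
               servers = [chunkservers[(chunk_index + r) % len(chunkservers)]
                          for r in range(REPLICAS)]
               placement[chunk_index] = servers
           return placement

       # Example: a 200 MB file on a 5-machine cell needs 4 chunks, 3 copies each.
       print(place_chunks(split_into_chunks(200 * 1024 * 1024),
                          ["cs0", "cs1", "cs2", "cs3", "cs4"]))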
  27. GFS usage at Google: 200+ clusters; filesystem clusters of up to 5,000+ machines; pools of 10,000+ clients; 5+ PB filesystems; all in the presence of frequent hardware failures.
  28. BigTable
     - Data model: (row, column, timestamp) -> cell contents. Example: row "www.cnn.com", column "contents:", timestamps t3, t11, t17, value "<html>...".
     - Distributed multi-level sparse map: fault-tolerant, persistent.
     - Scalable: thousands of servers; terabytes of in-memory data, petabytes of disk-based data.
     - Self-managing: servers can be added or removed dynamically and adjust to load imbalance.
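     As an illustration of the (row, column, timestamp) -> value model only, not of BigTable's actual API, a toy in-memory version could look like:

       import time

       class ToyTable:
           """Toy sparse map keyed by (row, column, timestamp), newest version first."""
           def __init__(self):
               self.cells = {}   # {row: {column: [(timestamp, value), ...]}}

           def put(self, row, column, value, timestamp=None):
               ts = timestamp if timestamp is not None else time.time()
               versions = self.cells.setdefault(row, {}).setdefault(column, [])
               versions.append((ts, value))
               versions.sort(reverse=True)   # keep the newest version at the front

           def get(self, row, column):
               """Return the newest value for (row, column), or None if absent."""
               versions = self.cells.get(row, {}).get(column)
               return versions[0][1] if versions else None

       t = ToyTable()
       t.put("www.cnn.com", "contents:", "<html>...", timestamp=3)
       t.put("www.cnn.com", "contents:", "<html>v2...", timestamp=17)
       print(t.get("www.cnn.com", "contents:"))   # newest version: "<html>v2..."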
  29. Why not just use a commercial DB? Scale is too large or cost is too high for most commercial databases. Low-level storage optimizations help performance significantly, and they are much harder to do when running on top of a database layer. It is also fun and challenging to build large-scale systems. :)
  30-31. System Structure (a BigTable cell)
     - Bigtable master: performs metadata operations and load balancing.
     - Bigtable tablet servers: serve data.
     - Lock service: holds metadata, handles master election.
     - GFS: holds tablet data and logs.
     - Cluster scheduling system: handles failover and monitoring.
     - Bigtable clients use the Bigtable client library: Open() and metadata operations go through the master, reads and writes go directly to the tablet servers.
  32. BigTable Summary
     - Data model applicable to a broad range of clients; actively deployed in many of Google's services.
     - Provides a high-performance storage system at large scale: self-managing, thousands of servers, millions of ops/second, multiple GB/s reading and writing.
     - Currently ~500 BigTable cells; the largest cell manages ~3 PB of data spread over several thousand machines (larger cells planned).
  33. Distributed Data Processing. How do you process 1 month of Apache logs to find the usage pattern numRequest[minuteOfTheWeek]?
     - Input files: N rotated logs.
     - Size: O(TB) for popular sites, spanning multiple physical disks.
     - Processing phase 1: launch M processes; each takes N/M log files as input and outputs one file of numRequest[minuteOfTheWeek].
     - Processing phase 2: merge the M output files of phase 1.
  34. Pseudo code for phase 1 and 2 (cleaned up and made runnable; findTime() here assumes the Apache common log timestamp format, which the slide left unspecified):

       # Phase 1: count requests per minute of the week from N/M log files.
       import sys
       from datetime import datetime

       def findTime(line):
           # Apache common log format keeps the timestamp in [...], e.g. [14/Jun/2008:10:30:00 +0800]
           stamp = line.split('[', 1)[1].split(']', 1)[0]
           return datetime.strptime(stamp.split()[0], "%d/%b/%Y:%H:%M:%S")

       def findBucket(requestTime):
           # return minute of the week: 0 .. 1440*7 - 1
           return requestTime.weekday() * 1440 + requestTime.hour * 60 + requestTime.minute

       numRequest = [0] * (1440 * 7)
       for filename in sys.argv[2:]:
           for line in open(filename):
               numRequest[findBucket(findTime(line))] += 1
       outFile = open(sys.argv[1], 'w')
       for i in range(1440 * 7):
           outFile.write("%d %d\n" % (i, numRequest[i]))
       outFile.close()

       # Phase 2: merge the M output files of phase 1.
       numRequest = [0] * (1440 * 7)
       for filename in sys.argv[2:]:
           for line in open(filename):
               i, count = [int(col) for col in line.split()]
               numRequest[i] += count
       # write out numRequest[] as in phase 1
  35. Task Management
     - Logistics: decide which computers run phase 1 and make sure the log files are accessible (NFS-like or copied); similar for phase 2.
     - Execution: launch the phase 1 programs with appropriate command-line flags and re-launch failed tasks until phase 1 is done; similar for phase 2.
     - Automation: build task scripts on top of an existing batch system (PBS, Condor, GridEngine, LoadLeveler, etc.).
  36. Technical Issues (performance, robustness, reusability)
     - File management: where to store the files? Storing all logs on the same file server creates a bottleneck; a distributed file system gives the opportunity to run tasks locally.
     - Granularity: how to decide N and M? Performance improves as M grows, up to M == N, if there is no I/O contention. Can M > N? Yes, with careful log splitting, but is it faster?
     - Job allocation: which task goes to which node? Prefer local jobs, which requires knowledge of the file system (a toy sketch follows this list).
     - Fault recovery: what if a node crashes? Redundancy of data is a must; crash detection and job re-allocation are necessary.
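     A toy sketch of the job-allocation idea, assuming we know which node stores each log file locally (the preference rule and data structures are illustrative, not any particular scheduler):

       def assign_tasks(log_files, file_location, workers):
           """Assign each log file to a worker, preferring the node that stores it."""
           assignment = {}                      # {log_file: worker}
           load = {w: 0 for w in workers}
           for f in log_files:
               local = file_location.get(f)
               # prefer the local node; otherwise pick the least-loaded worker
               worker = local if local in workers else min(workers, key=lambda w: load[w])
               assignment[f] = worker
               load[worker] += 1
           return assignment

       files = ["access.log.0", "access.log.1", "access.log.2"]
       where = {"access.log.0": "node1", "access.log.1": "node2", "access.log.2": "node1"}
       print(assign_tasks(files, where, ["node1", "node2", "node3"]))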
  37. MapReduce: A New Model and System. Two phases of data processing:
     - Map: (in_key, in_value) -> { (key_j, value_j) | j = 1, ..., K }
     - Reduce: (key, [value_1, ..., value_L]) -> (key, f_value)
  38. MapReduce Programming Model. Borrowed from functional programming:
     - map(f, [x1, x2, ...]) = [f(x1), f(x2), ...]
     - reduce(f, x0, [x1, x2, x3, ...]) = reduce(f, f(x0, x1), [x2, ...]) = ... (continue until the list is exhausted)
     Users implement two functions:
     - map(in_key, in_value) -> (key_j, value_j) list
     - reduce(key, [value_1, ..., value_L]) -> f_value
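     For reference, the functional-programming originals in plain Python (functools.reduce is the standard-library form of the reduce shown above):

       from functools import reduce

       squares = list(map(lambda x: x * x, [1, 2, 3, 4]))         # [1, 4, 9, 16]
       total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)    # ((((0+1)+2)+3)+4) = 10
       print(squares, total)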
  39. MapReduce version of the pseudo code. This is pseudocode against the framework API sketched on the slide (MapReduction, output.collect, iter.done()); the iterator's value()/next() calls are filled in here so the reduce loop is well defined:

       def findBucket(requestTime):
           # return minute of the week, as in the phase-1 code above
           ...

       class LogMinuteCounter(MapReduction):
           def Map(self, key, value, output):     # key is the location, value is one log line
               minuteBucket = findBucket(findTime(value))
               output.collect(str(minuteBucket), "1")
           def Reduce(self, key, iter, output):
               total = 0
               while not iter.done():             # one "1" per request in this minute bucket
                   total += int(iter.value())
                   iter.next()                    # advance, or the loop never terminates
               output.collect(key, str(total))

     Look, mom, no file I/O! Only the data processing logic... and you get much more than that.
  40. MapReduce Framework. For certain classes of problems, the MapReduce framework provides:
     - Automatic and efficient parallelization/distribution.
     - I/O scheduling: run the mapper close to its input data (same node or same rack when possible, with GFS).
     - Fault tolerance: restart failed mapper or reducer tasks on the same or different nodes.
     - Robustness: tolerate even massive failures, e.g. large-scale network maintenance; once lost 1,800 out of 2,000 machines.
     - Status and monitoring.
  41. Task Granularity and Pipelining
     - Fine-granularity tasks: many more map tasks than machines.
     - Minimizes time for fault recovery; shuffling can be pipelined with map execution; better dynamic load balancing.
     - Often 200,000 map tasks and 5,000 reduce tasks on 2,000 machines.
  53. MapReduce: Adoption at Google. (Charts: number of MapReduce programs in Google's source tree over time, and new MapReduce programs per month; note the summer-intern effect.)
  54. MapReduce: Uses at Google
     - Typical configuration: 200,000 mappers, 500 reducers on 2,000 nodes.
     - Broad applicability has been a pleasant surprise: quality experiments, log analysis, machine translation, ad-hoc data processing, ...
     - The production indexing system was rewritten with MapReduce: ~10 MapReductions, much simpler than the old code.
  55. MapReduce Summary. MapReduce has proven to be a useful abstraction that greatly simplifies large-scale computations at Google. It is fun to use: focus on the problem and let the library deal with the messy details. The design is published.
  56. A Data Playground. MapReduce + BigTable + GFS = a data playground: a substantial fraction of the internet available for processing, easy-to-use teraflops and petabytes with quick turn-around, cool problems, great colleagues.
  57. Query Frequency Over Time
  58. Learning From Data: searching for Britney Spears...
  59. Open Source Cloud Software: Project Hadoop
     - Google published papers on GFS ('03), MapReduce ('04) and BigTable ('06).
     - Project Hadoop: an open source project with the Apache Software Foundation that implements Google's cloud technologies in Java. HDFS ("GFS") and Hadoop MapReduce are available; HBase ("BigTable") is being developed.
     - Google is not directly involved in the development, to avoid conflicts of interest.
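     Hadoop Streaming lets you write the mapper and reducer as plain scripts. A sketch of the minute-of-the-week count from slide 34, assuming the findTime()/findBucket() helpers from that sketch are pasted into mapper.py (the streaming jar path and options vary by Hadoop version):

       # mapper.py: emit "minuteBucket<TAB>1" for each log line on stdin
       import sys
       for line in sys.stdin:
           print("%d\t1" % findBucket(findTime(line)))

       # reducer.py: sum the counts per key (streaming sorts mapper output by key)
       import sys
       current_key, total = None, 0
       for line in sys.stdin:
           key, count = line.rstrip("\n").split("\t")
           if key != current_key and current_key is not None:
               print("%s\t%d" % (current_key, total))
               total = 0
           current_key = key
           total += int(count)
       if current_key is not None:
           print("%s\t%d" % (current_key, total))

       # run on a cluster (illustrative invocation):
       #   hadoop jar contrib/streaming/hadoop-streaming.jar \
       #       -input logs/ -output minute-counts \
       #       -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py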
  60-62. Industrial interest in Hadoop
     - Yahoo! hired core Hadoop developers and announced (Feb 19, 2008) that their Webmap is produced on a Hadoop cluster with 2,000 hosts (dual/quad cores).
     - Amazon EC2 (Elastic Compute Cloud) supports Hadoop: write your mapper and reducer, upload your data and program, run, and pay by resource utilisation. TIFF-to-PDF conversion of 11 million scanned New York Times articles (1851-1922) was done in 24 hours with Hadoop on 100 EC2 machines backed by S3 (http://open.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/). Many Silicon Valley startups are using EC2 and starting to use Hadoop for their coolest ideas on internet-scale data.
     - IBM announced "Blue Cloud," which will include Hadoop among other software components.
  63. AppEngine
     - Run your applications on Google's infrastructure and data centers: focus on your application and forget about machines, operating systems, web server software, database setup and maintenance, load balancing, etc.
     - Opened for public sign-up on 2008/5/28.
     - Python API to the Datastore (on top of BigTable) and Users services.
     - Free to start, pay as you expand.
     - More details can be found in the AppEngine talks: http://code.google.com/appengine/
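     For flavor, a minimal handler and Datastore model in the Python SDK of that era; this is a sketch of the 2008 webapp/Datastore API, not taken from the slides, so consult the AppEngine documentation for the authoritative interface:

       from google.appengine.ext import webapp, db
       from google.appengine.ext.webapp.util import run_wsgi_app

       class Greeting(db.Model):                 # one Datastore entity kind
           content = db.StringProperty()

       class MainPage(webapp.RequestHandler):
           def get(self):
               greeting = Greeting(content="Hello from the cloud")
               greeting.put()                    # persisted by the Datastore (BigTable underneath)
               self.response.out.write(greeting.content)

       application = webapp.WSGIApplication([('/', MainPage)])

       if __name__ == '__main__':
           run_wsgi_app(application)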
  64. Academic Cloud Computing Initiative
     - Google works with top universities on teaching GFS, BigTable and MapReduce in courses: UW, MIT, Stanford, Berkeley, CMU, Maryland.
     - First wave in Taiwan: NTU and NCTU. "Parallel Programming" by Professor Pangfeng Liu (NTU); "Web Services and Applications" by Professors Wen-Chih Peng and Jiun-Lung Huang (NCTU).
     - Google offers course materials, technical seminars and student mentoring by Google engineers.
     - Google and IBM provide a data center for academic use; software stack: Linux + Hadoop + IBM's cluster management software.
  65. References
     - "The Google File System," Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung, Proceedings of the 19th ACM Symposium on Operating Systems Principles, 2003, pp. 20-43. http://research.google.com/archive/gfs-sosp2003.pdf
     - "MapReduce: Simplified Data Processing on Large Clusters," Jeffrey Dean, Sanjay Ghemawat, Communications of the ACM, vol. 51, no. 1 (2008), pp. 107-113. http://labs.google.com/papers/mapreduce-osdi04.pdf
     - "Bigtable: A Distributed Storage System for Structured Data," Fay Chang et al., 7th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2006, pp. 205-218. http://research.google.com/archive/bigtable-osdi06.pdf
     - Distributed systems course materials (slides, videos): http://code.google.com/edu/parallel
  66. Summary
     - Cloud Computing is about scalable web applications (and the data processing needed to make apps interesting).
     - Lots of commodity PCs: good for scalability and cost.
     - Build web applications to be scalable from the start: AppEngine lets developers use Google's scalable infrastructure and data centers, and Hadoop enables scalable data processing.
  67. The era of Cloud Computing is here! (Photo by mr.hero on Panoramio, http://www.panoramio.com/photo/1127015; word cloud: news, people, book search, photo, product search, video, maps, e-mails, mobile, blogs, groups, calendar, scholar, Earth, Sky, web, desktop, translate, messages.)
