Cloud computing platforms in a multicore world

This presentation gives an overview of the long-term vision behind cloud
computing and introduces the key issues that are currently addressed by
only a few projects, which typically examine an issue in isolation rather
than in the context of an overarching framework or vision. Elements and
answers that go beyond such a scattergun approach are clearly lacking today.

This study seeks to match two disruptive innovations to each other: the
vision and requirements of cloud computing platforms, and the multicore
challenge that has emerged since approximately 2007. The study investigates
whether existing functional programming languages could be the right glue
to bind these two innovations together, and thereby create a framework for
future cloud computing platforms in a concurrent multicore world.


Transcript

  • 1. Cloud Computing platforms in a multicore world. Joerg Fritsch, NATO C3 Agency. School of Computer Science & Informatics, Cardiff University, 7 Sep. 2011
  • 2. Cloud Computing platforms in a multicore world. AGENDA
  • 3. Agenda. Taxonomy: What is cloud computing? Problem space: What issues does this create? Solution space: New standards & fads. Research goals: What subset does my research address and how do we go about it? Links to other research areas.
  • 4. Cloud Computing platforms in a multicore world. TAXONOMY
  • 5. NIST SP-800-145: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
  • 6. NIST SP-800-145 (continued 1). Five key concepts: Self-service, Broad network access, Resource pooling, Rapid elasticity, Measured service. Three service models (delivery models): Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS). Four deployment models: Private cloud, Community cloud, Public cloud, Hybrid cloud.
  • 7. NIST SP-800-145 (continued 2). Adapted from Craig-Wood 2010.
  • 8. Cloud Computing platforms in a multicore world. FIVE KEY CONCEPTS
  • 9. Key concept 1: SELF-SERVICE. Customers provision resources via the internet. The prevailing subscription and payment model is “pay as you go” (PAYG). Clouds that focus on the provisioning of Virtual Machines (VMs) and storage benefit from an entire business ecosystem (Moore, 1996). The cloud ecosystem facilitates e.g. the configuration, deployment, movement or monitoring of VMs in a self-service manner. There is no comparable PaaS ecosystem.
  • 10. Key concept 2: BROAD NETWORK ACCESS. NIST SP-800-145 uses the word “ubiquitous”. “Now that data can stream through the Internet at the speed of light, the full power of computers can be delivered to users from afar. It doesn't matter much whether the server computer running your program is in the data center down the hall or in somebody else's data center on the other side of the country. All the machines are connected and shared - they're one machine.” (Carr, 2009). Technical choices of tomorrow: providerless networks? Ubiquitous networks?
  • 11. Key concept 3: RESOURCE POOLING. Often misunderstood as a synonym for hypervisor-based virtualization. Hardware can also be pooled using e.g. Hadoop or Erlang. Resources are location independent and abstracted from servers, hard drives or types (SCSI, SATA), etc. Pooling of resources builds economies of scale (Gartner). Implies multi-tenancy.
  • 12. Key concept 4: ELASTICITY. By the book, resources can be rapidly provisioned, quickly scaled and released. NIST SP-800-145 wording: “in some cases automatically”. Elasticity is not an out-of-the-box feature, but at best a function of the application interfacing with an exposed API of the cloud. The cloud ecosystem partially solves this issue. “Plastic”? Monolithic applications do not play well.
  • 13. Key concept 5: MEASURED SERVICE. Usage-based billing schemes. Complex billing/metering schemes are only interesting for the cloud/utility provider. They need to be greatly simplified for the subscriber.
  • 14. Cloud Computing platforms in a multicore world. THREE+1 SERVICE MODELS
  • 15. Service model 1: Infrastructure as a Service (IaaS). Currently the prevailing delivery model. Server-centric. Based on hypervisor technology. Unit of scale: VM. Means of scale: add VMs. VMs are not very elastic and have fixed computing power and RAM (S, M, L, XL) that can only be changed with cumbersome work. Limitations: physical limits of the underlying server (product-dependent definite sizes of node clusters, PODs and data centers).
  • 16. Service model 1: IaaS (continued). VM-based technologies have numerous large-scale implications. For the service provider: IP addressing/L3 separation vs VLAN IDs. For the customer: n operating systems must be configured from templates, licensed, updated and patched, virus scanned, identity and access management … . IT staff is split from the responsibility for their hardware and data center. Everything else stays (nearly) the same.
  • 17. Service model 2: Platform as a Service (PaaS). Unit of scale: thread, process or node. Means of scale: spawning. A virtually unlimited number of threads/processes is thinkable. Limitations: message-passing performance in distributed architectures (e.g. MPI, Erlang). A process needs neither an IP address nor a VID, but may need a “safe” PID and/or URI. Examples: Google App Engine, Oracle Exalogic, AppScale, MS Azure.
  • 18. Elasticity: IaaS vs PaaS. IaaS: Holistic. Currently prevailing. Emulating legacy experiences (server, load balancer, data center). PaaS: Borders of building blocks are dissolved. Real concurrency. Requires new software, new programming languages, new designs.
  • 19. Service model 3: Software as a Service (SaaS). Every service we buy over the internet can be “cloud washed” by categorizing it as SaaS. The three delivery models are not necessarily based on each other, although frequently depicted as a stack/layers or a pyramid. “Traditionally built software just won’t cut it in the cloud.” (Perry, 2008). The largest players tend to develop their own (PaaS) middleware.
  • 20. By the way: what is utility computing? A fourth service model. Carr, N. (2006). The end of corporate computing. MIT SMR. Corporate IT does not contribute to a competitive advantage in business. Every firm has the same standardized hard- and software.
  • 21. Utility computing (continued). In the future IT will be purchased as a utility. IT will be produced in the most economic way by a few large suppliers. Both cloud and utility computing eventually lead to a concentration of hardware, services and data in large data centers. Cloud scale = massive volumes in massively distributed architectures. The rise of the mega data center!
  • 22. Cloud Computing platforms in a multicore world. PROBLEM SPACE: ISSUES
  • 23. (Mega) data centers. Must be built green (energy efficient). Unit of measure: Power Usage Effectiveness (PUE). PUE is the de facto standard, proposed by The Green Grid in 2007. Server concentration and density. Core issues: power, cooling, placement & sizing. Innovative designs: containerized, geothermal cooling, sea/lake cooling.
  • 24. Cost. Costs explode once you go away from commodity hardware (vendor lock-in, mandatory service contracts, product life cycles). Costs explode when you have to license third-party applications at large scale. Avoid vendor lock-in. Avoid proprietary technologies unless they are your own intellectual property.
  • 25. Asynchronous I/O and asynchronous processing. All disadvantages of any one technology are greatly amplified at cloud scale. At the largest scale, all synchronous and/or blocking actions lead to roadblocks. Asynchrony can go down to the microprocessor level: AMULET processor (Furber, S.), GreenArrays GA144 chips (2011)! “It’s time for clockless” (Tristram, 2001).
  • 26. Distribution. Everything “inside a cloud” is physically distributed (data, processing). Large amounts of distributed data shared between servers: “Big Data”. After 25 years of Von Neumann machines, computer architecture will finally change (Armstrong, 2008). Large-scale distributed processing: “Many Core”. 2007: the term “multicore challenge” appears.
  • 27. An easy example of problems in many-core & distributed environments (Siebert, Hunt 2011):
    int counter;
    void increment() {
        counter++;
    }
    Thread 1: r1 = counter; r2 = r1 + 1; counter = r2;
    Thread 2: r1 = counter; r2 = r1 + 1; counter = r2;
    An increment can get lost!
  • 28. An easy example of problems in many-core & distributed environments (continued 1). The code lacks synchronization. In multicore environments the chances of failure increase (“explode”)! On a single core it practically always works. “Solution”: synchronization. Messaging comes into play.
  • 29. An easy example of problems in many-core & distributed environments (continued 2):
    int counter;
    synchronized void increment() {
        counter++;
    }
    Frequent synchronization can kill performance! Lots of other issues to look at: memory models, optimizations e.g. re-ordering, loop optimizations, scheduling, blocking, semaphore implementation, thread priority.
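The slides' two fragments can be run end to end in plain Java. The sketch below (class and field names are mine, not from the slides) races two threads over an unsynchronized counter and a synchronized one; the synchronized counter always ends at 200000, while the unsynchronized one may lose increments exactly as slide 27 describes:

```java
// Lost-update race from slide 27, plus the synchronized fix from slide 29.
public class Counter {
    static int unsafeCounter = 0;
    static int safeCounter = 0;

    static void unsafeIncrement() {
        unsafeCounter++;               // read, add, write: not atomic
    }

    static synchronized void safeIncrement() {
        safeCounter++;                 // whole read-modify-write under one lock
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeIncrement();
                safeIncrement();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCounter is always 200000; unsafeCounter may be smaller.
        System.out.println("unsafe=" + unsafeCounter + " safe=" + safeCounter);
    }
}
```

This also makes the performance point of slide 29 visible: every safeIncrement() acquires and releases a lock, which is exactly the cost that "can kill performance" when done frequently.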
  • 30. Cloud Computing platforms in a multicore world. SOLUTION SPACE: NEW STANDARDS & FADS
  • 31. Node.js. Asynchronous I/O framework (event driven). Server-side JavaScript runtime built on the V8 engine. Claims that it will never deadlock. Claims to handle thousands of concurrent connections in a single process. Idea conceived in early 2009, first available version in 2010.
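Node.js itself is JavaScript, but the event-driven style it popularizes - start the I/O, attach a callback, keep running - can be sketched in Java with CompletableFuture (the names and the simulated "I/O" below are mine; this illustrates the style, not Node's API):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    public static void main(String[] args) {
        // Simulated non-blocking I/O: the "read" runs on a pool thread and
        // the callback fires when the result is ready; the caller never
        // blocks waiting for it.
        CompletableFuture<String> request =
            CompletableFuture.supplyAsync(() -> "payload")        // the async "I/O"
                             .thenApply(body -> "handled:" + body); // the callback

        System.out.println("caller keeps running");  // executes before the join below
        System.out.println(request.join());          // join here only to show the result
    }
}
```

The contrast with slide 25's point is the design choice: nothing in the caller's path blocks on the I/O, so one thread can keep many such requests in flight.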
  • 32. Facing the multicore challenge. Functional languages: Erlang, Clojure and Scala are frequently discussed as starting points to solve the multicore challenge. Haskell could be a runner-up. New hardware architectures & circuit designs: Tilera, GPUs, FPGAs, 3D RAM, clockless designs? – what happens to “dark silicon”? Applying parallel programming models (CUDA, OpenCL, OpenMP) to many-core environments?
  • 33. Hadoop & the Hadoop Distributed File System (HDFS). Open-source Java framework for parallel processing of large data on clusters of commodity hardware (low cost!). The core is HDFS, a very large non-relational distributed file system. NameNodes (contain HDFS metadata) & DataNodes (execute read/write operations etc.). DataNodes have Direct Attached Storage (DAS).
  • 34. Hadoop & HDFS (continued). All data is triplicated. Benefits from highly parallel reads/writes enabled by DAS. Processing power is added with every DataNode! Excels in batch-oriented analytics. Blends well with e.g. MapReduce and Bigtable derivatives: HBase, Hypertable.
  • 35. Not only SQL (NoSQL). Non-relational databases. Focused on huge amounts of data. Simple operations (key lookup, read, write), no complex queries, no joins. Deploys well in RESTful contexts. “Shared nothing” (RAM, disk): data and load are distributed over many commodity servers. No Atomicity, Consistency, Isolation, and Durability (ACID).
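The "simple operations only" interface the slide describes fits in a few lines. As an in-memory stand-in (a real NoSQL store shards such a map over many commodity servers; the key naming is my own example), the whole API surface is put, get and remove by key - no joins, no cross-key transactions:

```java
import java.util.concurrent.ConcurrentHashMap;

public class KvSketch {
    public static void main(String[] args) {
        // Key-value store in miniature: every operation touches one key.
        ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
        store.put("user:42", "{\"name\":\"ada\"}");   // write
        System.out.println(store.get("user:42"));     // key lookup
        store.remove("user:42");                      // delete
        System.out.println(store.get("user:42"));     // null: gone, no rollback possible
    }
}
```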
  • 36. Cloud Computing platforms in a multicore world. RESEARCH GOALS & OBJECTIVES
  • 37. Goals. Identify and solve real problems by building a real system. Devise a theoretical architecture for a perfectly elastic cloud platform (PaaS) that scales through spawning. Write demonstrator code addressing key issues such as concurrency, asynchrony, and safe code execution in multi-tenant cloud environments. Determine the most viable approach to cloud-based concurrency.
  • 38. Evolution of IT (or why the research focus on PaaS is a good idea).
  • 39. Objectives: foundation. Evaluate functional programming languages (e.g. Erlang, Haskell) as a starting point. Have the cloud platform look like AppScale, GAE or MS Azure. Have it born asynchronous: use asynchronous processing and designs wherever possible. Scale through spawning; address the many-core challenge.
  • 40. Erlang. “It’s hard to catch up with big companies” (AMULET Group, 2000). Capitalizing on 20+ years of Erlang-related R&D. Extremely low-cost (featherweight) processes: one can afford a large number of processes. Has everything to match the issues of distribution and scalability: asynchrony, messaging, event handling, … “Shared nothing”: no shared code, no shared state.
  • 41. Erlang (continued). Process spawning works seamlessly (but not transparently) across nodes. Messages are purely asynchronous and location transparent. The Erlang VM does not sit on top of further technologies (e.g. Java/Node.js) or hypervisor-based clouds (e.g. AppScale).
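The shared-nothing, message-passing model the two Erlang slides describe can be given a rough Java analogy: one "process" is a thread plus a private mailbox, and the only way to interact with it is to put a message in its queue. (Java threads are far heavier than Erlang's featherweight processes, and the names below are mine; this is an illustration of the model, not of Erlang's implementation.)

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MailboxSketch {
    public static void main(String[] args) throws InterruptedException {
        // Two mailboxes: one per "process". No state is shared; all
        // coordination happens through asynchronous messages.
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        BlockingQueue<String> replies = new LinkedBlockingQueue<>();

        Thread process = new Thread(() -> {
            try {
                String msg = mailbox.take();     // receive
                replies.put("echo:" + msg);      // reply with a new message
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        process.start();

        mailbox.put("ping");                     // asynchronous send: returns at once
        System.out.println(replies.take());      // prints echo:ping
        process.join();
    }
}
```

Because the sender never touches the receiver's state, the same pattern keeps working when the mailbox is moved to another node - which is exactly the location transparency the slide credits Erlang with.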
  • 42. Erlang, but … No notion of security (Brown, Sahlin, 1999): functions can access other processes; functions can access external resources; functions can access permanent data. All processes within an Erlang node have the same privileges and the same access to all resources. There is no means to limit the resources utilized by a process apart from putting restrictions on the whole node.
  • 43. Erlang, but … (continued). The only means of partitioning is to run a separate node in a separate heavyweight OS process. Similar to Java on GAE and AppScale, some Erlang modules will need to be blacklisted, e.g. os:cmd("rm -rf *"). In contrast to Haskell, Erlang is a “finished” product. In order to build a multi-tenant platform, security would need to be retrofitted into Erlang!
  • 44. Haskell, an alternative to Erlang? Both are functional programming languages. Immutability of data. Under the hood, featherweight processes (Erlang) and sparks (Haskell) are probably pretty similar. A wide range of concurrency (Haskell) vs a one-trick pony (Erlang)? Haskell has no clear distributed model; the actor model is implementable. Not as popular as Erlang, no big (commercial) success stories.
  • 45. Haskell vs Erlang. Haskell: Academic “product”. Sparks: the compiler decides what you eventually get. par combinator: f `par` (e + f). Lazy: functions are only evaluated when needed. Statically typed. No instant answers to distributed programming. Erlang: Commercial product but open source. Lightweight threads: the VM pretends that you get what you order. spawn(FUN) -> pid(). Strict: functions are always evaluated. Dynamically typed. Actor model - messaging.
  • 46. The testbed.
  • 47. Embedded cloud computing testbed: μcloud. Based on ARM Cortex-A8. Embedded Linux/Erlang version available. Based on a Linux kernel with a Busybox. A good start at a time when limited/no funds are available. Distributed systems with 100+ nodes can easily be built without deploying a huge hardware base. μcloud allows one to test and demonstrate the relevance of research and findings. The implementation is later transferable to ARM-based data center products.
  • 48. ARM (Advanced RISC Machine). Cloud computing is a disruptive innovation that might have consequences down to the processor level. 32-bit reduced instruction set computer (RISC) architecture. Low cost and very low power consumption. ARM CPUs allow ultra-high-density servers/data centers. Very strong in the embedded market segment. Many development platforms available (e.g. Gumstix, Beagleboards).
  • 49. ARM (continued). An asynchronous (clockless) version, the ARM996HS, was “available” in 2006, but only released as a Verilog model; no silicon was produced. By numbers, ARM CPUs have outsold all others! Rumors of 2RU servers with 480 CPU cores for the data center market. - Large numbers of synchronous cores are not new on GPUs! Cloud computing might eventually converge with embedded computing (ARM, 2009)! Nicely links to the Ubicomp world.
  • 50. Ubiquitous computing. Adapted from Baker, 2009. Based on data from the World Wireless Research Forum 2006.
  • 51. Objectives: design. Obey the five cloud computing key concepts. Expose the advantages of functional languages at a sensible level of abstraction (low-cost processes or sparks leave more up to the developer than e.g. Java or .NET developers are used to). Expose two units of scale to developers: process (spawning) and SOAP/RESTful services (duplication). The platform should ideally be able to host applications in a variety of programming languages (e.g. .NET, Java).
  • 52. Wait. The developer takes care of elasticity? Yes. Reality does support her. “The world is concurrent.” (Armstrong, 2008). “Things” tend to come in in parallel. Guess how much will be coming in in parallel in the year 2017?
  • 53. Objectives: implementation. Save resources: a Linux kernel with as little operating system as possible, e.g. Just enough OS (JeOS), Busybox. - Alternatively use the EC2 API. Alternatives: run on a hypervisor (e.g. XEN/HaLVM) or tailor an FPGA. Start with an embedded cloud computing test bed, “μcloud”. Deliverables are open source, registered on GitHub, expressed in UML, VHDL or Verilog. Have a stack-oriented architecture? – Maybe. (UML handout, can be downloaded from: )
  • 54. Research questions. Can Erlang (or Haskell) be modified to provide safe and partitioned code execution for multiple tenants? – SSErl (Brown, Sahlin, 1997). – ErlHive (Armstrong, Wiger, 2007). Can a sustainable notion of security for the execution of code on a multi-tenant PaaS platform be retrofitted more easily into Erlang or into Haskell? Can a dependable distributed model (based on the actor model) be added to Haskell using a standard messaging bus such as RabbitMQ?
  • 55. Research questions (continued 1). Is the actor model the most viable model to achieve elasticity in cloud-based (distributed) environments? Can Erlang be modified to achieve transparent many-core elasticity? – intra-node: spawn(FUN) -> pid(). – inter-node: spawn(FUN@Node) -> pid(). Higher cost! – Distribution cost must be low in every case. – A new scheduler should distinguish between local and remote systems.
  • 56. Research questions (continued 2). What is the most sensible level of resource saving for the application virtual machine? – Run the application VM in the OS; with as little OS as possible (e.g. JeOS, Busybox); on a hypervisor; or on bare metal. – Let the application VM sink into the silicon (e.g. FPGAs).
  • 57. Research questions (continued 3). Can a hardware architecture/circuit design be developed to execute Erlang, Clojure, Scala or Haskell? – ECOMP based on FPGAs (Lundell, Tjärnström). – An Erlang Optimized Processor (EOP) similar to the Java Optimized Processor (JOP) (Schöberl, 2005). – Implement selected functions as microcode instructions. – Use byte code as the starting point.
  • 58. Thank you
  • 59. Cloud Computing platforms in a multicore world. REFERENCES
  • 60. References:
    AMULET Group. (2000, October). AMULET3i – an Asynchronous System-on-Chip or Adventures in self-timed microprocessors. Retrieved August 7, 2011 from The Advanced Processor Technologies Group: http://apt.cs.man.ac.uk/projects/processors/amulet/AMULET3i_seminar.pdf
    Azevedo et al. (2011, February). Data Center Efficiency Metrics. Retrieved August 6, 2011 from The Green Grid: http://www.thegreengrid.org/~/media/TechForumPresentations2011/Data_Center_Efficiency_Metrics_2011.pdf
    Baker, D. (2009, March). Embedded devices will soon outnumber people by orders of magnitude! Retrieved August 7, 2011 from MSDN Blogs: http://blogs.msdn.com/b/davbaker/archive/2009/03/31/embedded-devices-will-soon-outnumber-people-by-orders-of-magnitude.aspx
    Bittman, J.T. (2009, October). Server Virtualization: One Path That Leads to Cloud Computing. Gartner RAS Core Research Note G00171730.
    Carr, N. (2008, January). The Big Switch. New York: W. W. Norton & Company.
    Carr, N. (2006, Spring). The end of corporate computing. MIT SMR, 46(3).
    Craig-Wood, K. (2010, February). Definition of cloud computing, incorporating NIST and G-Cloud views. Retrieved August 6, 2011 from Kate's Comment: http://www.katescomment.com/definition-of-cloud-computing-nist-g-cloud/
  • 61. References (continued 1):
    Mell, P., Grance, T. (2011, January). The NIST Definition of Cloud Computing (Draft). Retrieved August 6, 2011 from NIST: http://csrc.nist.gov/publications/drafts/800-145/Draft-SP-800-145_cloud-definition.pdf
    Moore, J. F. (1996). The Death of Competition: Leadership & Strategy in the Age of Business Ecosystems. New York: HarperBusiness.
    Ostinelli, R. (2009, April). Boost message passing between Erlang nodes. Retrieved August 7, 2011 from: http://www.ostinelli.net/boost-message-passing-between-erlang-nodes/
    Ozer, E. (2009, November). The State of the Future: ARM’s Perspective. Retrieved August 7, 2011 from CORDIS, the gateway to European research and development: ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/computing/arm-emre-ozer_en.pdf
    Perry, G. (2008, February). How Cloud & Utility Computing Are Different. Retrieved August 6, 2011 from GIGAOM: http://gigaom.com/2008/02/28/how-cloud-utility-computing-are-different/
    Siebert, F. and Hunt, J. (2011, May). Multicore for Real-Time and Safety-Critical Software: Avoid the Pitfalls. Retrieved August 6, 2011 from Slideshare: http://www.slideshare.net/dbeberman/multi-coreexpo-20110505fridiusingtemplate
    Tristram, C. (2001, October). It's Time for Clockless Chips. MIT's Technology Review Magazine, 104(8), pp. 36-41.
  • 62. References (continued 2):
    Uusitalo, M.A. (2006). Global Visions of a Wireless World from the WWRF. IEEE Vehicular Technology Magazine, 1(2): 4-8.
  • 63. Cloud Computing platforms in a multicore world. SPARE SLIDES
  • 64. Examples of the cloud ecosystem. RightScale (automated deployment of cloud-ready images, elasticity). UShareSoft UForge (tailored cloud-ready system images). Enomaly (cloud broker, elasticity). JumpBox (library with virtual appliances). Cloudkick (system monitoring).
  • 65. Messaging 1. Cloud computing implementations frequently need to send messages over a network. (Small) messages - e.g. synchronization, commands, semaphores - have a large overhead when passed with TCP. Better to use 10G/40G Ethernet together with new standards, e.g. RDMA over Converged Ethernet (RoCE). VMware VMCI sockets are limited to messages between guest VMs on the same node (e.g. ESX host).
  • 66. Messaging 2. Harmonization of commands and data? External Data Representation (XDR), RFCs 1832 & 4506?
  • 67. Time for cLoCkLeSs? Asynchrony and clockless designs are good friends. The GA144 integrates an array of 144 F18-A “computers”. All work unclocked and fully asynchronously. Currently programmed in polyFORTH. A FORTH silicon stack machine does not execute C well. – E.g. Erlang Built-in Functions (BIFs) are in C. Eight F18-A computers fit in one square millimeter. Smart dust? However: arrays and a large number of synchronous cores are not new on a GPU.
  • 68. μcloud (continued). What can be researched: messaging & thrashing; distribution & subscription patterns; Byzantine error conditions; asynchronous architectures (down to the microprocessor level); elasticity; process state monitoring. What might be difficult to tackle: technologies that require large amounts of data; file systems; commercial implementations & software.
  • 69. Remote Direct Memory Access over Converged Ethernet (RoCE). 10GbE technology competing with InfiniBand. Interactive traffic (e.g. messages) suffers from the large TCP/IP overhead (20-byte IP header + 20-byte TCP header), severely impacting performance in physically distributed environments. Depending on the OS & message queuing, throughput can drop by up to 75% compared to a single multicore node (Ostinelli, 2009). The goal is to develop our own alternative high-performance carrier for the Erlang distribution.
  • 70. Cloud Service Bus (CSB). Smart switching of (Erlang/OTP) messages. Not looking at network architecture but at inter-process communication. The CSB is mostly command driven, the network mostly data driven. Transport and delivery of commands should be prioritized. A programmable CSB could e.g. optimize routes/directions between systems or processes that frequently message each other.