Distributed Computing
Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime. In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.


Transcript

  • 1. DISTRIBUTED COMPUTING
    Presented by
    Prashant Tiwari and Archana Sahu
  • 2. DISTRIBUTED COMPUTING
    • Folding@Home, as of August 2009, was sustaining over 7 PFLOPS, the first computing project of any kind to cross the four-petaFLOPS milestone. This level of performance is primarily enabled by the cumulative effort of a vast array of PlayStation 3 consoles and powerful GPUs.
    • 3. The entire BOINC averages over 1.5 PFLOPS as of March 15, 2009.
    • 4. SETI@Home computes at an average of more than 528 TFLOPS.
    • 5. Einstein@Home is crunching more than 150 TFLOPS
    • 6. As of August 2008, GIMPS is sustaining 27 TFLOPS.
    The illustration
    Consider The Facts
  • 7. DISTRIBUTED COMPUTING
    This Is What The Power of Distributed Computing Is.
    The illustration
    This Is What Distributed Computing Is.
  • 8. OVERVIEW
    DISTRIBUTED COMPUTING
    1 petaFLOPS = 10^15 FLOPS, or 1000 teraFLOPS: one quadrillion FLoating point OPerations per Second.
    At the time of this presentation, no single computer had achieved this performance.
    As of 2008, the fastest PC processors (quad-core) performed over 70 GFLOPS (Intel Core i7 965 XE).
    The illustration
    What is PetaFLOPS?
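    As a quick sanity check of the figures above, the ratio between Folding@Home's throughput and a single desktop CPU of the era can be computed directly (the numbers are the ones quoted on the slides):

```python
# Back-of-the-envelope comparison of the FLOPS figures quoted above.
PFLOPS = 10**15  # 1 petaFLOPS = 10^15 floating point operations per second
GFLOPS = 10**9   # 1 gigaFLOPS

folding_at_home = 7 * PFLOPS   # Folding@Home, August 2009
desktop_cpu = 70 * GFLOPS      # quad-core Intel Core i7 965 XE, 2008

# How many such desktop CPUs would it take to match the network?
print(folding_at_home / desktop_cpu)  # 100000.0
```

    In other words, the distributed network delivered the throughput of roughly a hundred thousand high-end desktop processors.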
  • 9. Introduction to DISTRIBUTED COMPUTING
    The Definition , The Concept, The Processes
  • 10. DISTRIBUTED COMPUTING
    The Text
    Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime. In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing
    Common Distributed Computing Model
    Introduction To Distributed Computing
  • 11. The Elaboration
    DISTRIBUTED COMPUTING
    In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network
    The Elaboration
    PROBLEM INSTRUCTION SET
    [Diagram: the problem instruction set is split into tasks T1-T5, which are distributed across systems and executed concurrently.]
    THE CONCEPT
  • 12. DISTRIBUTED COMPUTING
    If there are n systems connected in a network, we can split one program into n different tasks and compute them concurrently.
    The illustration
    ReConsider The Facts
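    The split-into-n-tasks idea above can be sketched in Python, with local worker processes standing in for the n networked systems (the summing workload and the four-way chunking are illustrative, assuming the tasks are independent):

```python
# Sketch: split one problem into n independent tasks and compute them
# concurrently on n workers (local processes stand in for networked systems).
from multiprocessing import Pool

def task(chunk):
    # Each "system" sums its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = 4  # number of systems/workers
    chunks = [data[i::n] for i in range(n)]  # deal the data out round-robin
    with Pool(n) as pool:
        partial = pool.map(task, chunks)     # compute the n tasks concurrently
    print(sum(partial))  # same answer as computing sum(data) on one machine
```

    Combining the partial results at the end is the sequential step; the n tasks themselves run in parallel.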
  • 13. Why DISTRIBUTED COMPUTING ?
    Why we need Distributed Computing?
  • 14. DISTRIBUTED COMPUTING
    • Computation requirements are ever increasing
    • 15. Silicon-based (sequential) architectures are reaching their limits in processing capability (clock speed), constrained by physical factors such as heat dissipation.
    • 16. Significant development in networking technology is paving the way for network-based, cost-effective parallel computing.
    • 17. The parallel processing technology is mature and is being exploited commercially.
    The Elaboration
    Need Of Distributed Computing
  • 18. DISTRIBUTED COMPUTING
    [Graph: speedup S versus number of processors P, following a log2 P curve.]
    Speedup achieved by distributed computing
    Speedup = log2(no. of processors)
    The Elaboration
    Speedup Factor
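    The slide's rule of thumb can be tabulated for a few processor counts. Note that log2 P is a conservative model; embarrassingly parallel workloads (like the volunteer-computing projects above) can approach linear speedup instead:

```python
# Tabulate the slide's speedup model: speedup = log2(number of processors).
import math

for p in (2, 4, 8, 16, 1024):
    print(f"{p:5d} processors -> speedup {math.log2(p):.1f}")
```

    Under this model, going from 2 to 1024 processors improves speedup only from 1 to 10, which is why minimizing communication overhead matters so much in practice.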
  • 19. Implementing DISTRIBUTED COMPUTING
    The Organization, The Architecture
  • 20. DISTRIBUTED COMPUTING
    The Text
    Organizing the interaction between the computers that execute distributed computations is of prime importance.
    In order to be able to use the widest possible variety of computers, the protocol or communication channel should be universal.
    Software Portability
    Motivation Factor
    The human brain consists of a large number (more than a billion) of neural cells that process information. Each cell works like a simple processor and only the massive interaction between all cells and their parallel processing makes the brain's abilities possible.
    Implementing Distributed Computing
  • 21. DISTRIBUTED COMPUTING
    There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault tolerant and more powerful than many combinations of stand-alone computer systems.
    The Elaboration
    Implementing Distributed Computing
  • 22. DISTRIBUTED COMPUTING
    The Elaboration
    [Diagram: three nodes, each with its own Processor, Memory Bus, and Memory System, with no memory shared between nodes.]
    Distributed Memory MIMD
  • 23. Architectures of DISTRIBUTED COMPUTING
    Possible ways to Implement Distributed Computing
  • 24. DISTRIBUTED COMPUTING
    Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely-coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
    The Text
    The Architectures
  • 25. DISTRIBUTED COMPUTING
    Client-server — Smart client code contacts the server for data, then formats and displays it to the user.
    3-tier architecture — Three tier systems move the client intelligence to a middle tier so that stateless clients can be used. Most web applications are 3-Tier.
    N-tier architecture — N-Tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
    Tightly coupled (clustered) — refers typically to a cluster of machines that closely work together, running a shared process in parallel.
    Peer-to-peer — architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.
    The Elaboration
    The Architectures
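    A minimal client-server exchange, the first architecture listed above, can be sketched with plain sockets. Running server and client in one process, and the "42" payload, are purely illustrative:

```python
# Minimal client-server sketch: the client contacts the server for data,
# then formats and displays it to the user.
import socket
import threading

def serve_once(sock):
    conn, _ = sock.accept()   # wait for one client
    conn.sendall(b"42")       # the "data" the server provides
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
data = cli.recv(1024).decode()
print(f"Server replied: {data}")  # the client formats and displays the data
cli.close()
t.join()
srv.close()
```

    The 3-tier and N-tier architectures above insert additional layers between this client and the data, but the request/response pattern is the same.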
  • 26. DISTRIBUTED COMPUTING
    Distributed computing implements a kind of concurrency. It is so tightly interrelated with concurrent programming that the two are sometimes not taught as distinct subjects.
    The Text
    The Concurrency
  • 27. DISTRIBUTED COMPUTING
    Multiprocessor systems
    A multiprocessor system is simply a computer that has more than one CPU on its motherboard.
    Multicore Systems
    Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a technology called Hyper-threading that allowed more than one thread (usually two) to run on the same CPU.
    Multicomputer Systems
    Computer Clusters
    A cluster consists of multiple stand-alone machines acting in parallel across a local high speed network.
    Grid computing
    A grid uses the resources of many separate computers, loosely connected by a network (usually the Internet), to solve large-scale computation problems.
    The Elaboration
    The Concurrency
  • 28. Technical Issues in DISTRIBUTED COMPUTING
    Technical Issues
  • 29. DISTRIBUTED COMPUTING
    The Text
    If not planned properly, a distributed system can decrease the overall reliability of computations: the unavailability of one node can disrupt the others.
    Leslie Lamport famously quipped that: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."
    Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes.
    The Text
    Technical Issues
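    One common defence against the failures described above is to wrap every remote call in a timeout with bounded retries, so one unavailable node cannot hang the whole computation. A minimal sketch, where `fetch_from_node` is a hypothetical stand-in for any network call:

```python
# Sketch: guard a call to a remote node with a timeout and bounded retries.
# `fetch_from_node` is a hypothetical stand-in for any remote request.
import socket

def fetch_from_node(host, port, timeout=2.0, retries=3):
    last_error = None
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as conn:
                return conn.recv(1024)
        except OSError as e:  # connection refused, timed out, unreachable, ...
            last_error = e
    # After all retries, fail loudly instead of blocking forever.
    raise ConnectionError(f"node {host}:{port} unreachable") from last_error
```

    The caller can then degrade gracefully (skip the node, reassign its task) rather than letting, in Lamport's words, a computer it didn't even know existed render its own unusable.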
  • 30. Languages and Projects
    Languages used to build distributed systems, and projects that have been implemented
  • 31. DISTRIBUTED COMPUTING
    The Text
    Remote procedure calls distribute function invocations over a network connection. Systems like CORBA, Microsoft DCOM, Java RMI and others try to map object-oriented design to the network.
    Loosely coupled systems communicate through intermediate documents that are typically human readable (e.g. XML, HTML, SGML, X.500, and EDI).
    The Text
    The Organization
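    A minimal RPC round-trip can be sketched with Python's standard-library XML-RPC modules, a much simpler cousin of the CORBA/DCOM/Java RMI systems mentioned above. The loopback server and the `add` procedure are illustrative:

```python
# Sketch: a remote procedure call with the standard-library XML-RPC modules.
# The client invokes `add`, which actually executes on the server.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)   # looks like a local call, runs remotely
print(result)
server.shutdown()
```

    The proxy object is what gives RPC its appeal: the network hop is hidden behind an ordinary-looking method call.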
  • 32. DISTRIBUTED COMPUTING
    Projects
    Folding@Home
    • Stanford University Chemistry Department Folding@home project
    • 33. Focused on simulations of protein folding to find disease cures and to understand biophysical systems.
    • 34. Folding@Home, as of August 2009, is sustaining over 7 PFLOPS.
    SETI@Home
    • Space Sciences Laboratory at the University of California, Berkeley
    • 35. Focused on analyzing radio-telescope data to find evidence of intelligent signals from space
    • 36. SETI@Home computes at an average of more than 528 TFLOPS.
    ReConsider The Facts
  • 37. DISTRIBUTED COMPUTING
    ReConsider The Facts
  • 38. Conclusion And Summary
    Implemented Distributed Computing
  • 39. DISTRIBUTED COMPUTING
    The Text
    • Distributed Computing has become a reality:
    • 40. Threads concept utilized everywhere.
    • 41. Clusters have emerged as popular data centers and processing engines:
    • 42. E.g., Google search engine.
    • 43. The emergence of commodity high-performance CPUs, networks, and OSs has made parallel computing applicable to enterprise applications.
    • 44. E.g., Oracle {9i,10g} database on Clusters/Grids.
    The Text
    The Organization
  • 45. DISTRIBUTED COMPUTING
    Questions ?
    Thank You For Listening
    Any Questions ?