Parallel Computing, Parallel Processing

A brief overview of parallel computing.

Transcript

  • 1. Parallel Computing
  • 2.
    • Traditionally, software has been written for serial computation:
      • To be run on a single computer having a single Central Processing Unit (CPU)
      • A problem is broken into a discrete series of instructions.
      • Instructions are executed one after another.
      • Only one instruction may execute at any moment in time.
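A minimal sketch of this serial model (illustrative code, not from the slides): a single instruction stream on one CPU, where each piece of work must finish before the next begins.

```python
# Serial computation: one stream of instructions executed one after another
# on a single CPU. Each chunk must finish before the next one starts.
def process(chunk):
    # stand-in for one discrete piece of work
    return sum(x * x for x in chunk)

data = [list(range(i * 1000, (i + 1) * 1000)) for i in range(4)]

results = []
for chunk in data:              # strictly one chunk at a time
    results.append(process(chunk))

print(sum(results))
```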
  • 3.
    • Limits to serial computing - both physical and practical reasons pose significant constraints to simply building ever faster serial computers:
      • Transmission speeds - the speed of a serial computer depends directly on how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
      • Limits to miniaturization - processor technology allows an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
      • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
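As a rough illustration of the transmission-speed limit, the distance a signal can travel in one clock cycle follows directly from the 30 cm/nanosecond figure above; the 3 GHz clock used here is only an assumed example.

```python
# How far a signal can travel in one clock cycle (assumes an example 3 GHz
# clock; the 30 cm/ns and 9 cm/ns figures are those quoted above).
speed_of_light_cm_per_ns = 30.0      # free space
copper_cm_per_ns = 9.0               # copper wire
clock_ghz = 3.0                      # assumed clock frequency
cycle_ns = 1.0 / clock_ghz           # ~0.33 ns per cycle

print(f"Light travels about {speed_of_light_cm_per_ns * cycle_ns:.1f} cm per cycle")
print(f"A signal in copper travels about {copper_cm_per_ns * cycle_ns:.1f} cm per cycle")
# => roughly 10 cm and 3 cm: the processing elements must sit within a few
#    centimetres of each other, which is why proximity matters.
```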
  • 4.
    • What is Parallel Computing?
    • In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
      • To be run using multiple CPUs
      • A problem is broken into discrete parts that can be solved concurrently
      • Each part is further broken down to a series of instructions
      • Instructions from each part execute simultaneously on different CPUs
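A minimal sketch of the same idea using Python's standard multiprocessing module (the chunking and function names are illustrative, not from the slides): the problem is broken into discrete parts, and each part runs in a separate worker process that can execute on its own CPU.

```python
# Parallel computation: the problem is split into discrete parts, and each
# part is processed concurrently by a separate worker process (one per CPU).
from multiprocessing import Pool

def process(chunk):
    # stand-in for the series of instructions applied to one part
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = [list(range(i * 1000, (i + 1) * 1000)) for i in range(4)]
    with Pool(processes=4) as pool:           # four workers, four CPUs
        results = pool.map(process, data)     # parts execute simultaneously
    print(sum(results))
```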
  • 5. Why Do Parallel Computing?
    • Limits of single CPU computing
      • Available memory
      • Performance
    • Parallel computing allows us to:
      • Solve problems that don’t fit on a single CPU’s memory space
      • Solve problems that can’t be solved in a reasonable time
    • We can run…
      • Larger problems
      • Faster
      • More cases
  • 6.
    • Parallel computing: use of multiple computers or processors working together on a common task.
      • Each processor works on its section of the problem
      • Processors can exchange information
    [Figure: a grid of the problem to be solved in x and y; CPU #1, #2, #3, and #4 each work on their own area of the grid and exchange data at the boundaries]
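A minimal sketch of the decomposition in the figure (assumed example, not the slide's code): the grid is split into strips, each "CPU" owns one strip, and neighbouring strips exchange their boundary rows before updating.

```python
# Domain decomposition with boundary ("halo") exchange, sketched with NumPy.
import numpy as np

grid = np.random.rand(8, 8)           # the whole problem grid
strips = np.array_split(grid, 4)      # CPU #1..#4 each own one strip

# Exchange step: each strip receives a copy of its neighbours' edge rows,
# which it needs in order to update its own boundary points.
halos = []
for i, strip in enumerate(strips):
    above = strips[i - 1][-1] if i > 0 else None               # from the CPU above
    below = strips[i + 1][0] if i < len(strips) - 1 else None  # from the CPU below
    halos.append((above, below))

# Each CPU can now update its strip independently (e.g. a Jacobi-style sweep),
# using only its own data plus the received halo rows.
```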
  • 7.  
  • 8. Basic Components of a Parallel (or Serial) Computer [Figure: several nodes, each consisting of a CPU paired with its own memory (MEM)]
  • 9. Classes of parallel computers Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These classes are not mutually exclusive.
  • 10. Multicore computing A multicore processor is a processor that includes multiple execution units ("cores"). These processors differ from superscalar processors, which can issue multiple instructions per cycle from one instruction stream (thread); by contrast, a multicore processor can issue multiple instructions per cycle from multiple instruction streams. Each core in a multicore processor can potentially be superscalar as well—that is, on every cycle, each core can issue multiple instructions from one instruction stream.
    • Symmetric multiprocessing
    • A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling; as a result, SMPs generally do not comprise more than 32 processors. Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists.
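A minimal shared-memory sketch (illustrative only): Python threads share one address space much as the processors in an SMP share one memory, so several instruction streams can update the same data, and access to it must be synchronized.

```python
# Shared-memory parallelism: all threads see the same memory, as the CPUs in
# an SMP do, so concurrent updates to shared data need a lock.
import threading

counter = 0                      # lives in the single shared address space
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 400000
```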
  • 11. Distributed computing A distributed computer (also known as a distributed memory multiprocessor) is a system in which the processing elements, each with its own memory, are connected by a network. Distributed computers are highly scalable.
    • Cluster computing
    • A cluster is a group of loosely coupled computers that work together so closely that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network.
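In a distributed-memory system no processor can read another's memory directly, so data moves by explicit messages over the network. A minimal sketch, assuming the third-party mpi4py package and an MPI installation are available (launched with something like `mpiexec -n 4 python script.py`); this is illustrative code, not from the slides.

```python
# Distributed memory: each process has its own memory; data is exchanged by
# explicit messages over the cluster's network (here via MPI).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()           # this process's id within the job
size = comm.Get_size()           # total number of processes

# Each process works on its own slice of the problem in its own memory...
local_sum = sum(range(rank * 1000, (rank + 1) * 1000))

# ...and the partial results are combined with a message-passing reduction.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```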
  • 12. Massive parallel processing A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but they are usually larger, typically with far more than 100 processors. In an MPP, each CPU contains its own memory and copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect.
    • Grid computing
    • Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency of the Internet, grid computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware, software that sits between the operating system and the application to manage network resources and standardize the software interface.
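An embarrassingly parallel problem is one whose tasks need no communication at all, which is why it tolerates the Internet's bandwidth and latency. A minimal local sketch using Python's standard concurrent.futures (the Monte Carlo example is illustrative, not from the slides):

```python
# Embarrassingly parallel: every task is independent, so no communication is
# needed between workers - the property grid computing relies on.
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(samples):
    # Monte Carlo: count random points that fall inside the unit circle
    rng = random.Random()
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    tasks = [250_000] * 8                      # 8 completely independent tasks
    with ProcessPoolExecutor() as pool:
        hits = sum(pool.map(count_hits, tasks))
    print("pi is approximately", 4 * hits / sum(tasks))
```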
  • 13.  
  • 14. Types of Parallelism : Two Extremes
    • Data parallel
      • Each processor performs the same task on different data
      • Example - grid problems
    • Task parallel
      • Each processor performs a different task
      • Example - signal processing
    • Most applications fall somewhere on the continuum between these two extremes
    • There are two other types of parallelism:
    • Bit-level parallelism
    • Instruction-level parallelism
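A minimal sketch contrasting the two extremes above (illustrative names and data, not from the slides): in the data-parallel version every worker runs the same function on a different block of data; in the task-parallel version each worker runs a different task.

```python
# Data parallelism vs. task parallelism, sketched with the standard library.
from concurrent.futures import ProcessPoolExecutor

def smooth(block):                       # the *same* operation on each block
    return [x * 0.5 for x in block]

def sort_stage(signal):                  # two *different* tasks (stand-ins
    return sorted(signal)                # for stages of a signal pipeline)

def gain_stage(signal):
    return [x * 2 for x in signal]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Data parallel: every worker applies smooth() to a different block
        blocks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
        data_parallel = list(pool.map(smooth, blocks))

        # Task parallel: each worker performs a different task on the signal
        signal = [3, 1, 2]
        futures = [pool.submit(sort_stage, signal), pool.submit(gain_stage, signal)]
        task_parallel = [f.result() for f in futures]

    print(data_parallel, task_parallel)
```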
  • 15.
    • Parallel Computers
    • Massively Parallel Processors (MPPs)
      • Large number of processors connected to gigabytes of memory via proprietary hardware
      • Enormous computing power ‘in a box’
      • Made by the likes of IBM, Cray, SGI
      • Very costly!
    • Distributed / Cluster Computing
      • A number of computers connected by a network are used to solve a single large problem
      • Could consist of ordinary workstations or even MPPs
      • Considerably cheaper to set up
  • 16.
    • Why Use Parallel Computing?
    • The primary reasons for using parallel computing:
      • Save time - wall clock time
      • Solve larger problems
      • Provide concurrency (do multiple things at the same time)
    • Other reasons might include:
      • Taking advantage of non-local resources - using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
      • Cost savings - using multiple "cheap" computing resources instead of paying for time on a supercomputer.
      • Overcoming memory constraints - single computers have very finite memory resources. For large problems, using the memories of multiple computers may overcome this obstacle.
  • 17. Cost Effectiveness of Parallel Processors
    • Currently the speed of off-the-shelf microprocessors is within one order of magnitude of the fastest serial computers, yet microprocessors cost many orders of magnitude less.
    • Connecting only a few microprocessors together yields a parallel computer with speed comparable to the fastest serial computers, at a much lower cost.
    • Connecting a large number of processors into a parallel computer overcomes the performance saturation of serial computers.
    • Parallel computers can therefore provide much higher raw computational power than the fastest serial computers.
    • What is needed are actual applications that can take advantage of this power.
  • 18. Parallel Computing Today [Images: a small-class Beowulf cluster and the Japanese Earth Simulator machine]