    Lecture1 Presentation Transcript

    • CS-416 Parallel and Distributed Systems
      Jawwad Shamsi
    • Course Outline
      Parallel Computing Concepts
      Parallel Computing Architecture
      Algorithms
      Parallel Programming Environments
    • Introduction
      Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
      To be run using multiple CPUs
      A problem is broken into discrete parts that can be solved concurrently
      Each part is further broken down to a series of instructions
      Instructions from each part execute simultaneously on different resources. (Source: llnl.gov; a minimal sketch follows below.)
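As a concrete illustration of the idea above (a problem broken into discrete parts that run at the same time on different resources), here is a minimal sketch in C using POSIX threads. The array, the two-way split, and the function name are illustrative choices, not material from the lecture.

      #include <pthread.h>
      #include <stdio.h>

      #define N 8

      static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
      static long partial[2];

      /* Each thread handles its own half of the array. */
      static void *sum_half(void *arg) {
          int id = *(int *)arg;
          long s = 0;
          for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++)
              s += data[i];
          partial[id] = s;
          return NULL;
      }

      int main(void) {
          pthread_t t[2];
          int ids[2] = {0, 1};

          for (int i = 0; i < 2; i++)           /* start both parts */
              pthread_create(&t[i], NULL, sum_half, &ids[i]);
          for (int i = 0; i < 2; i++)           /* wait for both parts */
              pthread_join(t[i], NULL);

          printf("total = %ld\n", partial[0] + partial[1]);
          return 0;
      }

Compile with a POSIX toolchain, for example gcc -pthread sum.c.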
    • Types of Processes
      Sequential processes: occur in a strict order; the next step cannot start until the current one is completed.
      Parallel processes: many events happen simultaneously.
    • Need for Parallelism
      Huge, complex problems
      Supercomputers
      Hardware
      Use of parallelization techniques
    • Motivation
      Solve complex problems in a much shorter time
      Fast CPU
      Large Memory
      High Speed Interconnect
      The interconnect, or interconnection network, is made up of the wires and cables that define how the multiple processors of a parallel computer are connected to each other and to the memory units
    • Applications
      Large data sets or large equations
      Seismic operations
      Geological predictions
      Financial Market
    • Parallel computing: more than one computation at a time using more than one processor.
      If one processor can perform the arithmetic in time t, then ideally p processors can perform it in time t/p.
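A worked form of the ideal case stated above, assuming the work divides evenly across the p processors and there is no communication overhead:

      S(p) = \frac{t}{t/p} = p

In practice the achieved speedup is lower, which is why the slide says "ideally".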
    • Parallel Programming Environments
      MPI
      Distributed Memory
      OpenMP
      Shared Memory
      Hybrid Model
      Threads
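A minimal MPI sketch of the distributed-memory model listed above; this is a generic "hello world" (not code from the course), and the shared-memory OpenMP model appears in the data-parallelism sketch further below.

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);               /* start the MPI runtime */

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
          MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

          printf("Hello from process %d of %d\n", rank, size);

          MPI_Finalize();
          return 0;
      }

Compile with mpicc and launch with, for example, mpirun -np 4 ./hello.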
    • How Much Parallelism?
      Decomposition: the process of partitioning a computer program into independent pieces that can be run simultaneously (in parallel). Minimal sketches of task and data parallelism follow below.
      Data Parallelism
      Task Parallelism
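Task parallelism is only named on the slide above; here is a minimal sketch, assuming OpenMP sections as one way to express two different tasks running concurrently (the task bodies are placeholders).

      #include <omp.h>
      #include <stdio.h>

      int main(void) {
          /* Two different tasks, each run by its own thread. */
          #pragma omp parallel sections
          {
              #pragma omp section
              printf("task A (e.g. read input) on thread %d\n",
                     omp_get_thread_num());

              #pragma omp section
              printf("task B (e.g. process previous block) on thread %d\n",
                     omp_get_thread_num());
          }
          return 0;
      }

Compile with OpenMP enabled, for example gcc -fopenmp tasks.c.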
    • Data Parallelism
      Same code segment runs concurrently on each processor
      Each processor is assigned its own part of the data to work on
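A minimal data-parallel sketch matching the slide above: the same loop body runs on every thread, and OpenMP assigns each thread its own portion of the index range. The vector addition itself is an illustrative choice, not from the lecture.

      #include <omp.h>
      #include <stdio.h>

      #define N 1000000

      int main(void) {
          static double a[N], b[N], c[N];

          for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

          /* Same code on every thread; each thread gets its own chunk of i. */
          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              c[i] = a[i] + b[i];

          printf("c[N-1] = %f, threads available: %d\n",
                 c[N - 1], omp_get_max_threads());
          return 0;
      }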
    • SIMD: Single Instruction, Multiple Data (see the sketch below)
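A minimal SIMD sketch, assuming an x86 CPU with SSE: a single _mm_add_ps instruction adds four pairs of floats at once. These are standard x86 intrinsics, not course material.

      #include <immintrin.h>
      #include <stdio.h>

      int main(void) {
          float a[4] = {1, 2, 3, 4};
          float b[4] = {10, 20, 30, 40};
          float c[4];

          __m128 va = _mm_loadu_ps(a);        /* load 4 floats */
          __m128 vb = _mm_loadu_ps(b);
          __m128 vc = _mm_add_ps(va, vb);     /* one instruction, 4 additions */
          _mm_storeu_ps(c, vc);

          for (int i = 0; i < 4; i++)
              printf("%.0f ", c[i]);
          printf("\n");
          return 0;
      }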
    • Increasing Processor Speed
      Greater number of transistors
      Operations can be done in fewer clock cycles
      Increased clock speed
      More operations per unit time
      Example
      8088/8086: 5 MHz, 29,000 transistors
      E6700 Core 2 Duo: 2.66 GHz, 291 million transistors
    • Multicore
      A multi-core processor is a single processor chip that contains two or more complete execution cores. Such chips are now the focus of Intel and AMD. A multi-core chip is a form of SMP.
    • Symmetric Multi-Processing
      Symmetric multiprocessing (SMP) is an arrangement in which two or more processors have equal access to the same memory. The processors may or may not be on one chip.
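A minimal sketch of asking the operating system how many cores an SMP or multi-core machine exposes; sysconf(_SC_NPROCESSORS_ONLN) is a common POSIX-style call on Linux and macOS, chosen here for illustration rather than taken from the lecture.

      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          /* Number of processor cores the OS currently has online. */
          long cores = sysconf(_SC_NPROCESSORS_ONLN);
          printf("online cores: %ld\n", cores);
          return 0;
      }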