Chapter 2
Classifications of Parallel Systems
• 2.1 Classification of parallel computer systems
• 2.2 SISD: Single Instruction Single Data; The Cray-1 Supercomputer
• 2.3 MISD
• 2.4 SIMD Systems; Synchronous parallelism > MPP (Massively Parallel Processors), Data parallel systems, DAP (The Distributed Array Processors) and The Connection Machine
• 2.5 MIMD Systems; Asynchronous parallelism > Transputers, SHARC and Cray T3E
• 2.6 Hybrid parallel computer systems; Multiple-pipeline, Multiple-SIMD, Systolic arrays, Wavefront arrays, Very Long Instruction Word (VLIW) and Same Program Multiple Data (SPMD)
• 2.7 Some parameters in parallel computers; Speedup, Efficiency, Latency and Grain size
• 2.8 Levels of Parallelism; Bit-level parallelism, Instruction-level parallelism, Procedure-level and Job- or program-level parallelism
• 2.9 Parallel operations; Monadic and Dyadic operations
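As a quick illustration of the speedup and efficiency parameters listed in 2.7, here is a minimal sketch assuming the usual definitions (speedup S = T1/Tp, efficiency E = S/p; the function names are mine):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp: serial time over parallel time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E = S / p: speedup per processor on p processors."""
    return speedup(t_serial, t_parallel) / p

# 100 s of serial work finishing in 25 s on 8 processors:
print(speedup(100, 25))        # 4.0
print(efficiency(100, 25, 8))  # 0.5
```

An efficiency well below 1.0, as here, usually points at communication latency or load imbalance, which ties into the latency and grain-size parameters in the same section.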
Chapter 4
Parallel Processing Concepts
• 4.1 Program flow mechanisms
• 4.2 Control flow versus data flow; A data flow architecture
• 4.3 Demand-driven mechanisms; Reduction machine model
• 4.4 Comparison of flow mechanisms
• 4.5 Coroutines; Fork and Join, Data flow, ParBegin and ParEnd
• 4.6 Processes; Remote Procedure Call
• 4.7 Implicit Parallelism; Explicit versus implicit parallelism
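The Fork and Join construct listed under 4.5 is language-independent; one possible sketch uses Python's `concurrent.futures` (my choice of library, not the chapter's):

```python
from concurrent.futures import ThreadPoolExecutor

def fork_join(tasks):
    """Fork one worker per task, then join: wait for all and collect results
    in submission order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]  # fork
        return [f.result() for f in futures]       # join

results = fork_join([lambda: 1 + 1, lambda: 2 * 3])
print(results)  # [2, 6]
```

ParBegin/ParEnd express the same idea as a structured block: everything between the two markers may run concurrently, and control continues only after all branches finish.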
Chapter 5
Network Structures
• 5.1 Introduction
• 5.2 System interconnection architectures
• 5.3 Network properties and routing
• 5.4 Node degree and Network diameter; Node degree, Network diameter and Average distance
• 5.5 Bisection width
• 5.6 Data routing functions; Perfect shuffle and exchange, Hypercube routing function, Broadcast and Multicast, and Network throughput
• 5.7 Network performance
• 5.8 Static networks; Point-to-point networks > Binary tree, Ternary tree and Quadtree, Fat tree, Linear arrays, Rings, Complete graph, Grid and Torus, AMP (A Minimum Path) systems and Hexagonal grid
• 5.9 Dynamic networks; Bus networks and Switch networks > Switch modules, Multi-stage networks, Delta networks or Omega networks, Clos networks and Crossbar networks
• 5.10 Comparison of networks
• 5.11 Summary of networks
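Two of the data routing functions named in 5.6 can be sketched directly, assuming the usual definitions: the perfect shuffle is a cyclic left shift of the node's binary address, and hypercube routing corrects differing address bits one dimension at a time (function names are mine):

```python
def perfect_shuffle(addr, bits):
    """Perfect-shuffle routing function: cyclic left shift of a node address."""
    mask = (1 << bits) - 1
    return ((addr << 1) | (addr >> (bits - 1))) & mask

def hypercube_route(src, dst):
    """Dimension-order (e-cube style) hypercube route: flip one differing
    address bit per hop. Returns the list of nodes visited."""
    path, node = [src], src
    diff, bit = src ^ dst, 0
    while diff:
        if diff & 1:
            node ^= 1 << bit
            path.append(node)
        diff >>= 1
        bit += 1
    return path

# In an 8-node (3-bit) network: shuffle of 110 is 101; route 000 -> 101:
print(bin(perfect_shuffle(0b110, 3)))  # 0b101
print(hypercube_route(0b000, 0b101))   # [0, 1, 5]
```

The number of hops in the hypercube route equals the Hamming distance between source and destination, which is what bounds the hypercube's network diameter at log2 of the node count.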
Chapter 6
Basic Parallelism and CPU
• 6.1 Introduction
• 6.2 SISD Computers
• 6.3 Hardware and software parallelism; Hardware parallelism and Software parallelism
• 6.4 The role of compilers
• 6.5 Communication latency
• 6.6 Grain packing and scheduling
• 6.7 Static multiprocessor scheduling
• 6.8 Node duplication
Chapter 7
Superscalar and Superpipeline Processors
• 7.1 MISD; Pipelining
• 7.2 Pipelining and Superscalar techniques
• 7.3 Linear Pipeline Processors; Asynchronous and Synchronous models, Asynchronous model, Synchronous model, Clocking and Timing control, Clock cycle and Throughput, Speedup, Efficiency and Optimal number of stages
• 7.4 Nonlinear Pipelines; Reservation and latency analysis, Reservation table, Latency analysis, Collision-free scheduling and Collision vector
• 7.5 Instruction Pipeline Design; Instruction execution phases, Pre-fetching buffers, Loop buffers
• 7.6 Arithmetic pipelines
• 7.7 Superscalar and Superpipeline design; Pipeline design parameters, Superscalar pipeline design, Superscalar performance, Superpipeline design and Superpipelined superscalar design
• 7.8 Supersymmetry and design tradeoffs
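The speedup and efficiency entries under 7.3 for a linear pipeline follow the standard formula S = nk / (k + n - 1) for n tasks through k stages; a minimal sketch (function names are mine):

```python
def pipeline_speedup(n, k):
    """Speedup of a k-stage linear pipeline on n tasks: S = n*k / (k + n - 1).
    The first task takes k cycles to fill the pipe; each later task adds one."""
    return n * k / (k + n - 1)

def pipeline_efficiency(n, k):
    """Efficiency E = S / k: fraction of stage-cycles doing useful work."""
    return pipeline_speedup(n, k) / k

# 64 tasks through a 4-stage pipeline:
print(pipeline_speedup(64, 4))     # ~3.82, approaching k = 4 as n grows
print(pipeline_efficiency(64, 4))  # ~0.96
```

For a single task (n = 1) the speedup is exactly 1, which is the fill-time penalty the "optimal number of stages" discussion trades off against clock-cycle reduction.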
Chapter 18
Other Models of Parallelism
• 18.1 Automatic parallelization and vectorization
• 18.2 Conditions of parallelism; Data, resource and control dependence
• 18.3 Bernstein’s Conditions
• 18.4 Data dependence (in loop operations)
• 18.5 Vectorization of a loop
• 18.6 Parallelization of a loop
• 18.7 Solving complex data dependencies
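Bernstein's Conditions (18.3) state that two statements may execute in parallel iff neither reads what the other writes and they write disjoint sets: I1 ∩ O2 = ∅, I2 ∩ O1 = ∅ and O1 ∩ O2 = ∅. A minimal check over input/output sets (function name is mine):

```python
def bernstein_parallel(in1, out1, in2, out2):
    """True iff the two statements satisfy Bernstein's conditions:
    I1 & O2 == I2 & O1 == O1 & O2 == empty set."""
    return (not (set(in1) & set(out2))
            and not (set(in2) & set(out1))
            and not (set(out1) & set(out2)))

# S1: a = b + c   S2: d = e + f  -> independent, can run in parallel
print(bernstein_parallel({'b', 'c'}, {'a'}, {'e', 'f'}, {'d'}))  # True
# S1: a = b + c   S2: b = a + 1  -> flow and anti dependence
print(bernstein_parallel({'b', 'c'}, {'a'}, {'a'}, {'b'}))       # False
```

The three violated intersections correspond to the flow, anti and output dependences of 18.2, which is what loop vectorization (18.5) and parallelization (18.6) must analyze.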
Chapter 19
Non-procedural Parallel
Programming Languages
• 19.1 Introduction
• 19.2 Lisp; Parallel language constructs
• 19.3 FP; The object domain, Primitive functions available and Program-forming operations available
• 19.4 Concurrent Prolog; Unification, Or-parallelism and And-parallelism
• 19.5 SQL
Chapter 20
Performance of Parallel Systems
• 20.1 Speedup (algorithmic penalty, implementation penalty, Amdahl and Gustafson)
• 20.2 Efficiency
• 20.3 Optimal speedup
• 20.4 Scaleup
• 20.5 MIMD versus SIMD
• 20.6 Validity of performance data
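The Amdahl and Gustafson bounds named under 20.1 can be sketched with their standard forms, assuming f is the serial fraction and p the processor count (function names are mine):

```python
def amdahl_speedup(serial_frac, p):
    """Amdahl's law: fixed workload, serial fraction f.
    S = 1 / (f + (1 - f) / p); bounded above by 1/f as p grows."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / p)

def gustafson_speedup(serial_frac, p):
    """Gustafson's law: workload scaled with p, serial fraction f
    of the parallel run. S = f + p * (1 - f); grows linearly in p."""
    return serial_frac + p * (1.0 - serial_frac)

# 10% serial work on 10 processors:
print(amdahl_speedup(0.1, 10))     # ~5.26
print(gustafson_speedup(0.1, 10))  # 9.1
```

The gap between the two numbers is the fixed-size versus scaled-size distinction: Amdahl holds the problem constant, Gustafson grows it with the machine, which is why the same serial fraction gives such different scaleup predictions.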