GROUP 6
ROLEX ZHYRONNE BATICAN
JHON VINCENT PEJANER
HANNAH GINGOYON
TOPIC
• PARALLEL PROCESSING
• TYPICAL MULTIPROCESSING CONFIGURATION
PARALLEL PROCESSING
WHAT IS PARALLEL PROCESSING?
Parallel processing is a computing technique that uses
multiple CPUs to handle different parts of a task at the
same time, reducing overall program time. It can be
performed on any system with multiple CPUs.
HOW DOES PARALLEL PROCESSING WORK?
• Parallel processing divides a task between at least two
microprocessors. Specialized software breaks a complex problem
down into components, designates a processor for each part, and
then reassembles the partial results to solve the initial challenge.
• Parallel processing breaks large tasks into smaller ones, and the
software keeps each processor updated on the others' progress,
ensuring efficient use of the available processing units. A minimal
sketch of this divide-and-reassemble pattern follows.
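Below is a minimal sketch of that pattern using Python's standard multiprocessing module; the data, chunking scheme, and worker count are invented for illustration. A large task (summing a big list) is split into chunks, each chunk is handled by a separate worker process, and the partial results are reassembled into the final answer.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """The work designated to one processor: sum one piece of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                               # illustrative worker count
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    with Pool(processes=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)   # each chunk runs in parallel

    print(sum(partials))   # reassembled result: 499999500000
```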
TYPES OF PARALLEL PROCESSING
• SINGLE INSTRUCTION, SINGLE DATA (SISD)
• MULTIPLE INSTRUCTION, SINGLE DATA (MISD)
• SINGLE INSTRUCTION, MULTIPLE DATA (SIMD)
• MULTIPLE INSTRUCTION, MULTIPLE DATA (MIMD)
• SINGLE PROGRAM, MULTIPLE DATA (SPMD)
• MASSIVELY PARALLEL PROCESSING (MPP)
SINGLE INSTRUCTION, SINGLE DATA (SISD)
SISD computing involves a single processor managing a single
algorithm and data source, like a conventional serial computer. It may
or may not support parallel processing.
MULTIPLE INSTRUCTION, SINGLE DATA (MISD)
MISD computers use multiple processors that share the same input
data, allowing different operations to be carried out simultaneously on
the same batch; the number of processors determines how many
operations can run at once.
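A toy illustration of the MISD idea in Python (the functions and data are invented for this sketch): several worker processes each apply a different instruction stream to the same batch of input data.

```python
from concurrent.futures import ProcessPoolExecutor

# Three different "instruction streams" for the same data.
def minimum(batch): return min(batch)
def maximum(batch): return max(batch)
def total(batch):   return sum(batch)

if __name__ == "__main__":
    batch = [7, 2, 9, 4]                      # the single shared data source
    operations = [minimum, maximum, total]    # one operation per processor

    with ProcessPoolExecutor(max_workers=len(operations)) as pool:
        futures = [pool.submit(op, batch) for op in operations]
        print([f.result() for f in futures])  # [2, 9, 22]
```

With more processors, more operations could run against the same batch at once, matching the point above.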
SINGLE INSTRUCTION, MULTIPLE DATA (SIMD)
SIMD computers use multiple processors that all execute identical
instructions, each on its own unique data set. The processors are
supervised by a single control unit, which communicates with every
CPU simultaneously.
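The same idea can be seen in miniature with NumPy, used here as a stand-in: on supporting hardware, its vectorized operations are typically implemented with SIMD instructions, so one instruction is applied to many data elements at once with no explicit loop.

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
result = 2.0 * data + 1.0     # one instruction, multiple data elements
print(result)                 # [3. 5. 7. 9.]
```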
MULTIPLE INSTRUCTION, MULTIPLE DATA (MIMD)
MIMD computers have multiple processors that can each accept
instructions and draw data from different streams. They can run
multiple tasks simultaneously, although developing the sophisticated
algorithms needed to exploit them is challenging. Because MIMD
computers allow interaction between their processors, they are more
adaptable than SIMD or MISD computers.
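A small MIMD-style sketch (the function names and inputs are invented for illustration): independent processes run different instruction streams on different data streams at the same time.

```python
from concurrent.futures import ProcessPoolExecutor

def factor_count(n):
    """One instruction stream working on one data stream (an integer)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def word_lengths(text):
    """A completely different program working on different data (a string)."""
    return [len(w) for w in text.split()]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(factor_count, 360)
        f2 = pool.submit(word_lengths, "multiple instruction multiple data")
        print(f1.result(), f2.result())   # 24 [8, 11, 8, 4]
```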
SINGLE PROGRAM, MULTIPLE DATA (SPMD)
SPMD systems, a subset of MIMD, use message-passing
programming on distributed-memory computer systems. Each node
launches the same application and communicates with the others
through send/receive routines; messages can also be used for barrier
synchronization and can be transferred via various communication
techniques.
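A hedged SPMD sketch using mpi4py (this assumes an MPI installation and the mpi4py package; run with something like `mpiexec -n 4 python spmd_demo.py`). Every node launches the same program; each rank works on its own slice of the data, the ranks exchange partial results with send/receive routines, and all of them meet at a barrier.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this node's identity
size = comm.Get_size()          # how many nodes launched the program

# Each rank sums its own slice of the data (slice bounds are illustrative).
local = sum(range(rank * 100, (rank + 1) * 100))

if rank == 0:
    total = local
    for source in range(1, size):
        total += comm.recv(source=source)   # receive each partial result
    print("total:", total)
else:
    comm.send(local, dest=0)                # message passing to rank 0

comm.Barrier()                              # barrier synchronization
```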
MASSIVELY PARALLEL PROCESSING (MPP)
Massively Parallel Processing (MPP) is an architecture that
coordinates the execution of a program across many processors,
allowing for faster data handling and analysis. MPP systems can be
built from SIMD and MIMD computers, with clusters created by
connecting multiple processors. In the most extensive parallel
systems, each networked computer can act as one processor of a
single virtual supercomputer, an arrangement known as grid
computing.
TYPICAL MULTIPROCESSING CONFIGURATION
WHAT IS A TYPICAL MULTIPROCESSING CONFIGURATION?
Multiprocessing involves multiple processors working
together, and they must be configured properly to prevent
problems. There are three common configuration types:
master/slave, loosely coupled, and symmetric.
MASTER/SLAVE CONFIGURATION
The master/slave configuration is essentially a single-processor
system with additional slave processors, each managed by the
primary master processor. It is an asymmetrical system, suitable for
computing environments where processing time is divided between
front-end and back-end processors. A sketch of the pattern follows
the list below.
ADVANTAGE:
• It is simple to understand.
DISADVANTAGE:
• It is only as reliable as a single-processor system: if the master
processor fails, the entire system fails.
• It creates more overhead. There are situations where the slave
processors finish their work and sit idle before the master
processor can assign them another task, wasting valuable
processing time.
• After completing each task, a slave processor interrupts the
master processor for operating-system intervention, such as
I/O requests. This creates long queues at the master processor.
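The sketch below mimics the master/slave pattern with Python's multiprocessing module (queue names and task payloads are illustrative). The master alone owns the task queue and all assignment decisions; the slaves only execute what they are handed and report back, which is exactly why idle slaves must wait on the master and why the master becomes a bottleneck.

```python
from multiprocessing import Process, Queue

def slave(task_queue, result_queue):
    """A slave does only what the master assigns, then reports back."""
    while True:
        task = task_queue.get()
        if task is None:                   # the master's shutdown signal
            break
        result_queue.put(task * task)      # perform the assigned work

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    slaves = [Process(target=slave, args=(tasks, results)) for _ in range(3)]
    for s in slaves:
        s.start()

    for t in range(10):                    # the master assigns every task
        tasks.put(t)
    for _ in slaves:
        tasks.put(None)                    # one shutdown signal per slave

    print(sorted(results.get() for _ in range(10)))   # collected results
    for s in slaves:
        s.join()
```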
LOOSELY COUPLED CONFIGURATION
In this type of configuration, there are several complete computer
systems, each with its own memory, I/O devices, CPU, and operating
system.
ADVANTAGE:
• It isn't prone to catastrophic failure: if one system fails, the
others can continue working.
DISADVANTAGE:
• It is difficult to detect whether a processor has failed; a toy
detection sketch follows.
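Because the systems are separate complete computers, a failed one simply goes silent. One common way to notice this, sketched here as a toy (the node names, timestamps, and timeout are invented), is a heartbeat: each system periodically reports in, and a heartbeat missing past a timeout marks that system as a suspected failure.

```python
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds; the right value is system-specific

# In a real loosely coupled system these timestamps would arrive over a
# network; here they are faked for illustration.
last_heartbeat = {
    "node-a": time.time(),          # node-a just reported in
    "node-b": time.time() - 12.0,   # node-b has gone quiet
}

def suspected_failures(heartbeats, now):
    """Flag every system whose heartbeat is older than the timeout."""
    return [node for node, seen in heartbeats.items()
            if now - seen > HEARTBEAT_TIMEOUT]

print(suspected_failures(last_heartbeat, time.time()))   # ['node-b']
```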
SYMMETRIC CONFIGURATION
In the symmetric configuration, processor scheduling is decentralized.
A single copy of the OS, along with a table listing each process and its
status, is stored in memory that is common to and accessible by all
the processors, so each processor can use the scheduling algorithms
to decide which job to run next.
ADVANTAGE:
• It is more reliable than the loosely coupled configuration.
• It uses resources effectively.
• It balances the job load well.
• It can degrade gracefully in the event of a failure.
DISADVANTAGE:
• When processes are interrupted, their processors must update the shared
process list, and these updates can conflict with one another. The chance of
several processors simultaneously attempting operations such as I/O requests
also increases.
• Implementing this configuration is challenging, because the system must be
well synchronized to prevent races and deadlocks; the lock in the sketch
below illustrates the kind of synchronization required.
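The miniature sketch below (the job table and locking scheme are invented for illustration) shows the symmetric idea: every worker runs the same scheduling code and claims its own next job from one shared table. The Lock around the shared table is the synchronization the disadvantage above warns about; without it, two workers could claim the same job, which is a race.

```python
from multiprocessing import Process, Manager, Lock

def worker(jobs, done, lock):
    """Every processor runs this same code and schedules itself."""
    while True:
        with lock:             # protect the shared table from races
            if not jobs:
                return         # nothing left to run
            job = jobs.pop(0)  # claim the next job ourselves
        done.append(job * 10)  # run the claimed job

if __name__ == "__main__":
    with Manager() as mgr:
        jobs = mgr.list(range(8))   # the shared job/process table
        done = mgr.list()
        lock = Lock()
        workers = [Process(target=worker, args=(jobs, done, lock))
                   for _ in range(3)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(sorted(done))         # each job ran exactly once
```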
SUMMARY OF THE REPORT
Parallel processing uses multiple CPUs to work on different parts of
a problem at the same time, and its main forms are SISD, MISD,
SIMD, MIMD, SPMD, and MPP. When several processors are
combined in one system, they are typically organized in one of three
configurations, master/slave, loosely coupled, or symmetric, each
with its own trade-offs in simplicity, reliability, and synchronization.