2. MULTIPLE-PROCESSOR TRACKS
A multiple-processor system can be either a shared-memory multiprocessor or
a distributed-memory multicomputer
Shared-memory track
A single address space spans the entire system
The track started with the CMU C.mmp
3. NYU Ultracomputer & Illinois Cedar
Both were developed with a single address space
Both systems used multistage networks as the system interconnect
The Ultracomputer introduced the combining network, which merges concurrent
fetch-and-add requests inside the network switches, for fast synchronization
among multiple processors
The NYU Ultracomputer was developed under Allan Gottlieb in 1983
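The combining network's key primitive was fetch-and-add, which atomically returns a shared counter's old value while adding to it; in hardware, concurrent requests to the same word were merged inside the switches. A minimal software sketch of the primitive and a typical use (handing out unique work indices to parallel workers) — a lock stands in for the hardware combining, and all names are illustrative:

```python
import threading

class FetchAndAdd:
    """Software model of the Ultracomputer's fetch-and-add primitive.

    In the real machine the combining network merged simultaneous
    fetch-and-add requests to one memory word, so many processors could
    update a shared counter without serializing at the memory module.
    Here a lock provides the atomicity instead."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, increment):
        # Atomically return the old value and apply the increment.
        with self._lock:
            old = self._value
            self._value += increment
            return old

# Typical use: each worker claims a unique index from a shared counter.
counter = FetchAndAdd()
claimed = []
def worker():
    claimed.append(counter.fetch_and_add(1))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every index 0..7 is claimed exactly once, with no duplicates.
```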
4. The Illinois Cedar project, led by David Kuck, was completed in 1987
Cedar is a cluster-based shared-memory multiprocessor
The system consists of four clusters connected through two unidirectional
interconnection networks to a globally shared memory
The project's major achievements were in parallel compilers and
performance-benchmarking experiments
5. Stanford Dash
NUMA multiprocessor
Distributed memories forming a global address space
Cache coherence is enforced with distributed directories
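In a directory-based scheme like Dash's, each memory block has a directory entry recording which caches hold a copy, so invalidations are sent only to actual sharers rather than broadcast. A minimal sketch of one entry's read/write-miss handling — the class and method names are illustrative, not Dash's actual protocol states:

```python
class DirectoryEntry:
    """Directory state for one memory block: the set of caches holding
    a clean (shared) copy, plus at most one cache owning a dirty copy."""

    def __init__(self):
        self.sharers = set()  # cache ids with a clean copy
        self.owner = None     # cache id with the dirty copy, if any

    def read(self, cache_id):
        """Handle a read miss; return the cache that must write back, if any."""
        prev_owner = self.owner
        if prev_owner is not None:
            # The dirty copy reverts to shared after supplying the data.
            self.sharers.add(prev_owner)
            self.owner = None
        self.sharers.add(cache_id)
        return prev_owner

    def write(self, cache_id):
        """Handle a write miss; return the caches that must be invalidated."""
        to_invalidate = self.sharers - {cache_id}
        if self.owner is not None and self.owner != cache_id:
            to_invalidate.add(self.owner)
        self.sharers = set()
        self.owner = cache_id
        return to_invalidate

entry = DirectoryEntry()
entry.read(0)
entry.read(1)
print(entry.write(2))  # only caches 0 and 1 receive invalidations
```

Because the directory knows the exact sharer set, coherence traffic stays point-to-point, which is what lets such a design scale past a snooping bus.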
6. Fujitsu VPP500
222-processor system with a crossbar interconnect
The shared memory is distributed across all processor nodes
KSR-1
A typical COMA (cache-only memory architecture) machine
Kendall Square Research, 1990
7. IBM RP3 & BBN Butterfly
These are two large-scale multiprocessors
Both use multistage networks but with different interstage connections
RP3, the Research Parallel Processing Prototype, was a research vehicle for
exploring the hardware and software aspects of highly parallel computation.
RP3 was a shared-memory machine that was designed to be scalable to 512
processors; a 64-processor machine was in operation from October 1988
through March 1991
8. The BBN Butterfly was a massively parallel computer built by Bolt, Beranek
and Newman in the 1980s.
Each machine had up to 512 CPUs, each with local memory; the CPUs could be
interconnected so that every CPU had access to every other CPU's memory,
although with substantially greater latency (roughly 15:1) than for its own.
The CPUs were commodity microprocessors.
The memory address space was shared.
10. Cosmic Cube
The Caltech Cosmic Cube was a parallel computer developed by Charles
Seitz and Geoffrey Fox from 1981 onward
It led the development of message-passing multicomputers
It was an early attempt to capitalise on VLSI to speed up scientific
calculations at a reasonable cost
11. Intel has produced a series of medium-grain hypercube computers (the iPSCs)
The nCUBE 2 also uses a hypercube configuration
The Paragon is the latest Intel system
On the research track, the Caltech Mosaic and the MIT J-Machine are two
fine-grain multicomputers
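All of these hypercube machines share one wiring rule: node i is directly linked to the d nodes whose binary labels differ from i in exactly one bit, so neighbor lookup and routing reduce to XOR. A small illustrative sketch (the function name is ours, not from any of these systems):

```python
def hypercube_neighbors(node, dimension):
    """Return the neighbors of `node` in a binary d-cube: flip each of
    the d address bits in turn.  Any two nodes are at most d hops apart."""
    return [node ^ (1 << bit) for bit in range(dimension)]

# A 64-node machine is a 6-cube: node 0 links to the six powers of two.
print(hypercube_neighbors(0, 6))  # -> [1, 2, 4, 8, 16, 32]
```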
13. MULTIVECTOR & SIMD TRACKS
Multivector track
These are the traditional vector supercomputers
The CDC 7600 was the first vector dual-processor system
The Cray and Japanese supercomputers followed the register-to-register
architecture
The Cray 1 led multivector development in 1978
The latest Cray/MPP is a massively parallel system with distributed shared
memory
It works as a back-end accelerator to the Cray Y-MP series
16. The SIMD Track
The Illiac IV led the construction of SIMD computers
The Goodyear MPP, the AMT/DAP610, and the TMC/CM-2 are all SIMD
machines built with bit-slice PEs
The CM-5 is a synchronized MIMD machine that can execute in a
multiple-SIMD mode
17. Word-wide PEs are used in:
The BSP, a shared-memory SIMD machine built with 16 processors
synchronously updating a group of 17 memory modules
The GF11, developed at the IBM Watson Laboratory for scientific
simulation research
The MasPar MP1, the only medium-grain SIMD computer currently in
production use
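The BSP's choice of 17 memory modules — one more than its 16 processors, and prime — made strided vector accesses spread across all modules instead of colliding in a few. The effect can be sketched in Python (the function is illustrative, not BSP hardware):

```python
def banks_touched(start, stride, count, num_banks):
    """Set of memory banks hit by `count` accesses at a fixed stride,
    with addresses interleaved across `num_banks` banks."""
    return {(start + i * stride) % num_banks for i in range(count)}

# With a power-of-two bank count, power-of-two strides collide badly:
print(len(banks_touched(0, 8, 16, 16)))  # -> 2 banks carry all the traffic
# With 17 (prime) banks, every stride from 1 to 16 is coprime to 17,
# so the same 16 accesses land in 16 distinct banks, conflict-free:
print(len(banks_touched(0, 8, 16, 17)))  # -> 16
```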