3. Parallel computing refers to the process of breaking down larger problems into smaller, independent, often
similar parts that can be executed simultaneously by multiple processors, which communicate through shared
memory or by passing messages; the results are combined upon completion as part of an overall algorithm.
The primary goal of parallel computing is to increase the available computation power for faster application
processing and problem solving.
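The divide-execute-combine idea above can be sketched in a few lines. The following is a minimal illustration (not a full parallel algorithm) using Python's standard multiprocessing module: the problem of summing the numbers 1 to 100 is split into four independent chunks, each chunk is handed to a worker process, and the partial results are combined at the end. The chunk size and process count are arbitrary choices for the example.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves one independent sub-problem.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 101))                            # overall problem: sum 1..100
    chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # break into 4 independent parts
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)          # parts execute simultaneously
    total = sum(partials)                                 # combine the results
    print(total)  # 5050
```

Note that Pool workers communicate by sending pickled objects between processes rather than through shared memory; a shared-memory variant would instead use threads or multiprocessing.shared_memory.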
7. 2. NUMA (NON-UNIFORM MEMORY ACCESS)
In the NUMA multiprocessor model, the access time varies with the location of
the memory word. Here the shared memory is physically distributed among all the
processors in the form of local memories, so a processor reaches its own local
memory faster than the local memory of another processor.
8. 3. COMA (CACHE-ONLY MEMORY ARCHITECTURE)
The COMA model is a special case of the non-uniform memory access model in
which all the distributed local memories are converted into cache memories. Data
can migrate and be replicated across these caches as needed, but has no fixed
home location in a main memory.
9. Distributed memory
refers to a multiprocessor computer system in which each processor has its own private memory;
processors exchange data only by passing messages over an interconnection network.
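As a minimal sketch of the distributed-memory idea, the example below (again using Python's standard multiprocessing module as a stand-in for separate machines) gives each process its own private data and lets them cooperate only by sending messages over a Pipe; neither process can read the other's memory directly. The worker function and the data values are invented for illustration.

```python
from multiprocessing import Process, Pipe

def worker(conn, local_data):
    # local_data lives in this process's private memory; the only way
    # to share the result is to send it as a message.
    conn.send(sum(local_data))
    conn.close()

if __name__ == "__main__":
    parent_a, child_a = Pipe()
    parent_b, child_b = Pipe()
    p1 = Process(target=worker, args=(child_a, [1, 2, 3]))
    p2 = Process(target=worker, args=(child_b, [4, 5, 6]))
    p1.start(); p2.start()
    total = parent_a.recv() + parent_b.recv()  # combine results received as messages
    p1.join(); p2.join()
    print(total)  # 21
```

On real distributed-memory clusters the same pattern is typically expressed with a message-passing library such as MPI, but the structure (private memory plus explicit send/receive) is the same.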