
# Parallel Computing



2. K. Narayana (08Q61A0575)
3. Execution example:

   ```cpp
   // Fragment as shown on the slide; `d` (a dictionary-like object)
   // and `s` (a string) are declared elsewhere and not shown here.
   int main() {
       for (int i = 0; d.get_meaning(i, s) != 0; ++i)
           cout << (i + 1) << ": " << s << "\n";
       return 0;
   }
   ```
7. Traditionally, software has been written for serial computation.
9. Load balancing
11. Parallel Computer Memory Architectures
12. Parallel Computer Memory Architectures: Distributed Memory
13. There are different ways to classify parallel computers. Flynn's taxonomy classifies them along the two independent dimensions of Instruction and Data:
    - SISD – Single Instruction, Single Data
    - SIMD – Single Instruction, Multiple Data
    - MISD – Multiple Instruction, Single Data
    - MIMD – Multiple Instruction, Multiple Data
14. SISD
15. SIMD
16. MISD
17. MIMD
18. Advantages: in the simplest sense, parallel computing is the simultaneous use of multiple CPUs to solve a single problem.
    - A problem is broken into discrete parts that can be solved concurrently
    - Each part is further broken down into a series of instructions
    - Instructions from each part execute simultaneously on different CPUs
19. Parallel computing overheads:
    - Synchronization
    - Problem decomposition
    - Data dependencies
21. Problem decomposition
22. Data dependencies:

    ```
    1: function Dep(a, b)
    2:     c := a·b
    3:     d := 3·c          // depends on c computed in line 2
    4: end function

    1: function NoDep(a, b)
    2:     c := a·b
    3:     d := 3·b
    4:     e := a+b          // lines 2–4 are mutually independent
    5: end function
    ```
23. Conclusion:
    - Parallel computing is fast.
    - There are many different approaches and models of parallel computing.
    - Parallel computing lets us solve larger problems.
    - Parallel computing is the future of computing.