
High-Performance Computing and OpenSolaris

What high-performance computing is, with examples of OpenMP, MPI, and Pthreads, and how to obtain HPC benefits using OpenSolaris.
This version was presented at OpenSolaris Day in Porto Alegre, Brazil, on April 16, 2008.



  1. High-Performance Computing and OpenSolaris
     - Silveira Neto
     - Sun Campus Ambassador
     - Federal University of Ceará
     - ParGO - Parallelism, Graphs, and Combinatorial Optimization Research Group
  2. Agenda
     - Why should programs run faster?
     - How can programs run faster?
     - High Performance Computing
       - Motivation
       - Computer Models
       - Approaches
     - OpenSolaris
       - What it is, advantages, and tools
  3. Application Area Share
     - Top500 stats: application area share for November 2007
  4. Computational Fluid Dynamics
  5. Finite Element Analysis
  6. Serial Computation
     [Diagram: a problem broken into a stream of instructions fed to one CPU]
  7. Serial Computation
     - Single computer, single CPU.
     - Problem broken into a discrete series of instructions.
     - One instruction executed at a time.
  8. Parallel Computing
     [Diagram: a problem split into parts, each with its own instruction stream on a separate CPU]
  9. Parallel Computing
     - Simultaneous use of multiple compute resources to solve a computational problem.
     - Compute resources can include:
       - a single computer with multiple processors,
       - multiple computers connected by a network,
       - or both.
  10. Flynn's Taxonomy
     - Classifies architectures along two axes, instruction stream and data stream, each single or multiple:
       - SISD: Single Instruction, Single Data
       - SIMD: Single Instruction, Multiple Data
       - MISD: Multiple Instruction, Single Data
       - MIMD: Multiple Instruction, Multiple Data
  11. SISD
     - Single Instruction, Single Data
       CPU:   LOAD A; LOAD B; C = A+B; STORE C
  12. SIMD
     - Single Instruction, Multiple Data
       CPU 1: LOAD A[0]; LOAD B[0]; C[0] = A[0]+B[0]; STORE C[0]
       CPU 2: LOAD A[1]; LOAD B[1]; C[1] = A[1]+B[1]; STORE C[1]
       CPU n: LOAD A[n]; LOAD B[n]; C[n] = A[n]+B[n]; STORE C[n]
  13. MISD
     - Multiple Instruction, Single Data
       CPU 1: LOAD A[0]; C[0] = A[0]*1; STORE C[0]
       CPU 2: LOAD A[1]; C[1] = A[1]*2; STORE C[1]
       CPU n: LOAD A[n]; C[n] = A[n]*n; STORE C[n]
  14. MIMD
     - Multiple Instruction, Multiple Data
       CPU 1: LOAD A[0]; C[0] = A[0]*1; STORE C[0]
       CPU 2: X = sqrt(2); C[1] = A[1]*X; method(C[1]);
       CPU n: something(); W = C[n]**X; C[n] = 1/W
  15. Parallel Programming Models
     - Shared Memory
     - Threads
     - Message Passing
     - Data Parallel
     - Hybrid
  16. Threads Model
     [Diagram: several CPUs sharing a single memory]
  17. Threads Model
  18. Message Passing Model
     [Diagram: CPUs, each with its own memory, exchanging messages such as x=10 over a network]
  19. Hybrid Model
     [Diagram: networked nodes, some with multiple CPUs sharing one memory, passing messages such as x=10 between nodes]
  20. Amdahl's Law
     - Speedup = 1 / (1 - P), where P is the fraction of code that can be parallelized.
  21. Amdahl's Law
     - With N processors: Speedup = 1 / (P/N + S), where P is the parallel fraction, N the number of processors, and S = 1 - P the serial fraction.
  22. Integration
     [Diagram: numerical integration over an interval [a, b], with subintervals divided between two CPUs]
  23. Pi Calculation
     - Madhava of Sangamagrama (1350-1425)
     [Diagram: terms of the pi series divided between two CPUs]
  24. MPI
     - Message Passing Interface
     - A specification for message-passing libraries
     - Implementations allow many computers to communicate with one another; widely used in computer clusters.
  25. MPI Simplest Example
     $ mpicc hello.c -o hello
     $ mpirun -np 5 hello
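The source of `hello.c` was not captured in this transcript. A minimal MPI program consistent with the compile and run commands on the slide might look like this (the body is a sketch, not the original slide code):

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal MPI program: each of the processes launched by
 * "mpirun -np 5 hello" prints its own rank. */
int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

With `-np 5`, five copies of the program run, each printing a different rank from 0 to 4.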
  26. MPI_Send
  27. MPI_Recv
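Slides 26 and 27 showed code for MPI_Send and MPI_Recv that the transcript did not capture. A minimal point-to-point exchange using those two calls could look like this (a sketch, run with at least 2 processes):

```c
#include <stdio.h>
#include <mpi.h>

/* Rank 0 sends one integer to rank 1 with MPI_Send;
 * rank 1 receives it with MPI_Recv. */
int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        /* args: buffer, count, datatype, dest rank, tag, communicator */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocks until a matching message from rank 0 arrives */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```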
  28. OpenMP
     - Open Multi-Processing
     - Multi-platform shared-memory multiprocessing programming on many architectures
     - Available in GCC since version 4.2
     - C/C++/Fortran
  29. Creating an OpenMP Thread
     $ gcc -fopenmp hello.c -o hello
     $ ./hello
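The `hello.c` used on this slide was not captured. A minimal OpenMP program matching the commands might be the following sketch (note that GCC enables OpenMP with `-fopenmp`; without it the pragma is ignored):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* A parallel region: each thread executes this block once. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(),   /* this thread's id     */
               omp_get_num_threads()); /* threads in the region */
    }
    return 0;
}
```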
  30. Creating Multiple OpenMP Threads
  31. Pthreads
     - POSIX Threads
     - POSIX standard for threads
     - Defines an API for creating and manipulating threads
  32. Pthreads
     $ g++ -o threads threads.cpp -lpthread
     $ ./threads
  33. Java Threads
     - java.lang.Thread
     - java.util.concurrent
  34. What is OpenSolaris?
     - An open development effort based on the source code of the Solaris Operating System.
     - A collection of source bases (consolidations) and projects.
  35. OpenSolaris
     - ZFS
     - DTrace
     - Containers
     - HPC Tools
       - DTrace for Open MPI
  36. Sun HPC ClusterTools
     - A comprehensive set of capabilities for parallel computing.
     - An integrated toolkit that lets developers create and tune MPI applications that run on high-performance clusters and SMPs.
  37. Academy, Industry, and Market
  38. References
     - Introduction to Parallel Computing, Blaise Barney, Livermore Computing, 2007.
     - Flynn, M., Some Computer Organizations and Their Effectiveness, IEEE Trans. Comput., Vol. C-21, p. 948, 1972.
     - Top500 Supercomputer Sites, Application Area Share for 11/2007.
     - Wikipedia's article on Finite Element Analysis.
     - Wikipedia's article on Computational Fluid Dynamics.
     - Slides: An Introduction to OpenSolaris, Peter Karlson.
     - Wikipedia's article on OpenMP.
  39. References
     - Wikipedia's article on MPI.
     - Wikipedia's article on Pthreads.
     - Wikipedia's article on Madhava of Sangamagrama.
     - Distributed Systems Programming, Heriot-Watt University.
  40. Thank you!
     José Maria Silveira Neto
     Sun Campus Ambassador
     [email_address]
     “open” artwork and icons by chandan: