Secrets of supercomputing


  1. Secrets of Supercomputing: The Conservation Laws
     Supercomputing Challenge Kickoff, October 21-23, 2007
     I. Background to Supercomputing
     II. Get Wet! With the Shallow Water Equations
     Bob Robey -- Los Alamos National Laboratory
     Randy Roberts -- Los Alamos National Laboratory
     Cleve Moler -- Mathworks
     LA-UR-07-6793. Approved for public release; distribution is unlimited.
  2. Introductions
     • Bob Robey -- Los Alamos National Lab, X Division
       - [email_address], 665-9052 or home: [email_address], 662-2018
       - 3D hydrocodes and parallel numerical software
       - Helped found the UNM and Maui High Performance Computing Centers and the Supercomputing Tutorials
     • Randy Roberts -- Los Alamos National Lab, D Division
       - Java, C++, numerical and agent-based modeling
       - [email_address]
     • Cleve Moler
       - Matlab founder
       - Former UNM CS Dept chair
       - SIAM president
       - Author of "Numerical Computing with Matlab" and "Experiments with Matlab"
  3. Conservation Laws
     • Formulated in terms of a conserved quantity:
       - mass
       - momentum
       - energy
     • A good reference is Leveque's books and his freely available software package CLAWPACK (Fortran/MPI), plus a 2D shallow water version, Tsunamiclaw.
     References:
       Leveque, Randall, Numerical Methods for Conservation Laws
       Leveque, Randall, Finite Volume Methods for Hyperbolic Problems
       CLAWPACK: http://www.amath.washington.edu/~claw/
       Tsunamiclaw: http://www.math.utah.edu/~george/tsunamiclaw.html
     [Equation graphic: the change in the conserved variable balances the flux.]
  4. I. Intro to Supercomputing
     • Classical definition of supercomputing:
       - Harnessing many processors to do many small calculations
     • There are many other definitions, which usually include any computing beyond the norm:
       - Includes new techniques in modeling, visualization, and higher-level languages
     • Question for thought: With greater CPU resources, is it better to save programmer work or to make the computer do bigger problems?
  5. II. Calculus Quickstart: Decoding the Language of Wizards
  6. Calculus Quickstart Goals
     • Calculus is a language of mathematical wizards. It is convenient shorthand, but not easy to understand until you learn the secrets of the code.
     • Our goal is for you to be able to READ calculus and TALK calculus.
     • The goal is not to ANALYTICALLY SOLVE calculus using traditional methods. In supercomputing we generally solve problems by brute force.
  7. Calculus Terminology
     • Two branches of calculus:
       - integral calculus
       - differential calculus
     • P = f(x, y, t)
       - Population is a function of x, y, and t
     • ∫ f(x)dx -- definite integral, area under the curve, or summation
     • dP/dx -- derivative, instantaneous rate of change, or slope of a function
     • ∂P/∂x -- partial derivative, implying that P is a function of more than one variable
  8. Matrix Notation
     The first set of terms are the state variables at time t, usually called U. The second set of terms are the flux variables in space x, usually referred to as F. This is just a system of equations:
       a + c = 0
       b + d = 0
     [Equation graphic labels the first group of terms U and the second group F.]
  9. Patterns for Parallel Programming
     Parallel algorithms:
     • Data parallel -- most common with MPI
     • Master/worker -- one process hands out the work to the other processes; great load balance, good with threads
     • Pipeline -- bucket brigade
     Implementation patterns:
     • Message passing
     • Threads
     • Shared memory
     • Distributed arrays, global arrays
     Reference: Patterns for Parallel Programming, Mattson, Sanders, and Massingill, 2005
  10. Writing a Program: Data Parallel Model
     Serial operations are done on every processor so that replicated data is the same on every processor. This may seem like wasted work, but it is easier than synchronizing the data values.
     Sections of distributed data are "owned" by each processor. This is where the parallel speedups occur. Ghost cells around each processor's data are a common way to handle communication.
     Example layout: P(400) -- distributed; Ptot -- replicated
       Proc 1: P(1-100), Ptot
       Proc 2: P(101-200), Ptot
       Proc 3: P(201-300), Ptot
       Proc 4: P(301-400), Ptot
  11. 2007-2008 Sample Supercomputing Project -- Evaluate Us!
     • Evaluation criteria for the Expo (the report criteria are slightly different). Use these to evaluate the following project:
       - 15% Problem statement
       - 25% Mathematical/algorithmic model
       - 25% Computational model
       - 15% Results and conclusions
       - 10% Code
       - 10% Display
  12. Get Wet! With the Shallow Water Equations
     • The shallow water model of wave motion is important for water flow, seashore waves, and flooding.
     • The goal of this project is to model the wave motion in the shallow water tank.
     • With slight modifications this model can be applied to:
       - ocean or lake currents
       - weather
       - glacial movement
  13. Output from a shallow water equation model of water in a bathtub. The water experiences 5 splashes, which generate surface gravity waves that propagate away from the splash locations and reflect off the bathtub walls. (Wikimedia Commons, author Dan Copsey.)
      Shallow water movie: http://en.wikipedia.org/wiki/Image:Shallow_water_waves.gif
  14. Mathematical Model: the Shallow Water Equations
      Conservation of mass:      ∂h/∂t + ∂(hu)/∂x = 0
      Conservation of momentum:  ∂(hu)/∂t + ∂(hu² + ½gh²)/∂x = 0
      Notes: mass equals height because width, depth, and density are all constant.
        h -> height
        u -> velocity
        g -> gravity
      Note: the force term is the pressure, P = ½gh².
      Reference: Leveque, Randall, Finite Volume Methods for Hyperbolic Problems, p. 254
  15. Shallow Water Equations in Matrix Notation
      U_t + F(U)_x = 0, with U = [h, hu] and F = [hu, hu² + ½gh²].
      The maximum time step is calculated so as to keep a wave from completely crossing a cell in one step.
  16. Numerical Model
      • Lax-Wendroff two-step, a predictor-corrector method:
        - The predictor step estimates the values at the zone boundaries, advanced half a time step in time.
        - The corrector step fluxes the variables using the predictor-step values.
      • Mathematical notes for the next slide:
        - U is a state variable such as mass or height.
        - F is a flux term -- the velocity times the state variable at the interface.
        - Superscripts are time; subscripts are space.
  17. The Lax-Wendroff Method
      Half step:   U(i+1/2, n+1/2) = (U(i,n) + U(i+1,n))/2 - (Δt/2Δx)(F(i+1,n) - F(i,n))
      Whole step:  U(i, n+1) = U(i,n) - (Δt/Δx)(F(i+1/2, n+1/2) - F(i-1/2, n+1/2))
      Explanation graphic courtesy of Jon Robey and Dov Shlacter, 2006-2007 Supercomputing Challenge.
  18. Explanation of the Lax-Wendroff Model
      The physical model is sampled on a grid, with data assumed to be at the center of each cell and a ghost cell at each boundary. The original values sit at time index t, the half-step values at time t+.5 and space index i+.5, and the full-step values at time t+1, space index i.
      Explanation graphic courtesy of Jon Robey and Dov Shlacter, 2006-2007 Supercomputing Challenge. See the appendix for the 2D index explanation.
  19. Extension to 2D
      • The extension of the shallow water equations to 2D is shown in the following slides:
        - The first slide shows the matrix form of the 2D shallow water equations.
        - The second slide shows the 2D form of the Lax-Wendroff numerical method.
  20. 2D Shallow Water Equations
      U_t + F(U)_x + G(U)_y = 0, with
        U = [h, hu, hv]
        F = [hu, hu² + ½gh², huv]
        G = [hv, huv, hv² + ½gh²]
      Note the addition of fluxes in the y direction and a flux cross term in the momentum equations. U, F, and G are shorthand for the numerical equations on the next slide: the U terms are the state variables, while F and G are the flux terms in x and y.
  21. The Lax-Wendroff Method in 2D
      [Equation graphics: the half-step and whole-step updates of slide 17, now with half steps in both x and y.]
  22. 2D Shallow Water Equations Transformed for Programming
      Letting H = h, U = hu, and V = hv, so that our main variables are the state variables in the first column, gives the programmable set of equations.
      • H is height (the same as mass, for constant width, depth, and density)
      • U is x momentum (x velocity times mass)
      • V is y momentum (y velocity times mass)
  23. Sample Programs
      • The numerical method was extracted from the McCurdy team's model (team 62) from last year and reprogrammed from serial Fortran to C/MPI, using the programming style from one of the Los Alamos teams' projects (team 51), with permission from both teams.
      • Additional versions of the program were written in Java/Threads and Matlab.
  24. Programming Tools: Three Options
      • Matlab
        - Computation and graphics integrated into the Matlab desktop
      • Java/Threads
        - Eclipse or NetBeans workbench
        - Graphics via Java 2D and JFreeChart
      • C/MPI
        - Eclipse workbench -- an open-source programmer's workbench, http://www.eclipse.org
        - PTP (parallel tools plug-in) -- adds MPI support to Eclipse (developed partly at LANL)
        - OpenMPI -- an MPI implementation (developed partly at LANL)
        - MPE -- graphics calls that come with MPICH. Graphics calls are done in parallel from each processor!
  25. Initial Conditions and Boundary Conditions
      • Initial conditions:
        - The velocities (u and v) are 0 throughout the mesh.
        - The height is 2, with a ramp up to a height of 10 at the right-hand boundary, starting at the midpoint of the x dimension.
      • Boundary conditions are reflective, slip:
        - h_bound = h_interior; u_xbound = 0; v_xbound = v_interior
        - h_bound = h_interior; u_ybound = u_interior; v_ybound = 0
        - If using ghost cells, force zero velocity at the boundary by setting U_xghost = -U_interior.
  26. Results/Conclusions
      • The Lax-Wendroff model accurately reproduces the experimental wave tank:
        - It matches the wave speed across the tank.
      • Some of the oscillations in the simulation are an artifact of the numerical method:
        - This is acceptable as long as the initial wave is not too steep.
        - A numerical damping technique could be added, but that is beyond the scope of this effort.
  27. Acknowledgements
      Work used by permission:
      • Awash: Modeling Wave Movement in a Ripple Tank, Team 62, McCurdy High School, 2006-2007 Supercomputing Challenge
      • A Lot of Hot Air: Modeling Compressible Fluid Dynamics, Team 51, Los Alamos High School, 2006-2007 Supercomputing Challenge
      We all have bugs, and thanks to those who found mine:
      • Randy Roberts and Jon Robey, for finding and fixing a bug in the second pass
      • Randy Leveque, for finding a missing square in the gravity forcing term
  28. Lab Exercises
      • TsunamiClaw
      • Matlab
      • Experimental demonstration
      • Java serial
      • Java parallel
      • C/MPI
  29. Java Wave Structure
      • The Wave class does most of the work:
        - main(String[] args) calls start()
        - start() creates a WaveProblemSetup
        - start() calls methods to do initialization and boundary conditions
        - start() calls methods to iterate and update the display
  30. Java Wave Structure (continued)
      • WaveProblemSetup stores the new and old arrays
      • It swaps the new and old arrays when asked to by Wave
  31. Java Wave Program Flow
      • Create arrays for new, old, and temporary data
      • Initialize the data
      • Set boundary data to represent the correct boundary conditions
      • Iterate for the given number of iterations
  32. Java Wave Iteration Flow
      • Update the physics into the new arrays from the data in the old arrays
      • Set boundary data on the updated arrays to represent the correct boundary conditions
      • Update the display
      • Swap the new arrays with the old arrays
  33. Java Threads
      • How do you take advantage of new multi-core processors?
      • Run parts of the problem on different cores at the same time!
  34. Java Threads (continued)
      • The WaveThreaded program:
        - partitions the problem into domains using SubWaveProblemSetup objects
        - runs the calculations on each domain in separate threads using WaveWorker objects
        - adds complexity with synchronization of the threads' access to data
  35. C/MPI Program Diagram
      Allocate memory -> Set initial conditions -> Initial display
      Repeat:
        Update boundary cells (MPI communication, external boundaries)
        First pass (x half step, y half step)
        Second pass
        Swap new/old
        Graphics output
        Conservation check
      Calculate runtime -> Close display, MPI & exit
  36. MPI Quick Start
      #include <mpi.h>
      MPI_Init(&argc, &argv);
      MPI_Comm_size(Comm, &nprocs);   // get number of processors
      MPI_Comm_rank(Comm, &myrank);   // get processor rank, 0 to nprocs-1
      // Broadcast from the source processor to all processors
      MPI_Bcast(buffer, count, MPI_type, source, Comm);
      // Used to update ghost cells
      MPI_Isend(buffer, count, MPI_type, dest, tag, Comm, req);
      MPI_Irecv(buffer, count, MPI_type, source, tag, Comm, req+1);
      MPI_Waitall(num, req, status);
      // Used for sum, max, and min, such as total mass or minimum timestep
      MPI_Allreduce(&num_local, &num_global, count, MPI_type, MPI_op, Comm);
      MPI_Finalize();
      Web pages for MPI and MPE at Argonne National Lab (ANL): http://www-unix.mcs.anl.gov/mpi/www/
  37. Setup
      • The software is already set up on the lab computers.
      • For setup on home computers there are two parts. First, download the files for the C/MPI lab from the Supercomputing Challenge website if you haven't already done so.
      • Then untar the lab files with:
        tar -xzvf Wave_Lab.tgz
  38. Setting up the Software (instructions in the README file)
      Setting up the system software:
      • You need Java, OpenMPI, and the MPE package from MPICH.
      • Download and install according to the instructions in openmpi_setup.sh.
      • They can be installed in a user's directory with some modifications.
      Setting up the user's workspace:
      • Download the Eclipse software, including Eclipse, PTP, and PLDT.
      • Install according to the instructions in eclipse_setup.sh.
      • Import the wave source files and set up Eclipse according to the instructions in eclipse_setup.sh.
  39. Lab Exercises
      • Try modifying the sample program (Java and/or C versions):
        - Change the initial distribution. How sharp can it be before the simulation goes unstable?
        - Change the number of cells.
        - Change the graphics output.
        - Try running 1, 2, or 4 processes and time the runs. Note that you can run 4 processes even on a one-processor system.
        - Switch to the PTP debug or Java debug perspective and try stepping through the program.
      • Comparing to data is critical:
        - Are there other unrealistic behaviors of the model?
        - Design an experiment to isolate variable effects. This can greatly improve your model.
  40. Appendix A. Calculus and Supercomputing
      • Calculus and supercomputing are intertwined. Why?
      • Here is a simple problem: add up the volume of earth above sea level for an island 500 ft high, half a mile wide, and twenty miles long.
      • This is a typical science homework problem using simple algebra. It can be done by hand. It is not appropriate for supercomputing -- there is not enough complexity.
  41. Add Complexity
      • The island profile is jagged mountainous terrain cut by deep canyons. How do we add up the volume now?
      • Calculus is the language of complexity:
        - Addition -- summing numbers
        - Multiplication -- summing numbers with a constant magnitude
        - Integration -- summing numbers with an irregular magnitude
  42. Divide and Conquer: in Discrete Form
      • Divide the island into small pieces and sum up the volume of each piece: V ≈ ∑ h(x_i, y_i) ∆x ∆y
      • The sum approaches the true volume as the size of the intervals grows smaller, even for a jagged profile.
      Symbols: ∑ -- summation; ∆ -- delta, an interval such as x2 - x1
  43. Divide and Conquer: in Continuous Form -- Integration
      • Think of the integral sign as describing a shape that is continuously varying.
      • The accuracy of the solution can be improved by summing over smaller increments.
      • That means lots of arithmetic operations -- now you have a "computing" problem. Add more work and you have a "supercomputing" problem.
  44. Derivative Calculus: Describing Change
      • Derivatives describe the change in one variable (the numerator, or top variable) relative to another (the denominator, or bottom). For example, ∂P/∂t, ∂P/∂x, and ∂P/∂y describe the change in population versus time, the x direction, and the y direction.
  45. Appendix B. Computational Methods
      • Eulerian and Lagrangian
      • Explicit and Implicit
  46. Two Main Approaches to Dividing up a Problem
      • Eulerian -- divide up by spatial coordinates
        - Track populations in a location
        - Observer frame of reference
      • Lagrangian -- divide up by objects
        - Object frame of reference
        - Easier to track attributes of a population, since they travel with the objects
        - The agent-based modeling of StarLogo uses this approach
        - Can tangle the mesh in 2 and 3 dimensions
  47. Eulerian versus Lagrangian
      • Eulerian -- The area stays fixed and has a population per area. We observe the change in population across the boundaries of the area.
      • Lagrangian -- The population stays constant. The population moves with velocity (vx, vy) and we move with it. The size of the area will change if the four vertexes of the rectangle move at different velocities. Changes in area result in different densities.
  48. Explicit versus Implicit
      • Explicit -- In mathematical shorthand, U^(n+1) = f(U^n). The next timestep's values can be expressed entirely in terms of the previous timestep's values.
      • Implicit -- U^(n+1) = f(U^(n+1), U^n). The next timestep's values must be solved for iteratively, often using a matrix or iterative solver.
      • We will stick with explicit methods here. You need more math to attempt implicit methods.
  49. Appendix C. Index Explanation for the 2D Lax-Wendroff Method
  50. Programming
      • The most difficult part of programming this method is keeping track of the indices -- half-step grid indices cannot be represented by 1/2 in the code, so they have to be offset one way or the other.
      • Errors are very difficult to find, so it is important to be very methodical in the coding.
      • The next two slides show the different sizes of the staggered half-step grid and the relationships between the indices in the calculation (courtesy of Jon Robey).
  51. [Grid diagram, 1st pass: index mapping from the main grid (i = 0..4, j = 0..4) to the staggered grids. X-step grid: (j,i) uses main cells (j+1,i) | (j+1,i+1), e.g. (0,0) uses (1,0) | (1,1). Y-step grid: (j,i) uses main cells (j,i+1) | (j+1,i+1), e.g. (0,0) uses (0,1) | (1,1).]
  52. [Grid diagram, 2nd pass: index mapping from the staggered grids back to the main grid. From the x-step grid, main cell (j,i) uses (j-1,i-1) | (j,i-1), e.g. (1,1) uses (0,0) | (1,0). From the y-step grid, main cell (j,i) uses (j-1,i-1) | (j-1,i), e.g. (1,1) uses (0,0) | (0,1).]
