ICG HPC User



  1. Portsmouth ICG – HPC: The "Sciama" Environment. G Burton, Nov 10, Version 1.1
  2. SCIAMA (pronounced "shama"): SEPnet Computing Infrastructure for Astrophysical Modeling and Analysis
  3. What we need from SEPnet partners:
      • A named "superuser"
        - Required for initial testing
        - Required for initial user training
      • The local IP range (required for firewall access)
      • The software packages required to be installed
      • An approximation of the number of likely users
  4. Sciama Building Blocks
  5. In the "good ol' days" things were simple ...
  6. In the "good ol' days" things were simple ...
  7. … then more sockets were added
      • The two main players are Intel and AMD
      • A single operating system controls both sockets
  8. … then more cores were added to the sockets
      • The basic building block of the Sciama cluster: Intel Xeon X5650 2.66 GHz six-core (Westmere core)
  9. Total ICG compute pool: > 1000 cores
  10. Sciama Basic Concept
  11. Basic Concept of a Cluster
  12. A bit about storage ...
      NB: The storage is transient - IT WILL NOT BE BACKED UP
  13. Lustre storage – very large files, high performance
  14. Networking - three independent LANs
  15. Some users are at remote locations ...
  16. Use of Remote Login Client
  17. ICG-HPC Stack
  18. Installed S/W - licensed software:
      • Intel Cluster Toolkit (compiler edition for Linux)
      • Intel Thread Checker
      • Intel VTune Performance Analyser
      • IDL (use ICG license pool? Restrict access?)
      • Matlab (use UoP floating licenses? Restrict access?)
  19. Installed S/W - will install similar to COSMOS / Universe:
      • OpenMPI, OpenMP, MPICH
      • Open-source C, C++ and Fortran compiler suites
      • Maths libraries - ATLAS, BLAS, (Sca)LAPACK, FFTW (an illustrative example follows below)
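      As an illustration of using the maths libraries listed above, here is a minimal sketch (not from the slides) of a C program that multiplies two small matrices with the BLAS routine dgemm via the CBLAS interface. The header name and link flags are assumptions and will depend on how ATLAS/BLAS is actually installed on Sciama.

        /* Minimal CBLAS sketch: C = A * B for 2x2 row-major matrices.
           Link flags (assumption, install-dependent):
               gcc blas_demo.c -o blas_demo -lcblas -latlas   (ATLAS)
           or  gcc blas_demo.c -o blas_demo -lblas            (BLAS with CBLAS built in) */
        #include <stdio.h>
        #include <cblas.h>

        int main(void)
        {
            double A[4] = { 1.0, 2.0,
                            3.0, 4.0 };
            double B[4] = { 5.0, 6.0,
                            7.0, 8.0 };
            double C[4] = { 0.0, 0.0,
                            0.0, 0.0 };

            /* C = 1.0 * A * B + 0.0 * C */
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

            printf("C = [ %g %g ; %g %g ]\n", C[0], C[1], C[2], C[3]);
            return 0;
        }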
  20. Running Applications on Sciama
  21. 12 cores per node
      • Multiple cores allow for multi-threaded applications
      • OpenMP is an enabler (a minimal sketch follows below)
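      As a minimal sketch (not from the slides) of how OpenMP exploits the cores within a single node, the following C fragment sums an array across threads; the compiler flag shown is an assumption for a GCC-style open-source compiler.

        /* Minimal OpenMP sketch: sum an array using the cores of one node.
           Build (assumption): gcc -fopenmp omp_demo.c -o omp_demo */
        #include <stdio.h>
        #include <omp.h>

        #define N 1000000

        static double x[N];

        int main(void)
        {
            double sum = 0.0;
            int i;

            for (i = 0; i < N; i++)
                x[i] = 1.0;

            /* Threads on the same node share memory, so each thread
               works on a slice of x and the partial sums are combined. */
            #pragma omp parallel for reduction(+:sum)
            for (i = 0; i < N; i++)
                sum += x[i];

            printf("sum = %g using up to %d threads\n",
                   sum, omp_get_max_threads());
            return 0;
        }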
  22. Inter-node memory sharing is not (usually) possible
      • This gives rise to the "distributed memory" model
      • Need the likes of OpenMPI (Message Passing Interface)
  23. Largest (sensible) job is 24 GBytes in this distributed-memory model
  24. MPI allows parallel programming in the distributed-memory model
      • MPI enables parallel computation across nodes
      • Message buffers are used to pass data between processes (a minimal sketch follows below)
      • The standard TCP/IP network is used
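      A minimal sketch (not from the slides) of the message-buffer idea: rank 0 fills a buffer and sends it to rank 1, which receives it over the network. The mpicc/mpirun commands shown are the usual OpenMPI wrappers, but the exact setup on Sciama is an assumption.

        /* Minimal MPI sketch: pass a message buffer from rank 0 to rank 1.
           Build/run (assumption): mpicc mpi_demo.c -o mpi_demo
                                   mpirun -np 2 ./mpi_demo */
        #include <stdio.h>
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            int rank, size, i;
            double buf[4] = { 0.0, 0.0, 0.0, 0.0 };

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0) {
                /* Fill the message buffer and send it to rank 1 */
                for (i = 0; i < 4; i++)
                    buf[i] = i + 1.0;
                MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                /* Receive the buffer; the data crossed the network between processes */
                MPI_Recv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 1 of %d received %g %g %g %g\n",
                       size, buf[0], buf[1], buf[2], buf[3]);
            }

            MPI_Finalize();
            return 0;
        }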
  25. Hybrid OpenMP and MPI programming is possible (a minimal sketch follows below)
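      A minimal sketch (not from the slides) of the hybrid style: MPI ranks between nodes, OpenMP threads within each rank's node. It assumes an MPI build with at least MPI_THREAD_FUNNELED support.

        /* Hybrid MPI + OpenMP sketch: e.g. one MPI rank per node,
           OpenMP threads across the cores of that node.
           Build/run (assumption): mpicc -fopenmp hybrid_demo.c -o hybrid_demo
                                   mpirun -np 2 ./hybrid_demo */
        #include <stdio.h>
        #include <mpi.h>
        #include <omp.h>

        int main(int argc, char **argv)
        {
            int rank, provided;

            /* Ask for an MPI library that tolerates threaded callers */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Each rank spawns OpenMP threads on its own node's cores */
            #pragma omp parallel
            {
                printf("MPI rank %d, OpenMP thread %d of %d\n",
                       rank, omp_get_thread_num(), omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }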
  26. Comparing Sciama with the Cambridge COSMOS / Universe environments
  27. Shared Memory Model
      • Sciama is a distributed-memory system
      • The COSMOS / Universe environments are SGI Altix shared-memory systems
  28. Shared-memory models can support very large processes
  29. Shared Memory Model
      • Supports OpenMP and MPI (and hybrid)
      • Altix systems have an MPI offload engine for speeding up MPI comms
  30. Binary Compatibility
      • COSMOS and Universe are not binary compatible with each other (Itanium vs Xeon processors)
      • Universe is compatible with Sciama, but some libraries may be SGI-specific (MPI offload engine)