Presentation by Adrián Macía (Applications technical lead at CSUC) given at the "2a Jornada de formació sobre l'ús del servei de càlcul" (2nd training day on the use of the computing service), held on 19 February 2020 at CSUC.
Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
HPC matters
• Nowadays simulation is a fundamental tool to solve and understand problems in science and engineering
(Diagram: Theory, Simulation, Experiment)
HPC role in science and engineering
• HPC allows researchers to solve problems that otherwise could not be tackled
• Numerical simulations are used in a wide variety of fields, such as:
– Chemistry and materials sciences
– Life and health sciences
– Mathematics, physics and engineering
– Astronomy, space and Earth sciences
Main applications per knowledge area
• Chemistry and materials science: Vasp, Siesta, Gaussian, ADF, CP2K
• Life and health sciences: Amber, Gromacs, NAMD, Schrödinger, VMD
• Mathematics, physics and engineering: OpenFOAM, FDS, Code Aster, Paraview
• Astronomy and Earth sciences: WRF, WPS
Software available
• A detailed list of the installed software is available at: https://confluence.csuc.cat/display/HPCKB/Installed+software
• If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process
Demography of the service: users
• 32 research projects from 14 different
institutions are using our HPC service.
• These projects are distributed in:
– 11 Large HPC projects (> 500.000 UC)
– 3 Medium HPC projects (250.000 UC)
– 13 Small HPC projects (100.000 UC)
– 1 XSmall HPC project (40.000 UC)
Canigó
• Shared memory machines (2 nodes)
• 33.18 Tflop/s peak performance (16.59 per node)
• 384 cores (8 Intel SP Platinum 8168 CPUs per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage
4 nodes with 2 x GPGPU
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 192 GB main memory
• 4.7 Tflop/s per GPGPU
4 Intel KNL nodes
• 1 x Xeon-Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads)
• 384 GB main memory per node
• 3.5 Tflop/s per node
Pirineus II
Standard nodes (44 nodes)
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 192 GB main memory (4 GB/core)
• 4 TB disk storage per node
High memory nodes (6 nodes)
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
Pirineus II
High performance scratch system
• High performance storage based on BeeGFS
• 180 TB total space available
• Very high read/write speed
• InfiniBand HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes
Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
Working environment
• The working environment is shared between all the users of the service.
• Each machine runs the GNU/Linux operating system (Red Hat).
• Computational resources are managed by the Slurm workload manager.
• Compilers and development tools available: Intel, GNU and PGI.
Batch manager: Slurm
• Slurm manages the available resources in order to achieve an optimal distribution between all the jobs in the system
• Slurm assigns a different priority to each job depending on many factors
… more on this after the coffee!
Storage units
Name                      | Variable                  | Availability | Quota         | Time limit    | Backup
/home/$USER               | $HOME                     | Global       | 25-200 GB (*) | Unlimited     | Yes
/scratch/$USER/           | −                         | Global       | 1 TB          | 30 days       | No
/scratch/$USER/tmp/$JOBID | $SCRATCH / $SHAREDSCRATCH | Global       | 1 TB          | 7 days        | No
/tmp/$USER/$JOBID         | $SCRATCH / $LOCALSCRATCH  | Local node   | −             | Job execution | No
(*) There is a limit per project depending on the project category: Group I: 200 GB, Group II: 100 GB, Group III: 50 GB, Group IV: 25 GB
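As a small illustration of how a job can pick up these locations, the sketch below reads the scratch variables from the table (SCRATCH, SHAREDSCRATCH, LOCALSCRATCH) from the environment; the fallback order and the /tmp default are just example choices, not a CSUC convention.

```c
/* scratch_path.c - sketch: resolve a scratch directory from the
 * variables listed in the storage table above.
 * The fallback chain below is illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *scratch = getenv("SCRATCH");          /* job scratch, if set   */
    if (scratch == NULL) scratch = getenv("LOCALSCRATCH");
    if (scratch == NULL) scratch = getenv("SHAREDSCRATCH");
    if (scratch == NULL) scratch = "/tmp";            /* fallback for testing  */

    printf("Writing temporary files under: %s\n", scratch);
    return 0;
}
```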
How to access our services?
• If you have not been granted a RES project, or you are not interested in applying for one, you can still work with us. More info at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
HPC Service price
Academic project¹
Initial block
- Group I (500.000 UC): 8.333,33 €
- Group II (250.000 UC): 5.555,55 €
- Group III (100.000 UC): 3.333,33 €
- Group IV (40.000 UC): 1.666,66 €
Additional 50.000 UC block
- When you have paid for 500.000 UC: 280 €/block
- When you have paid for 250.000 UC: 1.100 €/block
- When you have paid for 100.000 UC: 1.390 €/block
- When you have paid for 40.000 UC: 2.000 €/block
DGR discount for Catalan academic groups: -10 %
Accounting HPC resources
• In order to quantify the resources used we introduce the UC as a unit.
• UC: Computational Unit. It is defined as UC = HC (Computational Hour) x factor (see the sketch below for a worked example):
– For standard nodes, 1 HC = 1 UC. Factor = 1.
– For standard fat nodes, 1 HC = 1.5 UC. Factor = 1.5.
– For GPU nodes, 1 HC = 1 UC. Factor = 1. (*)
– For KNL nodes, 1 HC = 0.5 UC. Factor = 0.5. (**)
– For Canigó (SMP), 1 HC = 2 UC. Factor = 2.
(*) You need to allocate at least a full socket (24 cores)
(**) You need to allocate the full node (68 cores)
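A minimal sketch of the accounting rule above; it assumes HC is counted as core-hours (allocated cores times elapsed hours), which is not spelled out on the slide.

```c
/* uc_accounting.c - sketch of the UC accounting rule UC = HC x factor.
 * Assumption (not stated on the slide): HC = allocated cores x elapsed hours.
 */
#include <stdio.h>

static double uc_consumed(double cores, double hours, double factor)
{
    double hc = cores * hours;   /* computational hours (assumed core-hours) */
    return hc * factor;          /* UC = HC x factor                         */
}

int main(void)
{
    /* Example: a 10-hour job on a full 48-core node, per node type factor */
    printf("std node    : %.0f UC\n", uc_consumed(48, 10.0, 1.0));  /* 480 UC */
    printf("std-fat node: %.0f UC\n", uc_consumed(48, 10.0, 1.5));  /* 720 UC */
    printf("Canigo (SMP): %.0f UC\n", uc_consumed(48, 10.0, 2.0));  /* 960 UC */
    return 0;
}
```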
Access through RES project
• You can apply for a RES (Red Española de Supercomputación) project asking to work at CSUC (on Pirineus II or Canigó). More information is available at https://www.res.es/es/acceso-a-la-res
Choosing your architecture: HPC
partitions // queues
• We have 5 partitions available for the users: std, std-fat, gpu, knl and mem, running on the standard, standard fat, GPU, KNL and shared memory nodes respectively.
• Initially users can only use the std and std-fat partitions, but anyone who wants to use a different architecture only needs to request permission and it will be granted.
… more on this later...
Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
Development tools @ CSUC HPC
• Compilers available for the users:
– Intel compilers
– PGI compilers
– GNU compilers
• MPI libraries:
– Open MPI
– Intel MPI
– MPICH
– MVAPICH
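As a quick illustration, a minimal MPI program that can be built with any of the MPI stacks listed above (for example with the mpicc compiler wrapper); the file and binary names are just examples.

```c
/* hello_mpi.c - minimal MPI example (illustrative sketch).
 * Each rank reports its rank number and the total number of ranks.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes   */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime   */
    return 0;
}
```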
Development tools @ CSUC HPC
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
• Mathematical libraries:
– Intel MKL
– Lapack
– Scalapack
– FFTW
• If you need anything that is not installed let us
know
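As a sketch of how these mathematical libraries are typically called, a small double-precision matrix multiply through the CBLAS interface; the cblas.h header and link flags assumed below belong to the reference BLAS/OpenBLAS installation, while with Intel MKL you would include mkl.h and use the MKL link line instead.

```c
/* dgemm_example.c - sketch of calling BLAS (dgemm) through CBLAS.
 * Computes C = alpha * A * B + beta * C for small 2x2 row-major matrices.
 */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K       */
                1.0, A, 2,      /* alpha, A, lda */
                B, 2,           /* B, ldb        */
                0.0, C, 2);     /* beta, C, ldc  */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```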