NSCC High Performance Computing Cluster
Introduction
[16-May-2016]
• Introduction to NSCC
• About HPC
• More about NSCC HPC cluster
• PBS Pro (Scheduler)
• Compilers and Libraries
• Developer Tools
• Co-processor / Accelerators
• Environment Modules
• Applications
• User registration procedures
• Feedback
2
The Discussion
3
Introduction to NSCC
• A state-of-the-art national facility with computing and data resources that enable users to solve scientific and technological problems, and that stimulates industry to use computing for problem solving, testing designs and advancing technologies.
• The facility is linked by high-bandwidth networks that connect these resources and provide high-speed access to users anywhere.
Introduction:
The National Supercomputing Centre (NSCC)
4
Introduction: Vision & Objectives
Vision: "Democratising Access to Supercomputing"
5
1. Making Petascale Supercomputing accessible to the ordinary researcher
2. Bringing Petascale Computing and Storage and Gigabit-speed networking to the ordinary person
Objectives of NSCC
1. Supporting National R&D Initiatives
2. Attracting Industrial Research Collaborations
3. Enhancing Singapore's Research Capabilities
6
What is HPC?
7
What is HPC?
• Term HPC stands for High Performance Computing or High
Performance Computer
• Tightly coupled computers (nodes) connected by a high-speed interconnect
• Measured in FLOPS (FLoating point Operations Per Second)
• Architectures
– NUMA (Non-uniform memory access)
Major Domains where HPC is used
• Engineering Analysis: Fluid Dynamics, Materials Simulation, Crash simulations, Finite Element Analysis
• Scientific Analysis: Molecular modelling, Computational Chemistry, High energy physics, Quantum Chemistry
• Life Sciences: Genomic Sequencing and Analysis, Protein folding, Drug design, Metabolic modelling
• Seismic analysis: Reservoir Simulations and modelling, Seismic data processing
8
Major Domains where HPC is used
• Chip design & Semiconductor: Transistor simulation, Logic Simulation, Electromagnetic field solver
• Computational Mathematics: Monte-Carlo methods, Time stepping and parallel time algorithms, Iterative methods
• Media and Animation: VFX and visualization, Animation
• Weather research: Atmospheric modelling, Seasonal time-scale research
9
Major Domains where HPC is used
• And More
– Big data
– Information Technology
– Cyber security
– Banking and Finance
– Data mining
10
11
Introduction to NSCC HPC Cluster
Executive Summary
• 1 Petaflop System
– About 1300 nodes
– Homogeneous and Heterogeneous architectures
• 13 Petabytes of Storage
– One of the largest, state-of-the-art storage architectures
• Research and Industry
– A*STAR, NUS, NTU, SUTD
– And many more commercial and academic organizations
12
HPC Stack in NSCC
• Hardware: Fujitsu x86 servers, NVIDIA Tesla K40 GPUs, DDN storage
• Interconnect: Mellanox 100 Gbps network
• Operating System: RHEL 6.6 and CentOS 6.6
• Parallel file systems: Lustre & GPFS
• Scheduler: PBS Pro
• Development tools: Intel Parallel Studio, Allinea Tools
• Application Modules and HPC application software
13
14
NSCC Supercomputer Architecture (diagram): Base Compute Nodes (1160 nodes), Accelerated Nodes (128 nodes), GIS FAT node, parallel file system / tiered storage, fully non-blocking InfiniBand network, Ethernet network, peripheral servers at NUS, NTU and NSCC, and NSCC direct users connecting via VPN.
15
Login architecture (diagram): login clusters connect to the NSCC cluster over an 80 Gb/s link.
16
Customized Solution
17
Connection between GIS and NSCC (diagram)
• Genome Institute of Singapore (GIS) and the National Supercomputing Centre (NSCC) are about 2 km apart
• Large memory node (1 TB); ultra-high-speed 500 Gbps link enabled
• Data volume grew from 300 Gbytes/week in 2012 to 4300 Gbytes/week in 2015, roughly a 14x increase
Direct streaming of sequence data from GIS to the remote supercomputer at NSCC (diagram)
• GIS side: NGSP sequencers at B2 (Illumina + PacBio), plus POLARIS, genotyping and other platforms in L4~L8, with local compute, tiered storage and a data manager
• NSCC (A*CRC-NSCC) side: NSCC gateway, compute, tiered storage and a data manager
• Link speeds shown: 1 Gbps per sequencer, 1 Gbps per machine, 10 Gbps and 100 Gbps aggregation links, and a 500 Gbps primary link between the two sites (~2 km apart)
• STEP 1: Sequencers stream directly to NSCC storage (no footprint in GIS)
• STEP 2: Automated pipeline analysis runs once sequencing completes; processed data resides in NSCC
• STEP 3: The data manager indexes and annotates processed data and replicates metadata to GIS, allowing data to be searched and retrieved from GIS
A*CRC: A*STAR Computational Resource Center
GIS: Genome Institute of Singapore
The Hardware
• EDR Interconnect
– Mellanox EDR Fat Tree within the cluster
– InfiniBand connection to all end-points (login nodes) at the three campuses
– 40/80/500 Gbps throughput network extended to the three campuses (NUS/NTU/GIS)
• Over 13 PB Storage
– HSM tiered, 3 tiers
– 500 GB/s flash burst buffer I/O, 10x Infinite Memory Engine (IME)
• ~1 PFLOPS System
– 1,288 nodes (dual socket, 12 cores/CPU, E5-2690 v3)
– 128 GB DDR4 per node
– 10 large memory nodes (1x 6 TB, 4x 2 TB, 5x 1 TB)
19
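As a rough sanity check on the "~1 PFLOPS" figure (this arithmetic is mine, not from the slides): each E5-2690 v3 core can retire up to 16 double-precision FLOPs per cycle with AVX2 FMA, so the theoretical peak is about
27,840 cores x 2.6 GHz x 16 FLOP/cycle ≈ 1.16 x 10^15 FLOPS ≈ 1.16 PFLOPS
which is consistent with the quoted ~1 PFLOPS (sustained application performance is lower in practice).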
Compute nodes
20
• Large Memory Nodes
– 9 nodes configured with high memory
– FUJITSU Server PRIMERGY RX4770 M2
– Intel(R) Xeon(R) CPU E7-4830 v3 @ 2.10GHz
– 4x 1TB, 4x 2TB, and 1x 6TB memory configurations
– EDR InfiniBand
• Standard Compute nodes
– 1160 nodes
– Fujitsu Server PRIMERGY CX2550 M1
– 27840 CPU Cores
– Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
– 128 GB / Server
– EDR InfiniBand
– Liquid cooling system
Accelerate your computing
Accelerator nodes
• 128 nodes with NVIDIA GPUs (identical to the compute nodes)
• NVIDIA K40 (2880 cores)
• 368,640 total GPU cores
Visualization nodes
• 2 nodes Fujitsu Celsius R940 graphic workstations
• Each with 2 x NVIDIA Quadro K4200
• NVIDIA Quadro Sync support
21
NSCC Data Centre – Green features
Warm water cooling for CPUs
– First free-cooling system in Singapore and South-East Asia
– Water is maintained at a temperature of 40ºC: it enters the racks at 40ºC and exits at 45ºC
– Equipment placed on a technical floor (18th) cools the water back down using only fans
– The system can easily be extended for future expansion
Green features of Data Centre
– PUE of 1.4 (average for Singapore is above 2.5)
22
Cool-Central® Liquid Cooling
technology
Parallel file system
• Components
– Burst Buffer
• 265TB Burst Buffer
• 500 GB/s throughput
• Infinite Memory Engine (IME)
– Scratch
• 4 PB scratch storage
• 210 GB/s
• SFA12KX EXAScaler storage
• Lustre file system
– home and secure
• 4 PB Persistent storage
• GRIDScaler storage
• 100 GB/s throughput
• IBM Spectrum Scale (formerly GPFS)
– Archive storage
• 5 PB storage
• Archive purpose only
• WOS based archive system
23
IME Architecture
24
Tiered File system
25
NSCC Storage
26
• Tier 0 – Burst Buffer: Infinite Memory Engine (IME), 265 TB, 500 GB/s
• Tier 0 – ScratchFS: EXAScaler Lustre® storage, 4 PB, 210 GB/s
• Tier 1 – HomeFS and ProjectFS: GRIDScaler GPFS® storage, 4 PB, 100 GB/s
• Tier 2 – Archive: WOS Active Archive, 5 PB, 20 TB/h, managed via HSM
Software Stack
• Operating System: CentOS 6.6
• Scheduler: PBS Pro
• Compilers: GCC, Intel Parallel Studio
• Libraries: GNU, Intel MKL
• Tools: Allinea tools
• GPGPU: CUDA Toolkit 7.5
• Environment Modules
27
PBS Professional (Job Scheduler)
28
Why PBS Professional (Scheduler)?
29
• Workload management solution that maximizes the efficiency and utilization of high-performance computing (HPC) resources and improves job turnaround
Robust Workload Management
• Floating licenses
• Scalability, with flexible queues
• Job arrays
• User and administrator interfaces
• Job suspend/resume
• Application checkpoint/restart
• Automatic file staging
• Accounting logs
• Access control lists
Advanced Scheduling Algorithms
• Resource-based scheduling
• Preemptive scheduling
• Optimized node sorting
• Enhanced job placement
• Advance & standing reservations
• Cycle harvesting across workstations
• Scheduling across multiple complexes
• Network topology scheduling
• Manages both batch and interactive work
• Backfilling
Reliability, Availability and Scalability
• Server failover feature
• Automatic job recovery
• System monitoring
• Integration with MPI solutions
• Tested to manage 1,000,000+ jobs per day
• Tested to accept 30,000 jobs per minute
• EAL3+ security
• Checkpoint support
Process Flow of a PBS Job
1. User submits job
2. PBS server returns a job ID
3. PBS scheduler requests a list of resources from the server *
4. PBS scheduler sorts all the resources and jobs *
5. PBS scheduler informs PBS server which host(s) that job can run on *
6. PBS server pushes job script to execution host(s)
7. PBS MoM executes job script
8. PBS MoM periodically reports resource usage back to PBS server *
9. When the job completes, PBS MoM copies back the output and error files
10. Job execution completed / user notification sent
(Diagram: the PBS server and PBS scheduler exchange resource information (ncpus, mem, host) and dispatch the job script to execution hosts A, B and C.)
Note: * This information is for debugging purposes
only. It may change in future releases.
30
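Once a job is in the system, a few standard PBS Pro commands cover the day-to-day workflow (the job ID shown here is only illustrative):
$ qsub submit.pbs          # submit; the server returns a job ID such as 12345.wlm01
$ qstat -u $USER           # list your jobs and their states
$ qstat -f 12345.wlm01     # show full details of one job
$ qdel 12345.wlm01         # remove a job from the queue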
Compute Manager GUI: Job Submission Page
• Applications panel
– Displays the applications available on the registered PAS server
• Submission Form panel
– Displays a job submission form for the application selected in the Applications panel
• Directory Structure panel
– Displays the directory structure of the location specified in the Address box
• Files panel
– Displays the files and subdirectories of the directory selected in the Directory Structure panel
31
Directory Structure
Files
Applications
Job Queues & Scheduling Policies
32
Queue Name | Queue Type | Job Run-time Limit | Cores Available | Description
Long | Batch | 240 hours | 1024 | Jobs expected to run for a longer time
Development | Interactive | 24 hours | 48 | Coding, profiling and debugging
Normal (default) | Batch | 3 days | 27000 | Default queue
Large Memory | Batch | - | 360 | Jobs dispatched based on memory requirement
GPU | GPU batch | - | 368,640 (CUDA) | Specific for GPU jobs
Visualization | Interactive | 8 hours | 1 | High-end graphics card
Production | Batch | - | 480 | GIS queue
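A queue is selected with the -q option at submission time or with a #PBS -q line in the script; for example (queue names as used in the hands-on examples later in this deck):
$ qsub -q dev -I -l select=1:ncpus=24     # interactive session in the Development queue
$ qsub -q normal submit.pbs               # batch job in the default Normal queue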
Compilers & Libraries
33
34
Compilers and Libraries at a glance
Parallel programming OpenMP
• Available compilers (gcc/gfortran/icc/ifort)
– OpenMP (not to be confused with Open MPI); used mainly for SMP programming
• OpenMP (Open Multi-Processing)
• OpenMP is a shared-memory programming approach, whereas Open MPI is an implementation of the MPI standard
• An API for shared-memory parallel programming in C/C++ and Fortran
• Parallelization in OpenMP is achieved through threads
• Programming with OpenMP is comparatively easy, as it mainly involves adding pragma directives
• An OpenMP program cannot communicate over the network; it runs within a single shared-memory node
• Different stages of the program can use different numbers of threads
• A typical approach is illustrated in the sketch below
35
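As a concrete illustration (a minimal sketch of my own, not taken from the slides), the loop below is parallelized with a single pragma; compile with icc -qopenmp (or gcc -fopenmp) and set the thread count with OMP_NUM_THREADS:
#include <omp.h>
#include <stdio.h>
int main(void)
{
static double a[1000000];
double sum = 0.0;
int i;
/* each thread handles part of the loop; reduction(+:sum) combines the partial sums */
#pragma omp parallel for reduction(+:sum)
for (i = 0; i < 1000000; i++) {
a[i] = 0.5 * i;
sum += a[i];
}
printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
return 0;
}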
Parallel Programming MPI
• MPI
– MPI stands for Message Passing Interface
– MPI is a library specification
– An MPI implementation typically provides wrappers around standard compilers (C/C++/Fortran), with bindings also available for languages such as Java and Python
– Typically used for distributed-memory communication
36
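A typical compile-and-run sequence with the Intel MPI stack on this cluster might look like the following (module and wrapper names are taken from the examples elsewhere in this deck; hello_mpi.c is a placeholder source file):
$ module load impi/5.1.2
$ mpiicc hello_mpi.c -o hello_mpi
$ mpirun -n 4 ./hello_mpi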
37
Developer Tools
38
Allinea DDT
• DDT – Distributed Debugging Tool from Allinea
• Graphical interface for debugging
– Serial applications/codes
– OpenMP applications/codes
– MPI applications/codes
– CUDA applications/codes
• You control the pace of the code execution and examine
execution flow and variables
• Typical Scenario
– Set a point in your code where you want execution to stop
– Let your code run until the point is reached
– Check the variables of concern
39
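By analogy with the MAP example on the next slide (the module name a.b.c is a placeholder, as there), launching an MPI code under DDT could look like this sketch:
$ module load impi/5.1.2
$ mpiicc -g -O0 -o wave_c wave_c.c
$ module load ddt/a.b.c
$ ddt mpiexec -n 4 ./wave_c 20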
Allinea MAP
• MAP – Application Profiling tool from Allinea
• Graphical interface for profiling
– Serial applications/codes
– OpenMP applications/codes
– MPI applications/codes
40
Allinea MAP
• Running your code with MAP
– $ module load impi/5.1.2
– $ mpiicc -g -O0 -o wave_c wave_c.c
– $ module load map/a.b.c
– $ map mpiexec -n 4 ./wave_c 20
41
Allinea MAP
42
Co-processor / Accelerators
GPU
• GPUs (Graphics Processing Units) were initially built to deliver better graphics performance
• With the research effort invested in GPUs, it became clear that they also perform well on floating-point workloads
• This led to the term GPGPU (General-Purpose GPU)
• The CUDA Toolkit includes a compiler, math libraries, tools, and debuggers
43
GPU in NSCC
• GPU Configuration
– Total 128 GPU nodes
– Each server with 1 Tesla K40 GPU
– 128 GB host memory per server
– 12GB device memory
– 2880 CUDA Cores
• Connect to a GPU server
– To compile a GPU application:
• Submit an interactive job requesting a GPU resource
• Compile the code using the NVCC compiler
– To submit a GPU job:
• Use qsub from the login nodes (see the sketch below)
• Or log in to Compute Manager
44
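A minimal sketch of that workflow (the queue name, the ngpus resource and the cuda module name are assumptions; check the user guide for the exact names on this system):
$ qsub -I -q gpu -l select=1:ncpus=24:ngpus=1    # interactive job on a GPU node
$ module load cuda/7.5                           # CUDA Toolkit 7.5 per the software stack
$ nvcc vector_add.cu -o vector_add               # compile with the NVCC compiler
$ ./vector_add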
45
Environment Modules
What is Environment modules
• Environment Modules help to dynamically load/unload environment variables such as PATH, LD_LIBRARY_PATH, etc.
• Environment Modules are based on module files, which are written in the Tcl language
• Environment Modules are shell independent
• Helpful for maintaining different versions of the same software
• Users also have the flexibility to create their own module files
46
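The everyday commands are shown below (the module name is the Intel compiler module used in the hands-on examples later in this deck):
$ module avail                         # list all available modules
$ module load composerxe/2016.1.150    # load a module (Intel compilers)
$ module list                          # show currently loaded modules
$ module unload composerxe/2016.1.150  # unload a single module
$ module purge                         # unload all loaded modules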
Applications
47
Compatible Applications (slides 48-50): Molecular Dynamics, Computational Chemistry, Engineering Applications, Quasiparticle calculations, Quantum Chemistry, Numerical Analysis, Weather research
Full software list: https://help.nscc.sg/software-list/
Managed Services offered
52
• Infrastructure Services: computational resources, storage management
• Incident Resolution: hardware break-fix, software incident resolution
• General Service Requests: data management, job management, software installation, etc.
• Specialized Service Requests: code optimization, special queue configuration, etc.
• Training Services: introductory class, code optimization techniques, parallel profiling, etc.
• Helpdesk: portal/e-mail/phone, request for a service via the portal, interactive job submission portal
Where is NSCC
• NSCC Petascale
supercomputer in Connexis
building
• 40Gbps links extended to
NUS, NTU and GIS
• Login nodes are placed in
NUS, NTU and GIS
datacenters
• Access to NSCC is just like
your local HPC system
53
1 Fusionopolis Way, Level-17 Connexis South
Tower, Singapore 138632
Supported Login methods
• How do I log in?
– SSH
From a Windows PC, use PuTTY or any standard SSH client; the hostname is nscclogin.nus.edu.sg and you log in with your NSCC credentials
From a Linux machine, use: ssh username@login-astar.nscc.sg
From a Mac, open a terminal and run: ssh username@login-astar.nscc.sg
– File Transfer
Use SCP or any other secure shell file transfer software from Windows
Use the scp command to transfer files from Mac/Linux
– Compute Manager
Open any standard web browser
In the address bar, type https://loginweb-astar.nscc.sg
Use your NSCC credentials to log in
– Outside campus
Connect to the campus VPN to access the above-mentioned services
54
NSCC HPC Support (Proposed to be available by 15th Mar)
• Corporate Info – web portal
http://nscc.sg
• NSCC HPC web portal
http://help.nscc.sg
• NSCC support email
help@nscc.sg
• NSCC Workshop portal
http://workshop.nscc.sg
55
56
Help us improve. Take the online survey!
Visit: http://workshop.nscc.sg >> Survey
Help portal
57
FAQs of
NSCC
Enroll to
NSCC
https://help.nscc.sg/
Registration Procedures
58
Registration Procedure
59
Web Site : http://nscc.sg
Helpdesk : https://help.nscc.sg
Email : help@nscc.sg
Phone : +65 6645 3412
60
User Enrollment
Instructions:
• Open https://help.nscc.sg
• Navigate to User Services -> Enrollment
• Click on Login
• Select your organization (NUS/NTU/A*Star) from the drop
down
• Input your credentials
Ref: https://help.nscc.sg -> User Guides -> User Enrollment guide
62
Login to NSCC Login nodes
• Download PuTTY from the internet
• Open Putty
• Type login server name (login.nscc.sg)
• Input your credentials to login
63
Compute manager
• Open a web browser (Firefox or IE)
• Type https://nusweb.nscc.sg / https://ntuweb.nscc.sg /
https://loginweb-astar.nscc.sg
• Use your credentials to login
• Submit a sample job
64
Transfer files
• Use FileZilla to transfer files
65
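From the command line, scp works the same way as the SSH examples above (the file names here are only placeholders):
$ scp input.dat username@login-astar.nscc.sg:~/       # upload a file to your home directory
$ scp username@login-astar.nscc.sg:~/results.out .    # download a file to the current directory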
Creating PBS Job submission script
• Use the below sample script
cat submit.pbs
#!/bin/bash
#PBS -q dev
# Submit to the development (dev) queue
#PBS -l select=1:ncpus=24:mpiprocs=24
# Request one chunk of 24 CPU cores with 24 MPI ranks
#PBS -l place=scatter
# Place chunks on separate hosts
cd ${PBS_O_WORKDIR}
# Run from the directory the job was submitted from
sleep 30
qsub submit.pbs
66
Environment module
• Open Putty
• Type module avail to list the available modules
• Type module load <module name> to load one of them
67
Compiling simple C Program
• Use putty to login
• Create helloworld.c
#include <stdio.h>
int main(void)
{
printf("Hello world\n");
return 0;
}
• Use module load composerxe/2016.1.150
• Type icc helloworld.c -o helloworld.o
68
Submit job
cat submit.pbs
#!/bin/bash
#PBS -q dev
#PBS -l select=1:ncpus=1
cd ${PBS_O_WORKDIR}
./helloworld.o
69
Compiling mpi C Program
• Use putty to login
• Create helloworld.c
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int rank;
char hostname[256];
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
gethostname(hostname, 255);
printf("Hello world! I am process number: %d on host %s\n", rank, hostname);
MPI_Finalize();
return 0;
}
• Use module load impi/5.1.2 and module load composerxe/2016.1.150
• Type mpiicc helloworld.c -o mpihello.o
70
Submit job
cat submit.pbs
#!/bin/bash
#PBS -q dev
#PBS -l select=1:ncpus=24:mpiprocs=24
#PBS -l place=scatter
cd ${PBS_O_WORKDIR}
mpirun ./mpihello.o
71
Submit pre-compiled application
72
cat submit.pbs
#!/bin/bash
#PBS -q dev
#PBS -l select=1:ncpus=24:mpiprocs=24
#PBS -l place=scatter
cd ${PBS_O_WORKDIR}
mpirun ./mpihello.o
Using Scratch space
#!/bin/bash
#PBS -N My_Job
# Name of the job
#PBS -l select=1:ncpus=24:mpiprocs=24
# Setting number of nodes and CPUs to use
#PBS -W sandbox=private
# Get PBS to enter private sandbox
#PBS -W stagein=file_io@wlm01:/home/adm/sup/fsg1/<my input directory>
# Directory name where all the input files are available
# Files in the input directory will be copied to the scratch space, creating a directory named file_io
#PBS -W stageout=*@wlm01:/home/adm/sup/fsg1/<myoutput directory>
# Output directory path in my home directory
# Once the job is finished, the files from file_io in scratch will be copied back to <myoutput directory>
#PBS -q normal
cd ${PBS_O_WORKDIR}
echo " PBS_WORK_DIR is : $PBS_O_WORKDIR"
echo "PBS JOB DIR is: $PBS_JOBDIR"
#Notice that the output of pwd will be in lustre scratch space
echo "PWD is : `pwd`"
sleep 30
#mpirun ./a.out < input_file > output_file
73
More Related Content

What's hot

Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red_Hat_Storage
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Odinot Stanislas
 
Ceph - High Performance Without High Costs
Ceph - High Performance Without High CostsCeph - High Performance Without High Costs
Ceph - High Performance Without High Costs
Jonathan Long
 
Improving Hadoop Performance via Linux
Improving Hadoop Performance via LinuxImproving Hadoop Performance via Linux
Improving Hadoop Performance via Linux
Alex Moundalexis
 
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar AhmedPGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
Equnix Business Solutions
 
Quick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterQuick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage Cluster
Patrick Quairoli
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer Spotlight
Colleen Corrice
 
Ncar globally accessible user environment
Ncar globally accessible user environmentNcar globally accessible user environment
Ncar globally accessible user environment
inside-BigData.com
 
Treasure Data on The YARN - Hadoop Conference Japan 2014
Treasure Data on The YARN - Hadoop Conference Japan 2014Treasure Data on The YARN - Hadoop Conference Japan 2014
Treasure Data on The YARN - Hadoop Conference Japan 2014Ryu Kobayashi
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Colleen Corrice
 
Introduction to GlusterFS Webinar - September 2011
Introduction to GlusterFS Webinar - September 2011Introduction to GlusterFS Webinar - September 2011
Introduction to GlusterFS Webinar - September 2011
GlusterFS
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices: A Deep DiveCeph Block Devices: A Deep Dive
Ceph Block Devices: A Deep Dive
joshdurgin
 
Gluster Storage
Gluster StorageGluster Storage
Gluster Storage
Raz Tamir
 
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky HaryadiPGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
Equnix Business Solutions
 
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
Newton Alex
 
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph StorageRed Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red_Hat_Storage
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community Update
Ceph Community
 
Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)
Sage Weil
 

What's hot (18)

Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
 
Ceph - High Performance Without High Costs
Ceph - High Performance Without High CostsCeph - High Performance Without High Costs
Ceph - High Performance Without High Costs
 
Improving Hadoop Performance via Linux
Improving Hadoop Performance via LinuxImproving Hadoop Performance via Linux
Improving Hadoop Performance via Linux
 
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar AhmedPGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
PGConf.ASIA 2019 Bali - Tune Your LInux Box, Not Just PostgreSQL - Ibrar Ahmed
 
Quick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterQuick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage Cluster
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer Spotlight
 
Ncar globally accessible user environment
Ncar globally accessible user environmentNcar globally accessible user environment
Ncar globally accessible user environment
 
Treasure Data on The YARN - Hadoop Conference Japan 2014
Treasure Data on The YARN - Hadoop Conference Japan 2014Treasure Data on The YARN - Hadoop Conference Japan 2014
Treasure Data on The YARN - Hadoop Conference Japan 2014
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
 
Introduction to GlusterFS Webinar - September 2011
Introduction to GlusterFS Webinar - September 2011Introduction to GlusterFS Webinar - September 2011
Introduction to GlusterFS Webinar - September 2011
 
Ceph Block Devices: A Deep Dive
Ceph Block Devices: A Deep DiveCeph Block Devices: A Deep Dive
Ceph Block Devices: A Deep Dive
 
Gluster Storage
Gluster StorageGluster Storage
Gluster Storage
 
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky HaryadiPGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
PGConf.ASIA 2019 - High Availability, 10 Seconds Failover - Lucky Haryadi
 
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
[Hadoop Meetup] Yarn at Microsoft - The challenges of scale
 
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph StorageRed Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
 
Ceph Day Melabourne - Community Update
Ceph Day Melabourne - Community UpdateCeph Day Melabourne - Community Update
Ceph Day Melabourne - Community Update
 
Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)
 

Viewers also liked

NSCC Training Introductory Class
NSCC Training Introductory Class NSCC Training Introductory Class
NSCC Training Introductory Class
National Supercomputing Centre Singapore
 
NSCC Training Introductory Class
NSCC Training  Introductory ClassNSCC Training  Introductory Class
NSCC Training Introductory Class
National Supercomputing Centre Singapore
 
HPC Storage and IO Trends and Workflows
HPC Storage and IO Trends and WorkflowsHPC Storage and IO Trends and Workflows
HPC Storage and IO Trends and Workflows
inside-BigData.com
 
Asat book0-fresh blood
Asat book0-fresh bloodAsat book0-fresh blood
Asat book0-fresh blood
Ashraf Ali
 
Special quadrilaterals proofs ans constructions
Special quadrilaterals  proofs ans constructions Special quadrilaterals  proofs ans constructions
Special quadrilaterals proofs ans constructions cristufer
 
Pass Love Charity Foundation (PLCF)
Pass Love Charity Foundation (PLCF)Pass Love Charity Foundation (PLCF)
Pass Love Charity Foundation (PLCF)
PassLoveCharity
 
How to create the life you want
How to create the life you wantHow to create the life you want
How to create the life you want
Self-employed
 
Unit 1.my school
Unit 1.my schoolUnit 1.my school
Unit 1.my school
Alexandre Bárez
 
Caring for Sharring
 Caring for Sharring  Caring for Sharring
Caring for Sharring
faleulaaoelua
 
Score A - Dunia Study Dot Com
Score A - Dunia Study Dot ComScore A - Dunia Study Dot Com
Score A - Dunia Study Dot Com
weirdoux
 
Ten Little Candy Canes
Ten Little Candy CanesTen Little Candy Canes
Ten Little Candy Canes
Deborah Stewart
 
Mini-Training: Let's have a rest
Mini-Training: Let's have a restMini-Training: Let's have a rest
Mini-Training: Let's have a rest
Betclic Everest Group Tech Team
 
Apuntes
ApuntesApuntes
Apuntes
tumamawey
 
Editioning use in ebs
Editioning use in  ebsEditioning use in  ebs
Editioning use in ebs
pasalapudi123
 
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
SAP PartnerEdge program for Application Development
 
REST dojo Comet
REST dojo CometREST dojo Comet
REST dojo Comet
Carol McDonald
 
Yellow Slice Design Profile - 2016
Yellow Slice Design Profile - 2016Yellow Slice Design Profile - 2016
Yellow Slice Design Profile - 2016
Yellow Slice
 
Jamison Door Company Catalog
Jamison Door Company CatalogJamison Door Company Catalog
Jamison Door Company CatalogTom Lewis
 
SERTIFIKAT HSE EXPRESS
SERTIFIKAT HSE EXPRESSSERTIFIKAT HSE EXPRESS
SERTIFIKAT HSE EXPRESS
sertifikatSMK3
 

Viewers also liked (20)

NSCC Training Introductory Class
NSCC Training Introductory Class NSCC Training Introductory Class
NSCC Training Introductory Class
 
NSCC Training Introductory Class
NSCC Training  Introductory ClassNSCC Training  Introductory Class
NSCC Training Introductory Class
 
HPC Storage and IO Trends and Workflows
HPC Storage and IO Trends and WorkflowsHPC Storage and IO Trends and Workflows
HPC Storage and IO Trends and Workflows
 
Asat book0-fresh blood
Asat book0-fresh bloodAsat book0-fresh blood
Asat book0-fresh blood
 
Special quadrilaterals proofs ans constructions
Special quadrilaterals  proofs ans constructions Special quadrilaterals  proofs ans constructions
Special quadrilaterals proofs ans constructions
 
Pass Love Charity Foundation (PLCF)
Pass Love Charity Foundation (PLCF)Pass Love Charity Foundation (PLCF)
Pass Love Charity Foundation (PLCF)
 
How to create the life you want
How to create the life you wantHow to create the life you want
How to create the life you want
 
Unit 1.my school
Unit 1.my schoolUnit 1.my school
Unit 1.my school
 
Caring for Sharring
 Caring for Sharring  Caring for Sharring
Caring for Sharring
 
Score A - Dunia Study Dot Com
Score A - Dunia Study Dot ComScore A - Dunia Study Dot Com
Score A - Dunia Study Dot Com
 
Ten Little Candy Canes
Ten Little Candy CanesTen Little Candy Canes
Ten Little Candy Canes
 
Mini-Training: Let's have a rest
Mini-Training: Let's have a restMini-Training: Let's have a rest
Mini-Training: Let's have a rest
 
Ggdds
GgddsGgdds
Ggdds
 
Apuntes
ApuntesApuntes
Apuntes
 
Editioning use in ebs
Editioning use in  ebsEditioning use in  ebs
Editioning use in ebs
 
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
Microsoft Technical Webinar: Doing more with MS Office, SharePoint and Visual...
 
REST dojo Comet
REST dojo CometREST dojo Comet
REST dojo Comet
 
Yellow Slice Design Profile - 2016
Yellow Slice Design Profile - 2016Yellow Slice Design Profile - 2016
Yellow Slice Design Profile - 2016
 
Jamison Door Company Catalog
Jamison Door Company CatalogJamison Door Company Catalog
Jamison Door Company Catalog
 
SERTIFIKAT HSE EXPRESS
SERTIFIKAT HSE EXPRESSSERTIFIKAT HSE EXPRESS
SERTIFIKAT HSE EXPRESS
 

Similar to NSCC Training - Introductory Class

ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
Hitoshi Sato
 
HPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journeyHPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journey
Peter Clapham
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
OpenStack
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Ceph Community
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
Patrick McGarry
 
OpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC SystemsOpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC Systems
HPCC Systems
 
Introduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AIIntroduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AI
Tyrone Systems
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
SUSE Italy
 
DPDK Summit 2015 - Aspera - Charles Shiflett
DPDK Summit 2015 - Aspera - Charles ShiflettDPDK Summit 2015 - Aspera - Charles Shiflett
DPDK Summit 2015 - Aspera - Charles Shiflett
Jim St. Leger
 
HPC Infrastructure To Solve The CFD Grand Challenge
HPC Infrastructure To Solve The CFD Grand ChallengeHPC Infrastructure To Solve The CFD Grand Challenge
HPC Infrastructure To Solve The CFD Grand Challenge
Anand Haridass
 
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
Redis Labs
 
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Community
 
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureCeph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Danielle Womboldt
 
From the Archives: Future of Supercomputing at Altparty 2009
From the Archives: Future of Supercomputing at Altparty 2009From the Archives: Future of Supercomputing at Altparty 2009
From the Archives: Future of Supercomputing at Altparty 2009
Olli-Pekka Lehto
 
2018 03 25 system ml ai and openpower meetup
2018 03 25 system ml ai and openpower meetup2018 03 25 system ml ai and openpower meetup
2018 03 25 system ml ai and openpower meetup
Ganesan Narayanasamy
 
Design installation-commissioning-red raider-cluster-ttu
Design installation-commissioning-red raider-cluster-ttuDesign installation-commissioning-red raider-cluster-ttu
Design installation-commissioning-red raider-cluster-ttu
Alan Sill
 
Designing HPC & Deep Learning Middleware for Exascale Systems
Designing HPC & Deep Learning Middleware for Exascale SystemsDesigning HPC & Deep Learning Middleware for Exascale Systems
Designing HPC & Deep Learning Middleware for Exascale Systems
inside-BigData.com
 
NWU and HPC
NWU and HPCNWU and HPC
NWU and HPC
Wilhelm van Belkum
 
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Designing HPC, Deep Learning, and Cloud Middleware for Exascale SystemsDesigning HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
inside-BigData.com
 
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A...
"Performance Evaluation,  Scalability Analysis, and  Optimization Tuning of A..."Performance Evaluation,  Scalability Analysis, and  Optimization Tuning of A...
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A...Altair
 

Similar to NSCC Training - Introductory Class (20)

ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
 
HPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journeyHPC and cloud distributed computing, as a journey
HPC and cloud distributed computing, as a journey
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
OpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC SystemsOpenPOWER Acceleration of HPCC Systems
OpenPOWER Acceleration of HPCC Systems
 
Introduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AIIntroduction to HPC & Supercomputing in AI
Introduction to HPC & Supercomputing in AI
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
 
DPDK Summit 2015 - Aspera - Charles Shiflett
DPDK Summit 2015 - Aspera - Charles ShiflettDPDK Summit 2015 - Aspera - Charles Shiflett
DPDK Summit 2015 - Aspera - Charles Shiflett
 
HPC Infrastructure To Solve The CFD Grand Challenge
HPC Infrastructure To Solve The CFD Grand ChallengeHPC Infrastructure To Solve The CFD Grand Challenge
HPC Infrastructure To Solve The CFD Grand Challenge
 
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
Scaling Redis Cluster Deployments for Genome Analysis (featuring LSU) - Terry...
 
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
 
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureCeph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
 
From the Archives: Future of Supercomputing at Altparty 2009
From the Archives: Future of Supercomputing at Altparty 2009From the Archives: Future of Supercomputing at Altparty 2009
From the Archives: Future of Supercomputing at Altparty 2009
 
2018 03 25 system ml ai and openpower meetup
2018 03 25 system ml ai and openpower meetup2018 03 25 system ml ai and openpower meetup
2018 03 25 system ml ai and openpower meetup
 
Design installation-commissioning-red raider-cluster-ttu
Design installation-commissioning-red raider-cluster-ttuDesign installation-commissioning-red raider-cluster-ttu
Design installation-commissioning-red raider-cluster-ttu
 
Designing HPC & Deep Learning Middleware for Exascale Systems
Designing HPC & Deep Learning Middleware for Exascale SystemsDesigning HPC & Deep Learning Middleware for Exascale Systems
Designing HPC & Deep Learning Middleware for Exascale Systems
 
NWU and HPC
NWU and HPCNWU and HPC
NWU and HPC
 
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Designing HPC, Deep Learning, and Cloud Middleware for Exascale SystemsDesigning HPC, Deep Learning, and Cloud Middleware for Exascale Systems
Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems
 
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A...
"Performance Evaluation,  Scalability Analysis, and  Optimization Tuning of A..."Performance Evaluation,  Scalability Analysis, and  Optimization Tuning of A...
"Performance Evaluation, Scalability Analysis, and Optimization Tuning of A...
 

Recently uploaded

Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
UiPathCommunity
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 

Recently uploaded (20)

Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 

NSCC Training - Introductory Class

  • 2. • Introduction to NSCC • About HPC • More about NSCC HPC cluster • PBS Pro (Scheduler) • Compilers and Libraries • DeveloperTools • Co-processor / Accelerators • Environment Modules • Applications • User registration procedures • Feedback 2 The Discussion
  • 4. • State-of-the-art national facility with computing, data and resources to enable users to solve science and technological problems, and stimulate industry to use computing for problem solving, testing designs and advancing technologies. • Facility will be linked by high bandwidth networks to connect these resources and provide high speed access to users anywhere and everyone. Introduction: The National Supercomputing Centre (NSCC) 4
  • 5. Introduction:Vision & Objectives Vision:“Democratising Access to Supercomputing” 5 Making Petascale Supercomputing accessible to the ordinary researcher 1 Bringing Petascale Computing and Storage and Gigabit speed networking to the ordinary person 2 Supporting National R&D Initiatives1 Objectives of NSCC Attracting Industrial Research Collaborations2 Enhancing Singapore’s Research Capabilities3
  • 7. 7 What is HPC? • Term HPC stands for High Performance Computing or High Performance Computer • Tightly coupled personal computers with high speed interconnect • Measured in FLOPS (FLoating point Operations Per Second) • Architectures – NUMA (Non-uniform memory access)
  • 8. Major Domains where HPC is used Engineering Analysis • Fluid Dynamics • Materials Simulation • Crash simulations • Finite Element Analysis Scientific Analysis • Molecular modelling • Computational Chemistry • High energy physics • Quantum Chemistry Life Sciences • Genomic Sequencing and Analysis • Protein folding • Drug design • Metabolic modelling Seismic analysis • Reservoir Simulations and modelling • Seismic data processing 8
  • 9. Major Domains where HPC is used Chip design & Semiconductor • Transistor simulation • Logic Simulation • Electromagnetic field solver Computational Mathematics • Monte-Carlo methods • Time stepping and parallel time algorithms • Iterative methods Media and Animation • VFX and visualization • Animation Weather research • Atmospheric modelling • Seasonal time- scale research • - Major Domains where HPC is used 9
  • 10. Major Domains where HPC is used • And More – Bigdata – Information Technology – Cyber security – Banking and Finance – Data mining 10
  • 11. 11 Introduction to NSCC HPC Cluster
  • 12. Executive Summary • 1 Petaflop System – About 1300 nodes – Homogeneous and Heterogeneous architectures • 13 Petabytes of Storage – One of the Largest and state of the art Storage architecture • Research and Industry – A*STAR, NUS, NTU, SUTD – And many more commercial and academic organizations 12
  • 13. HPC Stack in NSCC Mellanox 100 Gbps Network Intel Parallel studio Allinea Tools PBSPro Scheduler Lustre & GPFS HPC Application software Operating System RHEL 6.6 and CentOS 6.6 Fujitsu x86 Servers NVidia Tesla K40 GPUDDN Storage Application Modules 13
  • 14. 14 NSCC Supercomputer Architecture Base Compute Nodes (1160 nodes) Accelerated Nodes (128 nodes) Parallel File system / Tiered storage InfiniBand network - Fully non- blocking Ethernet NW GIS FAT node NUS Peripheral Servers NTU Peripheral Servers NSCC Peripheral Servers NSCC Direct users VPN
  • 17. 17 Genomic Institute of Singapore (GIS) National Supercomputing Center (NSCC) 2km Connection between GIS and NSCC Large memory node (1TB), Ultra high speed 500Gbps enabled 2012: 300 Gbytes/week 2015: 4300 Gbytes/week x 14
  • 18. NGSP Sequencers at B2 (Illumina + PacBio) NSCC Gateway STEP 2: Automated pipeline analysis once sequencing completes. Processed data resides in NSCC 500Gbps Primary Link Data Manager STEP 3: Data manager index and annotates processed data. Replicate metadata to GIS. Allowing data to be search and retrieved from GIS Data ManagerCompute Tiered Storage POLARIS, Genotyping & other Platforms in L4~L8 Tiered Storage STEP 1: Sequencers stream directly to NSCC Storage (NO footprint in GIS) Compute 1 Gbps per sequencer 10 Gbps 1 Gbps per machine 100 Gbps 10 Gbps A*CRC-NSCC GIS A*CRC: A*Star Computational Resource Center GIS: Genome Institute of Singapore Direct streaming of Sequence Data from GIS to remote Supercomputer in NSCC 2km
  • 19. The Hardware EDR Interconnect • Mellanox EDR Fat Tree within cluster • InfiniBand connection to all end-points (login nodes) at three campuses • 40/80/500 Gbps throughput network extend to three campuses (NUS/NTU/GIS) Over13PB Storage • HSM Tiered, 3 Tiers • I/O 500 GBps flash burst buffer , 10x Infinite Memory Engine (IME) ~1 PFlops System • 1,288 nodes (dual socket, 12 cores/CPU E5-2690v3) • 128 GB DDR4 / node • 10 Large memory nodes (1x6TB, 4x2TB, 5x 1TB) 19
  • 20. Compute nodes
    • Large memory nodes
      – 9 nodes configured with high memory
      – Fujitsu Server PRIMERGY RX4770 M2
      – Intel Xeon CPU E7-4830 v3 @ 2.10 GHz
      – 4x 1 TB, 4x 2 TB and 1x 6 TB memory configurations
      – EDR InfiniBand
    • Standard compute nodes
      – 1,160 nodes, Fujitsu Server PRIMERGY CX2550 M1
      – 27,840 CPU cores, Intel Xeon CPU E5-2690 v3 @ 2.60 GHz
      – 128 GB per server
      – EDR InfiniBand
      – Liquid cooling system
  • 21. Accelerate your computing
    • Accelerator nodes
      – 128 nodes with NVIDIA GPUs (otherwise identical to the standard compute nodes)
      – NVIDIA Tesla K40 (2,880 CUDA cores each)
      – 368,640 GPU cores in total
    • Visualization nodes
      – 2 Fujitsu Celsius R940 graphic workstations
      – Each with 2x NVIDIA Quadro K4200
      – NVIDIA Quadro Sync support
  • 22. NSCC Data Centre – Green features
    • Warm-water cooling for CPUs (Cool-Central® Liquid Cooling technology)
      – First free-cooling system in Singapore and South-East Asia
      – Water is maintained at 40°C: it enters the racks at 40°C and exits at 45°C
      – Equipment on the technical floor (18th) cools the water back down using only fans
      – The system can easily be extended for future expansion
    • Green features of the Data Centre
      – PUE of 1.4 (the average for Singapore is above 2.5)
  • 23. Parallel file system
    • Burst buffer
      – 265 TB burst buffer, 500 GB/s throughput
      – Infinite Memory Engine (IME)
    • Scratch
      – 4 PB scratch storage, 210 GB/s throughput
      – SFA12KX EXAScaler storage, Lustre file system
    • Home and project (secure) storage
      – 4 PB persistent storage, 100 GB/s throughput
      – GRIDScaler storage, IBM Spectrum Scale (formerly GPFS)
    • Archive storage
      – 5 PB, archive purposes only
      – WOS-based archive system
  • 26. NSCC Storage
    • Tier 0 – Burst Buffer: Infinite Memory Engine (IME), 265 TB, 500 GB/s
    • Tier 0 – ScratchFS: EXAScaler Lustre® storage, 4 PB, 210 GB/s
    • Tier 1 – HomeFS and ProjectFS: GRIDScaler GPFS® storage, 4 PB, 100 GB/s
    • Tier 2 – Archive: WOS Active Archive with HSM, 5 PB, 20 TB/h
  • 27. Software Stack
    • Operating system: CentOS 6.6
    • Scheduler: PBS Pro
    • Compilers: GCC, Intel Parallel Studio
    • Libraries: GNU, Intel MKL
    • Developer tools: Allinea tools
    • GPGPU: CUDA Toolkit 7.5
    • Environment Modules
  • 28. PBS Professional (Job Scheduler)
  • 29. Why PBS Professional (Scheduler)?
    • Workload management solution that maximizes the efficiency and utilization of high-performance computing (HPC) resources and improves job turnaround
    • Robust workload management
      – Floating licenses; scalability with flexible queues; job arrays
      – User and administrator interfaces
      – Job suspend/resume; application checkpoint/restart
      – Automatic file staging; accounting logs; access control lists
    • Advanced scheduling algorithms
      – Resource-based and preemptive scheduling; optimized node sorting; enhanced job placement
      – Advance and standing reservations; cycle harvesting across workstations
      – Scheduling across multiple complexes; network topology scheduling
      – Manages both batch and interactive work; backfilling
    • Reliability, availability and scalability
      – Server failover feature; automatic job recovery; system monitoring
      – Integration with MPI solutions
      – Tested to manage 1,000,000+ jobs per day and to accept 30,000 jobs per minute
      – EAL3+ security; checkpoint support
  • 30. Process Flow of a PBS Job
    1. User submits a job
    2. PBS server returns a job ID
    3. PBS scheduler requests a list of resources from the server *
    4. PBS scheduler sorts all the resources and jobs *
    5. PBS scheduler informs the PBS server which host(s) the job can run on *
    6. PBS server pushes the job script to the execution host(s)
    7. PBS MoM executes the job script
    8. PBS MoM periodically reports resource usage back to the PBS server *
    9. When the job is completed, PBS MoM copies back the output and error files
    10. Job execution completed / user notification sent
    (Slide diagram: PBS server and scheduler placing the job "pbsworks", with its ncpus/mem/host request, onto hosts A–C on the cluster network)
    Note: * This information is for debugging purposes only. It may change in future releases.
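  The flow above maps onto a handful of standard PBS Pro commands. A minimal sketch (the job ID, script name and output-file names are illustrative):
    $ qsub submit.pbs        # steps 1-2: submit; the server returns a job ID, e.g. 12345.wlm01
    $ qstat -u $USER         # watch the job while the scheduler places and runs it
    $ qstat -f 12345         # full details of one job (requested resources, execution host)
    $ qdel 12345             # remove the job if it is no longer needed
    # When the job ends, PBS MoM copies the output/error files (e.g. <jobname>.o<jobid> and
    # <jobname>.e<jobid>) back to the directory the job was submitted from.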
  • 31. Compute Manager GUI: Job Submission Page
    • Applications panel – displays the applications available on the registered PAS server
    • Submission Form panel – displays a job submission form for the application selected in the Applications panel
    • Directory Structure panel – displays the directory structure of the location specified in the Address box
    • Files panel – displays the files and subdirectories of the directory selected in the Directory Structure panel
  • 32. Job Queues & Scheduling Policies
    • Long: batch queue; run-time limit 240 hours; 1,024 cores; for jobs expected to run for a longer time
    • Development: interactive queue; run-time limit 24 hours; 48 cores; coding, profiling and debugging
    • Normal (default): batch queue; run-time limit 3 days; 27,000 cores; default queue
    • Large Memory: batch queue; no run-time limit listed; 360 cores; jobs dispatched based on memory requirement
    • GPU: GPU batch queue; no run-time limit listed; 368,640 CUDA cores; specific for GPU jobs
    • Visualization: interactive queue; run-time limit 8 hours; capacity listed as 1; high-end graphics card
    • Production: batch queue; no run-time limit listed; 480 cores; GIS queue
    • The queue is chosen at submission time with qsub -q, as sketched below
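  A hedged example of queue selection (queue names, resource requests and walltime limits should be checked against the table above and qstat -Q; the script name is illustrative):
    $ qsub -q normal -l select=2:ncpus=24:mpiprocs=24 -l walltime=24:00:00 submit.pbs   # batch job in the default queue
    $ qsub -I -q dev -l select=1:ncpus=24 -l walltime=02:00:00                          # interactive session in the development queue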
  • 35. Parallel programming – OpenMP
    • Available compilers: gcc/gfortran/icc/ifort
    • OpenMP (Open Multi-Processing) is not OpenMPI: OpenMP is a shared-memory programming approach, while OpenMPI is an implementation of MPI
    • An API for shared-memory parallel programming in C/C++ and Fortran
    • Parallelization in OpenMP is achieved through threads
    • Programming with OpenMP is comparatively easy, as it mainly involves pragma directives
    • An OpenMP program cannot communicate with other nodes over the network; it is used mainly for SMP programming within a node
    • Different stages of the program can use different numbers of threads (the fork–join model shown on the slide)
    • A typical build-and-run sequence is sketched below
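  A minimal sketch, assuming the GCC and Intel compilers from the software stack (the source file omp_hello.c is a hypothetical example):
    $ module load composerxe/2016.1.150        # Intel compilers, module name as used later in this deck
    $ icc -qopenmp omp_hello.c -o omp_hello    # or: gcc -fopenmp omp_hello.c -o omp_hello
    $ export OMP_NUM_THREADS=24                # one thread per core on a 24-core compute node
    $ ./omp_hello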
  • 36. Parallel Programming – MPI
    • MPI stands for Message Passing Interface
    • MPI is a library specification; implementations typically provide compiler wrappers around standard C/C++/Fortran compilers, with bindings available for languages such as Java and Python
    • Typically used for distributed-memory communication across nodes
    • A compile-and-run sequence is sketched below
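  A minimal sketch using the Intel MPI module that appears later in this deck (the source file mpi_hello.c is a hypothetical example):
    $ module load impi/5.1.2                   # Intel MPI; provides the mpiicc/mpicc wrappers and mpirun
    $ mpiicc mpi_hello.c -o mpi_hello          # wrapper around icc (mpicc wraps gcc)
    $ mpirun -n 48 ./mpi_hello                 # inside a PBS job, the 48 ranks are spread over the allocated nodes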
  • 38. Allinea DDT
    • DDT – Distributed Debugging Tool from Allinea
    • Graphical interface for debugging
      – Serial applications/codes
      – OpenMP applications/codes
      – MPI applications/codes
      – CUDA applications/codes
    • You control the pace of the code execution and examine the execution flow and variables
    • Typical scenario (a launch example follows below)
      – Set a point in your code where you want execution to stop
      – Let your code run until the point is reached
      – Check the variables of concern
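  A typical DDT session, sketched under the assumption that DDT is provided as a module (check module avail for the exact name); it reuses the wave_c example from the MAP slide:
    $ module load impi/5.1.2
    $ mpiicc -g -O0 -o wave_c wave_c.c       # -g keeps debug symbols, -O0 keeps variables inspectable
    $ module load ddt                        # module name assumed
    $ ddt mpiexec -n 4 ./wave_c 20           # DDT opens its GUI attached to the 4 ranks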
  • 39. Allinea MAP
    • MAP – application profiling tool from Allinea
    • Graphical interface for profiling
      – Serial applications/codes
      – OpenMP applications/codes
      – MPI applications/codes
  • 40. Allinea MAP
    • Running your code with MAP:
      $ module load impi/5.1.2
      $ mpiicc -g -O0 -o wave_c wave_c.c
      $ module load map/a.b.c
      $ map mpiexec -n 4 ./wave_c 20
  • 43. GPU
    • GPUs – Graphics Processing Units – were originally built to deliver better graphics performance
    • With the research invested in GPUs, it was found that they also perform well on floating-point computation
    • This gave rise to the term GPGPU (General-Purpose GPU)
    • The CUDA Toolkit includes the compiler, math libraries, tools and debuggers
  • 44. GPU in NSCC
    • GPU configuration
      – 128 GPU nodes in total, each server with 1 Tesla K40 GPU
      – 128 GB host memory per server, 12 GB device memory
      – 2,880 CUDA cores per GPU
    • Connecting to a GPU server
      – To compile a GPU application: submit an interactive job requesting a GPU resource, then compile with the NVCC compiler
      – To submit a GPU job: use qsub from the login nodes, or log in to Compute Manager
    • A hedged example of the interactive workflow is sketched below
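  In this sketch, the GPU queue name, the resource-selection syntax, the CUDA module name and the source file vector_add.cu are all assumptions to be checked against the user guide:
    $ qsub -I -q gpu -l select=1:ncpus=24:ngpus=1 -l walltime=01:00:00   # interactive job on a GPU node (queue/resource names assumed)
    $ module load cuda/7.5                                               # CUDA Toolkit 7.5 from the software stack (module name assumed)
    $ nvcc vector_add.cu -o vector_add                                   # compile with the NVCC compiler on the GPU node
    $ ./vector_add                                                       # quick functional test; production runs go through a batch GPU job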
  • 46. What are Environment Modules?
    • Environment Modules dynamically load/unload environment variables such as PATH, LD_LIBRARY_PATH, etc.
    • They are based on module files written in the Tcl language
    • They are shell independent
    • They make it easy to maintain different versions of the same software
    • Users have the flexibility to create their own module files
    • Typical commands are shown below
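  These are the standard Environment Modules commands; the module name used is one that already appears in this deck:
    $ module avail                           # list all available module files
    $ module load composerxe/2016.1.150      # prepend the Intel compiler paths to PATH, LD_LIBRARY_PATH, etc.
    $ module list                            # show currently loaded modules
    $ module unload composerxe/2016.1.150    # remove them again
    $ module purge                           # clear every loaded module in this shell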
  • 49. Compatible Applications
    • Engineering applications, quasiparticle calculation, quantum chemistry, numerical analysis and weather research codes
  • 50. Application software list: https://help.nscc.sg/software-list/
  • 51. Managed Services offered
    • Infrastructure services: computational resources, storage management
    • Incident resolution: hardware break-fix, software incident resolution
    • General service requests: data management, job management, software installation, etc.
    • Specialized service requests: code optimization, special queue configuration, etc.
    • Training services: introductory classes, code optimization techniques, parallel profiling, etc.
    • Helpdesk: portal/e-mail/phone, service requests via the portal, interactive job submission portal
  • 52. Where is NSCC?
    • The NSCC petascale supercomputer is in the Connexis building: 1 Fusionopolis Way, Level 17, Connexis South Tower, Singapore 138632
    • 40 Gbps links extended to NUS, NTU and GIS
    • Login nodes are placed in the NUS, NTU and GIS data centres
    • Access to NSCC is just like your local HPC system
  • 53. Supported login methods
    • SSH
      – From a Windows PC, use PuTTY or any standard SSH client; the hostname is nscclogin.nus.edu.sg; use NSCC credentials
      – From a Linux machine or a Mac terminal: ssh username@login-astar.nscc.sg
    • File transfer
      – SCP or any other secure-shell file-transfer software from Windows
      – Use the scp command from Mac/Linux
    • Compute Manager
      – Open any standard web browser and go to https://loginweb-astar.nscc.sg
      – Use NSCC credentials to log in
    • Outside campus
      – Connect to the campus VPN to reach the services above
    • Example commands are sketched below
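  From a Linux or Mac terminal (the A*STAR login host listed above is used here; NUS/NTU users substitute their own login host, and the remote paths are illustrative):
    $ ssh username@login-astar.nscc.sg                          # interactive login with NSCC credentials
    $ scp input.dat username@login-astar.nscc.sg:~/project/     # push a local file to the cluster
    $ scp username@login-astar.nscc.sg:~/project/out.log .      # pull results back to the local machine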
  • 54. NSCC HPC Support (proposed to be available by 15th Mar)
    • Corporate info web portal: http://nscc.sg
    • NSCC HPC help portal: http://help.nscc.sg
    • NSCC support email: help@nscc.sg
    • NSCC workshop portal: http://workshop.nscc.sg
  • 55. Help us improve. Take the online survey! Visit: http://workshop.nscc.sg >> Survey
  • 56. Help portal: https://help.nscc.sg/ – FAQs and NSCC enrolment
  • 59. Contact
    • Web site: http://nscc.sg
    • Helpdesk: https://help.nscc.sg
    • Email: help@nscc.sg
    • Phone: +65 6645 3412
  • 61. User Enrollment
    Instructions:
    • Open https://help.nscc.sg
    • Navigate to User Services -> Enrollment
    • Click on Login
    • Select your organization (NUS/NTU/A*STAR) from the drop-down list
    • Input your credentials
    Ref: https://help.nscc.sg -> User Guides -> User Enrollment guide
  • 62. Login to the NSCC login nodes
    • Download PuTTY from the internet and open it
    • Type the login server name (login.nscc.sg)
    • Input your credentials to log in
  • 63. Compute Manager
    • Open a web browser (Firefox or IE)
    • Go to https://nusweb.nscc.sg / https://ntuweb.nscc.sg / https://loginweb-astar.nscc.sg
    • Use your credentials to log in
    • Submit a sample job
  • 64. Transfer files
    • Use FileZilla to transfer files
  • 65. Creating a PBS job submission script
    • Use the sample script below:
      cat submit.pbs
      #!/bin/bash
      #PBS -q dev
      #PBS -l select=1:ncpus=24:mpiprocs=24
      #PBS -l place=scatter
      cd ${PBS_O_WORKDIR}
      sleep 30
    • Submit it with:
      qsub submit.pbs
  • 66. Environment Modules
    • Open PuTTY and log in
    • Type module avail to list the available modules
    • Type module load <module name> to load one
  • 67. Compiling a simple C program
    • Use PuTTY to log in
    • Create helloworld.c:
      #include <stdio.h>
      int main(void)
      {
          printf("Helloworld\n");
          return 0;
      }
    • Load the compiler: module load composerxe/2016.1.150
    • Compile: icc helloworld.c -o helloworld.o
  • 68. Submit the job
      cat submit.pbs
      #!/bin/bash
      #PBS -q dev
      #PBS -l select=1:ncpus=1
      cd ${PBS_O_WORKDIR}
      ./helloworld.o
  • 69. Compiling an MPI C program
    • Use PuTTY to log in
    • Create helloworld.c:
      #include <mpi.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
          int rank;
          char hostname[256];
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          gethostname(hostname, 255);
          printf("Hello world! I am process number: %d on host %s\n", rank, hostname);
          MPI_Finalize();
          return 0;
      }
    • Load the compiler and Intel MPI: module load composerxe/2016.1.150 and module load impi/5.1.2
    • Compile with the MPI wrapper: mpiicc helloworld.c -o mpihello.o
  • 70. Submit the MPI job
      cat submit.pbs
      #!/bin/bash
      #PBS -q dev
      #PBS -l select=1:ncpus=24:mpiprocs=24
      #PBS -l place=scatter
      cd ${PBS_O_WORKDIR}
      mpirun ./mpihello.o
  • 71. Submit a pre-compiled application
      cat submit.pbs
      #!/bin/bash
      #PBS -q dev
      #PBS -l select=1:ncpus=24:mpiprocs=24
      #PBS -l place=scatter
      cd ${PBS_O_WORKDIR}
      mpirun ./mpihello.o
  • 72. Using scratch space
      #!/bin/bash
      #PBS -N My_Job                                 # Name of the job
      #PBS -l select=1:ncpus=24:mpiprocs=24          # Number of nodes and CPUs to use
      #PBS -W sandbox=private                        # Get PBS to run the job in a private sandbox
      #PBS -W stagein=file_io@wlm01:/home/adm/sup/fsg1/<my input directory>
      # Directory where all the input files are available; its files are copied to the
      # scratch space, creating a directory named file_io
      #PBS -W stageout=*@wlm01:/home/adm/sup/fsg1/<myoutput directory>
      # Output directory path in my home directory; once the job finishes, the files in
      # file_io on scratch are copied back to <myoutput directory>
      #PBS -q normal
      cd ${PBS_O_WORKDIR}
      echo "PBS_O_WORKDIR is : $PBS_O_WORKDIR"
      echo "PBS JOB DIR is: $PBS_JOBDIR"
      # Notice that the output of pwd will be in the Lustre scratch space
      echo "PWD is : `pwd`"
      sleep 30
      #mpirun ./a.out < input_file > output_file