SVG Upgrade Workshop, CESGA 2012

NEW COMPUTING & STORAGE SERVICES AT CESGA 2012

Transcript

  • 1. SVG Upgrade 2012
  • 2. Agenda
    Upgraded SVG
      • Motivation
      • Hardware configuration
      • Infiniband network
      • Environment
      • Queues configuration
      • Benchmarks
    CESGA supercomputers in 2013
      • Distribution of jobs
  • 3. Hardware
    Thin nodes: 8x HP SL230s Gen8, each with
      • 2x Intel Xeon E5-2670 (Sandy Bridge), 8 cores each, 2.6 GHz
      • 64 GB main memory, DDR3-1600 MHz
      • 2 TB SAS hard disk
      • 2x 1GbE
      • 2x Infiniband FDR 56 Gb
      • Peak performance: 332 GFlops
    Fat nodes: 2x HP DL560 Gen8, each with
      • 4x Intel Xeon E5-4620, 8 cores each, 2.2 GHz
      • 512 GB main memory, DDR3-1600 MHz
      • 6x 1 TB hard disks
      • 4x 1GbE
      • Infiniband FDR 56 Gb
      • 10GbE
      • Peak performance: 563 GFlops
    Totals: 24 Intel Sandy Bridge CPUs, 192 cores, 1.5 TB memory, 28 TB disk, 3788 GFlops peak performance
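    The totals follow from the per-node figures above; a minimal shell sketch of the arithmetic (node counts and per-node values are the ones quoted on this slide):

        echo $(( 8*2 + 2*4 ))         # 8 thin x 2 CPUs + 2 fat x 4 CPUs = 24 Sandy Bridge processors
        echo $(( (8*2 + 2*4) * 8 ))   # 24 CPUs x 8 cores = 192 cores
        echo $(( 8*64 + 2*512 ))      # 512 GB + 1024 GB = 1536 GB, i.e. ~1.5 TB memory
        echo $(( 8*2 + 2*6 ))         # 16 TB + 12 TB = 28 TB disk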
  • 4. Motivation
    Target: a competitive solution for
      • Parallel MPI & OpenMP applications
      • Memory-intensive applications
    Alternative to Finis Terrae
    Lower cost of operation and maintenance
    Finis Terrae II prototype
      • To define new requirements
  • 5. Infiniband network
    Mellanox SX6036 switch
      • 36 ports, FDR 56 Gb/s
      • 4 Tb/s aggregated non-blocking bandwidth
      • 1 microsecond MPI latency
    Dual connection: high availability, same bandwidth
  • 6. Environment
    Integrated in the SVG cluster:
      • Scientific Linux 6.3 (Red Hat)
      • Common /opt/cesga
      • Common /home, stores…
      • Same gateway: svg.cesga.es
      • Interactive use: compute -arch sandy
      • Jobs: qsub -arch sandy
      • Binary compatible: no need to recompile
  • 7. Usage
    SSH to svg.cesga.es, then:
      • AMD nodes: compute -arch amd / qsub -arch amd
      • Sandy Bridge nodes: compute -arch sandy / qsub -arch sandy
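    A minimal sketch of that flow (the hostname and the compute/qsub flags are taken from these slides; the job script name is hypothetical):

        ssh user@svg.cesga.es          # common gateway for the whole SVG cluster
        compute -arch sandy            # interactive session on a Sandy Bridge node
        qsub -arch sandy myjob.sh      # batch job directed to the Sandy Bridge nodes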
  • 8. Configuration
    Full production phase (November 2012):
      • Only runs jobs with the -sandy option
      • General availability of specifically compiled applications
      • Maximum wall-clock time: 12 hours
      • Maximum 2 jobs per node on fat nodes
    Under consideration for the near future:
      • Jobs without the -sandy option
      • Higher wall-clock time
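    A hedged sketch of a submission that fits these limits (-arch sandy is the flag shown on the previous slides; h_rt is standard Grid Engine syntax for wall-clock time, assumed here rather than confirmed by the slide):

        # request a Sandy Bridge node within the 12-hour wall-clock maximum
        qsub -arch sandy -l h_rt=12:00:00 myjob.sh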
  • 9. Queues configuration
    Exclusive nodes:
      • To take advantage of Infiniband
      • To take advantage of Turbo Boost
      • Jobs not interfering with each other
      • Maximum performance
    Maximum 2 jobs on fat nodes:
      • 32-core nodes
      • Exclusive if required by the jobs (cores, memory)
  • 10. Queues: limits
    See "module help sge"
    Up to 112 cores (MPI)
    Up to 32 cores shared memory (OpenMP)
    Memory:
      • Up to 64 GB per core
      • Up to 512 GB for non-MPI jobs
      • Up to 1024 GB per job
    Scratch: up to 1.7 TB
    Execution time: 12 hours
    If you need more resources, request them at https://www.altausuarios.cesga.es/
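    A hypothetical sketch of requesting resources within these limits (the parallel-environment names "mpi" and "smp" and the exact resource syntax are assumptions, not confirmed CESGA settings; "module help sge" documents the real ones):

        qsub -arch sandy -pe mpi 112 -l h_rt=12:00:00 mpi_job.sh     # up to 112 MPI cores
        qsub -arch sandy -pe smp 32  -l h_rt=12:00:00 openmp_job.sh  # up to 32 shared-memory cores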
  • 11. Intel® Xeon® Processor E5-2600 Turbo Boost
    Max Turbo Boost frequency is based on the number of 100 MHz increments above the marked frequency (+1 = 0.100 GHz, +2 = 0.200 GHz, +3 = 0.300 GHz, etc.)
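    A worked example of that rule (the bin count used here is illustrative only, not the actual Turbo Boost table of any specific E5-2600 part):

        # marked frequency + bins x 100 MHz, e.g. an E5-2670 marked at 2.6 GHz with +3 bins
        echo $(( 2600 + 3*100 ))   # 2900 MHz = 2.9 GHz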
  • 12. CESGA supercomputers in 2013
    Common to both systems:
      • Shared storage: /home, /store
      • Linux O.S.
      • Grid Engine batch scheduler
    Finis Terrae (FT): capability computing
      • Parallel MPI jobs (>4 … 1024 cores)
      • Huge memory (>4 … 1024 GB)
      • Huge parallel scratch (>50 … 10,000 GB)
    Superordenador Virtual Gallego (SVG): throughput and capacity computing
      • Sequential & parallel jobs, up to 32 cores per node and 112 cores MPI
      • Low-medium-large memory (up to 512 GB!)
      • Medium single-node scratch (<1000 GB)
      • Customized clusters – Cloud services
  • 13. Other improvements
    VPN for home connection
    Storage:
      • Do not use SFS from SVG
      • Use "store"
    High-availability front-ends: svg.cesga.es
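    A hypothetical sketch of following the storage recommendation (the $STORE variable and target path are illustrative assumptions, not names confirmed by the slide):

        # copy results to "store" rather than writing them to SFS from the SVG nodes
        cp results.dat $STORE/myproject/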