Lustre 2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE
Hitoshi Sato*1, Shuichi Ihara*2, Satoshi Matsuoka*1
*1 Tokyo Institute of Technology
*2 Data Direct Networks Japan
1
Tokyo Tech’s TSUBAME2.0 Nov. 1, 2010
“The Greenest Production Supercomputer in the World”
[Figure: TSUBAME 2.0 new development (32nm/40nm parts). Labels include: >400 GB/s Mem BW, 80 Gbps NW BW, ~1 kW max; >1.6 TB/s Mem BW; >12 TB/s Mem BW, 35 kW max; >600 TB/s Mem BW, 220 Tbps NW bisection BW, 1.4 MW max]
2
TSUBAME2 System Overview
Storage: 11 PB (7 PB HDD, 4 PB tape, 200 TB SSD), connected over InfiniBand QDR networks
• Parallel file system volumes ("Global Work Space" #1-#3, 3.6 PB): SFA10K #1-#5; /data0, /work0, /work1, /gscr, /data1
• Home volumes (1.2 PB): SFA10K #6 with GPFS #1-#4; HOME served via "cNFS/Clustered Samba w/ GPFS" and "NFS/CIFS/iSCSI by BlueARC" (system application, iSCSI)
• Storage links: QDR IB (x4) x 20, QDR IB (x4) x 8, 10 GbE x 2
Computing nodes: 17.1 PFlops (SFP), 5.76 PFlops (DFP), 224.69 TFlops (CPU), ~100 TB MEM, ~200 TB SSD
• Thin nodes: HP ProLiant SL390s G7, 1408 nodes (32 nodes x 44 racks); CPU: Intel Westmere-EP 2.93 GHz, 6 cores x 2 = 12 cores/node; GPU: NVIDIA Tesla K20X, 3 GPUs/node; Mem: 54 GB (96 GB); SSD: 60 GB x 2 = 120 GB (120 GB x 2 = 240 GB)
• Medium nodes: HP ProLiant DL580 G7, 24 nodes; CPU: Intel Nehalem-EX 2.0 GHz, 8 cores x 2 = 32 cores/node; GPU: NVIDIA Tesla S1070, NextIO vCORE Express 2070; Mem: 128 GB; SSD: 120 GB x 4 = 480 GB
• Fat nodes: HP ProLiant DL580 G7, 10 nodes; CPU: Intel Nehalem-EX 2.0 GHz, 8 cores x 2 = 32 cores/node; GPU: NVIDIA Tesla S1070; Mem: 256 GB (512 GB); SSD: 120 GB x 4 = 480 GB
Interconnects: full-bisection optical QDR InfiniBand network
• Core switches: Voltaire Grid Director 4700 x 12 (324 QDR ports each)
• Edge switches: Voltaire Grid Director 4036 x 179 (36 QDR ports each)
• Edge switches (w/ 10 GbE ports): Voltaire Grid Director 4036E x 6 (34 QDR ports, 2 x 10 GbE ports each)
Storage roles (bottom of diagram): GPFS + tape (2.4 PB HDD), Lustre, home, local SSDs
3
TSUBAME2 System Overview (annotated)
Same system diagram as the previous slide, annotated with I/O workload characteristics:
• Mostly read I/O (data-intensive apps, parallel workflows, parameter surveys)
• Home storage for computing nodes; cloud-based campus storage services; backup
• Fine-grained R/W I/O (checkpoints, temporary files)
4
Towards TSUBAME 3.0
Interim upgrade: TSUBAME2.0 to 2.5 (Sept. 10th, 2013)
• TSUBAME2.0 compute nodes: Fermi GPUs, 3 x 1408 = 4224 GPUs
• Upgraded the TSUBAME2.0 GPUs from NVIDIA Fermi M2050 to Kepler K20X
• SFP/DFP peak: from 4.8 PF / 2.4 PF to 17 PF / 5.7 PF
TSUBAME-KFC
• A TSUBAME3.0 prototype system with advanced cooling for next-generation supercomputers
• 40 compute nodes, oil-submerged; 160 NVIDIA Kepler K20X; 80 Ivy Bridge Xeons; FDR InfiniBand
• Total 210.61 TFlops; system 5.21 TFlops
Storage design also matters!!
5
Existing Storage Problems
Small I/O
• PFS
  – Throughput-oriented design
  – Master-worker configuration
• Performance bottlenecks on metadata references
• Low IOPS (1k – 10k IOPS)
I/O contention
• Shared across compute nodes
  – I/O contention from multiple, diverse applications
  – I/O performance degradation
[Lustre I/O monitoring chart: IOPS over time — small I/O ops from users' apps drive PFS operation down until it recovers; a monitoring sketch follows this slide]
6
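The monitoring above samples server-side I/O counters over time. A minimal sketch of such sampling, assuming it runs on an OSS where `lctl get_param -n obdfilter.*.stats` is available (counter layout varies between Lustre releases, so treat the parsing as illustrative):

```python
import re
import subprocess
import time

def read_op_counts():
    """Return cumulative read/write sample counts from obdfilter stats.

    Assumes this runs on an OSS and that the stats output contains lines
    such as `write_bytes 123 samples [bytes] ...`; the exact layout differs
    between Lustre releases, so this parser is a sketch only.
    """
    out = subprocess.run(
        ["lctl", "get_param", "-n", "obdfilter.*.stats"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = {"read": 0, "write": 0}
    for line in out.splitlines():
        m = re.match(r"(read|write)_bytes\s+(\d+)\s+samples", line.strip())
        if m:
            counts[m.group(1)] += int(m.group(2))
    return counts

if __name__ == "__main__":
    interval = 10.0
    prev = read_op_counts()
    while True:
        time.sleep(interval)
        cur = read_op_counts()
        # Delta of cumulative counters over the interval ~ IOPS seen by this OSS.
        print("read IOPS ~%.1f  write IOPS ~%.1f" % (
            (cur["read"] - prev["read"]) / interval,
            (cur["write"] - prev["write"]) / interval))
        prev = cur
```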
Requirements for Next Generation I/O Sub-System (1)
• Basic requirements
  – TB/sec sequential bandwidth for traditional HPC applications
  – Extremely high IOPS to cope with applications that generate massive small I/O
    • Examples: graph processing, etc.
  – Some application workflows have different I/O requirements for each workflow component
• Reduction/consolidation of PFS I/O resources is needed while achieving TB/sec performance
  – We cannot simply use a larger number of I/O servers and drives to achieve ~TB/s throughput!
  – Many constraints
    • Space, power, budget, etc.
    • Performance per watt and performance per RU are crucial
7
Requirements for Next Generation I/O Sub-System (2)
• New approaches
  • TSUBAME 2.0 has pioneered the use of local flash storage as a high-IOPS alternative to an external PFS
  • Tiered and hybrid storage environments, combining (node-)local flash with an external PFS
• Industry status
  • High-performance, high-capacity flash (and other new semiconductor devices) is becoming available at reasonable cost
  • New approaches/interfaces for using high-performance devices (e.g. NVM Express)
8
Key Requirements of the I/O System
• Electrical power and space are still challenging
  – Reduce the number of OSSs/OSTs while keeping high I/O performance and large storage capacity
  – How to maximize Lustre performance
• Understand new types of flash storage devices
  – Any benefits with Lustre? How to use them?
  – Does an MDT on SSD help?
9
Lustre 2.x Performance Evaluation
• Maximize OSS/OST performance for large sequential I/O
  – Single OSS and OST performance
  – 1 MB vs. 4 MB RPC (see the RPC tuning sketch after this slide)
  – Not only peak performance, but also sustained performance
• Small I/O to a shared file
  – 4K random read access to Lustre
• Metadata performance
  – CPU impact?
  – Do SSDs help?
10
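The 1 MB vs. 4 MB RPC comparison relies on the large-I/O patches on the server side; on the client the RPC size is governed by the standard `max_pages_per_rpc` tunable. A minimal sketch of inspecting and raising it, assuming 4 KiB pages (the parameter names are standard Lustre tunables, but the 4 MB value only takes effect where the servers accept large RPCs):

```python
import subprocess

def set_rpc_size_mb(mb: int) -> None:
    """Set the client OSC RPC size to `mb` MiB (assumes 4 KiB pages).

    max_pages_per_rpc is a standard client tunable; values above 256
    (1 MiB) only work when the servers support large RPCs, e.g. with the
    large-I/O patches evaluated in this talk.
    """
    pages = mb * 256  # 256 x 4 KiB pages per MiB
    subprocess.run(
        ["lctl", "set_param", f"osc.*.max_pages_per_rpc={pages}"],
        check=True,
    )

def show_rpc_tunables() -> None:
    # Print current RPC size and RPC concurrency for all OSC devices.
    for param in ("osc.*.max_pages_per_rpc", "osc.*.max_rpcs_in_flight"):
        subprocess.run(["lctl", "get_param", param], check=True)

if __name__ == "__main__":
    show_rpc_tunables()
    set_rpc_size_mb(4)   # 4 MB RPCs, as in the comparison that follows
    show_rpc_tunables()
```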
Throughput per OSS Server
[Charts: single OSS write and read throughput (MB/s) vs. number of clients (1, 2, 4, 8, 16), comparing no bonding, AA-bonding, and AA-bonding with CPU binding]
• Single FDR LNET bandwidth: 6.0 GB/s
• Dual IB FDR LNET bandwidth: 12.0 GB/s
(A hedged LNET self-test sketch follows this slide.)
11
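The LNET figures quoted above can be verified independently of the file system with lnet_selftest. A hedged sketch, assuming the lnet_selftest module is loaded on both ends and using placeholder NIDs:

```python
import os
import subprocess

def lst(*args: str) -> None:
    # Every lst invocation needs the same LST_SESSION id in its environment.
    env = dict(os.environ, LST_SESSION="1234")
    subprocess.run(["lst", *args], check=True, env=env)

def lnet_selftest_bw(client_nid: str, server_nid: str) -> None:
    """Measure LNET bulk-write bandwidth between one client and one OSS.

    Uses the standard lnet_selftest (lst) workflow; the NIDs passed in are
    placeholders for the real o2ib addresses.
    """
    lst("new_session", "bwtest")
    lst("add_group", "clients", client_nid)
    lst("add_group", "servers", server_nid)
    lst("add_batch", "bulk")
    lst("add_test", "--batch", "bulk", "--from", "clients",
        "--to", "servers", "brw", "write", "size=1M")
    lst("run", "bulk")
    # Prints periodic bandwidth samples; interrupt with Ctrl-C when done,
    # then run `lst end_session` to clean up.
    lst("stat", "clients", "servers")

if __name__ == "__main__":
    lnet_selftest_bw("192.168.1.10@o2ib", "192.168.1.1@o2ib")
```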
Performance Comparison (1 MB vs. 4 MB RPC, IOR FPP, async I/O)
• 4 x OSS, 280 x NL-SAS
• 14 clients and up to 224 processes (see the IOR sketch below)
[Charts: IOR write and read throughput (MB/s; FPP, xfersize=4MB) vs. number of processes (28, 56, 112, 224), 1 MB RPC vs. 4 MB RPC]
12
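For reference, a hedged sketch of the kind of IOR invocation behind a file-per-process run with 4 MB transfers; the host file, block size, and output path are illustrative, not the exact parameters used here:

```python
import subprocess

def run_ior_fpp(nproc: int, testdir: str = "/mnt/lustre/ior") -> None:
    """Launch an IOR file-per-process run with 4 MB transfers.

    Flags: -F file-per-process, -t transfer size, -b per-process block size,
    -w/-r write then read, -e fsync after write, -C shift ranks on read-back
    to defeat client caches. Hostfile and sizes are placeholder values.
    """
    cmd = [
        "mpirun", "-np", str(nproc), "--hostfile", "clients.txt",
        "ior", "-a", "POSIX", "-F", "-w", "-r", "-e", "-C",
        "-t", "4m", "-b", "8g", "-o", f"{testdir}/testfile",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for nproc in (28, 56, 112, 224):   # the process counts shown above
        run_ior_fpp(nproc)
```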
Lustre Performance Degradation with Large Number of Threads: OSS Standpoint
• 1 x OSS, 80 x NL-SAS
• 20 clients and up to 960 processes
[Charts: IOR (FPP, 1 MB) write and read throughput (MB/s) vs. number of processes (0-1200), RPC=4MB vs. RPC=1MB; annotated differences of 4.5x, 30%, and 55%]
13
Lustre Performance Degradation with Large Number of Threads: SSDs vs. Nearline Disks
• IOR (FPP, 1 MB) to a single OST
• 20 clients and up to 640 processes
[Charts: write and read performance per OST as % of peak (maximum for each dataset = 100%) vs. number of processes (0-700), comparing 7.2K SAS (1 MB RPC), 7.2K SAS (4 MB RPC), and SSD (1 MB RPC)]
14
Lustre 2.4 4K Random Read Performance (With 10 SSDs)
• 2 x OSS, 10 x SSD (2 x RAID5), 16 clients
• Create large random files (FPP and SSF), then run 4 KB random read access (see the IOR sketch below)
[Chart: Lustre 4K random read IOPS for FPP and SSF, POSIX and MPI-IO, from 1 node/32 processes up to 16 nodes/512 processes]
15
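A hedged sketch of a 4 KiB random-read IOR run of this kind, assuming the large test files were created beforehand; paths, sizes, and the host file are illustrative:

```python
import subprocess

def ior_random_read(nproc: int, shared: bool, api: str = "POSIX") -> None:
    """4 KiB random-read IOR run against existing files.

    -z = random offsets, -r = read only (files were written in an earlier
    run), -E = use the existing test files. Drop -F to read a single shared
    file (SSF); keep it for file-per-process (FPP). Paths and block sizes
    are placeholder values.
    """
    cmd = [
        "mpirun", "-np", str(nproc), "--hostfile", "clients.txt",
        "ior", "-a", api, "-r", "-z", "-E",
        "-t", "4k", "-b", "4g", "-o", "/mnt/lustre/ssd/randfile",
    ]
    if not shared:
        cmd.append("-F")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    ior_random_read(512, shared=False, api="POSIX")   # FPP(POSIX), 16n512p
    ior_random_read(512, shared=True, api="MPIIO")    # SSF(MPIIO), 16n512p
```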
Conclusions: Lustre with SSDs
• SSD pools in Lustre, or SSD-only OSTs (see the pool sketch below)
  – SSDs change the behavior of pools, as "seek" latency is no longer an issue
• Application scenarios
  – Very consistent performance, independent of the I/O size or number of concurrent I/Os
  – Very high random access to a single shared file (millions of IOPS in a large file system with sufficient clients)
• With SSDs, the bottleneck is no longer the device, but the RAID array (RAID stack) and the file system
  – Very high random read IOPS in Lustre is possible, but only if the metadata workload is limited (i.e. random I/O to a single shared file)
• We will investigate more Lustre-on-SSD benchmarks
16
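One way to expose an SSD tier inside an otherwise disk-based Lustre file system is an OST pool. A minimal sketch using the standard pool commands, assuming a file system named `lustre` whose SSD OSTs are OST0000-OST0001 (the name, indices, and directory are placeholders):

```python
import subprocess

def run(*args: str) -> None:
    subprocess.run(list(args), check=True)

def make_ssd_pool(fsname: str = "lustre") -> None:
    """Create an OST pool of the SSD OSTs and point a directory at it.

    lctl pool_new / pool_add must be run on the MGS; lfs setstripe -p runs
    on any client. OST indices and paths are illustrative only.
    """
    run("lctl", "pool_new", f"{fsname}.ssd")
    run("lctl", "pool_add", f"{fsname}.ssd", f"{fsname}-OST[0-1]")
    # New files under this directory will be allocated on the SSD OSTs only.
    run("lfs", "setstripe", "-p", f"{fsname}.ssd", "/mnt/lustre/ssd")

if __name__ == "__main__":
    make_ssd_pool()
```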
Metadata Performance Improvements
• Very significant improvements (since Lustre 2.3)
  – Increased performance for both unique and shared directory metadata
  – Almost linear scaling for most metadata operations with DNE (see the mdtest/DNE sketch below)
[Chart: performance comparison, lustre-1.8.8 vs. lustre-2.4.2, metadata operations to a shared directory (directory creation/stat/removal, file creation/stat/removal), ops/sec up to ~140,000]
* 1 x MDS, 16 clients, 768 Lustre exports (multiple mount points)
[Chart: performance comparison, single MDS vs. dual MDS (DNE), metadata operations to unique directories, ops/sec up to ~600,000; same storage resource, just added MDS resource]
17
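A hedged sketch of how a metadata run and a two-MDT DNE layout of this kind might be set up with mdtest and `lfs mkdir`; item counts, paths, and MDT indices are illustrative:

```python
import subprocess

def make_dne_dirs(root: str = "/mnt/lustre/meta") -> None:
    """Place one test directory on each MDT (Lustre 2.4+ DNE remote dirs).

    `lfs mkdir -i <mdt_index>` creates a directory on a specific MDT;
    the two indices assume a dual-MDS/MDT setup like the one above.
    """
    subprocess.run(["lfs", "mkdir", "-i", "0", f"{root}/mdt0"], check=True)
    subprocess.run(["lfs", "mkdir", "-i", "1", f"{root}/mdt1"], check=True)

def run_mdtest(nproc: int, testdirs: str, unique_dirs: bool) -> None:
    """mdtest create/stat/remove run; -u gives each process a unique directory.

    mdtest accepts multiple test directories separated by '@', which spreads
    ranks across the two MDTs. Counts and hostfile are placeholder values.
    """
    cmd = [
        "mpirun", "-np", str(nproc), "--hostfile", "clients.txt",
        "mdtest", "-n", "1000", "-i", "3", "-d", testdirs,
    ]
    if unique_dirs:
        cmd.append("-u")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    make_dne_dirs()
    run_mdtest(768, "/mnt/lustre/meta/mdt0@/mnt/lustre/meta/mdt1",
               unique_dirs=True)
```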
Metadata Performance: CPU Impact
• Metadata performance is highly dependent on CPU performance
• Limited variation in CPU frequency or memory speed can have a significant impact on metadata performance
[Charts: file creation ops/sec vs. number of processes (0-700) for unique and shared directories, 3.6 GHz vs. 2.4 GHz CPUs; a simple client-side creation-rate check follows this slide]
18
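A crude single-client illustration of the same effect: time plain file creation against a mounted Lustre directory and compare runs while only the MDS CPU configuration changes. The path is a placeholder, and this measures the full client-to-MDS round trip rather than the MDS alone:

```python
import os
import time

def create_rate(directory: str, count: int = 10000) -> float:
    """Create `count` empty files and return creations per second.

    A single-process, single-client sketch only; each create is a metadata
    operation serviced by the MDS, so the rate moves with MDS CPU speed as
    discussed above.
    """
    os.makedirs(directory, exist_ok=True)
    start = time.time()
    for i in range(count):
        fd = os.open(os.path.join(directory, f"f{i}"),
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
    return count / (time.time() - start)

if __name__ == "__main__":
    print("creates/sec: %.0f" % create_rate("/mnt/lustre/mdtest_dir"))
```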
Conclusions: Metadata Performance
• Significant improvements for shared and unique directory metadata performance since Lustre 2.3
  – Improvements across all metadata workloads, especially removals
  – SSDs add to performance, but the impact is limited (and probably mostly due to latency, not the IOPS capability of the device)
• Important considerations for metadata benchmarks
  – Metadata benchmarks are very sensitive to the underlying hardware
  – Client limitations are important when considering metadata performance scaling
  – Only directory/file stats scale with the number of processes per node; other metadata workloads do not appear to scale with the number of processes on a single node
19
Summary
• Lustre today
  – Significant performance increase in object storage and metadata performance
  – Much better spindle efficiency (which is now reaching the physical drive limit)
  – Client performance is quickly becoming the limitation for small-file and random performance, not the server side!!
• Consider alternative scenarios
  – PFS combined with fast local devices (or, even, local instances of the PFS)
  – PFS combined with a global acceleration/caching layer
20