Pivot3 Overview
Smarter Infrastructure Solutions
Pivot3๋Š” ๋” ์Šค๋งˆํŠธํ•œ ์ธํ”„๋ผ๋ฅผ ์ œ๊ณต.
๊ฒ€์ฆ๋œ ํ˜์‹ ๊ธฐ์ˆ 
โ€ข ์†Œํ”„ํŠธ์›จ์–ด ์ •์˜ ์Šคํ† ๋ฆฌ์ง€
โ€ข ์„œ๋น„์Šคํ’ˆ์งˆ
โ€ข ํ”Œ๋ž˜์‹œ ์–ด๋ ˆ์ด ์•„ํ‚คํ…์ณ
30 ํŠนํ—ˆ๊ธฐ์ˆ 
53๊ฐœ๊ตญ, 3000์—ฌ ๊ณ ๊ฐ์‚ฌ
๊ด‘๋ฒ”์œ„ํ•œ ๊ธฐ์ˆ ์ œํœด
BOULDER
MEXICO CITY
AUSTIN
HOUSTON
LONDON
DUBAI
SINGAPORE
SEOUL
Software-Defined Storage
Hyperconvergence
Erasure Coding
Quality of Service
Data Protection
์ฃผ์š” ํ˜์‹ ๊ณผ ํŠนํ—ˆ๊ธฐ์ˆ 
Software-Defined Storage
Hyperconverged Infrastructure
Quality of Service
PCIe-Flash
Hybrid Arrays
PCIe All-Flash Arrays
2005  2011  2012  2015
๊ธฐ์ˆ  ํ˜์‹ 
ํŠนํ—ˆ ๊ธฐ์ˆ 
ํŠนํ—ˆ๊ธฐ์ˆ  ํŠนํ—ˆ ๋Œ€๊ธฐ
Nexgen
๊ฐ•๋ ฅํ•œ ์†”๋ฃจ์…˜ ๋ฐ ๊ธฐ์ˆ  ์ œํœด
ํ•˜์ดํผ์ปจ๋ฒ„์ง€๋“œ ์ธํ”„๋ผ์ŠคํŠธ๋Ÿญ์ณ๋ž€?
๊ฐ€ํŠธ๋„ˆ๊ฐ€ ๊ผฝ์€ ์ฃผ์š” HCI ์š”๊ฑด :
๊ฐ„๋‹จ: ๋น ๋ฅธ ๊ตฌ์„ฑ ๋ฐ ์šด์˜
์œ ์—ฐ์„ฑ: Scale-up and out ํ™•์žฅ ์šฉ์ด
์„ ํƒ ๊ฐ€๋Šฅํ•  ๊ฒƒ: ๊ตฌ์„ฑ ๋ฐ ์žฅ๋น„ ์˜ต์…˜
๊ทœ์ •๋œ ๊ตฌ์กฐ: ์˜ˆ์ธก๊ฐ€๋Šฅํ•œ ์„ฑ๋Šฅ๊ณผ ๊ฐ€์šฉ์„ฑ
๊ฒฝ์ œ์„ฑ: CAPEX and OPEX ์ ˆ๊ฐ
How to Determine the Best Consumption Model for Converged or Hyperconverged Systems 11/06/15
HCI
Servers
Storage
Network
Storage
โ€ข ์Šคํ† ๋ฆฌ์ง€์™€ ์ปดํ“จํŒ…์ด ๊ฒฐํ•ฉ๋œ
โ€ข ์†Œํ”„ํŠธ์›จ์–ด๋กœ ์ •์˜๋œ ์Šคํ† ๋ฆฌ์ง€
โ€ข ํ‘œ์ค€ x86 ์„œ๋ฒ„ ํ”Œ๋žซํผ
โ€ข ๋ชจ๋“ˆ ๋‹จ์œ„ ํ™•์žฅ
์ „ํ†ต์  ๊ตฌ์กฐ
ํ•˜์ดํผ์ปจ๋ฒ„์ง€๋“œ
์ธํ”„๋ผ์ŠคํŠธ๋Ÿญ์ณ
์„ ๋‘๊ธฐ์—…์€ hyper scale ๋ฐฉ์‹์˜ ์—…๋ฌด ๋ชจ๋ธ์„ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค.
์ง€์—ญ์  Erasure
coding
Erasure coding์€ ๊ตฌ๊ธ€ ์•„๋งˆ์กด ๊ฐ™์€ Hyper scale ํšŒ์‚ฌ์— ์ ์šฉํ•˜๋Š” ํ•ต์‹ฌ
๋ฐ์ดํ„ฐ ๋ณดํ˜ธ๊ธฐ์ˆ  ์ž…๋‹ˆ๋‹ค.
Erasure coding์€ RAID 6 ๋ณด๋‹ค 10,000 ๋ฐฐ ์ด์ƒ์˜ ์•ˆ์ •์„ฑ์ด
์žˆ์œผ๋ฉฐ, Peta Byte๊ทœ๋ชจ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์œ ์ง€ํ•˜๊ธฐ ์œ„ํ•œ ์œ ์ผํ•œ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค.
Pivot3๋Š” ๊ธฐ์กด Erasure Coding๊ธฐ์ˆ ์— ์‘๋‹ต์‹œ๊ฐ„์„ ๊ฐœ์„ ํ•œ Scalar
Erasure Coding ์„ ํ•˜์ดํผ ์ปจ๋ฒ„์ง€๋“œ ์ธํ”„๋ผ์— ์ ์šฉํ•˜์—ฌ ๋†’์€
ํšจ์œจ๊ณผ ๊ทนํ•œ์˜ ์•ˆ์ •์„ฑ์„ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค.
์–ดํ”Œ๋ผ์ด์–ธ์Šค
Erasure Coding
์—ฌํƒ€ HCI๋Š” mirror๋ฐฉ์‹๋งŒ์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ
Pivot3๋Š” ๋ชจ๋“  ๋…ธ๋“œ์˜ ๋ชจ๋“  ๋””์Šคํฌ๋ฅผ ์ „๋ถ€ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
์•„์ง๋„ ์ด๋ ‡๊ฒŒ ํ”„๋กœ์ ํŠธ๋ฅผ ์‹œ์ž‘ํ•˜์‹ญ๋‹ˆ๊นŒ?
Pivot3๋Š” ์ด๋ ‡๊ฒŒ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค.
์กฐ๋ฆฝ, ์žฅ์• ํŒŒํŠธ๊ต์ฒด, ์„ค์น˜, ์ตœ์‹ ์—…๋ฐ์ดํŠธ, ๊ตฌ์„ฑ, ํ…Œ์ŠคํŠธ,
์žฌ๊ตฌ์„ฑโ€ฆโ€ฆโ€ฆ.. ์ด๋ ‡๊ฒŒ ํ•œ ๋‹ฌ์ด ์ง€๋‚˜๊ฐ‘๋‹ˆ๋‹คโ€ฆ
Pivot3 vSTAC starts here.
์™„์ „ํžˆ ์ตœ์ ํ™” ๊ตฌ์„ฑ๋˜์–ด ๋ฐ”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค.
Project Start
Acquire Appropriate
Hardware
Calculate IOPs
Hardware Configuration
Network Configuration
Software Install
Test
Tune
Benchmark and Iterate
Deploy Scale Out
Pivot3 โ€“ ์ข€ ๋” ์Šค๋งˆํŠธํ•œ ์ธํ”„๋ผ ์†”๋ฃจ์…˜
๋” ๋‚˜์€ ๊ฒฐ๊ณผ
๊ณ ๊ฐ€์šฉ์„ฑ์ด
๋ณด์žฅ๋œ ํ”Œ๋žซํผ
Hyperconverged Infrastructure and Flash Storage
์ค‘์š”ํ•œ ๊ฒƒ์„
์šฐ์„  ์ฒ˜๋ฆฌ
Pivot3 ํ•˜์ดํผ์ปจ๋ฒ„์ „์Šค ํ˜์‹ 
P E R F O R M A N C E2x M O R E I O P S P E R
D E S K T O P S4x O P E R A T I N G
E N V I R O N M E N T
O V E R H E A D10%
๋ถ„์‚ฐ์ฒ˜๋ฆฌ HCI ์ด๋ ˆ์ด์ € ์ฝ”๋“œ ํŠนํ—ˆ๊ธฐ์ˆ  ํšจ์œจ์ ์ธ ์šด์˜ ํ™˜๊ฒฝ
SDS
VIRTUAL SERVERS VIRTUAL SAN
VM VM
VM VM
VM VM
x86
๋” ๋†’์€ ํšจ๊ณผ๋ฅผ ๋ณด์žฅ
VIRTUAL SERVERS VIRTUAL SAN
VM VM
VM VM
VM VM
VM VM
x86
Hyperconverged Infrastructure
ํ•„์š”์— ๋”ฐ๋ผ ํ‘œ์ค€ x86์„œ๋ฒ„๋ฅผ ๋Š˜๋ ค๊ฐ€๋Š”
์œ ์—ฐํ•œ ์•„ํ‚คํ…์ณ
๋ชจ๋“ˆ๋ฐฉ์‹
x86 Nodes
๊ทน๋Œ€ํ™”๋œ ์ž์› ํšจ์œจ์„ฑ์„ ์œ„ํ•ด
์Šคํ† ๋ฆฌ์ง€์™€ ์ปดํ“จํŒ…์„ ๋™์‹œ์—
ํ™•์žฅํ•˜๋Š” ๋ถ„์‚ฐ Scale-out ๊ตฌ์กฐ
๋ถ„์‚ฐ
Scale-out
7% ์˜ ์‹œ์Šคํ…œ ๋ฆฌ์†Œ์Šค๋งŒ์„ ์‚ฌ์šฉํ•˜๋Š”
ํšจ์œจ์ ์ธ HCI
ํšจ์œจ์ ์ธ
์šด์˜ํ™˜๊ฒฝ
๊ณ ์„ฑ๋Šฅ์ด ์œ ์ง€๋˜๋Š” ๊ณ ๊ฐ€์šฉ ํ™˜๊ฒฝ
Hyperconverged Infrastructure
์ตœ๋Œ€ 94% utilization ์ œ๊ณตํ•˜๋Š” ์œ ์—ฐํ•œ
๋ฐ์ดํ„ฐ๋ณดํ˜ธ ๊ธฐ์ˆ 
99.9999%์˜ ๋ฐ์ดํ„ฐ ์•ˆ์ •์„ฑ
ํŠนํ—ˆ ๊ธฐ์ˆ 
Erasure
Coding
์žฅ์• ์‹œ์—๋„
85%์ด์ƒ์˜ ์„ฑ๋Šฅ์„ ๋ณด์žฅํ•˜๋ฉด์„œ
๊ณ  ๊ฐ€์šฉ์„ฑ์„ ๋ณด์žฅ
Availability with Performance
VIRTUAL SERVERS
VM VM
VM VM
VM VM
VM VM
x86
VIRTUAL SAN
์œ ์—ฐํ•œ ํ•˜์ดํผ ์ปจ๋ฒ„์ง€๋“œ ํ™˜๊ฒฝ ๊ตฌ์ถ• ๊ฐ€๋Šฅ
vSTAC SERVER NODES vSTAC BLADES
โ€ข All-Flash
โ€ข Hybrid
โ€ข CapacityExpansion
โ€ข Surveillance-Optimized
โ€ข Software-onlyoption
โ€ข All-inclusiveFeatureSet
โ€ข All-flash
โ€ข Highperformancedensity
โ€ข Software-onlyoption
โ€ข All-inclusiveFeatureSet
โ€ข Hyperconvergencewith
QoS ManagedServiceLevels
โ€ข FlashAccelerationTier
Dell FX2 Blades
Cisco UCS B200 Blades
Dell M1000/M630 Blades
vSTAC SLX
More Technical Information
Raid 1
์ „ํ†ต์ ์ธ ์Šคํ† ๋ฆฌ์ง€ ํ˜•ํƒœ๋Š”โ€ฆ.
โ€ข ํ˜ธ์ŠคํŠธ์˜ ์„ฑ๋Šฅ์€:
โ€ข RAID set์— ํฌํ•จ๋œ ๋””์Šคํฌ ์ˆ˜์— ๋น„๋ก€
โ€ข ์„ฑ๋Šฅ์€ ํ•ด๋‹น ๋ณผ๋ฅจ์„ ์†Œ์œ ํ•œ Controller์˜
์„ฑ๋Šฅ์— ์ œํ•œ
โ€ข ์„ฑ๋Šฅ์„ ์œ„ํ•ด์„œ๋Š” ํŠน๋ณ„ํ•œ RAID set์„ ๊ตฌํ˜„ํ•ด์•ผํ•จ
โ€ข ์“ฐ์ž„์— ๋”ฐ๋ผ RAID set์„ ๋””์ž์ธํ•˜๊ณ 
๊ด€๋ฆฌํ•˜์—ฌ์•ผํ•จ
โ€ข ์šฉ๋Ÿ‰์„ ์ถฉ๋ถ„ํžˆ ์‚ฌ์šฉํ•˜์ง€ ๋ชปํ•จ
โ€ข Volume๋ณ€๊ฒฝ๊ณผ๋Š” ๋‹ฌ๋ฆฌ RAID set์˜ ๋ณ€๊ฒฝ์€ ์ฆ‰์‹œ
๋ณ€๊ฒฝ์ด ์–ด๋ ค์›€
Raid 1
Raid 5
Raid 6
Raid 1
Raid 5
SnapShot
Pool
Spares
Controller 1
R1 R1 R5R5
Vol 0 Vol 1 Vol 2 Vol 3
Controller 2
R6R6 R1R1
Vol 4 Vol 5 Vol 6 Vol 7
vSTAC Cluster
Pivot3๋Š” โ€ฆโ€ฆ.
โ€ข Non RAID๋ฐฉ์‹์œผ๋กœ ๊ธฐ์กด
RAIDset์œ„์— ๋ณผ๋ฅจ์„ ์ƒ์„ฑํ•˜๋Š”
๋ฐฉ์‹์ด ์•„๋‹Œ ์ „์ฒด vPG์—
๋ณผ๋ฅจ์„ ๋ฐ”๋กœ ์ƒ์„ฑ
โ€ข ํ•ด๋‹น ๋ณผ๋ฅจ์€ Online์œผ๋กœ
๋ณดํ˜ธ๋ ˆ๋ฒจ, ์„ฑ๋Šฅ์„ ๋ณ€๊ฒฝํ•  ์ˆ˜
์žˆ์Œ
โ€ข Node๊ฐ€ ๋Š˜์–ด๋‚˜๋ฉด ์šฉ๋Ÿ‰๊ณผ
์„ฑ๋Šฅ์ด ๊ฐ™์ด ์ฆ๊ฐ€
โ€ข ๋ชจ๋“  ๊ด€๋ฆฌ๋Š” Pivot3 vSTAC
Management Console์—์„œ
์‰ฝ๊ฒŒ
vSTAC Cluster: vPG 1, vPG 2, ... vPG n
Node 1 Node 2 Node 3
VM VM VM
vPG 1 (All Flash)
APP
OS
APP
OS
APP
OS
Node 1 Node 2 Node 3
VM VM VM
vPG 2 (Hybrid)
APP
OS
APP
OS
APP
OS
DATA STORE 1 DATA STORE 2
๋…์ฐฝ์ ์ธ Erasure Coding๊ธฐ๋ฐ˜์˜ ํšจ์œจ์„ฑ
[Chart: usable storage capacity vs. cluster size]
Nodes:   3    4    5    6    7    8    9   10   11   12   13   14   15   16
Usable: 66%  75%  80%  83%  86%  88%  89%  90%  91%  92%  92%  93%  93%  94%
โ€ข ํŠนํ—ˆ ๋ฐ›์€ erasure coding ๋ฐฉ์‹์€ ์ตœ๋Œ€์˜ ๊ณต๊ฐ„์„
๋ณด์žฅ
โ€ข๋…ธ๋“œ๊ฐ€ ์ถ”๊ฐ€ ๋ ์ˆ˜๋ก ์Šคํ† ๋ฆฌ์ง€์˜ ๊ณต๊ฐ„ ํšจ์œจ์€
๋†’์•„์ง
โ€ข ์ „์ฒด ๋…ธ๋“œ์ค‘ 5๊ฐœ์˜ ๋””์Šคํฌ ์žฅ์•  ๋˜๋Š” 1๋…ธ๋“œ+2๊ฐœ์˜
๋””์Šคํฌ ์žฅ์• ๊นŒ์ง€๋ฅผ ํ—ˆ์šฉ
High Storage Efficiency that increases with scale
๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹์€ ์ตœ๊ณ 
50%๊นŒ์ง€์˜ ๊ณต๊ฐ„์„ ์‚ฌ์šฉํ•  ์ˆ˜
์žˆ์Œ
๊ฐ€์šฉ๊ณต๊ฐ„
Scale
Mirror50%
33%
16%
โ€ข ๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹์€ ๊ฐ€์žฅ ์‰ฌ์šฐ๋ฉด์„œ๋„
ํŠน๋ณ„ํ•œ ๊ธฐ์ˆ ์ด ํ•„์š”์—†์Œ
โ€ข ๋…ธ๋“œ๊ฐ€ ๋Š˜์–ด๋‚˜๋”๋ผ๋„ ๊ฐ€์šฉ๊ณต๊ฐ„์˜ ํšจ์œจ์€
์ฆ๊ฐ€๋˜์ง€ ์•Š์Œ
โ€ข ๋””์Šคํฌ์˜ ์žฅ์• ํ—ˆ์šฉ์„ ๋Š˜๋ฆด์ˆ˜๋ก ๊ฐ€์šฉ๊ณต๊ฐ„์€
๊ธ‰์†ํžˆ ๊ฐ์†Œํ•จ
๊ธฐ์กด ๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹
3์ค‘ ๋ฏธ๋Ÿฌ
Five Drive Protection
์ฐจ๋ณ„์  #1
Pivot3
๊ธฐํƒ€ ์ œํ’ˆ
๋ณผ๋ฅจ ๋งค๋‹ˆ์ €
๋””์Šคํฌ ์ง์ ‘ ์•ก์„ธ์Šค
ํ•˜์ดํผ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•œ ์•ก์„ธ์Šค
Hypervisor (ESXi)
HDD
Pivot3 vSTAC OS (VM)
Hyper-Converged Infrastructure
โ€ขPivot3 Close-to-the-metal methodology
โ€ข ํ•˜์ดํผ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•˜์ง€ ์•Š๊ณ  ์ง์ ‘ ๋””์Šคํฌ๋ฅผ
์•ก์„ธ์Šคํ•˜์—ฌ 30-40%์˜ ์„ฑ๋Šฅํ–ฅ์ƒ
โ€ข ๋ฌผ๋ฆฌ ๋””์Šคํฌ๋ฅผ ์ง์ ‘ ๊ด€๋ฆฌ
Hypervisor
SSD
VM running
Software-defined storage
Disk access
through the
hypervisor
โ€ข ํ•˜์ดํผ ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•ด ๋ณผ๋ฅจ๋งค๋‹ˆ์ €์— ์•ก์„ธ์Šค ํ•˜๊ฑฐ๋‚˜
โ€ข JBOD์‚ฌ์šฉํ•˜๋”๋ผ๋„ ํ•˜์ดํผ๋ฐ”์ด์ € ํ†ตํ•ด์„œ ๋””์Šคํฌ ์ ‘๊ทผ
โ€ข ๋ฌผ๋ฆฌ๋””์Šคํฌ์˜ ์ง์ ‘๊ด€๋ฆฌ ๋ถˆ๊ฐ€
์ฐจ๋ณ„์  #2
HDD HDD
HDD SSD
์ง์ ‘
์•ก์„ธ์Šค
์บ์‹œ๊ด€๋ฆฌ์ž
Full-time Active/Active/Active
Pivot3 Global Active/Active/Active | Conventional SAN: Active/Passive
Active / Passive Redundant Controllers
Storage
controller #0
Storage
controller #1
Passive
Active
Passive
Active
โ€ข VM์€ Pivot3 cluster์˜ ๋ชจ๋“  ์ปจํŠธ๋กค๋Ÿฌ์— ์•ก์„ธ์Šค๊ฐ€
๊ฐ€๋Šฅ
โ€ข ๋…ธ๋“œ๊ฐ€ ์ถ”๊ฐ€๋ ๋•Œ ๋งˆ๋‹ค ์ปจํŠธ๋กค๋Ÿฌ์˜ ์„ฑ๋Šฅ+
๊ฐ€์šฉ์„ฑ+Bandwidth์ด ์„ ํ˜•์ ์œผ๋กœ ์ฆ๊ฐ€
โ€ข True, optimized Global Active / Active.
โ€ข ๋ณดํ†ต ์Šคํ† ๋ฆฌ์ง€๋Š” 2๊ฐœ์˜ ์ปจํŠธ๋กค๋Ÿฌ๋กœ ๋™์ž‘
โ€ข ๊ฐ ์ปจํŠธ๋กค๋Ÿฌ๋Š” ์„œ๋กœ๊ฐ„์˜ ๋™์ž‘์„ ๊ฐ์‹œํ•˜๋ฉด์„œ A-A
ํ˜น์€ A-S๋กœ ๋™์ž‘
โ€ข ์„ฑ๋Šฅ, ๊ฐ€์šฉ์„ฑ, Bandwidth์€ ๊ณ ์ •๋จ
Node #1
HDD HDD HDD
Node #n
HDD HDD HDD
Node #2
HDD HDD HDD
์ปจํŠธ๋กค๋Ÿฌ
์ปจํŠธ๋กค๋Ÿฌ
์ปจํŠธ๋กค๋Ÿฌ
โˆ‘ (IOPs)Global Active / Active =>
์ฐจ๋ณ„์  #3
Global Virtual Drive Sparing
Pivot3 Virtual Global Sparing Conventional sparing system
Appliance #1
HDD
Spare
HDD
โ€ข ์ „ํ†ต๋ฐฉ์‹์˜ ์ŠคํŽ˜์–ด๋Š” ๊ฐ ๋…ธ๋“œ๋งˆ๋‹ค ํ•˜๋‚˜์”ฉ์˜
์ŠคํŽ˜์–ด๋ฅผ ํ• ๋‹นํ•˜์—ฌ ๋””์Šคํฌ ์žฅ์• ์— ๋Œ€๋น„ํ•จ
โ€ข HDD ์ „์ฒด ์„ฑ๋Šฅ์€ ๋””์Šคํฌ์ˆ˜-์ŠคํŽ˜์–ด์ˆ˜ (ex 900 IOPS)
โ€ข HDD์žฅ์• ์‹œ ํ•ด๋‹น ๋…ธ๋“œ์˜ ๋ชจ๋“  HDD๊ฐ€ ๋ฆฌ๋นŒ๋”ฉ์— ์ฐธ์—ฌํ•˜์—ฌ
์ •์ƒ์ ์ธ ์„ฑ๋Šฅ์„ ๋‚ผ ์ˆ˜ ์—†์Œ
โ€ข ๊ฐ ๋…ธ๋“œ๋งˆ๋‹ค ์ŠคํŽ˜์–ด ๋””์Šคํฌ๋ฅผ ๋ฏธ๋ฆฌ ์ง€์ •ํ•˜์ง€ ์•Š๊ณ 
์ „์ฒด ๋…ธ๋“œ์— ์ŠคํŽ˜์–ด ๊ณต๊ฐ„์„ ๊ฐ€์ƒ์œผ๋กœ ๋ฐฐ์ •
โ€ข Pivot3 ํด๋Ÿฌ์Šคํ„ฐ์— ํ•˜๋‚˜์˜ ๋””์Šคํฌ ๊ณต๊ฐ„๋งŒํผ๋งŒ
์†Œ๋น„๋จ
โ€ข HDD์˜ ์žฅ์• ๊ฐ€ ๋ฐœ์ƒ ํ•˜๋”๋ผ๋„ ์„ฑ๋Šฅ์˜ ์ €ํ•˜ ์—†์ด
๋ณต๊ตฌ๋˜๋ฉฐ ์ˆ˜๋™์ž‘์—…์€ ์ „ํ˜€ ๋ถˆํ•„์š”
โ€ข HDD์žฅ์• ์‹œ ๋ชจ๋“  ๋…ธ๋“œ์˜ HDD๊ฐ€ ์ž‘์—…์„ ๋‚˜๋ˆ ์„œ
์กฐ๊ธˆ์”ฉ ๋ฆฌ๋นŒ๋”ฉ์— ์ฐธ์—ฌ ์„ฑ๋Šฅ์˜ ์ €ํ•˜๊ฐ€ ์—†์Œ
HDD HDD
Appliance #2
HDD
Spare
HDD
HDD HDD
Appliance #n
HDD
Spare
HDD
HDD HDD
Pivot3 Re-positioning
์ฐจ๋ณ„์  #4
Appliance #1
HDD HDD HDD HDD
Virtual sparing
Appliance #n
HDD HDD HDD HDD
Virtual sparing
Appliance #2
HDD HDD HDD HDD
Virtual sparing
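A rough sketch of the spare-capacity arithmetic implied above: conventional sparing reserves one physical disk per node, while global virtual sparing reserves roughly one disk's worth of capacity for the entire cluster. The disk counts are illustrative, not from the slide.

```python
def usable_disks(nodes, disks_per_node, global_sparing):
    total = nodes * disks_per_node
    reserved = 1 if global_sparing else nodes  # 1 virtual spare vs. 1 per node
    return total - reserved

for nodes in (3, 8, 16):
    conv = usable_disks(nodes, 12, global_sparing=False)
    glob = usable_disks(nodes, 12, global_sparing=True)
    print(f"{nodes} nodes x 12 disks: conventional {conv}, global sparing {glob}")
```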
Spare disk
1. Read D1
2. Read D3
3. Read D4
4. Read P
5. Read D5
6. Read D6
7. Read D7
8. Calculate D2 = P1 ⊕ D1 ⊕ D3 ⊕ D4 ⊕ D5 ⊕ D6 ⊕ D7
9. Write D2N
10. Repeat ×1,000,000
Virtual Spare
1. Read D2, D12, ...
2. Write D2N
3. Write D12N
4. ...
5. Repeat ×100,000
No CPU action
D2N
D2N D12N
CPU Disk
CPU Disk
D1 D2 D3 D4 P1 D5 D6 D7
Replace vs Rebuild
D1 D2 D3 D4 P9 D5 D6 D7 D8
D11 D12 D13 P14 D14 D15 D16 D17 D18
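The step lists above boil down to a difference in per-block I/O. A hedged sketch of the bookkeeping, with the stripe width and repeat counts taken from the slide's example and everything else assumed:

```python
def rebuild_io(stripe_width, blocks):
    # conventional rebuild: read every survivor, XOR them, write the result
    return {"reads": (stripe_width - 1) * blocks, "xors": blocks, "writes": blocks}

def virtual_spare_io(blocks):
    # pre-staged virtual spare: a plain copy, no parity math on the data path
    return {"reads": blocks, "xors": 0, "writes": blocks}

print(rebuild_io(stripe_width=8, blocks=1_000_000))
print(virtual_spare_io(blocks=100_000))
```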
HDD ์‘๋‹ต ์‹œ๊ฐ„
HDD1 Drive 2 HDD3
์‚ฌ์ „์˜ˆ์ธก ๋””์Šคํฌ ์žฅ์• ์ฒ˜๋ฆฌ
โ€ข๋Œ€๋ถ€๋ถ„์˜ HDD๋Š” ์™„์ „ํžˆ ์žฅ์• ๊ฐ€ ๋‚˜๊ธฐ๋ณด๋‹ค, ๋จผ์ € ๋А๋ ค์ง€๊ฑฐ๋‚˜ ๋ฐฐ๋“œ ์„นํ„ฐ๊ฐ€ ๋ฐœ์ƒ
โ€ข๋А๋ ค์ง„ ๋””์Šคํฌ๋Š” ์ „์ฒด ์„ฑ๋Šฅ์—๋„ ์˜ํ–ฅ์„ ์คŒ
โ€ขPivot3 ๋Š” ๋””์Šคํฌ์˜ ์™„์ „ ์žฅ์• ๊ฐ€ ๋‚˜๊ธฐ์ „์— ๋ฏธ๋ฆฌ ์ŠคํŽ˜์–ด์กฐ์น˜๋ฅผ ์ˆ˜ํ–‰ํ•จ
Cluster
HDD7
Appliance #3
Virtual Global
spare
HDD4
HDD1
HDD8
HDD5
HDD2
HDD9
HDD6
HDD3
Virtual sparing
STEP 1: Early failure detection
HDD response times are monitored constantly to detect slowing drives,
predicting failure before the HDD actually fails
STEP 2: Proactive data copy
Data is relocated to the global spare ahead of time,
preventing performance degradation before it occurs
STEP 3: No rebuild needed after HDD replacement
Replacing the HDD causes no performance degradation at all
Cluster
HDD7
HDD4
HDD1
HDD8
HDD5
HDD9
HDD6
HDD3  New HDD2
HDD1 HDD2 HDD3
Virtual sparing
Virtual sparing
Virtual sparing Virtual sparing
Virtual sparing
Virtual sparing
Virtual sparing
Virtual sparing
์ฐจ๋ณ„์  #5
#1 Node
ElastiCache โ€“ ๋ฉ”๋ชจ๋ฆฌ ๋ฐ์ดํ„ฐ ์บ์‹ฑ
ElastiCache ํŠนํ—ˆ
โ€ข ๊ฐ ๋…ธ๋“œ๋‹นUp 64GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ
๋ฐ์ดํ„ฐ ์บ์‹œ๋กœ ์‚ฌ์šฉ
โ€ข vPG๋‚ด์˜ ๋ฉ”๋ชจ๋ฆฌ๋Š” ๋™์‹œ์— ์‚ฌ์šฉ๋จ
โ€ข vPG ์ตœ๋Œ€ ์บ์‹œ1TB
โ€ข ๋™์  read/write cache
โ€ข cold cache data to HDDs
#2 Node  #3 Node
ElastiCache
Memory
FlashCache
SSD
HDD
ElastiCache
Memory
FlashCache
SSD
HDD
ElastiCache
Memory
FlashCache
SSD
HDD
Acknowledgement
APP
OS
์ฐจ๋ณ„์  #6
Node 1 Node 2 Node 3
VM VM VM
vPG (Hybrid)
APP
OS
APP
OS
APP
OS
ElastiCache ์˜ ์“ฐ๊ธฐ ์ตœ์ ํ™”
โ€ข Write cache์˜ ๋‚ด๋ ค์“ฐ๊ธฐ์— ์ตœ์ ํ™”
โ€ข IO type ๊ณผ ๋ถ€ํ•˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ๋‚ด๋ ค์“ฐ๊ธฐ์ˆ˜ํ–‰
โ€ข ํŠนํ—ˆ ๋ฐ›์€ ์Šค๋งˆํŠธ ์บ์‹œ
โ€ข ํฌ๊ณ  ๋ฌด๊ฑฐ์šด ๋ฐ์ดํ„ฐ๋Š” ์บ์‹œ์— ์œ ์ง€
โ€ข ์ž์ฃผ ์“ฐ์ง€ ์•Š๋Š” ๋ฐ์ดํ„ฐ๋Š” ๋””์Šคํฌ์— ๋‚ด๋ ค์“ฐ๊ธฐ
โ€ข ์บ์‹œ๊ฐ€ ํ—ˆ์šฉํ•˜๋Š” ํ•œ ์ตœ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ
์œ ์ง€ํ•˜์—ฌ ์ฝ๊ธฐ ์บ์‹œ๋กœ ์‚ฌ์šฉ
Benefits
• Higher IOPS, lower latency
• High-performing hybrid nodes at a better TCO
[Chart: flushing rate vs. IO rate — regular flusher frequency at low cache utilization, accelerated flusher frequency under high IO rates; highly random-access data stays in cache]
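A minimal sketch of the adaptive flushing logic the slide describes: destage faster as IO rate and cache utilization climb, and leave hot data alone otherwise. The thresholds below are invented for illustration; the actual algorithm is patented and unpublished.

```python
def flusher_rate(io_rate, cache_utilization):
    """Pick a flusher mode from normalized (0-1) IO rate and cache fill."""
    if cache_utilization < 0.30:
        return "idle"           # low utilization: keep data as read cache
    if io_rate > 0.80 or cache_utilization > 0.80:
        return "accelerated"    # heavy load: destage aggressively
    return "regular"

print(flusher_rate(io_rate=0.2, cache_utilization=0.5))  # regular
print(flusher_rate(io_rate=0.9, cache_utilization=0.6))  # accelerated
```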
Flexible Deployment Options
Node 1 Node 2 Node 3
Data
Only
vPG
VM
APP
OSVM
APP
OS
VM
APP
OS
VM
APP
OSVM
APP
OS
VM
APP
OS
VM
APP
OSVM
APP
OS
VM
APP
OS
Node 4
Only the drive types/sizes need to be the same
์Šคํ† ๋ฆฌ์ง€์˜ ์ฆ์„ค์€:
โ€ข CPU ์„ฑ๋Šฅ, ๊ฐœ์ˆ˜์— ๋ฌด๊ด€
โ€ข ๋ฉ”๋ชจ๋ฆฌ ์šฉ๋Ÿ‰์— ๋ฌด๊ด€
โ€ข ํ•„์š”์— ๋”ฐ๋ผ ์ฆ์„ค
๋ชจ๋ธ All-flash HCI Watch Data
Compute
Latency
์šฉ๋Ÿ‰
์–ด๋– ํ•œ ์š”๊ตฌ์—๋„ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋Š” ์œ ์—ฐํ•œ
๋ชจ๋ธ ์„ ํƒ์˜ ์ž์œ 
All-flash : ๊ฐ€์žฅ ๋น ๋ฅธ ์‘๋‹ต์‹œ๊ฐ„
HCI : ์ผ๋ฐ˜์ ์ธ ๊ฐ€์ƒํ™”, VDI
Watch : ๋งŽ์€ ์šฉ๋Ÿ‰์ด ์š”๊ตฌ๋˜๋Š” ์„œ๋น„์Šค
Data : ๊ธฐ์กดํ™˜๊ฒฝ์— ์šฉ๋Ÿ‰๋งŒ ์ฆ์„ค
Hybrid / Watch /DATA
12 X 3.5 in drives
Sizes range from:
1 TB drives = 12 TB node
2 TB drives = 24 TB node
4 TB drives = 48 TB node
6 TB drives = 72 TB node
8 TB drives = 96 TB node
Watch/ DATA
1 CPU, 2 PCI slots
Data
No ESX
Dual 400
GB SSDs
10GB
Copper or
SFP+
10GB,
Copper or
SFP+
Power
supplies
750W
1
2 3
4
5
1- SSD cache
2- Full x16 PCI-e bay
& x8
3- low profile X8 &
X1 PCI-e slots
4- onboard USB for
usb SSD
5- Dual SD cards for
ESX install
vSTAC HCI/Watch
10 Gb
SFP+
10 Gb
SFP+
Mellanox
10G/40/56Gbps
1 Gb
BT
1 Gb
BT
10 Gb
BT
10 Gb
BTBT
HCI,Watch
PCIeDaughter Card
๋ณด์žฅ๋ ˆ๋ฒจ ๋ฐ ์šฉ๋Ÿ‰ Pivot3 Level
(Proprietary Architecture)
Data / System Protection
EC1
RAID 1E
โ€ข 1 disk or 1 appliance failure
RAID 5E
EC3
RAID 1P โ€ข 3 simultaneous disk
failures or
โ€ข 1 disk + 1 appliance
failure
RAID 6P
RAID 6E
EC5 RAID 6X
โ€ข 5 simultaneous disk
failures or
โ€ข 2 disk + 1 appliance
failure
Pivot3 Level
Single
Data / System Protection
RAID 1 1 disk failure
RAID 5 1 disk failure
RAID 6 2 simultaneous disk failures
Top Ten Storage Challenges
What are the biggest storage-related challenges in your company's IT infrastructure?
N=212, multiple responses. Source: Enterprise Strategy Group, 2016
๋น„์ฆˆ๋‹ˆ์Šค๋Š” ๋ณด๋‹ค ๋ณต์žกํ•œ ์š”๊ตฌ๊ฐ€ ๋ฐœ์ƒํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค
Latency
I/O Intensive
Storage Performance
์ฃผ๋ฌธDB
Processing AND
Storage Intensive
๊ฑฐ๋ž˜APP
Mid Intensity
Processing and Storage
์„œ๋ฒ„๊ฐ€์ƒํ™”, BCDR
Processing Intensive
๊ทธ๋ž˜ํ”ฝ, VDI
Business
Applications
PROCESSING
CCTV
BI
Annalistic
Backup
Infra
์Šคํ† ๋ฆฌ์ง€ ๋ฏธ์…˜์™„์ˆ˜
๊ฐ€์žฅ ๋น ๋ฅธ PCIe Flash Arrays
All-flash and hybrid
๋™์  QoS
์‚ฌ์—…์š”๊ตฌ์— ์ฆ‰์‹œ ๋ถ€ํ•ฉํ•˜๋Š” ์„ฑ๋Šฅ
๊ด€๋ฆฌ์˜ ๋‹จ์ˆœํ™”
์„ธ๋ฐ€ํ•œ ์ •์ฑ…๊ธฐ๋ฐ˜ ๊ด€๋ฆฌ
ํ™˜์ƒ์˜ ์„ฑ๋Šฅ์˜ต์…˜!
๋‹จ์ผ์„œ๋ฒ„์— 300KIOPS
้ฉๆ้ฉๆ‰€
[Chart: IOPS & latency by storage medium — DDR4, PCIe Flash, Ent SSD, Eco SSD, HDD; y-axis 0-500,000 IOPS]
์ €์žฅ ๋ฏธ๋””์–ด ์„ฑ๋Šฅ ๋น„๊ต
โ€ข Flash์˜ ์„ฑ๋Šฅ ์ตœ๋Œ€ 100๋ฐฐ๊นŒ์ง€ ์ฐจ์ด๊ฐ€
๋‚ ์ˆ˜ ์žˆ์Œ
โ€ข ๋ฏธ๋””์–ด์„ฑ๋Šฅ์ด์ƒ์˜ ์ธํ„ฐ์ปค๋„ฅํŠธ๊ฐ€
ํ•„์š”
โ€ข Data chunk์— ๋”ฐ๋ผ Bandwidth์ด
์ฃผ์š”๊ณ ๋ ค๋Œ€์ƒ์ด ๋จ
โ€ข ์ตœ๊ทผ ํ‰๊ท  IOํฌ๊ธฐ๋Š”32KB์ด์ƒ
[Chart annotations: latencies range from 0.01µs (DDR4, ~8,000,000 IOPS) through 20µs, 60µs, and 350µs down to 4,000µs (HDD); interconnects shown at 10Gbps and 32Gbps]
* At 16KB IOs, a 10Gbps link carries at most ~60K IOPS
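The footnote's figure can be checked with a one-liner. The raw arithmetic gives roughly 76K IOPS for 16 KB IOs on a 10 Gbps link; the slide's ~60K presumably accounts for protocol overhead.

```python
link_bps = 10e9          # 10 Gbps link
io_bits = 16 * 1024 * 8  # one 16 KB IO, in bits
max_iops = link_bps / io_bits
print(f"theoretical ceiling: {max_iops:,.0f} IOPS")  # ~76,294
```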
NexGen PCIe Flash ๊ตฌ์กฐ
โ€ข ๋ชจ๋“  IO๋Š” ๋‹น์—ฐํžˆ
์ปจํŠธ๋กค๋Ÿฌ๋ฅผ ๊ฑฐ์นœ๋‹ค
โ€ข ๋ณ‘๋ชฉ ๋‹น์—ฐํžˆ ์žˆ๋‹ค.
โ€ข ๊ณ ์„ฑ๋Šฅ์„ ์œ„ํ•œ ๋ฉ€ํ‹ฐ ํ‹ฐ์–ด ํ”Œ๋ ˆ์‰ฌ ๊ตฌ์กฐ
โ€ข ์šฐ์„ ์ˆœ์œ„์— ๋”ฐ๋ผ ๋ฐ์ดํ„ฐ์œ„์น˜๊ฐ€ ๋™์ ์œผ๋กœ ๋ณ€๊ฒฝ
์ „ํ†ต์  ์Šคํ† ๋ฆฌ์ง€ NexGen PCIe Flash Arrays
Ultra-Low Latency
๊ณ ์„ฑ๋Šฅ ์ €์žฅ๊ณต๊ฐ„
Lowest Latency
๊ณ ์šฉ๋Ÿ‰ ์ €์žฅ๊ณต๊ฐ„
or or
all writes / reads
read cache
reads
low priority reads
NexGen ๋ฐ์ดํ„ฐํšจ์œจํ™” ๊ธฐ์ˆ ์˜ ํšจ๊ณผ
2.5X acceleration*
• 50% lower latency
• 2.5X reduced IO vs. PCIe flash IO
4X SSD life extension
• 7:1 consolidation ratio** from PCIe flash
writes to SSD writes
2:1 data reduction ratio***
• 50% average capacity reduction
• No performance impact
* 2.5X acceleration based on v3.5 software benchmarks; ** 7:1 consolidation ratio based on NexGen customer measured
metrics; *** 2:1 capacity reduction based on NexGen customer measured metrics
IO ํ†ตํ•ฉ์œผ๋กœ
SSD์ˆ˜๋ช…์ฆ๊ฐ€
๋ฐ์ดํ„ฐ
ํšจ์œจํ™”๋กœ
์ ์œ  ์šฉ๋Ÿ‰๊ฐ์†Œ
IO๋Ÿ‰ ๊ฐ์†Œ๋กœ ์ธํ•œ
์„ฑ๋Šฅํ–ฅ์ƒ
๋ชฉํ‘œ๋ฅผ ์†์‰ฝ๊ฒŒ ์ž๋™ํ™”
โ€ข์‚ฌ์ „ ์ •์˜๋œ ์ •์ฑ…
โ€ข๊ด€๋ฆฌ ๊ฐ€๋Šฅํ•œ ์ตœ๋Œ€, ์ตœ์†Œ
Priorities
โ€ข์ž๋™ ๋ฐด๋“œ์œ— ์กฐ์ •
โ€ข์ž๋™ ํ์ž‰ ์กฐ์ •
๋ฐ์ดํ„ฐ ๋ฐฐ์น˜
โ€ข์‹ค์‹œ๊ฐ„, Always on
โ€ขPrioritized Active Caching
QoS : ์šฐ์„ ์ˆœ์œ„๋ฅผ ์‹ค ์„œ๋น„์Šค์— ์ฆ‰์‹œ ์ ์šฉ
SLA
NexGen Dynamic QoS
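One common way a QoS engine enforces an IOPS ceiling like the ones above is a token bucket refilled at the policy rate; this is a generic sketch, not NexGen's actual implementation.

```python
import time

class IopsLimiter:
    def __init__(self, iops_limit):
        self.rate = iops_limit           # tokens (IOs) added per second
        self.tokens = float(iops_limit)
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at one second's worth
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                  # IO proceeds
        return False                     # IO is queued/throttled

limiter = IopsLimiter(iops_limit=50_000)  # e.g., a mid-tier policy
print(limiter.admit())
```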
[Charts: 0-40K IOPS scales, storage without QoS vs. NexGen Storage QoS]
Applied to service levels
• All data is treated with the same importance
• Performance that doesn't match the service
• Impacts business operations
→ Separate storage silos
→ Inefficient investment management
• Performance that matches business demands
• Aligned with service levels
• Mission-critical services are always guaranteed
Storage without QoS | NexGen Storage QoS
VM | Order DB | Dev DB
VM | Order DB | Dev DB
Highest priority
High priority
Lowest priority
๊ธฐ์ •์˜๋œ QoS ์ •์ฑ…
125,000 IOPS
1000 MB/s
1 ms
75,000 IOPS
500 MB/s
3 ms
50,000 IOPS
250 MB/s
10 ms
25,000 IOPS
100 MB/s
20 ms
10,000 IOPS
50 MB/s
40 ms
100,000 IOPS
750 MB/s
5 ms
50,000 IOPS
375 MB/s
10 ms
20,000 IOPS
150 MB/s
25 ms
10,000 IOPS
75 MB/s
50 ms
2,000 IOPS
38 MB/s
100 ms
Hybrid ์Šคํ† ๋ฆฌ์ง€ All-Flash ์Šคํ† ๋ฆฌ์ง€
QoS์— ๋งž์ถฐ ์ž๋™๋ฐฐ์น˜
๋Šฅ๋™์ ์ธ ์บ์‹œ ์ฐจ๋ณ„ํ™”
โ€ข ๋ณดํ˜ธ๋œ Read/Write ์˜์—ญ
โ€ข ๋ชจ๋“  Write๋Š” ๊ฐ€์žฅ ๋น ๋ฅธPCIe flash์—์„œ
โ€ข HA๋ฅผ ์œ„ํ•ด Write๋Š” ๋ฏธ๋Ÿฌ๋จ
โ€ข QoS ์— ๋”ฐ๋ผ ๋‚ด๋ ค์“ธ์ง€์œ ์ง€ํ• ์ง€๊ฒฐ์ •
โ€ข ์ฐจ๋ณ„ํ™”๋œ Read ์บ์‹œ
โ€ข ์Šค๋งˆํŠธํ•˜๊ฒŒ ์–ธ์ œ ์–ด๋””๋ฅผ ๊ฒฐ์ •
โ€ข ๋ฐ์ดํ„ฐ๋Š”RAM๊ณผ PCIe flash์— ์บ์‹œ๋จ
Policy | Read cache priority | Cache condition*
MC: Policy 1 | Most aggressive | 1 I/O hit
BC: Policy 2 | Aggressive | 4 I/O hits
BC: Policy 3 | Less aggressive | 16 I/O hits
NC: Policies 4 & 5 | None | Data is never cached
* Per 1 MB page
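A small sketch of the admission rule in the table above: a 1 MB page is promoted into read cache once its hit count reaches the policy's threshold, and policies 4-5 never cache. Only the thresholds come from the table; the bookkeeping around them is illustrative.

```python
from collections import defaultdict

ADMIT_AFTER = {1: 1, 2: 4, 3: 16, 4: None, 5: None}  # policy -> I/O hits needed

hits = defaultdict(int)  # (policy, page_id) -> observed hit count

def should_cache(policy, page_id):
    threshold = ADMIT_AFTER[policy]
    if threshold is None:
        return False                      # policies 4-5: never cached
    hits[(policy, page_id)] += 1
    return hits[(policy, page_id)] >= threshold

print(should_cache(1, page_id=7))  # True: policy 1 caches on the first hit
print(should_cache(2, page_id=7))  # False: policy 2 needs 4 hits first
```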
์šฐ์„ ์ˆœ์œ„ ์ฐจ๋“ฑํ™”์˜ ์˜ˆ
Highest Priority High Priority Lowest Priority
ํŒŒ์›Œ์œ ์ €
์›์Šคํ…Œ์ด์…˜ ๋Œ€์ฒด
์˜๊ตฌ ๋ฐ์Šคํฌํƒ‘
VIP
๋น ๋ฅธ ์‘๋‹ต์‹œ๊ฐ„
Linked Clone desktops
์ •ํ˜•ํ™”๋œ ์—…๋ฌด
๋ณดํ†ต์˜ ์‘๋‹ต์‹œ๊ฐ„
๋น„์˜๊ตฌ ๋ฐ์Šคํฌํƒ‘
๋Œ€๊ณ ๊ฐ ์›น์„œ๋น„์Šค
์ฃผ๋ฌธ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค
Transaction Database
Business Reporting
Business Intelligence
Inventory
๊ฐœ๋ฐœ QA ํ™˜๊ฒฝ
Backup Databases
VDI
DB
Data | Log | Temp DB
Back to Granular Storage Management
Physical DB | DB VM on a standard volume | DB VM on vVols
VMFS Datastore management
• More efficient
• Dynamic, fine-grained management
SQL Server
Data | Log | Temp DB
Data | Log | Temp DB
• Fairly efficient, but
• policies apply to the consolidated volume
• Inefficient
• Manual management
• Only static fine-grained management is possible
VVols
Data Log Temp
DB
Data Log Temp
DB
LUN์„ ๊ด€๋ฆฌ
? ?
VM๋ณ„ VMDK๋ณ„ ๋™์ QoS
โ€ข vCenter์—์„œ ํ†ตํ•ฉ๊ด€๋ฆฌ
โ€ข Virtual volume(vVol)์„ ์™„๋ฒฝ ์ง€์›ํ•˜๋Š” VASA)
โ€ข N5์Šคํ† ๋ฆฌ์ง€๊ด€๋ฆฌ์˜ ์„ฑ๋Šฅ Qos๊ฐ€
vCenter์—์„œ๋„๋™์ผ์ ์šฉ
โ€ข VM-level์—์„œ ์ •์ฑ… ์ ์šฉํ•˜๋ฉด๊ฐ
๊ฐ€์ƒ๋””์Šคํฌ์—์ž๋™ ์ ์šฉ
โ€ข ๋‹จ์ˆœํ•œ VM๊ด€๋ฆฌ
โ€ข VM๋ณ„ ๊ฐ€์ƒ๋””์Šคํฌ๋ฐฐํฌ, ์Šค๋ƒ…์ƒท, ๋ณต์ œ, ํด๋ก ,
QoS๊ด€๋ฆฌ
โ€ข ํ•˜๋‚˜์˜ ์ •์ฑ…์œผ๋กœ๋ชจ๋“  ์Šคํ† ๋ฆฌ์ง€ ๊ด€๋ จ ๊ด€๋ฆฌ๋ฅผ
์ด๊ด„
Pivot3 QoS Manager for vCenter Server
enables VMware VVol integration
Data Protection QoS
• Data-protection policy as QoS
• One policy can apply to many volumes
• Local snapshots / replication
• Snapshot retention periods
• Job scheduling
• Priorities per service level
• Applied online, immediately
• Simple VM management
• Per-VM provisioning, snapshots, replication, clones,
and QoS management
• One policy covers all storage-related
management
Simple policy management
Policies take effect immediately.
Policy 1 Details
๋™์  QoS ๋ณ€๊ฒฝ์˜ˆ์‹œ
๊ธฐ์กด QoS ์ •์ฑ…
โ€ข Non-Critical 5
โ€ข 11.85ms Latency
โ€ข 1.2K IOPS
์ƒˆ ์ •์ฑ… ๋ณ€๊ฒฝ ํ›„
โ€ข Mission-Critical 1
โ€ข .46ms Latency 96%๏ƒช
โ€ข 18.7K IOPS 1,458%๏ƒฉ
โ€ข ์ฆ‰์‹œ ๋ณ€๊ฒฝ์ด ์‹œ์ž‘๋จ
โ€ข Software defined performance
for storage
11.85ms ๏ƒจ 0.46ms Latency!
1.2K ๏ƒจ 18.7K IOPS!
์ •์ฑ… ์ž๋™ํ™”
โ€ข ์ •์ฑ…๊ธฐ๋ฐ˜ ์ž๋™ํ™”
โ€ข ์Šค์ผ€์ฅด์„ ํ†ตํ•œ ์ •์ฑ…์ ์šฉ
โ€ข ์„ฑ๋Šฅ์ •์ฑ…
โ€ข ๋ฐ์ดํ„ฐ๋ณดํ˜ธ์ •์ฑ…
โ€ข Snapshot and replication schedules
โ€ข Replication targets (up to 5)
โ€ข ์žฌ์‹œ๋„๊ธฐ๊ฐ„ ์„ค์ •
โ€ข ์œ ์ง€๊ธฐ๊ฐ„์„ค์ • 6pm โ€“ 8am
Non Critical: Policy-5
8am โ€“ 6pm (biz hours)
Mission Critical: Policy-1
Scheduled Task:
Policy-1-5
์—…๋ฌด์‹œ๊ฐ„์—๋Š”
๊ฐ€์žฅ๋น ๋ฅด๊ฒŒ
๋น„์—…๋ฌด์‹œ๊ฐ„์—”
๋А๋ฆฌ๊ฒŒ
QoS Scheduling
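The schedule above amounts to a clock-driven policy switch. A minimal sketch, with the hours and policy names taken from the slide and everything else assumed:

```python
from datetime import datetime

def active_policy(now):
    business_hours = 8 <= now.hour < 18  # 8am - 6pm, per the slide
    return "Policy-1" if business_hours else "Policy-5"

print(active_policy(datetime(2016, 5, 2, 10, 0)))  # Policy-1 (mission-critical)
print(active_policy(datetime(2016, 5, 2, 22, 0)))  # Policy-5 (non-critical)
```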
HCI๋กœ์˜ ํ™•์žฅ
SLX =
HCI + All flash storage + QoS
๏ƒจ ๋‹จ์ผ ํ”Œ๋žซํผ ์šด์˜
โ€ข HCI์˜ ๊ฒฝ์ œ์„ฑ, ํ‘œ์ค€ํ™”
โ€ข N5์˜ ์„ฑ๋Šฅ, ๋ณด์žฅ์„ฑ
โ€ข ์ตœ๊ณ ์˜ ์œ ์—ฐ์„ฑ
๋ชจ๋“  IO ํƒ€์ž…์— ์ ์šฉ ๊ฐ€๋Šฅํ•œ ์ธํ”„๋ผ
โ€ข ์„œ๋ฒ„ํ†ตํ•ฉ
โ€ข VDI
โ€ข ๋ฏธ์…˜ ํฌ๋ฆฌํ‹ฐ์ปฌ DB
โ€ข Archiving / Backup
โ€ข CCTV
๊ฐ„ํŽธํ•œ ๋ฐ์ดํ„ฐ๋ณดํ˜ธ(DR)
์Šค๋ƒ…์ƒท
โ€ข ์Šค์ผ€์ฅด
โ€ข Thin Provisioned
Clones
๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ for DR
โ€ข ์Šค์ผ€์ค„
โ€ข Asynchronous
โ€ข Between Different N5 Models
โ€ข VSS Provider for Microsoft
Remote
copy
Remote
snapshot
N5 model specifications

Hybrid Flash Arrays: N5-200, N5-300, N5-500, N5-1000 | All-Flash Arrays: N5-1500, N5-6000
• PCIe Flash Capacity — N5-200: 2.0TB > 7.2TB; N5-300: 2.6TB > 7.8TB; N5-500: 5.2TB > 10.4TB; N5-1000: 10.4TB > 15.6TB; N5-1500: 2.6TB; N5-6000: 2.6TB
• SSD Capacity (all-flash) — N5-1500: 15TB > 60TB; N5-6000: 60TB > 240TB
• Disk Capacity (hybrid) — 32TB > 128TB / 64TB > 448TB / 128TB > 512TB
• Performance Capability — N5-200: 150,000 IOPS, 2.0GB/s; N5-300: 200,000 IOPS, 2.4GB/s; N5-500: 225,000 IOPS, 2.7GB/s; N5-1000: 250,000 IOPS, 3.0GB/s; all-flash: 450,000 IOPS*, 6.0GB/s**
• RAM — 96GB / 192GB / 96GB
• All Features Included — Quality of Service | Service Levels | Dynamic Data Path | Prioritized Active Cache | Data Reduction | Data Protection (Snapshot and Replication)
• Storage Processors — Dual Active-Active
• Interfaces — Data: (4) 1/10GbE SFP+ or 1/10GBT RJ45; Management: (4) 1GbE RJ45, http, https; all-flash: Data (8) / Management (4)
• Hardware Availability — Redundant storage processors | Redundant fans | Redundant, hot-swap power supplies | Redundant network connections | Dual-port SAS SSD drives | RAID, hot-swap SSD drives
• Capacity Packs — 32TB HDD shelf; 32TB (2x16) / 48TB (3x16) / 64TB (4x16) / 128TB (8x16) HDD shelf; 15TB (960GBx16), 60TB (3.8TBx16) SSD shelf
• Performance Packs — 5.2TB PCIe Flash
• VMware Integration — VAAI | vCenter Server Plug-in | Virtual Volumes Partner Ecosystem | VASA
* 4K random reads; ** 256K sequential reads
๊ณ ๊ฐ๋งŒ์กฑ ์ง€์›
Live Support
Expertise
โ€ข 24x7x365 Availability
โ€ข ์Šคํ† ๋ฆฌ์ง€ ์ „๋ฌธ๊ฐ€ +
๊ฐ€์ƒํ™” ์ „๋ฌธ๊ฐ€ ์ง€์›
์ƒ์‹œ
๋ชจ๋‹ˆํ„ฐ๋ง
โ€ข ๋Œ€์‘๋ฐฉ์•ˆ์„ ์ œ์‹œํ•˜๋Š”
์•Œ๋žŒ ์ฒด๊ณ„
โ€ข Phone-home Telemetry
ํฌ๊ด„์ 
์ง€์›์ •์ฑ…
โ€ข ๋‹จ์ผ ์ง€์› ์ ‘์ 
โ€ข ๊ณ„์•ฝ ๊ธฐ๊ฐ„ ๋‚ด ์ง€์›
๋น„์šฉ์ƒ์Šน ์—†์Œ
Support Offerings
โ€ข 7 day x 24 hour phone | Onsite parts
โ€ข 7 day x 24 hour phone | NBD parts
โ€ข 5 day x 9 hour phone support | NBD Parts
๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค
๋ฌธ์˜
HCI@cdit.co.kr
02)3442-5588

Editor's Notes

  • #7 What is Hyper-Convergence? At the most basic level: a combined storage and computing platform; Off The Shelf (OTS) components, x86 processors vs. specialized ASICs; delivered in an appliance model; Software Defined Storage is what makes this possible. X86: Let's dig a bit deeper into the x86 economics and ecosystem, because it's one of the real drivers of HCI. As of 2014, from Gartner: x86 servers represent 99% of servers in units shipped and >75% by revenue, and these percentages continue to increase year over year. This is the result of a continual increase in capability and falling cost of x86 processors, driven by ever higher volumes. Also, due to competition, the vendors have had to settle for lower margins: HP has 25% GM and Dell 23% versus IBM's >50% GM, which is why IBM exited the x86 server market by selling that business to Lenovo (12% GM). HCI brings the same economies, the same trends that have unfolded in the server world, to the combined storage & compute environment. Caption: modular/scalable architecture, simplicity, generalists (vs. specialists), lower TCO — bringing the benefits that Google, Amazon, etc. have seen to the enterprise. Scale: When talking to customers not familiar with the concept of HC, you may need to elaborate a bit further where this definition is not 100% clear, since they already buy servers today which have CPUs, disks, memory, and NICs all in a rack chassis. These could be considered off-the-shelf, since they can buy them from Dell/HP/Lenovo/whoever, and just like a fridge, it's an appliance. "So what are you trying to tell me HC is? Isn't that what's in a server I am buying from Dell today?" Since the majority are users of some sort of virtual machine environment (VMware/Hyper-V/KVM), we can use that as a launching pad to describe HC as a combination of "VIRTUALIZED storage + VIRTUALIZED compute + VIRTUALIZED networking" put together as an appliance with the same familiar x86 technologies they currently work with. The mechanical boundaries of the appliance are transparent in a fully virtualized, or hyper-converged, system. The finishing message: hyper-convergence is the bringing together of all the independently virtualized resources of the data center into a simple appliance-based deployment.
  • #21 Even in an A-A configuration, the controllers mirror each other's tasks, so effective performance is still that of a single controller.
  • #25 ElastiCache has a variable cache based on the amount of reads or writes coming into ElastiCache; it is not fixed.
  • #26 Our caching algorithms adjust the flusher speeds based on IO type and IO rate. These algorithms keep our cache in an optimal state where heavily utilized/active data remains in cache and cold data stages down to disk. We have implemented cache differently than traditional storage arrays, and it is the main reason we get such great performance in our hybrid arrays with SATA drives. The speed of write-cache flushing depends on many factors, obviously including IO type and IO load. After flushing, write cache transitions to read cache (our unique implementation) and stays as long as possible, until we need to reclaim the cache for newer IOs.
  • #28 The 400GB MLC SSDs are used for Flash Cache. We use 25GB of capacity on each 400GB drive for Flash Cache; the remaining capacity on the drives is used for moving the Flash Cache around to extend the write endurance of the MLC SSD drives.
  • #32 Storage is the most difficult and complex part of IT, and the hardest to change. Listed in order, the main storage challenges are cost, management time, performance guarantees, and DR; every company faces similar concerns.
  • #33 As every line of business converges on IT, demands on response time, capacity, and data protection are diversifying alongside compute performance, so classifying data by workload type alone is outdated. In the past, DBs went on fast storage, development on cheap storage, and virtualization on general-purpose storage; now some DBs need to be even faster, and development may temporarily need higher performance. From a business-intelligence standpoint in particular, systems must hold large volumes of data yet answer analyses quickly, so static, technology-driven workload classification is inefficient, cannot keep up with fast-changing business demands, and ultimately leads to uneconomical investment. What is needed is a storage strategy that can supply performance and capacity at the right place and the right time.
  • #34 First, NexGen is faster than anyone; the reason is on the next slide. Second, no matter how fast the autobahn, an ambulance cannot move if traffic floods in all at once: mission-critical services need guaranteed performance and stability. Third, fly high and you lose detail; fly low and you lose the horizon. In other words, without fine-grained classification, management is easy but you cannot deliver performance or agility optimized for each workload; the same goes for management time. Finally, any organization may have one or two services that demand exceptionally fast performance, and replacing everything with all-flash just for them is also wasteful. Add PCI scale. Show HDD and SSD attach.
  • #35 Broadly, if memory is treated as a storage tier (the reason in-memory DBs are used), recent DDR4 memory reaches 8 million IOPS. By comparison, a typical enterprise SSD is around 100K, an 80x difference. The PCIe flash NexGen uses is the fastest NAND flash available today, delivering over 380K IOPS: 6x faster than a consumer SSD and more than 3x faster than an enterprise SSD.
  • #37 NexGen data reduction technologies deliver performance, endurance, and capacity benefits for our all-flash arrays.
  • #38 We believe that QoS will be a requirement for the next generation of all-flash arrays. NexGen's dynamic QoS simplifies performance management and delivers more consistent application performance. Rather than making admins manually input performance targets, we've done the work by creating five simple-to-assign policies, unlike other QoS offerings that force end users to react to a noisy neighbor and manually input minimums, maximums, and burst settings for every single volume in the system. NexGen's Dynamic Storage QoS has three distinct attributes. The first is that it allows customers to apply performance targets to volumes and VMs, managing min and max performance levels in terms of IOPS, throughput, and latency. Predefined policies control everything, including how data is prioritized, with intelligent bandwidth throttling and queuing to ensure service levels are met. Finally, since we're a multi-tier architecture, real-time data placement is paramount; features like prioritized active cache ensure flash is prioritized for more critical workloads. Often customers don't realize how much performance to provision; NexGen's patented QoS uses massive amounts of performance data collected from production environments to pre-define five simple policies that automate performance management. This is especially important for customers considering VMware vSphere 6 with Virtual Volumes (VVOLs): VVOLs can increase the number of objects in a data storage system by 30 times, and there is no way users can scale by manually inputting minimums and maximums on every single VVOL. PRIORITIES: Another shortcoming of other QoS engines is the inability to over-provision system performance. Just like thin provisioning in the capacity world, the ability to over-provision is key to maximum utilization of resources. QoS engines with manual minimum, maximum, and burst settings leave users stranded: without a way to prioritize among the various performance targets when the system encounters contention (due to over-provisioning), the system can't make the right trade-offs and ensure mission-critical apps get consistent performance. PLACEMENT: There is a proliferation of new types of non-volatile media, including several different types of flash. The only way to achieve the most affordable all-flash system while avoiding latency spikes and contention is to use multiple tiers of flash managed by a QoS engine. NexGen architected all management capabilities around the concept of QoS in anticipation of using the best of multiple media types, including types yet to be discovered. VVOL INTEGRATION: Our policies are integrated with VMware's storage policy engine (SPBM), which radically simplifies VM-level performance management. Changing performance is as simple as changing a selection in the drop-down menu; end users notice the change in seconds.
  • #39 Add VMs and Applications
  • #40 Policy-based QoS allows anyone managing storage to easily assign priorities to a VM or volume. There are 5 preconfigured QoS policies. You're looking at the QoS policy for the NexGen n-1500 all-flash array; one of our two existing hybrid policies is shown below so you can see the difference between QoS for multi-tier with HDD vs. multi-tier all-flash. Keep in mind that each set of policies was designed to align with the amount of flash in the system.
  • #42 Why does QoS matter in practice? Because customer data is not homogeneous. Workloads have different performance requirements and SLAs. Take, for example, VDI....
  • #43 For SQL customers, this allows the granular performance management that exists with stand-alone instances. Policies can be assigned to VMFS, and now with VVols support, each instance gets granular performance management. Our policies are integrated with VMware's storage policy engine (SPBM), which radically simplifies VM-level performance management. Changing performance is as simple as changing a selection in the drop-down menu; end users notice the change in seconds.
  • #49 Remove it?
  • #51 2015: 4x 6-core Intel Xeon E5645 2.4GHz (2 CPUs per storage processor), 24 physical cores / 48 cores with hyper-threading