Pivot3 Technical Review
Smarter Infrastructure Solutions
์ „ํ†ต์ ์ธ ์Šคํ† ๋ฆฌ์ง€ ํ˜•ํƒœ๋Š”โ€ฆ.
โ€ข ํ˜ธ์ŠคํŠธ์˜ ์„ฑ๋Šฅ์€:
โ€ข RAID set์— ํฌํ•จ๋œ ๋””์Šคํฌ ์ˆ˜์— ๋น„๋ก€
โ€ข ์„ฑ๋Šฅ์€ ํ•ด๋‹น ๋ณผ๋ฅจ์„ ์†Œ์œ ํ•œ Controller์˜
์„ฑ๋Šฅ์— ์ œํ•œ
โ€ข ์„ฑ๋Šฅ์„ ์œ„ํ•ด์„œ๋Š” ํŠน๋ณ„ํ•œ RAID set์„ ๊ตฌํ˜„ํ•ด์•ผํ•จ
โ€ข ์“ฐ์ž„์— ๋”ฐ๋ผ RAID set์„ ๋””์ž์ธํ•˜๊ณ 
๊ด€๋ฆฌํ•˜์—ฌ์•ผํ•จ
โ€ข ์šฉ๋Ÿ‰์„ ์ถฉ๋ถ„ํžˆ ์‚ฌ์šฉํ•˜์ง€ ๋ชปํ•จ
โ€ข Volume๋ณ€๊ฒฝ๊ณผ๋Š” ๋‹ฌ๋ฆฌ RAID set์˜ ๋ณ€๊ฒฝ์€ ์ฆ‰์‹œ
๋ณ€๊ฒฝ์ด ์–ด๋ ค์›€
[Diagram: traditional dual-controller array — Controller 1 and Controller 2 each own fixed RAID sets (RAID 1, RAID 5, RAID 6) carrying Vol 0–Vol 7, plus snapshot, pool, and spare areas]
vSTAC Cluster
Pivot3๋Š” โ€ฆโ€ฆ.
โ€ข Non RAID๋ฐฉ์‹์œผ๋กœ ๊ธฐ์กด
RAIDset์œ„์— ๋ณผ๋ฅจ์„ ์ƒ์„ฑํ•˜๋Š”
๋ฐฉ์‹์ด ์•„๋‹Œ ์ „์ฒด vPG์—
๋ณผ๋ฅจ์„ ๋ฐ”๋กœ ์ƒ์„ฑ
โ€ข ํ•ด๋‹น ๋ณผ๋ฅจ์€ Online์œผ๋กœ
๋ณดํ˜ธ๋ ˆ๋ฒจ, ์„ฑ๋Šฅ์„ ๋ณ€๊ฒฝํ•  ์ˆ˜
์žˆ์Œ
โ€ข Node๊ฐ€ ๋Š˜์–ด๋‚˜๋ฉด ์šฉ๋Ÿ‰๊ณผ
์„ฑ๋Šฅ์ด ๊ฐ™์ด ์ฆ๊ฐ€
โ€ข ๋ชจ๋“  ๊ด€๋ฆฌ๋Š” Pivot3 vSTAC
Management Console์—์„œ
์‰ฝ๊ฒŒ
[Diagram: vSTAC Cluster containing vPG 1 … vPG n — vPG 1 (All Flash) and vPG 2 (Hybrid) each span Node 1–Node 3 running VMs (APP/OS) and are presented as DATA STORE 1 and DATA STORE 2]
๋…์ฐฝ์ ์ธ Erasure Coding๊ธฐ๋ฐ˜์˜ ํšจ์œจ์„ฑ
[Chart: usable storage space vs. number of nodes — 3 nodes: 66%, 4: 75%, 5: 80%, 6: 83%, 7: 86%, 8: 88%, 9: 89%, 10: 90%, 11: 91%, 12: 92%, 13: 92%, 14: 93%, 15: 93%, 16: 94%]
โ€ข ํŠนํ—ˆ ๋ฐ›์€ erasure coding ๋ฐฉ์‹์€ ์ตœ๋Œ€์˜ ๊ณต๊ฐ„์„
๋ณด์žฅ
โ€ข๋…ธ๋“œ๊ฐ€ ์ถ”๊ฐ€ ๋ ์ˆ˜๋ก ์Šคํ† ๋ฆฌ์ง€์˜ ๊ณต๊ฐ„ ํšจ์œจ์€
๋†’์•„์ง
โ€ข ์ „์ฒด ๋…ธ๋“œ์ค‘ 5๊ฐœ์˜ ๋””์Šคํฌ ์žฅ์•  ๋˜๋Š” 1๋…ธ๋“œ+2๊ฐœ์˜
๋””์Šคํฌ ์žฅ์• ๊นŒ์ง€๋ฅผ ํ—ˆ์šฉ
High Storage Efficiency that increases with scale
๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹์€ ์ตœ๊ณ 
50%๊นŒ์ง€์˜ ๊ณต๊ฐ„์„ ์‚ฌ์šฉํ•  ์ˆ˜
์žˆ์Œ
๊ฐ€์šฉ๊ณต๊ฐ„
Scale
Mirror50%
33%
16%
โ€ข ๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹์€ ๊ฐ€์žฅ ์‰ฌ์šฐ๋ฉด์„œ๋„
ํŠน๋ณ„ํ•œ ๊ธฐ์ˆ ์ด ํ•„์š”์—†์Œ
โ€ข ๋…ธ๋“œ๊ฐ€ ๋Š˜์–ด๋‚˜๋”๋ผ๋„ ๊ฐ€์šฉ๊ณต๊ฐ„์˜ ํšจ์œจ์€
์ฆ๊ฐ€๋˜์ง€ ์•Š์Œ
โ€ข ๋””์Šคํฌ์˜ ์žฅ์• ํ—ˆ์šฉ์„ ๋Š˜๋ฆด์ˆ˜๋ก ๊ฐ€์šฉ๊ณต๊ฐ„์€
๊ธ‰์†ํžˆ ๊ฐ์†Œํ•จ
๊ธฐ์กด ๋ฆฌํ”Œ๋ฆฌ์ผ€์ด์…˜ ๋ฐฉ์‹
3์ค‘ ๋ฏธ๋Ÿฌ
Five Drive Protection
์ฐจ๋ณ„์  #1
Pivot3
๊ธฐํƒ€ ์ œํ’ˆ
๋ณผ๋ฅจ ๋งค๋‹ˆ์ €
๋””์Šคํฌ ์ง์ ‘ ์•ก์„ธ์Šค
ํ•˜์ดํผ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•œ ์•ก์„ธ์Šค
Hypervisor (ESXi)
HDD
Pivot3 vSTAC OS (VM)
Hyper-Converged
Infrastructure
โ€ขPivot3 Close-to-the-metal methodology
โ€ข ํ•˜์ดํผ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•˜์ง€ ์•Š๊ณ  ์ง์ ‘ ๋””์Šคํฌ๋ฅผ
์•ก์„ธ์Šคํ•˜์—ฌ 30-40%์˜ ์„ฑ๋Šฅํ–ฅ์ƒ
โ€ข ๋ฌผ๋ฆฌ ๋””์Šคํฌ๋ฅผ ์ง์ ‘ ๊ด€๋ฆฌ
[Diagram: other products — a VM running software-defined storage on the hypervisor; disk access to SSD/HDD goes through the hypervisor]
โ€ข ํ•˜์ดํผ ๋ฐ”์ด์ €๋ฅผ ํ†ตํ•ด ๋ณผ๋ฅจ๋งค๋‹ˆ์ €์— ์•ก์„ธ์Šค ํ•˜๊ฑฐ๋‚˜
โ€ข JBOD์‚ฌ์šฉํ•˜๋”๋ผ๋„ ํ•˜์ดํผ๋ฐ”์ด์ € ํ†ตํ•ด์„œ ๋””์Šคํฌ ์ ‘๊ทผ
โ€ข ๋ฌผ๋ฆฌ๋””์Šคํฌ์˜ ์ง์ ‘๊ด€๋ฆฌ ๋ถˆ๊ฐ€
์ฐจ๋ณ„์  #2
Full-time Active/Active/Active
Pivot3 Global Active/Active/Active vs. conventional SAN: Active/Passive redundant controllers
[Diagram: conventional SAN — storage controller #0 and storage controller #1 in active/passive pairs]
โ€ข VM์€ Pivot3 cluster์˜ ๋ชจ๋“  ์ปจํŠธ๋กค๋Ÿฌ์— ์•ก์„ธ์Šค๊ฐ€
๊ฐ€๋Šฅ
โ€ข ๋…ธ๋“œ๊ฐ€ ์ถ”๊ฐ€๋ ๋•Œ ๋งˆ๋‹ค ์ปจํŠธ๋กค๋Ÿฌ์˜ ์„ฑ๋Šฅ+
๊ฐ€์šฉ์„ฑ+Bandwidth์ด ์„ ํ˜•์ ์œผ๋กœ ์ฆ๊ฐ€
โ€ข True, optimized Global Active / Active.
โ€ข ๋ณดํ†ต ์Šคํ† ๋ฆฌ์ง€๋Š” 2๊ฐœ์˜ ์ปจํŠธ๋กค๋Ÿฌ๋กœ ๋™์ž‘
โ€ข ๊ฐ ์ปจํŠธ๋กค๋Ÿฌ๋Š” ์„œ๋กœ๊ฐ„์˜ ๋™์ž‘์„ ๊ฐ์‹œํ•˜๋ฉด์„œ A-A
ํ˜น์€ A-S๋กœ ๋™์ž‘
โ€ข ์„ฑ๋Šฅ, ๊ฐ€์šฉ์„ฑ, Bandwidth์€ ๊ณ ์ •๋จ
[Diagram: Pivot3 Node #1–#n, each with its own controller and HDDs; Global Active/Active aggregate performance = ∑ IOPS across all controllers]
์ฐจ๋ณ„์  #3
Global Virtual Drive Sparing
Pivot3 Virtual Global Sparing vs. conventional sparing
[Diagram: conventional sparing — each appliance (#1, #2, … #n) keeps a dedicated spare HDD alongside its data HDDs]
โ€ข ์ „ํ†ต๋ฐฉ์‹์˜ ์ŠคํŽ˜์–ด๋Š” ๊ฐ ๋…ธ๋“œ๋งˆ๋‹ค ํ•˜๋‚˜์”ฉ์˜
์ŠคํŽ˜์–ด๋ฅผ ํ• ๋‹นํ•˜์—ฌ ๋””์Šคํฌ ์žฅ์• ์— ๋Œ€๋น„ํ•จ
โ€ข HDD ์ „์ฒด ์„ฑ๋Šฅ์€ ๋””์Šคํฌ์ˆ˜-์ŠคํŽ˜์–ด์ˆ˜ (ex 900 IOPS)
โ€ข HDD์žฅ์• ์‹œ ํ•ด๋‹น ๋…ธ๋“œ์˜ ๋ชจ๋“  HDD๊ฐ€ ๋ฆฌ๋นŒ๋”ฉ์— ์ฐธ์—ฌํ•˜์—ฌ
์ •์ƒ์ ์ธ ์„ฑ๋Šฅ์„ ๋‚ผ ์ˆ˜ ์—†์Œ
โ€ข ๊ฐ ๋…ธ๋“œ๋งˆ๋‹ค ์ŠคํŽ˜์–ด ๋””์Šคํฌ๋ฅผ ๋ฏธ๋ฆฌ ์ง€์ •ํ•˜์ง€ ์•Š๊ณ 
์ „์ฒด ๋…ธ๋“œ์— ์ŠคํŽ˜์–ด ๊ณต๊ฐ„์„ ๊ฐ€์ƒ์œผ๋กœ ๋ฐฐ์ •
โ€ข Pivot3 ํด๋Ÿฌ์Šคํ„ฐ์— ํ•˜๋‚˜์˜ ๋””์Šคํฌ ๊ณต๊ฐ„๋งŒํผ๋งŒ
์†Œ๋น„๋จ
โ€ข HDD์˜ ์žฅ์• ๊ฐ€ ๋ฐœ์ƒ ํ•˜๋”๋ผ๋„ ์„ฑ๋Šฅ์˜ ์ €ํ•˜ ์—†์ด
๋ณต๊ตฌ๋˜๋ฉฐ ์ˆ˜๋™์ž‘์—…์€ ์ „ํ˜€ ๋ถˆํ•„์š”
โ€ข HDD์žฅ์• ์‹œ ๋ชจ๋“  ๋…ธ๋“œ์˜ HDD๊ฐ€ ์ž‘์—…์„ ๋‚˜๋ˆ ์„œ
์กฐ๊ธˆ์”ฉ ๋ฆฌ๋นŒ๋”ฉ์— ์ฐธ์—ฌ ์„ฑ๋Šฅ์˜ ์ €ํ•˜๊ฐ€ ์—†์Œ
Pivot3 Re-positioning
์ฐจ๋ณ„์  #4
[Diagram: Pivot3 virtual sparing — spare capacity is spread virtually across the HDDs of every appliance (#1, #2, … #n)]
Replace vs Rebuild
Conventional rebuild onto a spare disk (CPU + disk work):
1. Read D1
2. Read D3
3. Read D4
4. Read P1
5. Read D5
6. Read D6
7. Read D7
8. Calculate D2 = P1 ⊕ D1 ⊕ D3 ⊕ D4 ⊕ D5 ⊕ D6 ⊕ D7
9. Write D2N
10. Repeat ×1,000,000
Pivot3 virtual spare (replace — no CPU parity work):
1. Read D2, D12, …
2. Write D2N
3. Write D12N
4. …
5. Repeat ×100,000
[Diagram: stripes D1 D2 D3 D4 P1 D5 D6 D7 / D1 D2 D3 D4 P9 D5 D6 D7 D8 / D11 D12 D13 P14 D14 D15 D16 D17 D18, with the re-created blocks D2N and D12N written to the spare]
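The 'Calculate D2 = P1 ⊕ D1 ⊕ …' step is standard RAID-5-style XOR reconstruction. The sketch below (illustrative Python, not Pivot3 code) contrasts it with the virtual-spare 'replace' case, where the degrading drive is still readable and its blocks only need to be copied:

```python
import os
from functools import reduce

BLOCK = 16  # tiny block size, just for the demo

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: seven data blocks D1..D7 and one parity block P1 = XOR of all data blocks.
data = [os.urandom(BLOCK) for _ in range(7)]
parity = reduce(xor_blocks, data)

# Conventional rebuild: D2 is gone, so read every surviving block plus parity and XOR them.
lost = data[1]                                    # pretend D2 failed
survivors = data[:1] + data[2:]
rebuilt = reduce(xor_blocks, survivors, parity)   # CPU work, repeated for every stripe
assert rebuilt == lost

# Virtual-spare "replace": the drive is only degrading, so D2 is still readable and is
# simply copied to the spare location -- no parity math at all.
copied = bytes(lost)
assert copied == lost
```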
HDD ์‘๋‹ต ์‹œ๊ฐ„
HDD1 Drive 2 HDD3
์‚ฌ์ „์˜ˆ์ธก ๋””์Šคํฌ ์žฅ์• ์ฒ˜๋ฆฌ
โ€ข๋Œ€๋ถ€๋ถ„์˜ HDD๋Š” ์™„์ „ํžˆ ์žฅ์• ๊ฐ€ ๋‚˜๊ธฐ๋ณด๋‹ค, ๋จผ์ € ๋А๋ ค์ง€๊ฑฐ๋‚˜ ๋ฐฐ๋“œ ์„นํ„ฐ๊ฐ€ ๋ฐœ์ƒ
โ€ข๋А๋ ค์ง„ ๋””์Šคํฌ๋Š” ์ „์ฒด ์„ฑ๋Šฅ์—๋„ ์˜ํ–ฅ์„ ์คŒ
โ€ขPivot3 ๋Š” ๋””์Šคํฌ์˜ ์™„์ „ ์žฅ์• ๊ฐ€ ๋‚˜๊ธฐ์ „์— ๋ฏธ๋ฆฌ ์ŠคํŽ˜์–ด์กฐ์น˜๋ฅผ ์ˆ˜ํ–‰ํ•จ
[Diagram: cluster of appliances holding HDD1–HDD9 plus a Virtual Global spare on Appliance #3; data from the degrading drive is re-positioned to the spare before the failure]
STEP 1 – Early failure detection: HDD response times are monitored at all times and slowed-down drives are detected, predicting failures before they happen.
STEP 2 – Proactive data copy: data is re-positioned to the Global Spare in advance, so no performance degradation ever occurs.
STEP 3 – No rebuild after HDD replacement: swapping the HDD causes no performance drop at all (a monitoring sketch follows below).
[Diagram: after replacement, the new HDD simply rejoins the cluster and virtual sparing resumes]
์ฐจ๋ณ„์  #5
ElastiCache – in-memory data caching
Patented ElastiCache:
โ€ข ๊ฐ ๋…ธ๋“œ๋‹นUp 64GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ
๋ฐ์ดํ„ฐ ์บ์‹œ๋กœ ์‚ฌ์šฉ
โ€ข vPG๋‚ด์˜ ๋ฉ”๋ชจ๋ฆฌ๋Š” ๋™์‹œ์— ์‚ฌ์šฉ๋จ
โ€ข vPG ์ตœ๋Œ€ ์บ์‹œ1TB
โ€ข ๋™์  read/write cache
โ€ข cold cache data to HDDs
[Diagram: Node #1–#3, each with an ElastiCache memory tier, a FlashCache SSD tier, and HDDs; writes are acknowledged back to the VM (APP/OS)]
์ฐจ๋ณ„์  #6
[Diagram: vPG (Hybrid) spanning Node 1–Node 3, each running VMs (APP/OS)]
ElastiCache ์˜ ์“ฐ๊ธฐ ์ตœ์ ํ™”
โ€ข Write cache์˜ ๋‚ด๋ ค์“ฐ๊ธฐ์— ์ตœ์ ํ™”
โ€ข IO type ๊ณผ ๋ถ€ํ•˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ๋‚ด๋ ค์“ฐ๊ธฐ์ˆ˜ํ–‰
โ€ข ํŠนํ—ˆ ๋ฐ›์€ ์Šค๋งˆํŠธ ์บ์‹œ
โ€ข ํฌ๊ณ  ๋ฌด๊ฑฐ์šด ๋ฐ์ดํ„ฐ๋Š” ์บ์‹œ์— ์œ ์ง€
โ€ข ์ž์ฃผ ์“ฐ์ง€ ์•Š๋Š” ๋ฐ์ดํ„ฐ๋Š” ๋””์Šคํฌ์— ๋‚ด๋ ค์“ฐ๊ธฐ
โ€ข ์บ์‹œ๊ฐ€ ํ—ˆ์šฉํ•˜๋Š” ํ•œ ์ตœ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ
์œ ์ง€ํ•˜์—ฌ ์ฝ๊ธฐ ์บ์‹œ๋กœ ์‚ฌ์šฉ
Benefits
• High IOPS and low latency
• High-performing Hybrid nodes at a better TCO
[Chart: flushing rate vs. IO rate — regular flusher frequency while cache utilization is low, accelerated flusher frequency as random access and cache utilization rise]
Flexible Deployment Options
[Diagram: Node 1–Node 3 running VMs (APP/OS) in a vPG, plus Node 4 joined as a data-only vPG member; only the drive types/sizes need to be the same]
์Šคํ† ๋ฆฌ์ง€์˜ ์ฆ์„ค์€:
โ€ข CPU ์„ฑ๋Šฅ, ๊ฐœ์ˆ˜์— ๋ฌด๊ด€
โ€ข ๋ฉ”๋ชจ๋ฆฌ ์šฉ๋Ÿ‰์— ๋ฌด๊ด€
โ€ข ํ•„์š”์— ๋”ฐ๋ผ ์ฆ์„ค
๋ชจ๋ธ All-flash HCI Watch Data
Compute
Latency
์šฉ๋Ÿ‰
์–ด๋– ํ•œ ์š”๊ตฌ์—๋„ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋Š” ์œ ์—ฐํ•œ
๋ชจ๋ธ ์„ ํƒ์˜ ์ž์œ 
All-flash : ๊ฐ€์žฅ ๋น ๋ฅธ ์‘๋‹ต์‹œ๊ฐ„
HCI : ์ผ๋ฐ˜์ ์ธ ๊ฐ€์ƒํ™”, VDI
Watch : ๋งŽ์€ ์šฉ๋Ÿ‰์ด ์š”๊ตฌ๋˜๋Š” ์„œ๋น„์Šค
Data : ๊ธฐ์กดํ™˜๊ฒฝ์— ์šฉ๋Ÿ‰๋งŒ ์ฆ์„ค
Hybrid / Watch / Data
12 × 3.5-inch drives; drive sizes and resulting node capacity:
1 TB drives → 12 TB node
2 TB drives → 24 TB node
4 TB drives → 48 TB node
6 TB drives → 72 TB node
8 TB drives → 96 TB node
[Hardware callouts: Watch/Data — 1 CPU, 2 PCI slots; Data model runs without ESX; dual 400 GB SSDs; 10 Gb copper or SFP+ networking (×2); 750 W power supplies]
Chassis callouts:
1 – SSD cache
2 – Full-height x16 & x8 PCI-e bays
3 – Low-profile x8 & x1 PCI-e slots
4 – Onboard USB for USB SSD
5 – Dual SD cards for ESX install
vSTAC HCI/Watch
[Hardware callouts: networking — 10 Gb SFP+ (×2), Mellanox 10/40/56 Gbps, 1 Gb Base-T (×2), 10 Gb Base-T (×2); HCI/Watch PCIe daughter card]
๋ณด์žฅ๋ ˆ๋ฒจ ๋ฐ ์šฉ๋Ÿ‰ Pivot3 Level
(Proprietary Architecture)
Data / System Protection
EC1
RAID 1E
โ€ข 1 disk or 1 appliance failure
RAID 5E
EC3
RAID 1P โ€ข 3 simultaneous disk
failures or
โ€ข 1 disk + 1 appliance
failure
RAID 6P
RAID 6E
EC5 RAID 6X
โ€ข 5 simultaneous disk
failures or
โ€ข 2 disk + 1 appliance
failure
Pivot3 Level
Single
Data / System Protection
RAID 1 1 disk failure
RAID 5 1 disk failure
RAID 6 2 simultaneous disk failures
๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค
๋ฌธ์˜
HCI@cdit.co.kr
02)3442-5588

Editor's Notes

  • #7 Even in an A-A (active-active) configuration, each controller mirrors the other's tasks, so effective performance is still that of a single controller.
  • #11 ElastiCache has a variable cache based on the amount of reads or writes coming into ElastiCache…it is not fixed.
  • #12 Our caching algorithms adjust the flusher speeds based on IO type and IO rate. These algorithms keep our cache at an optimal state where the heavily utilized/active data remains in cache and the cold data stages down to disk. We have implemented cache differently than traditional storage arrays, and that is the main reason we get such great performance in our Hybrid arrays with SATA drives. The speed of write-cache flushing depends on many factors, obviously one of them being IO type and IO load. After flushing, write cache transitions to read cache (our unique implementation) and stays there as long as possible until we need to reclaim the cache for newer IOs.
  • #14 The 400GB MLC SSDs are used for Flash Cache. We use 25GB of capacity on each 400GB drive for Flash Cache; the remaining capacity is used for moving the Flash Cache around to spread write wear across the MLC SSD drives.