[Diagram: three-node hybrid cluster (Node 1, Node 2, Node 3), each node running VMs (APP/OS) on a shared vPG (Hybrid)]
ElastiCache write optimization
• Optimized for destaging from the write cache
• Destaging performed according to IO type and load
• Patented smart cache
• Large, heavily used data is kept in cache
• Infrequently used data is destaged to disk
• As much data as the cache allows is retained
and used as read cache
Benefits
• High IOPS, low latency
• High-performing Hybrid nodes at a better TCO
[Chart: flushing rate vs. IO rate: regular flusher frequency at low cache utilization, accelerated flusher frequency under high random access and high cache utilization]
12.
Flexible Deployment Options
[Diagram: flexible deployment: Nodes 1-3 run VMs (APP/OS) on the vPG; Node 4 is a data-only node that contributes storage to the vPG]
Only the drive types/sizes need to be the same
When scaling storage:
• Independent of CPU performance or core count
• Independent of memory capacity
• Scale as needed
[Table: model positioning: All-flash, HCI, Watch, and Data models compared on Compute, Latency, and Capacity]

A flexible model lineup that can meet any requirement:
• All-flash: fastest response times
• HCI: general-purpose virtualization, VDI
• Watch: services that require large capacity
• Data: capacity-only expansion of an existing environment
13.
Hybrid / Watch / Data
12 x 3.5 in drives
Sizes range from:
• 1 TB drives = 12 TB node
• 2 TB drives = 24 TB node
• 4 TB drives = 48 TB node
• 6 TB drives = 72 TB node
• 8 TB drives = 96 TB node
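The node sizes above are simply 12 drive bays multiplied by the drive size; a quick sanity check in Python (raw capacity only, before any protection overhead):

```python
# Raw node capacity for the 12-bay Hybrid/Watch/Data chassis:
# node capacity = 12 drive bays x drive size (raw, before protection
# overhead is subtracted).
DRIVE_BAYS = 12

for drive_tb in (1, 2, 4, 6, 8):
    print(f"{drive_tb} TB drives -> {DRIVE_BAYS * drive_tb} TB node")
```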
Watch / Data
• 1 CPU, 2 PCI slots
• Data model: no ESX
• Dual 400 GB SSDs
• 2 x 10 GbE, copper or SFP+
• 750 W power supplies

Chassis callouts:
1. SSD cache
2. Full-height x16 & x8 PCI-e bays
3. Low-profile x8 & x1 PCI-e slots
4. Onboard USB for USB SSD
5. Dual SD cards for ESX install
Protection levels and capacity

Pivot3 Level (Proprietary Architecture) | Data / System Protection
• EC1 (RAID 1E, RAID 5E): 1 disk or 1 appliance failure
• EC3 (RAID 1P, RAID 6P, RAID 6E): 3 simultaneous disk failures, or 1 disk + 1 appliance failure
• EC5 (RAID 6X): 5 simultaneous disk failures, or 2 disk + 1 appliance failure

Pivot3 Level (Single) | Data / System Protection
• RAID 1: 1 disk failure
• RAID 5: 1 disk failure
• RAID 6: 2 simultaneous disk failures
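The protection levels above can be collapsed into a small lookup, sketched here with the level names from the slide; the tolerated disk/appliance combinations are taken directly from the table, while the helper function is an illustration:

```python
# Fault tolerance per protection level, from the slide's table.
# Each entry lists the (disk failures, appliance failures)
# combinations that the level survives simultaneously.
TOLERATES = {
    "EC1": [(1, 0), (0, 1)],     # 1 disk or 1 appliance
    "EC3": [(3, 0), (1, 1)],     # 3 disks, or 1 disk + 1 appliance
    "EC5": [(5, 0), (2, 1)],     # 5 disks, or 2 disks + 1 appliance
    "RAID 1": [(1, 0)],
    "RAID 5": [(1, 0)],
    "RAID 6": [(2, 0)],
}

def survives(level, disks, appliances):
    """True if the level tolerates this failure combination."""
    return any(disks <= d and appliances <= a
               for d, a in TOLERATES[level])

print(survives("EC5", 2, 1))  # True: 2 disks + 1 appliance
print(survives("EC1", 2, 0))  # False: EC1 tolerates only 1 disk
```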
#11 ElastiCache has a variable cache based on the amount of reads or writes coming into ElastiCache; it is not fixed.
#12 Our caching algorithms adjust the flusher speeds based on IO type and IO rate. These algorithms keep our cache at an optimal state where heavily utilized/active data remains in cache and cold data stages down to disk.
We have implemented cache differently than traditional storage arrays, which is the main reason we get such great performance in our Hybrid arrays with SATA drives.
The speed of write-cache flushing depends on many factors, IO type and IO load among them. After flushing, write cache transitions to read cache (our unique implementation) and stays resident as long as possible, until we need to reclaim the cache for newer IOs.
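The write-to-read cache transition described in the note can be sketched as a toy model (illustrative only; `HybridCache` and its eviction policy are assumptions, not the real implementation): written blocks are held dirty, destaged by a flush, then retained as read cache until the space must be reclaimed for newer IO.

```python
# Toy model of a cache block lifecycle: written data sits in write
# cache, is destaged (flushed) to disk, then remains resident as read
# cache until the space is reclaimed for newer IO.
from collections import OrderedDict

class HybridCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block_id -> "write" or "read"

    def write(self, block_id):
        self._reclaim_if_full()
        self.blocks[block_id] = "write"   # dirty, awaiting destage

    def flush(self):
        # Destage dirty blocks; they stay resident as read cache.
        for bid, state in self.blocks.items():
            if state == "write":
                self.blocks[bid] = "read"

    def _reclaim_if_full(self):
        # Evict the oldest clean (read) blocks first to make room.
        while len(self.blocks) >= self.capacity:
            for bid, state in list(self.blocks.items()):
                if state == "read":
                    del self.blocks[bid]
                    break
            else:
                break  # only dirty blocks remain; cannot reclaim

cache = HybridCache(capacity=2)
cache.write("A"); cache.flush()       # A destaged, kept as read cache
cache.write("B"); cache.write("C")    # reclaims A to admit C
print(list(cache.blocks))             # ['B', 'C']
```

The eviction order and fixed capacity are simplifications; the real system sizes and reclaims the cache dynamically, per note #11.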
#14 The 400 GB MLC SSDs are used for Flash Cache. We use 25 GB of capacity on each 400 GB drive for Flash Cache; the remaining capacity is used for moving the Flash Cache around to extend the write endurance of the MLC SSD drives.
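Note #14's wear-leveling arithmetic can be sketched as follows; the rotation scheme shown is an assumption about the mechanics, and only the 25 GB / 400 GB figures come from the note:

```python
# Sketch of the wear-leveling idea in note #14 (assumed mechanics): a
# 25 GB Flash Cache region is relocated across the 400 GB SSD so write
# wear is spread over the whole drive rather than one fixed region.
DRIVE_GB = 400
REGION_GB = 25
NUM_REGIONS = DRIVE_GB // REGION_GB   # 16 possible cache placements

def region_offset(generation):
    """Starting offset (GB) of the active cache region."""
    return (generation % NUM_REGIONS) * REGION_GB

# Each relocation moves the active region; after 16 moves it wraps,
# so writes are spread ~16x wider than a fixed 25 GB placement.
print([region_offset(g) for g in range(4)])   # [0, 25, 50, 75]
```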