Pivot3 Hyperconvergence
Performance: 2x more IOPS per desktop
Desktops: 4x
Operating environment overhead: 10%
Distributed-processing HCI, patented erasure coding technology, an efficient operating environment
[Diagram: SDS - virtual servers (VMs) and a virtual SAN running on x86 hardware]
11.
Guarantees greater effectiveness
[Diagram: virtual servers (VMs) and a virtual SAN on x86 hardware]
Hyperconverged Infrastructure
A flexible, modular architecture that scales out by adding standard x86 servers as needed
x86 Nodes
A distributed scale-out structure that expands storage and compute together for maximum resource efficiency
An efficient HCI operating environment that uses only 7% of system resources
12.
A high-availability environment that sustains high performance
Hyperconverged Infrastructure
Flexible, patented data protection technology (Erasure Coding) that delivers up to 94% utilization and 99.9999% data durability
Guarantees high availability while sustaining 85% or more of normal performance even during a failure
Availability with Performance
[Diagram: a three-node vPG (Hybrid): virtual servers (VMs with APP/OS) and a virtual SAN spanning Node 1 through Node 3 on x86 hardware]
ElastiCache write optimization
• Optimized for destaging (flushing) the write cache
• Flushing is performed according to IO type and load type
• Patented smart cache
• Large, heavily used data is kept in cache
• Infrequently used data is destaged to disk
• Data is retained in cache as long as space allows and is reused as read cache
Benefits
• High IOPS and low latency
• High-performing hybrid nodes at a better TCO
[Chart: flushing rate vs. IO rate; regular vs. accelerated flusher frequency; low cache utilization vs. high random-access cache utilization]
25.
Flexible Deployment Options
[Diagram: Node 1, Node 2, and Node 3 run VMs (APP/OS) in a vPG; Node 4 participates as a Data Only node]
Only the drive types/sizes need to be the same
When expanding storage (see the sketch below):
• Independent of CPU performance and core count
• Independent of memory capacity
• Expand as needed
[Chart: model positioning by compute/latency vs. capacity: All-flash, HCI, Watch, Data]
A flexible lineup that can meet any requirement. Example model choices:
All-flash: fastest response time
HCI: general virtualization, VDI
Watch: services that require large capacity
Data: capacity-only expansion of an existing environment
26.
Hybrid / Watch / DATA
12 x 3.5 in drives
Drive sizes range from:
1 TB drives = 12 TB node
2 TB drives = 24 TB node
4 TB drives = 48 TB node
6 TB drives = 72 TB node
8 TB drives = 96 TB node
Watch / DATA: 1 CPU, 2 PCI slots
Data: no ESX
Dual 400 GB SSDs
2 x 10 GbE ports, copper or SFP+
750 W power supplies
Callouts:
1 - SSD cache
2 - Full x16 and x8 PCI-e bays
3 - Low-profile x8 and x1 PCI-e slots
4 - Onboard USB for USB SSD
5 - Dual SD cards for ESX install
Protection levels and capacity

Pivot3 Level (Proprietary Architecture) / Data / System Protection:
• EC1 (RAID 1E, RAID 5E): survives 1 disk or 1 appliance failure
• EC3 (RAID 1P, RAID 6P, RAID 6E): survives 3 simultaneous disk failures, or 1 disk + 1 appliance failure
• EC5 (RAID 6X): survives 5 simultaneous disk failures, or 2 disk + 1 appliance failure

Pivot3 Level (Single) / Data / System Protection:
• RAID 1: survives 1 disk failure
• RAID 5: survives 1 disk failure
• RAID 6: survives 2 simultaneous disk failures
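Pivot3's erasure coding itself is proprietary, but as a generic illustration of how parity-based protection survives a failed drive, here is a minimal single-parity sketch in Python (not Pivot3's actual scheme). The final comment ties the layout back to the utilization figure quoted earlier.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# A stripe of k data blocks plus one parity block (single-parity protection).
data = [b"AAAA", b"BBBB", b"CCCC"]      # k = 3 data blocks
parity = xor_blocks(data)               # m = 1 parity block

# Simulate losing one data block (a failed disk) and rebuilding it:
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]

# Usable capacity fraction in such schemes is k / (k + m); wide stripes with
# little parity overhead are what push utilization toward the ~94% figure above.
print(3 / (3 + 1))   # 0.75 for this tiny example
```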
#7 What is Hyper-Convergence? At the most basic level:
Combined storage and computing platform
Off-the-shelf (OTS) components: x86 processors vs. specialized ASICs
Delivered in an appliance model
Software-defined storage is what makes this possible
X86
Let's dig a bit deeper into the x86 economics and ecosystem, because it's one of the real drivers of HCI.
As of 2014, from Gartner:
x86 servers represent 99% of servers shipped by unit volume and more than 75% by revenue, and these percentages continue to increase year over year. This is the result of the continual increase in capability and falling cost of x86 processors, driven by ever-higher volumes.
Also, due to competition, vendors have had to settle for lower margins: HP has a 25% gross margin and Dell 23%, versus IBM's >50% gross margin, which is why IBM exited the x86 server market by selling that business to Lenovo (12% GM).
HCI brings the same economics and the same trends that have unfolded in the server world to the combined storage and compute environment.
Caption: Modular / Scalable Architecture, Simplicity, Generalists (vs specialists), Lower TCO
Bringing the benefits that Google, Amazon, etc. have seen to the enterprise at scale.
When talking to customers who are not familiar with the concept of HC, you may need to elaborate a bit further if this particular definition is not 100% clear to them, since they already buy servers today that have CPUs, disks, memory, and NICs all in a rack chassis. Those could be considered off-the-shelf, since they can buy them from Dell, HP, Lenovo, or whoever, and just like their fridge, each one is an appliance. "So what are you trying to tell me HC is? Isn't that what's in a server I am buying from Dell today?" Since the majority are users of some sort of virtual machine environment (VMware, Hyper-V, KVM), they get that, and we may be able to use it as a launching pad to describe HC as a combination of virtualized storage + virtualized compute + virtualized networking, put together as an appliance built from the same familiar x86 technologies they currently work with. The mechanical boundaries of the appliance are transparent in a fully virtualized, or hyper-converged, system. The finishing message to convey is that hyper-convergence is the bringing together of all the independently virtualized resources of the data center into a simple appliance-based deployment.
#25 ElastiCache has a variable cache that sizes itself based on the amount of reads or writes coming into ElastiCache; it is not fixed.
#26 Our caching algorithms adjust the flusher speeds based on IO type and IO rate. These algorithms keep our cache at an optimal state where the heavily utilized, active data remains in cache and the cold data stages down to disk.
We have implemented cache differently than traditional storage arrays, and this is the main reason we get such great performance in our Hybrid arrays with SATA drives.
The speed of write-cache flushing depends on many factors; IO type and IO load are obviously among them. After flushing, write cache transitions to read cache (our unique implementation) and stays there as long as possible, until we need to reclaim the cache for newer IO.
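As a rough illustration of the adaptive-flusher behavior described above, here is a minimal Python sketch. The thresholds and rates are hypothetical, not Pivot3's actual algorithm.

```python
def flusher_rate(cache_utilization, incoming_io_rate,
                 base_rate=100, accelerated_rate=1000):
    """
    Pick a destage (flush) rate for the write cache.

    Hypothetical policy, mirroring the behavior described in the notes:
    - low cache utilization and light IO -> flush slowly, keep data cached for re-reads
    - cache filling up or a burst of incoming IO -> accelerate flushing
    """
    if cache_utilization < 0.5 and incoming_io_rate < 5_000:
        return base_rate                       # regular flusher frequency
    pressure = max(cache_utilization, incoming_io_rate / 20_000)
    return int(base_rate + (accelerated_rate - base_rate) * min(pressure, 1.0))

# After a block is destaged it is not discarded: it is retained and served
# as read cache until the space is needed for newer IO.
print(flusher_rate(0.2, 1_000))    # light load -> slow flush
print(flusher_rate(0.9, 15_000))   # heavy load -> accelerated flush
```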
#28 The 400 GB MLC SSDs are used for Flash Cache. We use 25 GB of capacity on each 400 GB drive for Flash Cache at any given time; the remaining capacity on the drives is used for moving the Flash Cache around, spreading write wear to extend the life of the MLC SSDs.
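A toy sketch of that wear-spreading idea follows: only one 25 GB region of the 400 GB drive holds the cache at a time, and the active window rotates across the drive. This is illustrative only, not the actual firmware logic.

```python
DRIVE_GB = 400
CACHE_GB = 25
REGIONS = DRIVE_GB // CACHE_GB   # 16 possible 25 GB cache regions per drive

def next_region(current_region):
    """Rotate the active cache window to the next 25 GB region so that write
    wear is spread across the whole 400 GB MLC SSD instead of hitting the
    same cells repeatedly."""
    return (current_region + 1) % REGIONS

region = 0
for _ in range(5):
    print(f"cache occupies GB {region * CACHE_GB}..{(region + 1) * CACHE_GB - 1}")
    region = next_region(region)
```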
#37 NexGen data reduction technologies deliver performance, endurance, and capacity benefits for our all-flash arrays.
#38 We believe that QoS will be a requirement for the next generation of all-flash arrays. NexGen's dynamic QoS simplifies performance management and delivers more consistent application performance. Rather than making admins manually input performance targets, we've done the work for them by creating five simple-to-assign policies, unlike other QoS offerings that force end users to react to a noisy neighbor and manually input minimums, maximums, and burst settings for every single volume in the system.
NexGen's Dynamic Storage QoS has three distinct attributes. The first is that it allows customers to apply performance targets to volumes and VMs, managing minimum and maximum performance levels in terms of IOPS, throughput, and latency.
Predefined policies control everything, including how data is prioritized, with intelligent bandwidth throttling and queuing to ensure service levels are met.
Finally, since we are a multi-tier architecture, real-time data placement is paramount. This is where features like prioritized active cache ensure that flash is prioritized for the more critical workloads.
Emphasize that dynamic QoS simplifies performance management and delivers more consistent application performance, unlike other QoS offerings that force end users to react to a noisy neighbor and manually input minimums, maximums, and burst settings for every single volume in the system.
Oftentimes customers don't realize how much performance to provision. NexGen's patented QoS uses massive amounts of performance data collected from production environments to pre-define five simple policies that automate performance management. This is especially important for customers considering VMware vSphere 6 with Virtual Volumes (VVols): VVols can increase the number of objects in a data storage system by 30 times, and there is no way users can scale by manually inputting minimums and maximums on every single VVol.
PRIORITIES
Another shortcoming of other QoS engines is the inability to over-provision system performance. Just like thin provisioning in the capacity world, the ability to over-provision is key to maximizing utilization of resources. QoS engines with manual minimum, maximum, and burst settings leave users stranded when it comes to over-provisioning: without a way to prioritize among the various performance targets when the system encounters contention (due to over-provisioning), the system can't make the right trade-offs and ensure mission-critical apps get consistent performance.
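To make the trade-off concrete, here is a minimal Python sketch of priority-aware allocation under contention: when the sum of targets exceeds what the system can deliver, higher-priority volumes are satisfied first. The policy and numbers are illustrative only, not NexGen's algorithm.

```python
def allocate_iops(volumes, system_iops):
    """
    volumes: list of dicts with 'name', 'priority' (1 = most critical),
             and 'min_iops' / 'max_iops' targets.
    Grant minimums in priority order first, then hand out the remainder
    up to each volume's maximum, again by priority.
    """
    grants = {v["name"]: 0 for v in volumes}
    remaining = system_iops
    by_priority = sorted(volumes, key=lambda v: v["priority"])
    for v in by_priority:                      # protect mission-critical minimums
        take = min(v["min_iops"], remaining)
        grants[v["name"]] += take
        remaining -= take
    for v in by_priority:                      # distribute whatever is left
        take = min(v["max_iops"] - grants[v["name"]], remaining)
        grants[v["name"]] += take
        remaining -= take
    return grants

print(allocate_iops(
    [{"name": "sql",  "priority": 1, "min_iops": 40_000, "max_iops": 80_000},
     {"name": "vdi",  "priority": 2, "min_iops": 20_000, "max_iops": 60_000},
     {"name": "test", "priority": 3, "min_iops": 5_000,  "max_iops": 30_000}],
    system_iops=70_000))
# -> {'sql': 45000, 'vdi': 20000, 'test': 5000}
```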
PLACEMENT
There is a proliferation of new types of non-volatile media, including several different types of flash. The only way to ensure customers achieve the most affordable all-flash system while avoiding latency spikes and contention is to use multiple tiers of flash managed by a QoS engine. NexGen architected all of its management capabilities around the concept of QoS in anticipation of using the best of all worlds from multiple media types, including types yet to be discovered.
VVOL INTEGRATION
Our policies are integrated with VMware's Storage Policy-Based Management (SPBM) engine, which radically simplifies VM-level performance management. Changing performance is as simple as changing a selection in a drop-down menu; end users notice the change in seconds.
#40 Policy-based QoS allows anyone managing storage to easily assign priorities to a VM or volume. There are five preconfigured QoS policies.
You're looking at the QoS policy for the NexGen n-1500 all-flash array.
One of our two existing hybrid policies is shown below so you can see the difference between QoS for multi-tier with HDD vs. multi-tier all-flash.
Keep in mind that each set of policies was designed to align with the amount of flash in the system.
#42 Why does QoS matter in practice? Because customer data is not homogeneous.
Different workloads have different performance requirements and SLAs.
Take VDI, for example…
#43 For SQL customers, this allows them to attain the granular performance management that exists with stand-alone instances.
Policies can be assigned to VMFS, and now with VVols support, each instance gets granular performance management.
Our policies are integrated with VMware's Storage Policy-Based Management (SPBM) engine, which radically simplifies VM-level performance management. Changing performance is as simple as changing a selection in a drop-down menu; end users notice the change in seconds.