VirtualStorm
Cloud Enabled Virtual Desktop Infrastructure
based on industry leading technologies.

Storage Architectures
© 2010 DinamiQs


The future of VDI
VirtualStorm is hardware agnostic. In other words, it will run on any combination of servers, storage and network. However, the system is designed to make optimal use of the components that are available, so there are some things to take into account when designing an environment. Larger environments in particular require thought and careful design to make the most of the VirtualStorm software stack.

Below is a general architecture that describes how the VDI environment is best created. There are many implementations that can deliver different results. The goal of this document is to describe the ideal situation for most environments, ranging from simple, small environments to large enterprise VDI structures and private and public clouds.

Small scale environments

Current server hardware is very powerful. Considering that most users work by typing text or opening and closing applications, and that time is also spent reading what appears on screen, it stands to reason that end-users generally utilize only a very small portion of the available resources.

Combine this with the VirtualStorm optimized Windows (XP or 7) stack running on a hardware supported virtualization hypervisor such as VMware’s ESX, and it becomes obvious that high densities of VMs can be achieved using regular compute hardware and relatively small storage.

The smallest environment for VirtualStorm would be a single server with some internal disks, running ESX and holding a Desktop folder and a Repository folder. The first is for the deployment of Windows desktops; the second is for the deployment of one or more application repositories and for holding the master image.

The Desktop folder should be sequential IO optimized, whereas the Repository folder must be random IO optimized. With regular internal disks (SCSI or SAS) you will need at minimum a RAID card to create additional volumes apart from the mirrored disks on which ESX is installed. Depending on the number of CPUs and cores inside the server and the available memory, you might run 50 to 200 virtual desktops on this single server. A single box should have at least one VM running at least Windows 2003 with Active Directory, plus a separate vSphere vCenter server (or VirtualCenter).

This box would also be responsible for handling the domain and handing out DHCP addresses to the endpoints, as well as running the central DinamiQs Management Console to manage all virtual desktops. This means you should reserve between 4 GB and 8 GB of physical memory for the management servers.

A typical Windows XP desktop requires around 150 MB of physical memory for average use; power users may require 250-300 MB for proper operation. You can scale your server to these numbers: with a total of 24 GB, at least 16 GB remains available for end-users, meaning you can run between 50 and 100 VMs besides the management servers.
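To make the memory arithmetic above explicit, here is a minimal sizing sketch in Python. The per-desktop and management figures are the ones quoted in this section; the function name and structure are our own illustration, not part of the VirtualStorm tooling.

```python
# Single-host capacity estimate using the figures from this section:
# reserve 4-8 GB for management VMs, then budget 150 MB per average
# Windows XP desktop or 250-300 MB per power user. Illustrative only.

def desktops_per_host(total_gb, mgmt_gb=8, mb_per_desktop=300):
    """Conservative count of desktops that fit on one ESX host."""
    available_mb = (total_gb - mgmt_gb) * 1024
    return available_mb // mb_per_desktop

if __name__ == "__main__":
    print(desktops_per_host(24, mb_per_desktop=300))  # power users: ~54 VMs
    print(desktops_per_host(24, mb_per_desktop=150))  # average use: ~109 VMs
```

On a 24 GB host with the full 8 GB management reserve this gives roughly 54-109 desktops, in line with the 50-100 VMs quoted above.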
Version 1.1
Small to midscale environments

Of course, having all virtual desktops inside a single box is not an ideal situation. It is in fact suited at most for a Proof of Concept environment, definitely not for production, where you need resilience and redundancy.

[Figure: Single server VDI setup with VirtualStorm]

So how do you scale a system like this? It is very important to understand that VirtualStorm scales linearly: with each server you add, you can increase the number of VMs by roughly 200 on average (depending on CPU type and memory).

The minimum setup would be a two server system. Each of these servers would only require two disks, internally mirrored, to host ESX on. Then there is a storage network, be it Fiber Channel, InfiniBand or NFS/iSCSI over Ethernet.

The only thing to really keep in mind is to separate the presentation network from the storage network. This is very important, as it keeps the different workloads apart.

In the past NFS and iSCSI were not the storage protocols of choice, mostly due to misunderstandings in the proper scaling of these protocols over the LAN. VMware would in fact not support any sort of performance over these protocols. Our experience has shown that these protocols are in fact viable, with proper performance, as long as the architecture is properly designed and the equipment used adheres to certain standards.

In Fiber Channel and InfiniBand networks, the protocol is very efficient and, very importantly, transports data at very low latency. Especially in application landscapes that require lots of random IO and quick retrieval of files, that low latency helps improve the end-user experience.

With the arrival of economical low-latency 10 Gb/s interconnects, NFS and iSCSI running across these interconnects demonstrate very acceptable performance for VirtualStorm VDI.

Up to 1000 users can be housed in a small to mid-range SAN with 2-4 Gbit/s FC or multiple 1 Gbit/s connections with NFS or iSCSI. Because of the way VirtualStorm co-operates with the cache, the desktops require an average of 0-2.5 IOPS from disk per user, while the application repository requires 2-4 IOPS from disk per user. An important part of the SAN is the cache, which handles big application workloads. Since the applications are identical for all users, the cache-hit ratio will be very high.

This means that a 1000 user SAN will require between 0 and 2500 IOPS for the desktop volumes and between 2000 and 4000 IOPS for the repository.

The repository can be constructed from a number of regular FC or SAS drives. This gives a raw performance of 1000-1500 IOPS for a 5 or 6 drive RAID5 volume, which is further enhanced by the cache in most storage arrays. More cache definitely improves performance, specifically because of the static nature of VirtualStorm.

[Figure: Scalable infrastructure for one thousand+ users. The Core/Edge configuration is optional.]

Since images require up to 2.4 GB of disk space, a thousand user environment requires a maximum of 2.4 TB of disk space. The number of VMs in a LUN should not exceed 500 (to avoid SCSI command queuing), which means that a thousand users can be put in two volumes of roughly 1.2 TB each. This can be achieved through RAID10 volumes of 12-14 146 GB FC or SAS disks each, or four volumes of 600 GB each, requiring 6-8 drives each. Volumes must remain below 2000 GB anyway (an ESX limitation), and 2000 GB is not 2 TB! The total number of drives comes to 24-32 for sequential IO and another 5-7 for random IO. A thousand users can therefore be hosted on a relatively small SAN.

Important to keep in mind is that most people will start only a few applications; they will not load up dozens of applications and therefore will not fill their page file to the maximum. The total size of each user’s files can be anywhere between 1.6 and 2.4 GB, and this can be taken into account when sizing the number of disks.

It is recommended not to use very large disks, as the system does require sufficient disks to deliver the necessary IO and bandwidth to provision the applications to the virtual machines. 146 GB drives are recommended for VirtualStorm environments.
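The sizing rules above can be turned into a quick back-of-the-envelope calculation. The Python sketch below uses the per-user figures from this section (0-2.5 desktop IOPS and 2-4 repository IOPS per user, up to 2.4 GB of image per user, at most 500 VMs per LUN); the function name and layout are our own illustration, not a DinamiQs tool.

```python
# Back-of-the-envelope SAN sizing using the per-user figures from this
# section. Illustrative only; not part of any DinamiQs tooling.

def san_sizing(users, desktop_iops=2.5, repo_iops=4.0,
               gb_per_image=2.4, max_vms_per_lun=500):
    """Worst-case IOPS totals and desktop LUN layout for a user count."""
    total_desktop_iops = users * desktop_iops     # 1000 users -> 2500 IOPS
    total_repo_iops = users * repo_iops           # 1000 users -> 4000 IOPS
    luns = -(-users // max_vms_per_lun)           # ceiling division
    gb_per_lun = users * gb_per_image / luns      # keep this below 2000 GB
    return total_desktop_iops, total_repo_iops, luns, gb_per_lun

if __name__ == "__main__":
    print(san_sizing(1000))   # (2500.0, 4000.0, 2, 1200.0)
```

For 1000 users this reproduces the numbers quoted above: two desktop LUNs of roughly 1.2 TB, up to 2500 desktop IOPS and up to 4000 repository IOPS.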
Large scale environments

When sizing and scaling a large environment, care must be taken to size all parts of the infrastructure. Current limitations in VDI hover at numbers between 3000 and 5000 users for traditional fat client VM farms. VirtualStorm allows scaling beyond those numbers while still maintaining performance.

Management at these scales is usually done through the creation of vCenter silos of one thousand VMs, sometimes in different datacenters. Again, VirtualStorm allows the creation of a single large environment.

[Figure: Multi-datacenter VirtualStorm environment]

vCenter has linking capabilities which allow multiple vCenters to be linked together to scale up the management of the environment. The DinamiQs Management Console has been designed and constructed to manage a limitless number of virtual desktops across LAN and WAN networks, and it integrates with the desktop automation module.

The scaling of the environment requires the setup of sufficient storage, compute and interconnects. The averages found so far for larger environments show that a regular server connected with 1 or 2 Gbit/s network interfaces will run 200-300 VMs. Linear scalability requires that storage is connected at a ratio of up to 7 or 8 servers per 10 Gbit/s interface (6-7 on an 8 Gbit/s FC interface). Connecting with multiple interfaces scales up the environment by these numbers of users.

At this moment extreme scale desktop environments are rare, for precisely the problems that VirtualStorm aims to solve: VM density and storage requirements. The all-important third factor is the management of the environment, which is addressed through the Management Console that allows scaling to tens of thousands of desktops in one environment. Combined with the massive deployment capabilities of VirtualStorm and system-wide instantaneous updates and upgrades for thousands of users simultaneously, the solution provides a scalable environment for large enterprises.

There are many ways to scale up a setup and to deploy across datacenters using active/passive or active/active setups. As you scale up, you will find that the datacenters increasingly become a consolidated, centralized environment for shared resources, and you should treat them as such.

Cloud Enabled Desktops - DaaS with VirtualStorm

VirtualStorm allows for large scale deployments, as has been demonstrated in several showcases. Using the software, hosts have been pushed to 800+ VMs on a single box (see below).

Running these numbers of VMs in a production environment is of course not recommended, but it shows the capabilities of both ESX and the VirtualStorm Software Infrastructure.

[Figure: Single dual socket quad core host running 826 VMs]

The approach VirtualStorm takes allows for integration with cloud management tools for rapid deployment and fast upscaling of resources, with resource overcommit as depicted above, to create full blown desktop clouds.

As there are no real standards for cloud computing interfaces apart from the de-facto standards set by some of the large cloud providers, integrating into new cloud initiatives is a matter of setting up the proper interfacing between the customer facing cloud front-ends and the deployment, operational and billing structures of the backend, i.e. the VirtualStorm Software Infrastructure.

Cloud desktop deployment for tens of thousands of desktops, which used to be a matter of weeks, can now be resolved in days or even hours, as the technology allows for multi-desktop deployment across the whole of the infrastructure and application deployment through simple activation and de-activation mechanisms.

[Figure: A VirtualStorm Software Infrastructure Desktop Cloud]
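The scaling ratios quoted above translate directly into a per-uplink capacity estimate. The short Python sketch below is our own illustration of that arithmetic (200-300 VMs per server, 7-8 servers per 10 Gbit/s interface, 6-7 per 8 Gbit/s FC); the names and defaults are assumptions, not DinamiQs specifications.

```python
# Desktops per storage uplink, using the ratios from this section:
# 200-300 VMs per server, 7-8 servers per 10 Gbit/s Ethernet uplink,
# 6-7 servers per 8 Gbit/s FC uplink. Illustrative arithmetic only.

def desktops_per_uplink(servers_per_uplink, vms_per_server, uplinks=1):
    """Estimate how many desktops the given storage uplinks can feed."""
    return servers_per_uplink * vms_per_server * uplinks

if __name__ == "__main__":
    print(desktops_per_uplink(7, 200))             # one 10 GbE uplink: 1400
    print(desktops_per_uplink(6, 200))             # one 8 Gbit/s FC uplink: 1200
    print(desktops_per_uplink(8, 300, uplinks=4))  # four 10 GbE uplinks: 9600
```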
IOPS and bandwidth in a VirtualStorm Environment

Having centralized storage usually means that a specific controller is used that virtualizes a number of drives and presents the virtualized ‘disks’ (also called LUNs) to the servers through the storage network (also called the Storage Area Network, or SAN).

A typical VirtualStorm setup will have one LUN that is random IO optimized, for instance a RAID5 volume consisting of four to eight drives. This LUN is used for the application repository. In addition there will be several LUNs for desktop provisioning. These LUNs should be sequential IO optimized, which can be achieved by creating RAID10 volumes.

The LUNs are shared among all ESX hosts that are connected to the central storage. The interconnects can be Fiber Channel, 1 or 10 Gbit Ethernet (using NFS or iSCSI to mount the volumes) or even InfiniBand. You need to take care to tune the storage bandwidth to the maximum bandwidth of the connected servers: if you have a 10 Gbit connected storage array, make sure your total aggregate server connectivity does not go too far beyond that 10 Gbit Ethernet. The same arguments apply to FC and InfiniBand.

Latency of the interconnects is important. The use of large frames also helps speed up the environment, as it reduces TCP/IP overhead and at the same time matches the sequential IO fingerprint of the storage. Most switches have latencies that run in the microseconds and are often of the blocking type. The best Ethernet connectivity is non-blocking, with sub-microsecond latencies and the largest possible frames.

Since VirtualStorm scales linearly as far as servers are concerned, the limits that you reach are to be found in the interconnects and in the number and configuration of the connected drives in the central storage.

The desktop volumes are sequential IO optimized. This is because the applications that are read from the repository and fed into the RAM of the virtual machine are simultaneously written into a special page file next to the XP image VMDK file. This page file contains application stripes of the applications that were read from the repository. (See also the “MES explained” document.)

Whenever an application is stopped and restarted, the VirtualStorm drivers first check the cache. If the application is available there, it is read from cache; otherwise the driver checks the page file, and if the application resides there, it is read as sequential IO at speeds of up to 50 MB/s per drive and fed into the RAM of the virtual machine. A simplified sketch of this lookup order is given at the end of this section.

The way the system works is that all applications slowly get spread across the SAN as sequential IO optimized stripes, with extremely low overhead from VMware itself. The result is a construct that allows application provisioning to be spread across the complete storage array, leveraging disk bandwidth through sequential IO until the bandwidth limits of the interconnects are reached.

The design of VirtualStorm allows for environments that get faster as they are being used. This has been witnessed in multi-thousand desktop environments.

Although VirtualStorm has run over 400 VMs on a dual socket quad core server with sufficient memory, that number is getting close to the scheduling limits of ESX itself. Fortunately, in VirtualStorm most of the scheduling issues found in ESX are alleviated due to the low dependence on the engine’s scheduling. Proper use of the integrated hardware acceleration of modern CPUs, the use of hardware supported transparent page caching and other techniques, and not using VMware overcommit and memory ballooning result in ESX only having to handle the actual running of the VMs and some display-protocol work for the virtual NICs. It may not come as a surprise that ESX is actually quite capable of handling such workloads.

For production purposes DinamiQs advises the use of no more than 200 VMs per dual socket quad core server with 48 GB of memory. This allows sufficient headroom to maintain production performance levels even in the case of failure of several ESX servers.

In case of doubt, do get in touch with us or one of our resellers. We’re happy to help you design and scale your VDI environment.
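As a simplified illustration of the application read path described above (cache first, then the VM’s local page file, then the shared repository), here is a short Python sketch. The names and data structures are purely illustrative assumptions; the real VirtualStorm drivers run inside Windows and do not expose an API like this.

```python
# Simplified model of the read path described in this section:
# 1) in-memory cache, 2) the VM's sequential-IO page file,
# 3) the shared application repository (random IO).
# Names and structure are illustrative; this is not the actual driver.

def load_application(app, cache, page_file, repository):
    """Return the application stripe and the tier it was read from."""
    if app in cache:                       # fastest path
        return cache[app], "cache"
    if app in page_file:                   # sequential read, up to ~50 MB/s per drive
        cache[app] = page_file[app]
        return page_file[app], "page file"
    stripe = repository[app]               # random IO against the repository LUN
    page_file[app] = stripe                # written next to the image VMDK, so
    cache[app] = stripe                    # later starts become sequential IO
    return stripe, "repository"

if __name__ == "__main__":
    repo = {"winword": b"stripe-data"}
    cache, pages = {}, {}
    print(load_application("winword", cache, pages, repo)[1])  # repository
    cache.clear()                                              # simulate a restart
    print(load_application("winword", cache, pages, repo)[1])  # page file
```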

                                                The future of VDI




DinamiQs B.V.
Kabelstraat 3
1322 AD Almere
The Netherlands
tel: +31 36 524 11 60
fax: +31 36 524 11 10
email: info@dinamiqs.com

North American Distribution
Web: http://www.VirtualStormNow.com
Email: sales@virtualstormnow.com
Call: +1 (603) 766 4986
