2. Small to midscale environments
Of course, having all virtual desktops inside a single box is not an ideal situation. It is in fact suited at most for a Proof of Concept environment, definitely not for production, where you need resilience and redundancy.

Up to 1000 users can be housed in a small to mid-range SAN with 2-4 Gbit/s FC or multiple 1 Gbit/s connectors with NFS or iSCSI. Because of the way VirtualStorm co-operates with the cache, the desktops require an average of 0-2.5 IOPS from disk per user, while the application repository requires 2-4 IOPS from disk per user. An important part of the SAN is the cache, which handles big application workloads. Since all applications are identical for all users, the cache-hit ratio will be very high.

This means that a 1000-user SAN will require between 0 and 2500 IOPS for the desktop volumes and between 2000 and 4000 IOPS for the repository.

The repository can be constructed from a number of regular FC or SAS drives. This gives a raw performance of 1000-1500 IOPS for a 5 or 6 drive RAID5 volume, which is enhanced by the cache in most storage arrays. More cache definitely improves performance, specifically because of the static nature of VirtualStorm.
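The per-user figures above multiply straightforwardly into an aggregate IOPS budget. As an illustration, a minimal sketch in Python; the function name and defaults are ours, taken from the averages quoted above:

```python
# Rough SAN IOPS sizing for a VirtualStorm environment,
# using the per-user averages quoted in the text.
def size_san_iops(users, desktop_iops=(0.0, 2.5), repo_iops=(2.0, 4.0)):
    """Return (desktop IOPS range, repository IOPS range) for `users` users."""
    desktop = (users * desktop_iops[0], users * desktop_iops[1])
    repo = (users * repo_iops[0], users * repo_iops[1])
    return desktop, repo

desktop, repo = size_san_iops(1000)
print(desktop)  # (0.0, 2500.0)   -> desktop volumes
print(repo)     # (2000.0, 4000.0) -> application repository
```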
[Figure: Single server VDI setup with VirtualStorm]

So how do you scale a system like this? It is very important to understand that VirtualStorm scales linearly: each server you add increases capacity by roughly 200 VMs on average (depending on CPU type and memory).

The minimum setup would be a two-server system. Each of these servers would only require two disks, internally mirrored, to host ESX on. Then there is a storage network, be it Fibre Channel, InfiniBand or NFS/iSCSI over Ethernet.
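Because scaling is linear, capacity planning reduces to simple multiplication. A small sketch, assuming the ~200 VMs per server average mentioned above:

```python
import math

VMS_PER_SERVER = 200  # average quoted in the text; depends on CPU and memory

def servers_needed(desktops):
    """Number of ESX hosts for a target desktop count, assuming linear scaling."""
    return math.ceil(desktops / VMS_PER_SERVER)

def capacity(servers):
    """Approximate desktop capacity of a given number of hosts."""
    return servers * VMS_PER_SERVER

print(servers_needed(1000))  # 5
print(capacity(2))           # 400 -> the minimum two-server setup
```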
The only thing to really keep in mind is to separate the presentation network from the storage network. This is very important, as it separates the different workloads from each other.

In the past, NFS and iSCSI were not the storage protocols of choice, mostly due to misunderstandings about the proper scaling of these protocols over the LAN. VMware would in fact not support any sort of performance over these protocols. Our experience has shown that these protocols are in fact viable, with proper performance, as long as the architecture is properly designed and the equipment used adheres to certain standards.

In Fibre Channel and InfiniBand networks, the protocol is very efficient and, very importantly, transports data at very low latency. Especially in application landscapes that require lots of random IO and quick retrieval of files, that low latency helps improve the end-user experience. With the arrival of economical low-latency 10 Gb/s interconnects, NFS and iSCSI running across these interconnects demonstrate very acceptable performance for VirtualStorm VDI.

[Figure: Scalable infrastructure for one thousand+ users. The Core/Edge configuration is optional]

Since images require up to 2.4 GB of disk space, a thousand-user environment requires a maximum of 2.4 TB of disk space. The number of VMs in a LUN should not exceed 500 (to avoid SCSI command queuing), which means that a thousand users can be put in two volumes of roughly 1.2 TB each. This can be achieved through RAID10 volumes of 12-14 146 GB FC or SAS disks each, or four volumes of 600 GB each, requiring 6-8 drives each. Volumes must remain below 2000 GB anyway (an ESX limitation), and note that 2000 GB is not 2 TB!

The total number of drives comes to 24-32 drives for sequential IO and another 5-7 drives for random IO. A thousand users can therefore be hosted on a relatively small SAN.

It is important to keep in mind that most people will start only a few applications; they will not load up dozens of applications and therefore will not fill up their page file to the maximum. The total size of each user's files can be anywhere between 1.6 and 2.4 GB; this can be taken into account when sizing the number of disks.

It is recommended not to use very large disks, as the system does require sufficient drives to deliver the necessary IO and bandwidth to provision the applications to the Virtual Machines. 146 GB drives are recommended for VirtualStorm environments.
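The volume arithmetic above can be condensed into a short sketch. It only captures the VM-count and volume-size limits; the actual number of drives also depends on the IO requirements discussed earlier:

```python
import math

MAX_VMS_PER_LUN = 500   # avoid SCSI command queuing
MAX_VOLUME_GB = 2000    # ESX volume limit (2000 GB, not 2 TB)
IMAGE_GB = 2.4          # maximum per-desktop image size

def desktop_volume_layout(users):
    """Split `users` desktops into volumes respecting both limits.

    Returns (number of volumes, size per volume in GB).
    """
    by_vms = math.ceil(users / MAX_VMS_PER_LUN)
    by_size = math.ceil(users * IMAGE_GB / MAX_VOLUME_GB)
    volumes = max(by_vms, by_size)
    return volumes, round(users * IMAGE_GB / volumes, 1)

print(desktop_volume_layout(1000))  # (2, 1200.0) -> two volumes of ~1.2 TB
print(desktop_volume_layout(2000))  # (4, 1200.0)
```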
3. Large scale environments

When sizing and scaling a large environment, care must be taken to size all parts of the infrastructure. Current limitations in VDI hover at numbers between 3000 and 5000 users for traditional fat-client VM farms. Using VirtualStorm allows you to scale beyond those numbers and still maintain performance.

Management at these scales is usually done through the creation of vCenter silos of one thousand VMs, sometimes in different datacenters. Again, VirtualStorm allows the creation of a single large environment.

Using VirtualStorm allows for large-scale deployments. This has been demonstrated in several showcases, where the software has allowed hosts to be pushed to 800+ VMs on a single box (see below).

Using these numbers of VMs in a production environment is of course not recommended, but it shows the capabilities of both ESX and the VirtualStorm Software Infrastructure. The approach VirtualStorm takes allows for integration with Cloud Management tools for rapid deployment and fast upscaling of resources, with resource overcommit as depicted above, to create full-blown desktop clouds.

[Figure: Single dual socket quad core host running 826 VMs]

[Figure: Multi-datacenter VirtualStorm environment]

vCenter has linking capabilities which allow multiple vCenters to be linked together to scale up the management of the environment. The DinamiQs Management Console has been designed and constructed to manage a virtually limitless number of virtual desktops across LAN and WAN networks, and it integrates with the desktop automation module.

The scaling of the environment requires the setup of sufficient storage, compute and interconnects. The averages found so far for larger environments show that a regular server connected with 1 or 2 Gbit/s network interfaces will run 200-300 VMs. Linear scalability requires that storage is connected at a ratio of up to 7 or 8 servers per 10 Gbit/s interface (6-7 on an 8 Gbit/s FC interface). Connecting with multiple interfaces scales up the environment by these numbers of users.

At this moment, extreme-scale desktop environments are rare for precisely the problems that VirtualStorm aims to solve: VM density and storage requirements. The all-important third factor here is the management of the environment, which is addressed through the Management Console that allows scaling to tens of thousands of desktops in one environment. Combined with the massive deployment capabilities of VirtualStorm and system-wide instantaneous updates and upgrades for thousands of users simultaneously, the solution provides a scalable environment for large enterprises.

There are many ways to scale up a setup and to deploy across datacenters using active/passive or active/active setups. As you scale up, you will find that the datacenters more and more become a consolidated, centralized environment for shared resources, and you should treat them as such.

Cloud Enabled Desktops - DaaS with VirtualStorm

As there are no real standards for Cloud Computing interfaces apart from the de-facto standards set by some of the large cloud providers, integrating into new cloud initiatives is a matter of setting up the proper interfacing between the customer-facing cloud front-ends and the deployment, operational and billing structures of the backend, i.e. the VirtualStorm Software Infrastructure.

Cloud desktop deployment for tens of thousands of desktops, which used to be a matter of weeks, can now be resolved in days, even hours, as the technology allows for multi-desktop deployment across the whole of the infrastructure and application deployment through simple activation and de-activation mechanisms.

[Figure: A VirtualStorm Software Infrastructure Desktop Cloud]
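The server and interconnect ratios given in this section can be combined into a rough calculator. A sketch, assuming 200 VMs per server and 7 servers per 10 Gbit/s storage interface (the conservative ends of the ranges quoted above):

```python
import math

def storage_interfaces(servers, servers_per_iface=7):
    """10 Gbit/s storage interfaces needed, at the ratio quoted in the text."""
    return math.ceil(servers / servers_per_iface)

def large_environment(desktops, vms_per_server=200):
    """Rough (host count, 10 Gbit/s interface count) for a large deployment."""
    servers = math.ceil(desktops / vms_per_server)
    return servers, storage_interfaces(servers)

print(large_environment(10000))  # (50, 8) -> 50 hosts, 8 x 10 Gbit/s interfaces
```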
4. IOPS and bandwidth in a VirtualStorm environment

Having centralized storage usually means that a specific controller is used that virtualizes a number of drives and presents the virtualized 'disks' (also called LUNs) to the servers through the storage network (also called the Storage Area Network or SAN).

A typical VirtualStorm setup will have a LUN that is random-IO optimized, for instance through a RAID5 volume consisting of four to eight drives. This LUN is used for the application repository. In addition, there will be several LUNs for desktop provisioning. These LUNs should be sequential-IO optimized, which can be achieved by creating RAID10 volumes.

The LUNs are shared among all ESX hosts that are connected to the central storage. The interconnects can be Fibre Channel, 1 or 10 Gbit Ethernet (using NFS or iSCSI to mount the volumes) or even InfiniBand. Take care to tune the storage bandwidth to the maximum bandwidth of the connected servers: if you have a 10 Gig connected storage array, make sure your total aggregate server connectivity does not go too far beyond that 10 Gig Ethernet. The same argument applies to FC and InfiniBand.

Latency of the interconnects is important. The use of large frames also helps speed up the environment, as it reduces TCP/IP overhead and at the same time matches the sequential-IO fingerprint of the storage. Most switches have latencies that run in the microseconds and are often of the blocking type. The best Ethernet connectivity is non-blocking, with sub-microsecond latencies and the largest possible frames.

Since VirtualStorm scales linearly where it concerns servers, the limits that you reach are to be found in the interconnects and in the number and configuration of the connected drives in the central storage.

The desktop volumes are sequential-IO optimized. This is because the applications that are read from the repository and fed into the RAM of the virtual machine are simultaneously written into a special page file next to the XP image VMDK file. This page file contains application stripes of the applications that were read from the repository. (See also the "MES explained" document.)

Whenever an application is stopped and restarted, the VirtualStorm drivers first check the cache. If the application is available there, it is read; otherwise the driver checks the page file, and if the application resides there, it is read as sequential IO at speeds of up to 50 MB/s per drive and fed into the RAM of the Virtual Machine.

The way the system works, all applications slowly get spread across the SAN as sequential-IO optimized stripes with extremely low overhead from VMware itself. The result is a construct that allows application provisioning to be spread across the complete storage array, leveraging disk bandwidth through sequential IO until the bandwidth limits of the interconnects are reached. The design of VirtualStorm thus allows for environments that get faster as they are being used; this has been witnessed in multi-thousand desktop environments.

Although VirtualStorm has run over 400 VMs on a dual socket quad core server with sufficient memory, this number is getting close to the scheduling limits of ESX itself. Fortunately, in VirtualStorm most of the scheduling issues found in ESX are alleviated due to the low dependence on the engine's scheduling. Proper use of the integrated hardware acceleration of modern CPUs and of hardware-supported transparent page caching and other techniques, combined with not using VMware overcommit and memory ballooning, results in ESX only having to handle the actual running of the VMs and doing some display-protocol work for the virtual NICs. It may not come as a surprise that ESX is actually quite capable of handling such workloads.

For production purposes, DinamiQs advises the use of no more than 200 VMs per dual socket quad core server with 48 GB of memory. This allows sufficient headroom for maintaining production performance levels even in the case of failure of several ESX servers.

In case of doubt, do get in touch with us or one of our resellers. We're happy to help you design and scale your VDI environment.
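The tiered read path described in this section (cache first, then page file, then the repository) can be sketched as a simple lookup. The names and data structures here are illustrative only, not the actual VirtualStorm driver interface:

```python
# Illustrative sketch of the tiered application read path described above.
# Tier names and structure are hypothetical, not the VirtualStorm driver API.
def read_application(app, cache, page_file, repository):
    """Return the application stripe and the tier it was served from."""
    if app in cache:
        return cache[app], "cache"      # cache hit: fastest path
    if app in page_file:
        stripe = page_file[app]         # sequential IO, up to ~50 MB/s per drive
        cache[app] = stripe             # warm the cache for the next start
        return stripe, "page file"
    stripe = repository[app]            # random IO against the repository LUN
    page_file[app] = stripe             # write-through: spreads stripes over the SAN
    cache[app] = stripe
    return stripe, "repository"

cache, page_file = {}, {}
repo = {"word": b"...stripe..."}
print(read_application("word", cache, page_file, repo)[1])  # repository
print(read_application("word", cache, page_file, repo)[1])  # cache
```

Each read through a lower tier populates the tiers above it, which is why the environment "gets faster as it is being used".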
The future of VDI

DinamiQs B.V.
Kabelstraat 3
1322 AD Almere
The Netherlands
tel: +31 36 524 11 60
fax: +31 36 524 11 10
email: info@dinamiqs.com

North American Distribution
Web: http://www.VirtualStormNow.com
Email: sales@virtualstormnow.com
Call: +1 (603) 766 4986