The enterprise datacenter is undergoing a massive transformation. Is your organization prepared for what’s next?
Share in proven strategies for delivering frictionless IT services while retaining the precise control your business needs.
See the groundbreaking new Nutanix platform capabilities that tear down IT silos and unify the technology stack.
Engage with peers on best practices in virtualization, application design, and cloud technologies.
Optimizing your desktop
• VMware OS Optimization Tool fling: 18-20% higher density
• BPAnalyzer: 20-22% higher density
• Best practice guides: mileage may vary
Optimizing your desktop
• Choose your deployment method wisely: MCS/PVS & linked/full clones
• Antivirus: run a full scan when sealing your image
• Windows Update: disable when possible
• Application distribution: application virtualization/layering
Get your sizing done: the right way!
• Assessments: Liquidware Labs Stratusphere / Dell DPACK
• If all else fails: size for task, knowledge, and power users.

Why run a POC?
• Test drive before making a capital investment decision
• User acceptance will increase if there's basic knowledge
• A POC acts as a blueprint accelerator and helps avoid changes in scope
• Gaps in the architecture will be identified
• Mitigate investment risks
• Knowledge transfer effect
• Reduce implementation time & cost
• Mitigate deployment risks
How to Size Compute for Your Environment

(((Cores x Sockets) x Hyperthreading Ratio) - CVM CPU) x Overcommit Ratio

• Hyperthreading ratio = 1.3
• CVM cores = 8
• Overcommit ratio = TBD
• -20% for 2-vCPU desktops
• +20% for running on Haswell
• +15% for running on Broadwell
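As a back-of-the-envelope sketch, the formula above can be applied to an assumed dual-socket, 12-core node (the node specs, overcommit ratio, and helper name here are illustrative, not from the slides):

```python
def usable_vcpus(cores_per_socket, sockets, ht_ratio=1.3, cvm_cores=8,
                 overcommit=1.0, adjustment=1.0):
    """Sizing formula from the slide:
    (((cores x sockets) x hyperthreading ratio) - CVM CPU) x overcommit ratio.
    `adjustment` folds in the workload modifiers, e.g. 1.15 for Broadwell
    or 0.80 for 2-vCPU desktops."""
    effective_cores = cores_per_socket * sockets * ht_ratio
    available = effective_cores - cvm_cores   # reserve cores for the CVM
    return available * overcommit * adjustment

# Assumed example: 2 x 12-core node, 4:1 overcommit, Broadwell (+15%)
print(usable_vcpus(12, 2, overcommit=4.0, adjustment=1.15))  # ~106.7 vCPUs
```

Divide the result by the vCPUs per desktop to get a rough desktops-per-node figure.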
Turn on Compression, Dedupe & Erasure Coding
What's your problem: capacity or performance?
• Full clones: dedupe
• Everything else: compression
• Erasure coding: not for VDI
Image courtesy of Designbrews.com
Using Linked Clones + an unoptimized Distributed Filesystem

[Diagram: the master vDisk lives on the distributed datastore and is snapshotted and cloned; each host keeps only its VMs' diff and ID disks on local storage, so every other host's clones read the master vDisk across the network. Caption: constant reads over the network.]
Optimized distributed storage with Shadow Clones

[Diagram: the same linked-clone layout, but once 2 remote (CVM) connections are detected, the master vDisk is cached locally on each host as a Shadow Clone, block by block on read ("block copy on remote read").]
Improving User Experience using Distributed Caching with Shadow Clones

[Chart: time to boot all VMs vs. total VMs booting on a 7-node cluster; the Shadow Clones line stays well below the plain linked-clone line (less is better).]

• Distributed caching of vDisks using local SSDs
• Great for XenDesktop MCS and other multi-reader scenarios
• As much as 50% reduction in boot time
Legacy Graphical Users are Difficult to Support
• Need for high-spec PCs: CPU, memory, disk I/O, GPU, screen sizes
• Expensive licenses
• Data distribution & protection of IP
• Tough to mobilize: expensive laptops & the risk of theft
GPUs Help Solve the Transformation Problem
• Centralizing physical resources and redistributing them with better efficiency
• Centralizing apps and data
• Making the most demanding users mobile while maintaining end-user experience
Nutanix Solves the Scalability Problem
• Linear scale-out cluster design: scale node by node
• Making the storage infrastructure invisible, lowering administrative burden and costs
• Delivering high performance for the entire virtual desktop, ensuring a great end-user experience
Machine Creation Services mechanics
• Fully integrated in Citrix Studio; no separate infrastructure
• MCS is a VM creation framework with a translator
• The method differs depending on the hypervisor used
• MCS fully controls the creation of VMs and their associated disks on the hypervisor platforms
MCS high-level vDisk architecture

[Diagram: the master VM snapshot is flattened and copied to every configured datastore. That single root vDisk is shared by all VMs; each VM adds its own diff disk (space reclaimed at every reboot) and a 16 MB ID disk holding its persistent identity.]
MCS architecture with Nutanix AHV

[Diagram: Citrix Studio drives the XenDesktop Controller services (brokering, host, machine creation, AD identity). Via the Provisioning SDK and PowerShell cmdlets, the Nutanix AHV plugin (which needs to be installed on all XD controllers) translates MCS calls into REST API requests to AHV for snapshot cloning, ID disks, and power management & provisioning.]
Like What You Just Heard… There's More!

READ
• Citrix XenDesktop on Nutanix AHV – CVS
• Nutanix Ebook: Definitive Guide to VDI

USE
• Nutanix Sizer: effectively size your target environment!
• Dedupe only for full clones
• 110 desktops – 36 GB for user data
• Compression: 2.5–4.0x savings
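A quick arithmetic check of those figures (the desktop count, per-user footprint, and 2.5–4.0x savings range are the numbers quoted above):

```python
desktops = 110
gb_per_user = 36
raw_gb = desktops * gb_per_user   # 3960 GB of raw user data

# Apply the quoted compression savings range of 2.5x-4.0x
stored_at_2_5x = raw_gb / 2.5     # 1584 GB on disk
stored_at_4_0x = raw_gb / 4.0     # 990 GB on disk
print(raw_gb, stored_at_2_5x, stored_at_4_0x)
```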
A great way to solve this is to go the software-defined route and opt for a distributed filesystem.
While this still leverages local storage (preferably SSDs, so it can't bottleneck the way a SAN would), it is not managed as local storage.
The hypervisor just sees a single datastore, which also means you only need to configure it once in XenDesktop.
The net benefit is that image rollout also requires only a single copy to be made.
There is a problem, however, if you use a typical run-of-the-mill distributed filesystem that has no techniques to truly localize data, since it will not be optimized for a multi-reader scenario.
What happens is that when the golden master is created, the vDisk is written to the storage of the host that performs the copy. While the VMs local to that host read the master vDisk locally, the other hosts in the cluster access it over the network. This can become a bottleneck once enough VMs start reading from the master. Even though the vDisk is probably replicated to other hosts, that replication only ensures data availability in case of a host failure; it does not distribute the disk for load-balancing purposes. At most, each host might have a small cache to absorb some of the read traffic, but these caches are not optimized for multi-reader scenarios (they only work for one VM reading its own disk), or they are simply too small to hold the vDisk in the first place.
Nutanix solves this problem ahead of time by way of Shadow Cloning.
As soon as we detect that multiple VMs (2 or more remote readers) pull data from the vDisk, we mark the main vDisk as immutable, which allows for distributed caching of the entire disk. This is done on reads, block by block. This way, each VM automatically works with localized data.
This not only relieves the network of read traffic (writes are local anyway), it also seriously improves performance.
This technology is enabled by default and requires no configuration whatsoever.
Shadow Clones offer distributed caching of vDisks (or any VM data) in multi-reader scenarios.
Up to 50% performance improvement during VDI boot storms and other multi-reader scenarios.
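The detection-and-cache behaviour described above can be sketched as a toy model (purely illustrative: the class and method names are made up, and this is not Nutanix's actual implementation):

```python
class VDisk:
    """Toy model of the Shadow Clone trigger: once 2+ remote readers
    are detected, the vDisk is marked immutable and each reading host
    caches blocks locally on read."""

    def __init__(self, blocks):
        self.blocks = blocks          # block_id -> data (authoritative copy)
        self.remote_readers = set()
        self.immutable = False
        self.local_cache = {}         # host -> {block_id: data}

    def read(self, host, owner_host, block_id):
        if host != owner_host:
            self.remote_readers.add(host)
            if len(self.remote_readers) >= 2:
                self.immutable = True     # safe to cache: treated as read-only
        cache = self.local_cache.setdefault(host, {})
        if block_id in cache:
            return cache[block_id]        # localized read, no network hop
        data = self.blocks[block_id]      # read over the network
        if self.immutable:
            cache[block_id] = data        # block-by-block copy on read
        return data
```

After the second remote host touches the disk, reads of already-fetched blocks are served from each host's local cache, which is the effect the boot-time chart shows at scale.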
Machine Creation Services mechanics:
MCS is fully integrated into Citrix Studio and does not require a separate installer or configuration; it's there on each XenDesktop Controller.
MCS itself is a VM creation framework that can be taught to understand different hypervisors. No code is actually placed on the hypervisor.
The method used to provision VMs and link disks to them differs per hypervisor, but MCS fully controls the creation of the VMs and fires off the commands to the hypervisor's management platform or APIs.
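That "framework with a translator" idea can be sketched as a plugin interface (hypothetical names; real MCS plugins are built against the Citrix Provisioning SDK, not this):

```python
from abc import ABC, abstractmethod

class HypervisorPlugin(ABC):
    """Per-hypervisor translator: MCS runs one generic flow and
    delegates platform-specific calls to a plugin like this."""

    @abstractmethod
    def create_vm(self, name, base_snapshot): ...

    @abstractmethod
    def attach_disk(self, vm, disk): ...

class AHVPlugin(HypervisorPlugin):
    # A real plugin would issue REST API calls to the Nutanix cluster;
    # here we just build dictionaries to keep the sketch self-contained.
    def create_vm(self, name, base_snapshot):
        return {"name": name, "from": base_snapshot, "disks": []}

    def attach_disk(self, vm, disk):
        vm["disks"].append(disk)

def provision_catalog(plugin, base_snapshot, count):
    """Generic MCS-style flow: clone from a snapshot, then attach
    the identity disk and the difference/write-cache disk."""
    vms = []
    for i in range(count):
        vm = plugin.create_vm(f"vdi-{i:03d}", base_snapshot)
        plugin.attach_disk(vm, "identity-disk-16MB")
        plugin.attach_disk(vm, "diff-disk")
        vms.append(vm)
    return vms
```

The generic flow never changes; only the plugin knows how to talk to its hypervisor.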
The last hypervisor we look at (and most certainly not the least!) is the Nutanix Acropolis Hypervisor.
AHV has been around for a while, but we've only really started calling it AHV since June last year, when we released a new version of it at Nutanix .Next in Miami (which, by the way, will be held in Vegas in two weeks' time).
We're proud to announce full GUI-based support for AHV in XenDesktop, which includes the use of Machine Creation Services.
How does it work?
First of all, you need XenDesktop 7.9, which has the latest version of the Provisioning SDK installed.
Together with the Nutanix AHV plugin, which you install on every controller, Studio can now talk natively to Acropolis.
Nutanix clusters are automatically highly available thanks to the distributed architecture, so you only need to point Studio at the Nutanix cluster IP address and you're done.
It works the same way from that point on: you create catalogs based on snapshots, which then lead to provisioned VMs with ID disks and write caches.
Under the hood we do a couple of things differently than the previous three hypervisors.
We use a full-clone approach with copy-on-write functionality.
Each VM is linked to the master image but shows up with the full vDisk; the 16 MB ID disk is also attached.
These vDisks are thin provisioned; while they show a usage of 10 GB in the above example, the actual data usage will be much lower, since we deduplicate and compress data at the storage level.
After every restart of the VM, the write-cache disk is reset.
Here you see the actual logical files when looking at the datastore through a WinSCP client, with the write-cache disks at the top and the ID disks at the bottom.