PVS and MCS Webinar - Technical Deep Dive
This webinar will cover the current state of MCS and PVS. We'll look at how MCS and PVS work differently on hypervisors like ESXi and Hyper-V. We will look at new target platforms such as Windows Server 2012 R2 to see if PVS or MCS behaves differently.

Lastly, we will dive into the new VHDX-based PVS write cache (wC) option and why you should be using it for all your workloads.

The webinar will be presented by Nick Rintalan.

  • http://xenserver.org/discuss-virtualization/virtualization-blog/entry/read-caching.html
  • http://blogs.citrix.com/2014/04/18/turbo-charging-your-iops-with-the-new-pvs-cache-in-ram-with-disk-overflow-feature-part-one/
    E: Drive Test: This IOMETER test used an 8 GB file configured to write directly to the write-cache disk (E:), bypassing PVS. This test would allow us to know the true underlying IOPS provided by the SAN.
    New PVS RAM Cache with Disk Overflow: We configured the new RAM cache to use up to 10 GB RAM and ran the IOMETER test with an 8 GB file so that all I/O would remain in RAM.
    New PVS RAM Cache with Disk Overflow: We configured the new RAM cache to use up to 10 GB RAM and ran the IOMETER test with a 15 GB file so that at least 5 GB of I/O would overflow to disk.
    Old PVS Cache in Device RAM: We used the old PVS Cache in RAM feature and configured it for 10 GB RAM. We ran the IOMETER test with an 8 GB file so that the RAM cache would not run out, which would make the VM crash!
    PVS Cache on Device Hard Disk: We configured PVS to cache on device hard disk and ran the IOMETER test with an 8 GB file.
    With the exception of the size of the IOMETER test file as detailed above, all of the IOMETER tests were run with the following parameters (a rough fio approximation appears after these notes):
    - 4 workers configured
    - Queue depth set to 16 for each worker
    - 4 KB block size
    - 80% writes / 20% reads
    - 90% random I/O / 10% sequential I/O
    - 30 minute test duration
  • Windows 7 – PVS 7.1 RAM Cache with 256 MB on Hyper-V 2012 R2
    This test was configured just like the MCS baseline test and run on the same hardware.
    - Single Hyper-V host with hyper-threaded quad-core CPU and 32 GB RAM
    - A single dedicated 7,200 RPM SATA 3 disk with 64 MB cache was used for hosting the write cache disk for the Windows 7 VMs
    - Windows 7 x64 VMs: 2 vCPU with 2.5 GB RAM
    - PVS 7.1 Standard Image with RAM Cache set at 256 MB (PVS on separate host)
    - Windows Event Logs were redirected to the write cache disk so that they persist and their I/O would not be cached in RAM
    - The profile was fully optimized with UPM and Folder Redirection (profile share on a separate host)
    http://blogs.citrix.com/2014/07/07/turbo-charging-your-iops-with-the-new-pvs-cache-in-ram-with-disk-overflow-feature-part-two/
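
The slide notes above describe the IOMETER profile used in the first field test. As a rough approximation only (not the original harness), the sketch below drives fio with a similar I/O mix; the E: drive letter, test file name, and the use of fio itself are assumptions for illustration.

```python
import subprocess

# Rough fio approximation of the IOMETER profile from the notes above:
# 4 workers, queue depth 16, 4 KB blocks, 80% writes / 20% reads,
# 90% random / 10% sequential, 30-minute run.
# The E: drive letter and file name are assumptions for illustration.
cmd = [
    "fio",                               # assumes fio is installed and on PATH
    "--name=pvs-wc-test",
    "--filename=E\\:\\iometer-like.dat", # fio requires ':' in paths to be escaped
    "--size=8g",                         # 8 GB file (use 15g to force overflow to disk)
    "--ioengine=windowsaio",             # native async I/O on a Windows target
    "--direct=1",                        # bypass the Windows file cache
    "--bs=4k",                           # 4 KB block size
    "--rw=randrw",                       # mixed read/write workload
    "--rwmixwrite=80",                   # 80% writes / 20% reads
    "--percentage_random=90",            # 90% random, 10% sequential
    "--iodepth=16",                      # queue depth 16 per worker
    "--numjobs=4",                       # 4 workers
    "--runtime=1800", "--time_based",    # 30 minutes
    "--group_reporting",
]
subprocess.run(cmd, check=True)
```

fio and IOMETER will not produce identical numbers, but the workload shape (block size, read/write mix, randomness, queue depth) is comparable.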

PVS and MCS Webinar - Technical Deep Dive Presentation Transcript

  • 1. PVS vs. MCS PVS & MCS! Technical Deep Dive – Nick Rintalan, Lead Architect, Americas Consulting, Citrix Consulting – August 5, 2014
  • 2. © 2014 Citrix. Confidential.2 Agenda: Myth Busting • The New PVS wC Option • Detailed Performance Testing & Results • Key Takeaways • Q & A
  • 3. Myth Busting!
  • 4. © 2014 Citrix. Confidential.4 Myth #1 – PVS is Dead! PVS is alive and well Not only are we (now) enhancing the product and implementing new features like RAM cache with overflow to disk, but it has a healthy roadmap Why the Change of Heart? • We realized PVS represents a HUGE competitive advantage versus VMW • We realized our large PVS customers need a longer “runway” • We actually had a key ASLR bug in our legacy wC modes we had to address
  • 5. © 2014 Citrix. Confidential.5 Myth #2 – MCS Cannot Scale Most people who say this don’t really know what they are talking about. And the few that do might quote “1.6x IOPS compared to PVS” The 1.6x number was taken WAY out of context a few years back (it took into account boot and logon IOPS, too) Reality: MCS generates about 1.2x IOPS compared to PVS in the steady-state • 8% more writes and 13% more reads, to be exact • We have a million technologies to handle those additional reads! But performance (and IOPS) is only one aspect you need to consider when deciding between PVS and MCS…
  • 6. © 2014 Citrix. Confidential.6 Myth #3 – MCS (or Composer) is Simple on a Large Scale MCS (or anything utilizing linked-clone’ish technology) still leaves a bit to be desired from an operations and management perspective today • Significant time required when updating a large number of VDIs (or rolling back) • Controlled promotional model • Support for things like vMotion • Some scripting may be required to replicate parent disks efficiently, etc. MCS is Simple/Easy • I’d agree as long as it is somewhat small’ish (less than 1k VDIs or 5k XA users) • But at real scale, MCS is arguably more complex than PVS • How do you deploy MCS or Composer to thousands of desktops residing on hundreds of LUNs, multiple datastores and instances of vCenter, for example? • This is where PVS really shines today
  • 7. © 2014 Citrix. Confidential.7 Myth #4 – PVS is Complex Make no mistake, the insane scalability that PVS provides doesn’t come absolutely “free”, so there is some truth to this statement. BUT, have you noticed what we’ve done over the last few years to address this? • vDisk Versioning • Native TFTP Load Balancing via NS 10.1+ • We are big endorsers of virtualizing PVS (even on that vSphere thing) • We have simplified sizing the wC file and we also endorse thin provisioning these days - RAM Cache w/ overflow to disk (and thin provision the “overflow” disk = super easy)
  • 8. © 2014 Citrix. Confidential.8 Myth #5 – PVS Can Cause Outages So can humans! And if architected correctly, using a pod architecture, PVS cannot and should not take down your entire environment Make sure every layer is resilient and fault tolerant • Don’t forget about Offline Database Support and SQL HA technologies (mirroring) We still recommend multiple PVS farms with isolated SQL infrastructure for our largest customers – not really for scalability or technical reasons, but to minimize the failure domain
  • 9. © 2014 Citrix. Confidential.9 Myth #6 – XenServer is dead and MCS only works with IntelliCache Just like PVS, XenServer is also alive and well • We just shifted our focus a bit • Contrary to popular belief, we are still actively developing it We are implementing hypervisor level RAM-based read caching in XS.next • Think “IntelliCache 2.0” (no disks or SSDs required this time!) • The new in-memory read caching feature and the old IC feature can even be combined! Did you know that MCS already works and is supported with CSV Caching in Hyper-V today? Did you know that MCS also works with CBRC? • We even have customers using it in production! (Just don’t ask for official support)
  • 10. The new PVS wC Option aka “The Death of IOPS”
  • 11. © 2014 Citrix. Confidential.11 RAM Cache with Overflow to Disk – Details First and foremost, this RAM Caching is NOT the same as the old PVS RAM Cache feature • This one uses non-paged pool memory and we no longer manage internal cache lists, etc. (let Windows do it – it is pretty good at this stuff as it turns out!) • We actually compared the old vs. new RAM caching and found about a 5x improvement in throughput Pretty simple concept: leverage memory first, then gracefully spill over to disk • VHDX-based as opposed to all other “legacy” wC modes, which are VHD-based - vdiskdif.vhdx vs. .vdiskcache • Requires PVS 7.1+ and Win7/2008R2+ targets • Also supports TRIM operations (shrink/delete!)
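
To see which write cache mode a booted target actually ended up with, a minimal sketch like the one below can simply look for the two file names mentioned above on the write cache drive. The D: drive letter is an assumption; the file names vdiskdif.vhdx and .vdiskcache come from the slide.

```python
from pathlib import Path

# Which PVS write cache mode is this target actually using?
# D:\ as the write cache drive is an assumption for illustration;
# the cache file names come from the slide above.
WC_DRIVE = Path("D:/")

new_style = list(WC_DRIVE.glob("vdiskdif.vhdx"))   # RAM cache with overflow to disk (VHDX-based)
legacy    = list(WC_DRIVE.glob("*.vdiskcache"))    # legacy cache on device hard disk (VHD-based)

for f in new_style:
    print(f"New wC mode:    {f.name} = {f.stat().st_size / 2**20:.0f} MB on disk")
for f in legacy:
    print(f"Legacy wC mode: {f.name} = {f.stat().st_size / 2**20:.0f} MB on disk")
if not new_style and not legacy:
    print("No write cache file found on", WC_DRIVE)
```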
  • 12. © 2014 Citrix. Confidential.12 RAM Cache with Overflow to Disk – Details Cont’d The VHDX spec uses a 2 MB chunk or block size, so that is how you’ll see the wC grow (in 2 MB chunks) The wC file will initially be larger than the legacy wC file, but over time, it will not be significantly larger as data will “backfill” into those 2 MB reserved blocks This new wC option has nothing to do with “intermediate buffering” – it totally replaces it This new wC option is where we want all our customers to move ASAP, for not only performance reasons but stability reasons (ASLR)
  • 13. © 2014 Citrix. Confidential.13 Why it works so well with only a little RAM A small amount of RAM will give a BIG boost! All writes (mostly random 4K) first hit memory They get realigned and put into 2 MB memory blocks in Non-Paged Pool If they must flush to disk, they get written as 2 MB sequential VHDX blocks • We convert all random 4K write IO into 2 MB sequential write IO Since Non-Paged Pool and VHDX are used, we support TRIM operations • Non-Paged Pool memory can be reduced and the VHDX can shrink!!!! • This is very different from all our old/legacy VHD-based wC options
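
Here is a toy illustration (plain arithmetic, not Citrix code) of the two points above: the cache file grows in whole 2 MB VHDX blocks, and any overflow is flushed as a relatively small number of large sequential writes instead of a huge number of random 4 KB writes. The 4 GB "hot" region and write count are assumptions.

```python
import random

BLOCK = 2 * 1024 * 1024     # VHDX block size: 2 MB (from the slide)
PAGE  = 4 * 1024            # guest write size: 4 KB
HOT   = 4 * 1024**3         # assumed 4 GB "hot" region the workload writes into
N     = 200_000             # assumed number of random 4 KB writes

random.seed(1)
touched = {random.randrange(HOT) // BLOCK for _ in range(N)}  # 2 MB blocks hit

logical   = N * PAGE               # bytes the guest actually wrote
allocated = len(touched) * BLOCK   # the wC grows in whole 2 MB blocks

print(f"guest wrote     : {logical / 2**20:6.0f} MB in {N} random 4 KB I/Os")
print(f"wC file on disk : {allocated / 2**20:6.0f} MB ({len(touched)} x 2 MB blocks)")
print(f"fill ratio      : {logical / allocated:.0%} (later writes 'backfill' these blocks)")
print(f"if flushed      : {len(touched)} sequential 2 MB writes "
      f"instead of {N} random 4 KB writes")
```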
  • 14. Performance Results
  • 15. © 2014 Citrix. Confidential.15 Our First Field Test (XA workloads w/ vSphere) Used IOMETER to compare legacy wC options and new wC option • #1 – “line test” (i.e. no PVS) • #2 and #3 – new wC option • #4 – legacy RAM cache option • #5 – legacy disk cache option (which 90% of our customers use today!!!)
  • 16. © 2014 Citrix. Confidential.16 Our Second Field Test (XD workloads w/ vSphere and Hyper-V) [Charts] Win7 on Hyper-V 2012 R2 with 256 MB buffer size (with bloated profile); Win7 on vSphere 5.5 with 256 MB buffer size (with bloated profile)
  • 17. © 2014 Citrix. Confidential.17 And Even More Testing from our Solutions Lab LoginVSI 4.0 Variables • Product version • Hypervisor • Image delivery • Workload • Policy Hardware • HP DL380p G8 • (2) Intel Xeon E5-2697 • 384 GB RAM • (16) 300 GB 15,000 RPM spindles in RAID 10
  • 18. © 2014 Citrix. Confidential.18 [Bar chart] VSI Max (XenApp 7.5 - LoginVSI 4): VSImax for twelve configurations combining XenApp 6.5 and 7.5 on Hyper-V and vSphere, Windows Server 2008 R2 and 2012 R2, light and medium workloads, UX and Scale policies, delivered via MCS, PVS (Disk) and PVS (RAM)
  • 19. © 2014 Citrix. Confidential.19 [Bar chart] PVS vs MCS – Notable XenApp 7.5 Results: results for MCS, PVS (Disk) and PVS (RAM with Overflow) on Hyper-V 2012 R2 and vSphere 5.5. Imaging platform does NOT impact single server scalability
  • 20. © 2014 Citrix. Confidential.20 [Bar chart] PVS vs MCS – Notable XenDesktop 7.5 Results: single server scalability results for MCS, PVS (Disk) and PVS (RAM with Overflow) on Hyper-V 2012 R2 and vSphere 5.5. Imaging platform does NOT impact single server scalability
  • 21. © 2014 Citrix. Confidential.21 [Bar chart] MCS vs PVS (Disk) vs PVS (RAM with Overflow) – Notable XenDesktop 7.5 Results: IOPS per user on Hyper-V and vSphere. PVS (RAM with Overflow) comes in at less than 0.1 IOPS per user with a 512 MB RAM cache!!!
  • 22. © 2014 Citrix. Confidential.22 [Bar chart] PVS (RAM with Overflow) 512 MB vs 256 MB – Notable XenDesktop 7.5 Results: 512 MB RAM = 0.09 IOPS per user; 256 MB RAM = 0.45 IOPS per user
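
Using the per-user numbers above, a quick back-of-the-envelope calculation shows what the two buffer sizes mean at the host level; the 80-desktop density is an assumed figure, not from the test.

```python
# Steady-state write cache IOPS per host, using the per-user numbers above.
# The 80-desktop host density is an assumption for illustration.
DESKTOPS = 80

for buffer_mb, iops_per_user in [(512, 0.09), (256, 0.45)]:
    host_iops = DESKTOPS * iops_per_user
    print(f"{buffer_mb} MB buffer: {iops_per_user} IOPS/user x {DESKTOPS} desktops "
          f"= {host_iops:.0f} IOPS on the host")
```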
  • 23. © 2014 Citrix. Confidential.23 [Line chart] Notable XenApp 7.5 Results – Total Host IOPS (100 users on host), PhysicalDisk \ Disk Transfers/sec \ _Total over the full test run. Peak = 155 IOPS
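
For anyone wanting to reproduce that kind of host-level measurement, a minimal sketch along these lines samples the same counter the chart is based on, using the built-in typeperf tool; the sample interval, sample count, and output file name are assumptions.

```python
import subprocess

# Sample the counter plotted above (total disk transfers/sec on the host)
# every 3 seconds for roughly half an hour. Interval, count and output
# file name are assumptions, not from the original test harness.
COUNTER = r"\PhysicalDisk(_Total)\Disk Transfers/sec"

subprocess.run(
    [
        "typeperf", COUNTER,
        "-si", "3",            # sample interval in seconds
        "-sc", "600",          # number of samples (600 x 3 s = 30 minutes)
        "-f", "CSV",           # output format
        "-o", "host_iops.csv", # output file (assumed name)
        "-y",                  # overwrite without prompting
    ],
    check=True,
)
```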
  • 24. Key Takeaways & Wrap-Up
  • 25. © 2014 Citrix. Confidential.25 Key Takeaways Performance/Scalability is just one element to weigh when deciding between MCS and PVS • Do NOT forget about manageability and operational readiness • How PROVEN is the solution? • How COMPLEX is the solution? • Do you have the ABILITY & SKILLSET to manage the solution? • Will it work at REAL SCALE with thousands of devices? The new VHDX-based PVS 7.x write cache option is the best thing we have given away for FREE since Secure Gateway (IMHO) It doesn’t require a ton of extra memory/RAM – a small buffer will go a long way
  • 26. © 2014 Citrix. Confidential.26 Key Takeaways – Cont’d For XD workloads, start with a 256-512 MB buffer per VM For XA workloads, start with a 2-4 GB buffer per VM If you are considering vSAN, buying SSDs or a niche storage array, STOP what you’re doing immediately, test this feature and then have a beer to celebrate We just put IOPS ON NOTICE! • http://blogs.citrix.com/2014/07/22/citrix-puts-storage-on-notice/ • Now all you really have to worry about are the IOPS associated with things like the pagefile and event logs
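
As a back-of-the-envelope illustration of those buffer recommendations (the host densities and base RAM figures below are assumptions, not from the webinar), the extra RAM is modest because the buffer simply comes out of each VM's assigned memory via non-paged pool:

```python
# Rough RAM sizing for the recommended wC buffers.
# The buffer lives in each VM's non-paged pool, so it is simply added to the
# RAM assigned to every VM. VM counts and base RAM figures are assumptions.

def host_ram_gb(vms, base_ram_gb, wc_buffer_gb):
    """Total host RAM (GB) = VMs x (RAM for OS/apps + wC buffer)."""
    return vms * (base_ram_gb + wc_buffer_gb)

xd_vms, xd_buffer = 100, 0.5   # 100 Win7 VMs with a 512 MB buffer each
xa_vms, xa_buffer = 8, 4       # 8 XenApp VMs with a 4 GB buffer each

xd_total = host_ram_gb(xd_vms, base_ram_gb=2.5, wc_buffer_gb=xd_buffer)
xa_total = host_ram_gb(xa_vms, base_ram_gb=16, wc_buffer_gb=xa_buffer)

print(f"XenDesktop host: {xd_total:.0f} GB total, {xd_vms * xd_buffer:.0f} GB of that is wC buffer")
print(f"XenApp host:     {xa_total:.0f} GB total, {xa_vms * xa_buffer:.0f} GB of that is wC buffer")
```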
  • 27. © 2014 Citrix. Confidential.27 WORK BETTER. LIVE BETTER.