STS VMware Domino Presentation for GLUG

My best practices presentation from the Greenville Lotus User Group (GLUG) from Q3 2009.


Running IBM Lotus Domino on VMware
  • Or "We want to virtualize absolutely everything"
  • Fall 2009, Greenville, SC Lotus User Group
  • Darren Duke, Technical Lead, STS

About me – my fav slide
  • Domino consultant for over a decade
  • Domino, VMware and BlackBerry certified
  • http://blog.darrenduke.net
  • Oodles of DAOS and VMware experience

Agenda
  • Myths, truths and old wives' tales
  • Should you virtualize?
  • Easy ones and the basics
  • Performance
  • Domino infrastructure
  • VMware infrastructure
  • Finally...

Myths, Truths and Old Wives' Tales
  • Can you run Domino on VMware?
    ◦ Yes, but only with proper planning, testing and tuning
  • One should not run high-I/O apps (like e-mail) on VMware
    ◦ False, but you should plan, test and tune
  • The bottlenecks are not always where you think

Why you should virtualize
  • Your boss tells you that you have to ;)
  • You have a business case:
    ◦ DR/HA via VMware Site Recovery, et al.
    ◦ Consolidation/upgrade refresh
    ◦ Consolidation of servers
    ◦ Easing issues with hardware upgrades
    ◦ Your current Domino server is 15 years old
      ▪ No, really, we see this all the time

Why you should NOT virtualize
  • Your boss tells you that you have to
  • You are doing it to be "cool"
  • You are lacking a specific business case
  • You are using a pSeries or an iSeries
    ◦ Really? You want this kind of headache?
    ◦ You already have 99.999% up-time
  • You have iNotes users and run Windows
  • To replace Domino clustering

The easy ones
  • Do not use any P2V tool
    ◦ Rebuild it, they will come
    ◦ Crap in, crap out
  • Start small; pick BES, not a 2,000-user mail server
    ◦ You will learn a whole lot!
  • Know what your current environment is doing before you virtualize it

The easy ones – cont.
  • Know your hardware
    ◦ And the impact Domino 8.5.x will have on it
  • Are you currently using shared storage?
    ◦ Are you moving to it during this "migration"?
  • Know the license ramifications
    ◦ Speak to your IBM Partner about this. This is important!
    ◦ PVU to vPVU, Nehalem, etc.

The easy ones – cont.
  • Domino virtualization is a team sport
    ◦ Domino admins
    ◦ SAN admins
    ◦ Network admins
    ◦ VM admins
  • But each has a different agenda
    ◦ You can please some of the people some of the time...

The Basics
  • Domino runs best on a single vCPU
    ◦ Try it, you'll see; however, try to keep your v-specs the same as a physical server
  • Storage options
    ◦ As fast as you can afford, in both drive speed and connectivity
      ▪ 15k+ RPM and smaller-sized drives are better
    ◦ RAID 10 can be your friend
    ◦ Local
    ◦ SAN/NAS

The Basics – cont.
  • We are talking about ESX and ESXi
    ◦ Not VMware Server
    ◦ Not VMware Workstation
    ◦ And certainly not Hyper-V
  • Yes, ESXi is absolutely fine
    ◦ Buy support if you plan to run in production
      ▪ Platinum = 24x7
      ▪ Gold = 12x5

The Basics – cont.
  • There is currently an issue with ESX and Windows Domino web servers
    ◦ Sluggish response
    ◦ VMware is aware of the issue
    ◦ See IBM technote 1331074
  • Never, ever, let the server RAM balloon
    ◦ Give it all the RAM it wants
    ◦ vSphere 4 is your friend

Performance – RAM
  • If you are using 64-bit Windows
    ◦ Use a 64-bit OS
    ◦ Use a 64-bit Domino server
    ◦ Give it as much RAM as you can
  • For 32-bit Windows
    ◦ Give it 4GB of RAM
  • Enable "unlimited" memory in the VIC

Performance – RAM – cont.
  • If you are using Linux
    ◦ It doesn't have the RAM issues Windows has
    ◦ Give it 4GB of RAM
  • Do not, ever, let the server RAM balloon
    ◦ Give it all the RAM it wants
    ◦ vSphere 4 is your friend
      ▪ Hot add

Performance – SAN disks
  • A single LUN per VM disk
    ◦ Do not share!
      ▪ This is why RDMs can look, feel and behave faster
    ◦ This can be a VMDK (see above)
  • Separate LUN for OS, page file and Domino code
  • Separate LUN for Domino data
  • Separate LUN for transaction logs
  • Yes, your SAN admin will hate you!

Performance – SAN disks – cont.
  • Neither NFS nor 1Gb iSCSI is recommended
  • Fast HBA and fabric
    ◦ 4Gb is 2x faster than 2Gb
    ◦ 8Gb is 2x faster than 4Gb
    ◦ No, it really is that simple
  • Follow best practices for your SAN and fabric
    ◦ Be sure to align if you need to

Performance – Other disks
  • Local disk
    ◦ Multiple servers on the same local disk... NO!
      ▪ Not supported by IBM
      ▪ Well, maybe if you have 10 or so users
    ◦ RAID 10 is your friend
    ◦ Can use local disk for transaction logging at low user counts
      ▪ < 250 users; be sure to test
  • NFS
    ◦ Use this only for ISO and exe storage

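The dedicated transaction-log disk above is pointed to by a handful of notes.ini settings. A minimal sketch, with an illustrative path and annotated with comments that a real notes.ini would not contain:

```ini
TRANSLOG_Status=1          ; 1 = transaction logging enabled
TRANSLOG_Path=E:\translog  ; dedicated local disk or LUN, per the slides
TRANSLOG_Style=0           ; 0 = circular logging, 1 = archive-style
```
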
Performance – Stats
  • Domino statistics
    ◦ Disk queue length should be as close to 2 as possible
    ◦ Degraded if >= 12, significantly so
  • ESX
    ◦ esxtop is your friend; see what your server is doing
    ◦ Disk latency
      ▪ 5ms is ideal
      ▪ >= 10ms needs looking at

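The queue-length and latency figures above can be read from the Domino console and from esxtop; a hedged sketch (exact statistic names vary by Domino release and OS platform):

```
show stat Platform.LogicalDisk.*    -- Domino console: per-disk queue length and service time
esxtop  (press 'd')                 -- ESX disk view: DAVG/cmd is device latency in ms
```
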
Performance – Stats – cont.
  • If you have an issue, it can be a needle in a haystack
    ◦ SAN cache
    ◦ Incorrect fiber configuration
    ◦ Slow SAN
    ◦ HBA configuration issues
  • Know your hardware before you load it
  • iSCSI @ 10Gb Ethernet
  • Fiber @ 4+ Gbps (8 if you plan on scaling)

Performance – Disk types
  • Like religion, politics and anti-virus providers...
  • VMDK vs RDM
    ◦ I personally have seen better performance post-implementation using RDM (see below on why)
    ◦ However, IF you adhere to one VMDK per LUN
      ▪ This can be faster, and is recommended
  • Bottom line: test, test, test
    ◦ Prior to implementation
  • Align if needed – http://tinyurl.com/y3gdup

Performance – Networking
  • Segment different traffic onto separate physical NICs
    ◦ Server to server (non-cluster)
      ▪ Replication
      ▪ Mail routing
    ◦ Server to client, client to server
    ◦ Clustering
  • Remember the 4-vNIC max per VM; use them
  • If you have the CPU cycles, compress the TCP port traffic (on Domino)

Performance – Networking – cont.
  • If your bottleneck is not disk I/O, then
    ◦ It is probably NIC related
    ◦ NICs are cheap, yet time and time again we see issues in this area
    ◦ It could be your switches or their configuration
      ▪ Linksys != Cisco :)

Performance – Domino
  • Disable all unused tasks in the server's notes.ini
  • Disable transaction logging for ancillary NSF files
    ◦ See Andy Pedisich's blog, http://tinyurl.com/lqwv8v
  • Make sure your VMDK versions are updated
    ◦ They should match your ESX version
    ◦ ESX 3.0 has much faster I/O than 2.x
  • Domino 8.5.x has 30-35% less I/O
  • Prevent ballooning at all costs

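The first two bullets translate into notes.ini and console changes along these lines; the task list and database name are illustrative, not from the deck:

```ini
; notes.ini: a trimmed ServerTasks line (example only - keep the tasks you actually use)
ServerTasks=Replica,Router,Update,AMgr,Adminp
```

And, per the Pedisich tip, transaction logging can be switched off for a single ancillary database from the console with compact's -t switch, e.g. `load compact log.nsf -t`.
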
Performance – Domino – cont.
  • Are you sure you need to AV-scan EVERY write?
    ◦ Investigate having a central AV Domino server
    ◦ Maybe even (shock!) a non-VM
  • Install VMware Tools (and keep them updated)
    ◦ Ensure OS time is synced
  • Separate LUNs
  • Start with 1 vCPU
    ◦ If you must use 2, check that both are being used
    ◦ UPDATERS=x (where x is the vCPU count)

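The UPDATERS bullet above, as it would appear in notes.ini for a two-vCPU guest:

```ini
UPDATERS=2   ; one update (indexer) task per vCPU
```
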
Domino Infrastructure
  • Using LDAP?
    ◦ Create a Domino server just for that
    ◦ You can have more than one LDAP server
  • Move the Administration Server to a distinct Domino server; it makes future upgrades simple
  • You may need to mix and match drive types
    ◦ VMDK for data
    ◦ RDM for transaction logs

Domino Infrastructure – cont.
  • N/D 8.5.1 and DAOS is your friend
    ◦ Server-to-server replication
      ▪ DAOS will NOT resend known NLOs
      ▪ Does not work for clustering
    ◦ Client to server
      ▪ Reply, reply-to-all and forward will NOT send (from the client) known NLOs
    ◦ Less network, less I/O, less CPU

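Once DAOS is enabled in the server document, each database is converted with the compact task; a sketch with an illustrative mail-file path:

```
load compact mail\jdoe.nsf -c -daos on
```
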
Domino Infrastructure – cont.
  • Do not try to match your physical servers
    ◦ One 8-way x64 box != one single-vCPU ESX guest
    ◦ Split the load between many, smaller guests
    ◦ Keep away from 4-vCPU guests
      ▪ Indeed, try to keep to 1 vCPU
  • Do not share NICs with Domino
    ◦ Give each Domino guest a dedicated NIC
    ◦ Compress the TCP port on server AND client

VMware Infrastructure
  • Watch your shares
    ◦ RAM, CPU and disk
    ◦ Assign as appropriate
  • Jumbo frames and VLANs can be your friend
  • Do you really need to DRS or HA Domino?
    ◦ Domino clustering is much, much easier
    ◦ High-I/O loads are slow to DRS
  • Do not over-commit resources on Domino hosts

VMware Infrastructure – cont.
  • Remove snapshots as soon as practicably possible
  • Don't forget to defrag Windows guests
  • vSphere 4 can be 3-10% faster depending on loads
    ◦ It only runs on x64 host hardware
    ◦ For 32-bit hosts you will still need ESX 3.5
  • Intel Nehalem CPUs can provide a boost with 4.x

VMware Infrastructure – cont.
  • Keep your ESX servers patched and current
    ◦ Including update (U) levels
  • Watch for updated drivers from VMware
    ◦ See if they are a better match for your environment
    ◦ Specifically NIC drivers, jumbo frames, etc.

And Finally...
  • There is no silver bullet – sorry
  • Each VMware environment is different
  • Test, test and test
  • Try different configurations
    ◦ Server.Load / NotesBench
  • In production, be sure to monitor
    ◦ VMware AppSpeed
  • YMMV (your mileage may vary)

We are here to help
  • For further information, or to schedule services, contact:
    ◦ Lisa Duke, [email_address] or 678 378 4278
    ◦ Ernie Sutter, ernie.sutter@simplified-tech.com or 404 931 5786
  • Lots more information on the STS web site and blog:
    ◦ http://www.simplified-tech.com
    ◦ http://blog.darrenduke.net
    ◦ Twitter – be sure to follow darrenduke and simplifiedtech
  • We are an authorized IBM, RIM, VMware and Symantec reseller for new sales and renewals
  • R6.5 reaches End of Life in April 2010.
