VMworld 2010 - Building an Affordable vSphere Environment for a Lab or Small Business

Slide deck from VMworld 2010 session - “Building an Affordable vSphere Environment for a Lab or Small Business”


Speaker notes:
  • Simon S: Why build a vSphere lab? Small business use, exam study, hands-on learning, and a centralized home infrastructure.
  • Simon S: There are many components that make up a vSphere lab: server; storage (physical and virtual (VSA)); network (switches and, in some cases, routers, though there are virtual appliance router options); hypervisor (vSphere ESX/ESXi); operating system (e.g. Windows, Red Hat); power and cooling, a particular consideration if running your lab from home; and time. Large amounts of time can be spent building and working with your lab. Be warned.
  • Simon S: vSphere lab servers come in a range of different sizes and form factors, varying in age, physical resource capability and manufacturer: laptop/desktop PC, white box, entry-level server, old enterprise server.
  • Simon S: You can never have enough memory. In the average lab and production vSphere environment you will experience memory limitations before those of any of the other physical server resources such as CPU, network and often storage (though providing insufficient IOPS to a VM is also a common source of performance bottlenecks). Most laptops, PCs and white-box solutions based on commodity mother/system boards will only have 4-6 DIMM sockets with an 8GB limit. This is of course changing with time as higher-capacity DIMMs become the norm; more high-end commodity mother/system boards are now starting to provide 12GB+ of maximum memory capacity as standard. Even entry-level SMB servers such as the HP ML110/115 have a relatively limited maximum memory configuration of 8GB. The benefit of using enterprise-level servers is that they provide more DIMM sockets, though the downside is that populating these DIMM sockets with enterprise-level registered memory can be a costly affair.
  • Simon S: Error Correcting Code (ECC) memory: this type of memory is often found in servers, as it is able to detect multiple-bit errors and correct single-bit errors during the transmission and storage of data on the DIMM. On ECC memory DIMMs there are extra modules that store parity or ECC information. ECC memory is generally (though not always on low-end DIMMs) more expensive than non-ECC. Registered (aka buffered) and unregistered memory: often confused with ECC/non-ECC memory. Registered memory contains a register on the DIMM that operates as a temporary holding area (buffer) for address and command signals moving between the memory module and the CPU, which increases the reliability of the data flow to and from the DIMM. It is almost always found only in enterprise-level servers.
  • Simon S: Most home lab switches will be Layer 2 (i.e. non-routing). For routing within a vSphere lab environment, consider using the popular Vyatta router virtual appliance; there is a free version! What to look for in a network switch: VLAN tagging, QoS, jumbo frames. Popular gigabit switches: Linksys SLM-series smart switches, HP ProCurve 1810G.
Transcript: VMworld 2010 - Building an Affordable vSphere Environment for a Lab or Small Business

    1. Building an affordable vSphere environment for a lab or small business
       Presented by: Eric Siebert (vSphere-land), Simon Seagrave (TechHead), Simon Gallagher (vinf)
    2. Goal of this session
       - This session covers how to build an affordable vSphere home lab or an environment for use in a small business
       - Virtualization doesn't have to be expensive; we'll show you how you can use vSphere on a budget
       - We'll explain how to navigate the many different options you will face when architecting a small vSphere environment so you can make the right decisions
    3. Why build a vSphere lab?
       Common reasons...
       - Exam study
       - Hands-on learning
       - Home infrastructure
       - Because you can...
    4. What makes up a vSphere lab?
    5. Hardware Compatibility Guide
       - The HCG lists all the hardware components that are supported by each version of ESX & ESXi
       - Split into sub-guides covering systems (server makes/models), storage devices (SAN/iSCSI/NFS) and I/O devices (NICs/storage controllers)
    6. Hardware Compatibility Guide
       - Updated frequently, with new hardware being added and older hardware removed
       - Why is this guide important?
         - ESX/ESXi has a limited set of hardware device drivers
         - VMware only provides support for server hardware that is listed on the HCG
    7. Hardware Compatibility Guide
       - Hardware may still work even if not listed on the HCG
       - The critical area is I/O adapters
       - Vendors are responsible for certifying their hardware for the HCG
       - They must fill out an application; after VMware approval, a third-party testing lab certifies the hardware for vSphere
    8. Hardware Compatibility Guide
       - VMware does not enforce an expiration period for hardware added to the HCG; it is up to each vendor to certify their hardware for the most current VMware product releases
       - VMware GSS will provide support for vSphere running on hardware not listed on the HCG if the problem is not hardware related
    9. Hardware Compatibility Guide
       - Check the guide before buying hardware!
       - Also check unofficial guides (vm-help.com)
       - For newer hardware not yet listed on the HCG, contact the hardware vendor
    10. Ensuring Hardware Compatibility
        - If you plan on using features that require specific hardware (e.g. Fault Tolerance), do your homework
        - Check with vendors to see if they have the required hardware (e.g. Intel VT-d), and also check the HCG
        - CPU choice can be critical; check the VMware KB and Intel/AMD websites for CPU features
    11. Ensuring Hardware Compatibility
        - Checking for CPU P-state/C-state support can be tricky
        - Make no assumptions with I/O adapters; onboard whitebox NICs are often not supported
        - SATA adapters are OK, but SATA with RAID is not supported
        - Almost all shared storage will work
    12. Features that require specific server hardware
    13. vSphere Lab Servers
        - vSphere lab servers come in all shapes & sizes...
    14. vSphere Lab Server – Branded PC/Laptop
        Ideal for VMware Workstation or Server use
        - Low cost
        - Laptop: a highly portable vSphere lab
        - Easy to obtain
        - Cheap to run
        - Quiet
        - Limited compatibility (using ESX/ESXi)
        - Small memory capability
        - Potential vSphere compatibility issues
    15. vSphere Lab Server – White Box
        Build your own!
        - Fun (if you enjoy this type of thing)
        - More bang for your buck: cheaper CPU & memory, more recent hardware
        - Cheap to run (compared to a server)
        - Unlikely to be on the VMware Compatibility List
        - Need some hardware know-how
        - Potential vSphere compatibility issues
        - Lacking enterprise-level server features such as hot-pluggable drives and general hardware resilience
    16. vSphere Lab Server – Entry Level Server
        Many of the mainstream server manufacturers offer an SMB entry-level server
        - Reasonable cost
        - Branded hardware
        - Usually quiet
        - Brand familiarization, e.g. management utilities
        - Larger memory capacity
        - Some enterprise server features, e.g. Xeon/Opteron CPU, hardware-based array controller
        - Unlikely to be on the VMware Compatibility List
        - Lacking mid/high-end enterprise-level features
        - Potential vSphere compatibility issues
    17. vSphere Lab Server – Old Enterprise Server
        Give an old dog a new home...
        - Cheap (or free) to obtain
        - Use vendor enterprise-level utilities
        - More CPU sockets & disks
        - Resilience, e.g. hard disks, fans, PSUs
        - Hardware-based remote management capability
        - Memory DIMMs hold their price (expensive)
        - Costly to run
        - Noisy
    18. CPU Considerations 101
        - AMD CPU: AMD-V
        - Intel CPU: EM64T & Intel VT
        - See VMware Knowledge Base article http://kb.vmware.com/kb/1003945 for more details regarding the prerequisites for running x64-based VMs
        - Ensure AMD-V or Intel VT is enabled in the BIOS
        - Hyperthreading?
        - Use the same processor make & model if you want to use "fun" features such as VMotion, incl. DRS and HA
    19. CPU Considerations – CPU ID
        - For CPU details, including 64-bit support, use the CPU ID utility from VMware
        - Download from http://www.vmware.com/download/shared_utilities.html
    20. CPU Considerations – EVC
        - Enhanced VMotion Compatibility (EVC) is designed to further ensure CPU compatibility between ESX hosts
    21. CPU Considerations – FT
        - List of Fault Tolerance (FT) compatible CPUs: http://kb.vmware.com/kb/1008027
        - Also see VMware SiteSurvey
    22. CPU Considerations – Power Saving
        - Enhanced SpeedStep by Intel
        - Enhanced PowerNow! by AMD
        - These technologies enable a server to dynamically switch CPU frequencies and voltages (referred to as Dynamic Voltage & Frequency Scaling, or DVFS)
    23. Memory
        - Memory is king!
        - DIMM sockets: the more the merrier
    24. Memory – ECC & Registered
        - More lower-capacity DIMMs vs. fewer higher-capacity DIMMs
        - ECC or non-ECC? (That is the question)
        - Registered vs. unregistered DIMMs
    25. Disks & Storage Controller
        - The most problematic component with regard to compatibility
        - Lots of choices: RAID, SAS, SATA, SSD; IOPS versus capacity
        - ESXi can be run from a USB memory stick/SD card, and if a shared storage appliance is used the local disk controller is not important
    26. Disks & Storage Controller
        - Onboard RAID controllers on entry-level servers & SMB/home-level mother/system boards are often insufficient
        - Dedicated hardware-based (e.g. PCIe) array controllers are preferable
        - Do you actually need RAID in your lab? For production use, RAID is essential!
    27. Networking
        A few basic questions:
        - How many NICs?
        - Using VLANs?
        - What ESX/ESXi features?
        - NIC expansion options: PCI, PCI-X, PCIe
        - NIC speeds: Gigabit highly recommended
    28. Networking – # of Ports
    29. Networking
        - Popular PCIe-based network card models are the Intel Pro 1000 PT/MT and the HP NC380T
        - Quad-port cards are good but $$$$
        - eBay is a good source of second-hand cards
    30. Networking – Switches
        - A Layer 2 switch is sufficient for most lab or SMB environments
        - Features to look for:
          - Gigabit ports
          - Managed or smart switch
          - VLAN tagging (IEEE 802.1Q) (see the PowerCLI sketch below)
          - QoS
          - Jumbo frames
        - Use the Vyatta Core virtual appliance for routing requirements (it's free!)
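
       To illustrate the VLAN tagging mentioned above, here is a minimal, hypothetical PowerCLI sketch that creates VLAN-tagged port groups on a host vSwitch; the host name, vSwitch name and VLAN IDs are made-up examples rather than values from the session:

          # Hypothetical example: create VLAN-tagged port groups on an ESX/ESXi host.
          # Host name, vSwitch name and VLAN IDs are placeholders.
          Connect-VIServer -Server esx01.lab.local

          $vSwitch = Get-VirtualSwitch -VMHost (Get-VMHost -Name "esx01.lab.local") -Name "vSwitch0"

          # One port group per traffic type, each tagged with its own VLAN ID
          New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "Management" -VLanId 10
          New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "iSCSI"      -VLanId 20
          New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "vMotion"    -VLanId 30
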
    31. Installing ESXi onto a USB flash drive
        - A very convenient and easy way to use ESXi
        - Simple requirements: a 1GB flash drive and the ESXi Installable ISO image
    32. Installing ESXi onto a USB flash drive
        - You can use any flash drive, but it is officially supported only on hardware-vendor-supplied flash drives
        - Performance can vary widely between brands, sizes & models
        - The server must support booting from a USB drive
        - Use an internal USB port rather than an external one
    33. Installing ESXi onto a USB flash drive
        - Install ESXi as normal but select the USB flash drive as the target
        - You can also use Workstation to install to a VM
        - Quality flash drives can last many years and over 10,000 write cycles
        - Use USB image tools to clone or back up flash drives
    34. Shared Storage – Physical Devices
        - Lots of devices to choose from
    35. Shared Storage – Physical Devices
        - Popular devices include:
    36. Shared Storage – Physical Devices
        - When using shared storage, Gigabit networking is a must
        - iSCSI/NFS support is built into vSphere and works with any pNICs
        - Most affordable shared storage devices are listed on the vSphere HCG
        - Many units have lots of advanced features and are multi-functional, with multiple RAID levels & multiple NICs
    37. Shared Storage – Physical Devices
        - Choosing between iSCSI & NFS is often personal preference
        - They offer similar performance but have different characteristics
        - Some storage units support both
        - Budget often dictates what you get; in general, the more you spend, the better performance you'll get
    38. Shared Storage – Physical Devices
        - Many units offer special RAID technology; try not to mix drive speeds/sizes
        - More spindles = better performance
        - Many units are expandable
        - Low-cost rackmount units are available as well (Synology RS409, Iomega ix12-300r, Netgear ReadyNAS 2100)
    39. Shared Storage – VSAs
        - Virtual Storage Appliances (VSAs) can turn local storage into iSCSI/NFS shared storage
        - Can run physical or virtual
        - Available to any host
        - Can be cheaper than buying a dedicated device
        - More complicated to set up and maintain
    40. Shared Storage – VSAs
        - Many VSA products to choose from
        - Paid products offer more features such as clustering, replication and snapshots
    41. Shared Storage – VSAs
        - OpenFiler is a popular choice
        - Available as an ISO image to install bare-metal on a server, or as a pre-built virtual machine
        - Managed via a web browser
        - Many advanced features: NIC bonding, iSCSI or NFS, clustering
        - Paid support is available
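
       Once a VSA such as OpenFiler is presenting storage, each host still has to mount it. A minimal, hypothetical PowerCLI sketch for an NFS export (the appliance address, export path and datastore name are placeholders, not values from the deck):

          # Hypothetical example: mount an NFS export from a VSA as a datastore
          # on every host in the lab. Address, path and names are placeholders.
          Connect-VIServer -Server vcenter.lab.local

          Get-VMHost | ForEach-Object {
              New-Datastore -VMHost $_ -Nfs -Name "openfiler-nfs" `
                            -NfsHost 192.168.1.50 -Path "/mnt/vg0/nfs01"
          }
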
    42. vSphere Editions
    43. Must Have Software
    44. vTARDIS:nano Architecture
    45. Transportable Awfully Revolutionary Datacentre of Invisible Servers {small} (vT.A.R.D.I.S:nano)
        - 1 x physical HP ML115 G5 with 8GB RAM
        - 128GB SSD
        - iSCSI virtual SAN(s)
        - vSphere 4 Classic
        - 8 x ESXi virtual machines
        - 60 x nested virtual machines
        - It's bigger on the inside than the outside
    46. Nested VMs – .VMX Hackery
        - Aspirin at the ready...
        - ESX as a virtual machine, running its own virtual machines
        - Run a VM INSIDE another VM
        - This isn't a supported configuration, but hey, it's for lab/playing
        - To enable VMs to be run inside another VM, set monitor_control.restrict_backdoor to TRUE on the virtual ESXi hosts only (see the .vmx snippet below)
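
       In practice this is a single line added to the .vmx file of each virtual ESX/ESXi host while that VM is powered off; the setting name comes from the slide, and the quoting follows standard .vmx syntax:

          monitor_control.restrict_backdoor = "TRUE"
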
    47. Nested ESX, cool... but what about nested...?
        - Hyper-V: with .VMX hacks you can install the role in a VM, but it cannot run nested VMs (not possible)
        - XenServer: can run nested Linux VMs (not tried); can't run nested Windows VMs
    48. vTARDIS:nano Demo
        - VM provisioning script (PowerShell) (see the sketch below)
        - It's bigger on the inside than it is outside
        - .VMX hackery
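
       The actual vTARDIS provisioning script is not reproduced in the deck; as a rough sketch of the kind of PowerCLI loop it implies, deploying the eight virtual ESXi host shells from a prepared template might look like this (vCenter, host, template and datastore names are invented placeholders):

          # Rough sketch only: stamp out eight virtual ESXi host shells from a
          # prepared template (empty VM with the install ISO attached).
          # vCenter, host, template and datastore names are placeholders.
          Connect-VIServer -Server vcenter.lab.local

          1..8 | ForEach-Object {
              New-VM -Name ("vesxi{0:D2}" -f $_) `
                     -Template (Get-Template -Name "ESXi-Shell-Template") `
                     -VMHost (Get-VMHost -Name "ml115.lab.local") `
                     -Datastore (Get-Datastore -Name "ssd-local") |
                  Start-VM
          }
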
    49. T.A.R.D.I.S Configuration Notes
        - Separate VLANs for storage, vMotion, FT and management
        - ESX VM template with multiple vNICs & a mounted .ISO, ready to start the install (or use PXE)
        - Do not clone an installed ESXi/Classic!
        - Physical host: set the vSwitch to allow promiscuous mode, otherwise guest VM networking will not work (see the sketch below)
        - Pay attention to the maximum number of ESX hosts per single shared LUN (or it will stop working)
        - A nested VM with FT needs further .VMX hackery and doesn't work brilliantly, but is OK for learning the configurations
        - Virtual ESX servers need the monitor_control.restrict_backdoor TRUE setting to run nested VMs
        - An AMD CPU is required to run nested virtual machines; it does not work on any Intel Xeon CPU I have tried
        - AMD-V Nested Paging feature
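
       Allowing promiscuous mode on the physical host's vSwitch can be done in the vSphere Client, or with a PowerCLI snippet along these lines; this assumes a PowerCLI release that includes Get-SecurityPolicy/Set-SecurityPolicy, and the host and switch names are placeholders:

          # Sketch: allow promiscuous mode on the physical host's vSwitch so that
          # traffic for nested guest VMs is not dropped. Names are placeholders.
          Get-VirtualSwitch -VMHost (Get-VMHost -Name "ml115.lab.local") -Name "vSwitch1" |
              Get-SecurityPolicy |
              Set-SecurityPolicy -AllowPromiscuous $true
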
    50. vTARDIS – Network Diagram: physical host network config, with an admin network (10.0.0.x), VMkernel ports for the physical hosts, and VM networks for the guest iSCSI and vMotion VLANs
    51. vTARDIS – Network Diagram: virtual ESXi guest network; the VMkernel ports in the ESXi guests are really vNICs, and there is no need to specify a VLAN tag as it is done on the host
    52. vTARDIS:nano Networking
        - All in-memory, no external switching
        - Cross-over cable to the admin console (my laptop)
        - Physical vSwitch set to promiscuous mode
        - dvSwitch for VM traffic
    53. Layer 3 Routing
        - Complete software solution
        - Multiple vNICs mapped to VLANs
        - Simple routing configuration (see the sketch below)
        - Vyatta Core virtual router community edition (free)
        - Internet access: Smoothwall or IPCop, open-source firewall/NAT router and proxy
        - Simon Gallagher (vinf.net), VMworld 2010
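
       As a flavour of how simple that routing configuration can be, a hypothetical Vyatta Core session that puts one interface on each VLAN-backed port group might look like this (addresses are invented):

          # Inside the Vyatta Core VM: give each vNIC an address on its VLAN's subnet;
          # the router then forwards between the VLANs. Addresses are placeholders.
          configure
          set interfaces ethernet eth0 address 10.0.0.254/24
          set interfaces ethernet eth1 address 10.0.1.254/24
          commit
          save
          exit
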
    54. Storage – Performance
        - SSD & SATA combo is the way to go
        - 128GB SSD: lots of IOPS! ~$400
        - OpenFiler virtual machine with a 30GB VMDK on the SSD
        - iSCSI target for the ESXi cluster nodes
        - All disk access stays in-memory on the host, no physical networking
        - Heavy use of thin provisioning & linked clones
    55. Thank you!
        - www.vsphere-land.com
        - www.techhead.co.uk
        - vinf.net
