VMworld 2010 - Building an Affordable vSphere Environment for a Lab or Small Business

Slide deck from VMworld 2010 session - “Building an Affordable vSphere Environment for a Lab or Small Business”


Published in: Technology

Speaker notes
  • Eric
  • Simon S: Why build a vSphere lab? Small Business, Exam Study, Hands-on Learning, Centralized Home Infrastructure.
  • Simon S: There are many components that make up a vSphere lab: Server; Storage (physical & virtual (VSA)); Network (switches and, in some cases, routers – though there are virtual appliance router options); Hypervisor (vSphere ESX/ESXi); Operating System (e.g. Windows, Red Hat); Power & Cooling – a particular consideration if running your lab from home; and Time – large amounts of time can be spent building and working with your lab, be warned.
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Simon S: vSphere lab servers can come in a range of different sizes and form factors – all varying in age, physical resource capabilities and manufacturer: Laptop/Desktop PC, White Box, Entry Level Server, Old Enterprise Server.
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S:
  • Simon S: You can never have enough memory. In the average lab or production vSphere environment you will hit memory limitations before those of the other physical server resources such as CPU, network and often storage (though providing insufficient IOPS to a VM is also a common source of performance bottlenecks). Most laptops, PCs and white box solutions based on commodity mother/system boards will only have 4-6 DIMM sockets with an 8GB limit. This is changing with time as higher capacity DIMMs become more of the norm, and more high-end commodity mother/system boards now provide 12GB+ of maximum memory capacity as standard. Even entry level SMB servers such as the HP ML110/115 have a relatively limited maximum memory configuration of 8GB. The benefit of using enterprise level servers is that they provide more DIMM sockets, though the downside is that populating those sockets with enterprise level registered memory can be a costly affair.
  • Simon S: Error Correction Code (ECC) memory – this type of memory is often found in servers, as it is able to detect multiple-bit and correct single-bit errors during the transmission and storage of data on the DIMM. ECC memory DIMMs carry extra modules that store parity or ECC information. ECC memory is generally (though not always on low end DIMMs) more expensive than non-ECC. Registered (aka buffered) and unregistered memory – often confused with ECC/non-ECC memory. Registered memory contains a register on the DIMM that operates as a temporary holding area (buffer) for address and command signals moving between the memory module and CPU, which increases the reliability of the data flow to and from the DIMM. It is almost always found in enterprise level servers only.
  • Simon S:
  • Simon S:
  • Simon S: Most home lab switches will be Layer 2 (i.e. non-routing). For routing within a vSphere lab environment consider using the popular Vyatta router virtual appliance – there is a free version! What to look for in a network switch: VLAN tagging, QoS, Jumbo Frames. Popular gigabit switches – Linksys SLM series smart switches, HP ProCurve 1810G.
  • Simon S:
  • Simon S:
  • Simon S:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Eric:
  • Simon S:
  • Simon S & Eric:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Simon G:
  • Transcript

    • 1. Building an affordable vSphere environment for
      a lab or small business
      Presented By:
      • Eric Siebert (vSphere-land)
      • 2. Simon Seagrave (TechHead)
      • 3. Simon Gallagher (vinf)
    • Goal of this session
      This session will cover how to build an affordable vSphere home lab or an environment for use in a small business.
      Virtualization doesn’t have to be expensive; we’ll show you how you can use vSphere on a budget
      We’ll explain how to navigate through the many different options you will face when architecting a small vSphere environment so you can make the right decisions
    • 4. Why build a vSphere lab?
      Common Reasons…
      • Exam Study
      • 5. Hands On Learning
      • 6. Home Infrastructure
      • 7. Because you can...
    • What makes up a vSphere lab?
    • 8. Hardware Compatibility Guide
      HCG lists all the various hardware components that are supported by each version of ESX & ESXi
      Split up into different sub-guides which include systems (server make/models), storage devices (SAN/iSCSI/NFS) and I/O devices (NICs/Storage Controllers)
    • 9. Hardware Compatibility Guide
      Updated frequently with new hardware being added and older hardware removed
      Why is this guide important?
      ESX/ESXi has a limited set of hardware device drivers
      VMware only provides support for server hardware that is listed on the HCG
    • 10. Hardware Compatibility Guide
      Hardware may still work if not listed on the HCG
      Critical area is with I/O adapters
      Vendors are responsible for certifying their h/w for the HCG
      Must fill out an application; after VMware approval a 3rd party testing lab certifies the h/w for vSphere
    • 11. Hardware Compatibility Guide
      VMware does not enforce an expiration period for h/w added to the HCG, up to each vendor to certify their h/w for the most current VMware product releases
      VMware GSS will provide support for vSphere running on h/w not listed on HCG if problem is not h/w related
    • 12. Hardware Compatibility Guide
      Check guide before buying h/w!
      Also check un-official guides (vm-help.com)
      For newer hardware not yet listed on HCG contact h/w vendor
    • 13. Ensuring Hardware Compatibility
      If you plan on using features that require specific h/w (e.g. Fault Tolerance), do your homework
      Check with vendors to see if they have the required h/w (e.g. Intel VT-d), also check HCG
      CPU choice can be critical, check VMware KB and Intel/AMD websites for CPU features
    • 14. Ensuring Hardware Compatibility
      Checking for CPU p-state/c-state support can be tricky
      Make no assumptions with I/O adapters, on-board whitebox NICs are often not supported
      SATA adapters are OK, but SATA w/RAID is not supported
      Almost all shared storage will work
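      A quick PowerCLI query against an existing host makes it easier to compare the server model, CPU and I/O adapters with the HCG and the CPU-feature lists. A minimal sketch (host name and credentials are placeholders):

        # Minimal PowerCLI sketch – host name and credentials are placeholders
        Connect-VIServer -Server esx01.lab.local -User root -Password 'vmware'
        $esx = Get-VMHost

        # Server model and CPU details to check against the HCG / Intel / AMD feature lists
        $esx | Select-Object Name, Manufacturer, Model, ProcessorType,
            NumCpu, HyperthreadingActive, Version, Build | Format-List

        # NICs and storage controllers – the most common compatibility trouble spots
        Get-VMHostNetworkAdapter -VMHost $esx -Physical | Select-Object Name, Mac
        Get-VMHostHba -VMHost $esx | Select-Object Device, Type, Model, Status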
    • 15. Features that require specific server hardware
    • 16. vSphere Lab Servers
      vSphere lab servers come in all shapes & sizes…
    • 17. vSphere Lab Server – Branded PC/Laptop
      Ideal for VMware Workstation or Server use
      • Low Cost
      • 18. Laptop – highly portable vSphere lab
      • 19. Easy to obtain
      • 20. Cheap to run
      • 21. Quiet
      • 22. Limited compatibility (using ESX/ESXi)
      • 23. Small memory capacity
      • 24. Potential vSphere compatibility issues
    • vSphere Lab Server – White Box
      Build your own!
      • Fun (if you enjoy this type of thing)
      • 25. More bang for your buck
      - Cheaper CPU & Memory
      - More recent hardware
      • Cheap to run (compared to a server)
      • 26. Unlikely to be on the VMware Compatibility List
      • 27. Need some hardware know-how
      • 28. Potential vSphere compatibility issues
      • 29. Lacking Enterprise level server features such as hot pluggable drives and general hardware resilience
    • vSphere Lab Server – Entry Level Server
      Many of the mainstream server manufacturers offer
      an SMB entry level server
      • Reasonable cost
      • 30. Branded hardware
      • 31. Usually quiet
      • 32. Brand familiarization, eg: management utilities
      • 33. Larger memory capacity
      • 34. Some enterprise server features, eg: Xeon/Opteron CPU, hardware-based array controller
      • Unlikely to be on the VMware compatibility list
      • 35. Lacking med/high-end enterprise level features
      • 36. Potential vSphere compatibility issues
    • vSphere Lab Server – Old Enterprise Server
      Give an old dog a new home….
      • Cheap (or free) to obtain
      • 37. Use vendor enterprise level utilities
      • 38. More CPU sockets & disks
      • 39. Resilience, eg: hard disks, fans, PSU
      • 40. Hardware based remote management
      capability
      • Memory DIMMs hold their price – expensive
      • 41. Costly to run
      • 42. Noisy
    • CPU Considerations 101
      • AMD CPU: AMD-V
      • 43. Intel CPU: EM64T & Intel VT
      See VMware Knowledge Base article http://kb.vmware.com/kb/1003945 for more details regarding the prerequisites for running x64-based VMs.
      • Ensure AMD-V or Intel VT is enabled in BIOS
      • 44. Hyperthreading?
      • 45. Use the same processor make & model if you want to use “fun” features such as VMotion, incl. DRS and HA
    • 46. CPU Considerations – CPU ID
      For CPU Details including 64 bit details use CPU ID Utility from VMware
      Download from http://www.vmware.com/download/shared_utilities.html
    • 47. CPU Considerations - EVC
      Designed to further ensure CPU compatibility
      between ESX hosts
      Enhanced VMotion Compatibility (EVC)
    • 48. CPU Considerations – FT
      List of Fault Tolerance (FT) compatible CPUs:
      http://kb.vmware.com/kb/1008027
      Also, VMware SiteSurvey
    • 49. CPU Considerations – Power Saving
      • Enhanced SpeedStep by Intel
      • 50. Enhanced PowerNow! by AMD
      These technologies enable a server to dynamically switch CPU frequencies and voltages (referred to as Dynamic Voltage & Frequency Scaling or DVFS)
    • 51. Memory
      Memory is King!
      DIMM Sockets – the more the merrier
    • 52. Memory – ECC & Registered
      More lower capacity DIMMs vs. fewer higher capacity DIMMs
      ECC or Non-ECC? (That is the question)
      Registered vs. Non-Registered DIMMs
    • 53. Disks & Storage Controller
      Most problematic component with regard to compatibility
      Lots of choices: RAID, SAS, SATA, SSD. IOPS versus Capacity
      ESXi can be run from a USB memory stick/SD card, and if a shared storage appliance is used the local disk controller is not important
    • 54. Disks & Storage Controller
      • Onboard RAID controllers on entry level servers & SMB/Home level mother/system boards are often insufficient
      • 55. Dedicated hardware based (eg: PCIe) array controllers are preferable
      • 56. Do you actually need RAID in your lab? Production use = RAID essential!
    • Networking
      A Few Basic Questions?
      • How many NICs?
      • 57. Using VLANs?
      • 58. What ESX/ESXi features?
      • NIC Expansion Options:
      - PCI, PCI-X, PCIe
      • NIC Speeds – Gigabit highly recommended
    • Networking – # of Ports
    • 59. Networking
      • Popular PCIe-based network card models are the Intel PRO/1000 PT/MT and the HP NC380T
      • 60. Quad port cards are good but $$$$
      • 61. EBay a good source of 2nd hand cards
    • Networking - Switches
      • Layer 2 switch is sufficient for most lab or SMB environments.
      • 62. Features to look for:
      • 63. Gigabit ports
      • 64. Managed or Smart Switch
      • 65. VLAN tagging (IEEE 802.1Q)
      • 66. QoS
      • 67. Jumbo Frames
      • 68. Use Vyatta Core VA for routing
      requirements – it’s free!
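      If the physical switch supports 802.1Q, the ESX side of VLAN tagging is just tagged port groups on a vSwitch. A minimal PowerCLI sketch (the vSwitch name, uplink NIC and VLAN IDs are lab-specific assumptions):

        # Minimal sketch – vSwitch, uplink and VLAN IDs are assumptions for a lab
        $esx = Get-VMHost -Name esx01.lab.local

        # vSwitch with one physical uplink; the switch port should be an 802.1Q trunk
        $vsw = New-VirtualSwitch -VMHost $esx -Name vSwitch1 -Nic vmnic1

        # VLAN-tagged port groups for VM, storage and vMotion traffic
        New-VirtualPortGroup -VirtualSwitch $vsw -Name "VM-Network" -VLanId 10
        New-VirtualPortGroup -VirtualSwitch $vsw -Name "Storage"    -VLanId 20
        New-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion"    -VLanId 30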
    • 69. Installing ESXi on to a USB flash drive
      Very convenient and easy way to use ESXi
      Simple requirements: 1GB flash drive, ESXi Installable ISO image
    • 70. Installing ESXi on to a USB flash drive
      Can use any flash drive, though it is officially only supported on h/w vendor supplied flash drives
      Performance can vary widely between brands, sizes & models
      Server must support booting from USB drive
      Use internally instead of externally
    • 71. Installing ESXi on to a USB flash drive
      Install ESXi as normal but select USB flash drive instead
      Can also use Workstation to install to a VM
      Quality flash drives can last many years and over 10,000 write cycles
      Use USB image tools to clone or backup flash drives
    • 72. Shared Storage – Physical Devices
      Lots of devices to choose from
    • 73. Shared Storage – Physical Devices
      Popular devices include:
    • 74. Shared Storage – Physical Devices
      When using shared storage, Gigabit networking is a must
      iSCSI/NFS are built into vSphere and work with any pNICs
      Most affordable shared storage devices are listed on vSphere HCG
      Many units have lots of advanced features, are multi-functional, multi-RAID levels & multi-NICs
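      Hooking a host up to one of these devices over the built-in software iSCSI initiator is scriptable end to end. A minimal PowerCLI sketch (all addresses and names are placeholders for your lab):

        # Minimal sketch – addresses, port group and target below are placeholders
        $esx = Get-VMHost -Name esx01.lab.local
        $vsw = Get-VirtualSwitch -VMHost $esx -Name vSwitch1

        # VMkernel port for iSCSI traffic
        New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vsw -PortGroup "iSCSI-vmk" `
            -IP 10.0.0.11 -SubnetMask 255.255.255.0

        # Enable the software iSCSI initiator and point it at the storage device
        Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true
        $hba = Get-VMHostHba -VMHost $esx -Type IScsi
        New-IScsiHbaTarget -IScsiHba $hba -Address 10.0.0.50 -Type Send

        # Rescan so any presented LUNs appear, ready to be formatted as VMFS
        Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null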
    • 75. Shared Storage – Physical Devices
      Choosing between iSCSI & NFS is often a personal preference
      Offer similar performance but have different characteristics
      Some storage units support both
      Budget often dictates what you get
      In general, the more you spend, the better performance you’ll get
    • 76. Shared Storage – Physical Devices
      Many units offer special RAID technology, try not to mix drive speeds/sizes
      More spindles – better performance
      Many units are expandable
      Low cost rack mount units available as well (Synology RS409, Iomega ix12-300r, Netgear ReadyNAS 2100)
    • 77. Shared Storage - VSAs
      Virtual Storage Appliances can turn local storage into iSCSI/NFS shared storage
      Can run physical or virtual
      Available to any host
      Can be cheaper than buying a dedicated device
      More complicated to setup and maintain
    • 78. Shared Storage - VSAs
      Many VSA products to choose from
      Paid apps offer more features such as clustering, replication and snapshots
    • 79. Shared Storage - VSAs
      OpenFiler a popular choice
      Available as ISO image to install bare-metal on a server or as a pre-built virtual machine
      Managed via web browser
      Many advanced features: NIC-bonding, iSCSI or NFS, clustering
      Paid support is available
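      Once OpenFiler (or any NAS box) is exporting a share, mounting it on each host is a one-liner. A minimal PowerCLI sketch (the NFS server address and export path are examples, not OpenFiler defaults):

        # Minimal sketch – NFS server address and export path are examples only
        $esx = Get-VMHost -Name esx01.lab.local

        # Mount the NFS export as a datastore; repeat on each host so it is shared
        New-Datastore -Nfs -VMHost $esx -Name "openfiler-nfs" `
            -NfsHost 10.0.0.50 -Path "/mnt/vg0/vmstore"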
    • 80. vSphere Editions
    • 81. Must Have Software
    • 82. vTARDIS:nano Architecture
    • 83. Transportable Awfully Revolutionary Datacentre of Invisible Servers {small}(vT.A.R.D.I.S:nano)
      1 x Physical HP ML115 G5 with 8GB RAM
      128GB SSD
      iSCSI Virtual SAN(s)
      vSphere 4 Classic
      8 x ESXi Virtual Machines
      60 x Nested Virtual Machines
      It’s bigger inside than the outside
    • 84. Nested VMs - .VMX Hackery
      Aspirin at the ready…
      ESX as a Virtual Machine, running its own virtual machines
      Run a VM INSIDE another VM
      This isn’t a supported configuration, but hey it’s for lab/playing
      Enable VMs to be run inside another VM: set monitor_control.restrict_backdoor = "TRUE" on virtual ESXi hosts only (sketched below)
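      The same setting can be applied from PowerCLI instead of hand-editing the .vmx file. A minimal sketch against a hypothetical nested-ESXi VM name (apply it to the virtual ESXi guests only, ideally while powered off):

        # Minimal sketch – "vESXi-01" is a hypothetical nested-ESXi VM name
        $vm = Get-VM -Name "vESXi-01"

        # Add monitor_control.restrict_backdoor = "TRUE" to the VM's extra config
        $opt = New-Object VMware.Vim.OptionValue
        $opt.Key   = "monitor_control.restrict_backdoor"
        $opt.Value = "TRUE"
        $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec.ExtraConfig = @($opt)
        $vm.ExtensionData.ReconfigVM($spec)   # takes effect at the next power-on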
    • 85. Nested ESX, cool…but what about nested…?
      Hyper-V
      With .VMX hacks you can install the role in a VM, but it cannot run nested VMs – not possible
      XenServer
      Can run Nested Linux VMs (not tried)
      Can’t run Nested Windows VMs
    • 86. vTARDIS:nano Demo
      VM Provisioning Script (PowerShell)
      It’s bigger on the inside than it is outside
      .VMX hackery
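      The demo's provisioning script itself is not reproduced in the deck; below is a minimal PowerCLI sketch of the same idea (host, datastore, sizes and names are placeholders, not the presenters' actual script):

        # Minimal sketch only – not the session's script; all values are placeholders
        $esx = Get-VMHost -Name ml115.lab.local
        $ds  = Get-Datastore -Name "ssd-local"

        # Stamp out a handful of virtual ESXi hosts to run the nested VMs
        1..8 | ForEach-Object {
            New-VM -Name ("vESXi-{0:D2}" -f $_) -VMHost $esx -Datastore $ds `
                -NumCpu 2 -MemoryMB 2048 -DiskMB 4096 -DiskStorageFormat Thin `
                -GuestId otherLinux64Guest -NetworkName "VM-Network"
        }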
    • 87. T.A.R.D.I.S Configuration Notes
      • Separate VLANs for storage, vMotion, FT, management
      • 88. ESX VM Template with multiple vNICs & mounted .ISO, ready to start Install (or use PXE)
      • 89. Do not clone installed ESXi/Classic!
      • 90. Physical Host – set vSwitch to allow promiscuous mode, otherwise guest VM networking will not work (see the sketch after these notes)
      • 91. Pay attention to max number of ESX hosts to a single shared LUN (or it will stop working)
      • 92. Nested VM with FT needs further .VMX hackery & doesn’t work brilliantly, but is ok for learning the configurations
      • 93. Virtual ESX servers need monitor_control.restrict_backdoor TRUE setting to run nested VMs
      • 94. AMD CPU is required to run Nested Virtual Machines, does not work on any Intel Xeon CPU I have tried
      • 95. AMD-V Nested Paging feature
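      For the promiscuous-mode note above: this can be scripted if your PowerCLI release includes the security policy cmdlets, otherwise set it on the vSwitch properties in the vSphere Client. A minimal sketch (host and vSwitch names are placeholders):

        # Minimal sketch – needs a PowerCLI release with Get-/Set-SecurityPolicy;
        # host and vSwitch names are placeholders
        $esx = Get-VMHost -Name ml115.lab.local

        # Nested ESXi guests only see their VMs' traffic if the outer vSwitch is promiscuous
        Get-VirtualSwitch -VMHost $esx -Name vSwitch1 |
            Get-SecurityPolicy |
            Set-SecurityPolicy -AllowPromiscuous $true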
    • 96. vTARDIS – Network Diagram
      [Diagram: physical host network config – VM networks for the guest iSCSI and vMotion VLANs, the 10.0.0.x admin network, and VMkernel ports for the physical hosts]
    • 97. vTARDIS – Network Diagram
      [Diagram: virtual ESXi guest network – the guest VMkernel ports sit on what are really vNICs; no need to specify a VLAN tag as tagging is done on the host]
    • 98. vTARDIS:nano Networking
      All in-memory, no external switching
      Cross-over cable to admin console (my laptop)
      Physical vSwitch to promiscuous mode
      dvSwitch for VM Traffic
    • 99. Layer 3 Routing
      Complete Software Solution
      Multiple vNICs to VLANs
      Simple routing configuration
      Vyatta Core virtual router community edition – free
      Internet Access
      Smoothwall or IPCop – Open-source firewall/NAT router and proxy
      Simon Gallagher (vinf.net), VMworld 2010
    • 100. Storage - Performance
      SSD & SATA combo is the way to go
      128GB SSD – lots of IOPS! ~$400
      OpenFiler Virtual Machine
      30GB VMDK on SSD
      iSCSI Target for ESXi cluster nodes
      All disk access is in-memory, no physical networking
      Heavy use of thin-provisioning & linked clones (see the sketch below)
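      A minimal PowerCLI sketch of the thin-provisioning / linked-clone idea (assumes a PowerCLI release with New-VM -LinkedClone; the source VM, snapshot and datastore names are placeholders):

        # Minimal sketch – assumes New-VM -LinkedClone is available;
        # VM, snapshot and datastore names are placeholders
        $esx  = Get-VMHost -Name ml115.lab.local
        $ds   = Get-Datastore -Name "ssd-local"
        $base = Get-VM -Name "w2k3-base"
        $snap = Get-Snapshot -VM $base -Name "clean-install"

        # Linked clones share the base disk, so dozens of nested VMs fit on one small SSD
        New-VM -Name "w2k3-lc-01" -VM $base -LinkedClone -ReferenceSnapshot $snap `
            -VMHost $esx -Datastore $ds

        # For brand-new VMs, -DiskStorageFormat Thin on New-VM gives thin-provisioned disks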
    • 101. Thank you!
      www.vsphere-land.com
      www.techhead.co.uk
      vinf.net