
How I reshaped my lab environment


Mike Laverick's lab environment over the years



  1. From Zero to Colo - vCloud Director in my lab
      With Mike Laverick (VMware)
      Blog: www.mikelaverick.com
      Email: mike@mikelaverick.com
      Twitter: @mike_laverick
  2. Before I begin
      • Thank You VMUG Leaders!
      • Competition is good…
      • www.eucbook.com
  3. (Image-only slide)
  4. Agenda
      • The Home Lab Backstory - long, long ago in a galaxy called 2003…
      • Former vSphere setup
      • CH-CH-CH Changes - vSphere 5.1 setup
        - Compute
        - Network
        - Storage
      • vCD lessons learned…
      • My lab to-do list…
  5. What is vCloud Director?
  6. The Home Lab Backstory - long, long ago in 2003…
      • My first attempt with ESX 2.0/vCenter 1.0
      • Location: under my desk
      • Girlfriend impact: NIL
  7. The vCloud Suite: SDDC Era
      • Virtual appliances where possible/necessary
      • vCenter Server Appliance (VCSA)
        - Feature parity with the Windows version
        - The switch allowed me to completely reconfigure resources around the vCloud/SDDC agenda
        - Reduces the "infrastructure VM" footprint
        - Beware of plug-ins; check for web-client support (e.g. NetApp VSC)
      • vCloud Director Virtual Appliance (vCD-VA)
        - Uses the built-in Oracle XE DB
        - Dead easy to set up (no packages, no DB setup)
        - Beware: no multi-cell, no migration
        - Beware: demo, lab and training purposes only…
      • vShield Manager Virtual Appliance (mandatory)
      • vSphere Replication Appliance (VR)
      • vSphere Data Protection Appliance (vDP)
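      With so much of the management stack running as appliances, a quick sanity check that the VCSA is answering the vSphere API can be handy. The following is a minimal sketch, not part of the deck, assuming pyVmomi is installed; the hostname and credentials are placeholders:

          # Minimal sketch (not from the deck): confirm the VCSA answers the vSphere API.
          # Hostname and credentials below are lab placeholders.
          import ssl
          from pyVim.connect import SmartConnect, Disconnect

          ctx = ssl._create_unverified_context()          # lab-only: self-signed cert
          si = SmartConnect(host="vcsa.lab.local",        # hypothetical VCSA name
                            user="administrator@vsphere.local",
                            pwd="password",
                            sslContext=ctx)
          about = si.content.about
          print(about.fullName, "-", about.osType)        # osType confirms the Linux appliance
          Disconnect(si)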
  8. vSphere 5 / SRM 5.0 / View 5.1 Era
      • SRM 5.0 period (2011)
        - Hello 2x Dell EqualLogics
        - Hello 1x NS-120 & 1x NS-20
        - Hello 2x NetApp 2040s
        - Hello massive colocation bill!!!
      • VMware employee period (2012)
        - Home lab & pro lab merge
        - Goodbye EMC
        - Goodbye 2x PDU
        - Hello 24U of extra rack space
        - Hello 14 amps of extra power!
      • Location: quality colocation
      • Costs: £870 GBP / $1,300 USD
      • Girlfriend impact: married 4th May 2013
  9. Virtual Silos
      • The VMware cluster as the new silo?
      • Discrete blocks of:
        - Compute
        - Network
        - Storage
      • Q. Why do we like silos?
      • Q. Why do we hate silos?
  10. Compute…
  11. Compute Continued…
      • One site; two clusters
      • "Infrastructure" resource pool - no management cluster
      • GOAL: maximize resources; set up tiered clusters (see the sketch after this slide)
      • Decisions:
        - Different CPU types forced DRS separation
        - Gold cluster = HP DL385s
          WHY? More memory, and FC-connected to SAS storage
        - Silver cluster = Lenovo TS200s
          WHY? Less RAM, only a 1Gb pipe to either SAS or SATA over NFS/iSCSI
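      A minimal pyVmomi sketch (not from the deck) for creating the two DRS-enabled tiers; "si" is the connection from the earlier sketch, and the datacenter lookup and cluster names are placeholder assumptions:

          # Minimal sketch (not from the deck): create the two tiered, DRS-enabled clusters.
          # "si" is an existing pyVmomi connection; names are placeholders.
          from pyVmomi import vim

          datacenter = si.content.rootFolder.childEntity[0]   # assumes the first datacenter
          drs = vim.cluster.DrsConfigInfo(enabled=True)

          for name in ("Gold-Cluster", "Silver-Cluster"):
              spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
              datacenter.hostFolder.CreateClusterEx(name=name, spec=spec)

      Because the CPU types differ, the hosts stay split across the two clusters rather than mixing them in one DRS pool.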
  12. Storage…
  13. Storage Continued…
      • Destroyed datastores! Except:
        - Infrastructure datastore
        - Templates
        - Software (ISOs, templates, misc)
      • Originally a much more complicated layout - 7 tiers!!!
      • Now 4 tiers:
        1. Platinum (NetApp, FC, SAS, SnapMirror - 5 min RPO)
        2. Gold (Dell, iSCSI, SAS, replication - 1 hr RPO)
        3. Silver (NetApp, NFS, SATA, datastore cluster, 3x 300 GB, thinly provisioned)
        4. Bronze (Dell, iSCSI, SATA, thinly provisioned)
      • vSphere Replication for replication between tiers 3 and 4, in either direction
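      A small reporting sketch (not from the deck) that sums free space per tier, assuming datastore names carry a tier prefix such as "Platinum-" or "Gold-" (an assumption, not something the deck states); "si" is the earlier pyVmomi connection:

          # Minimal sketch (not from the deck): report free space per storage tier,
          # assuming datastores are named with a tier prefix. "si" is an existing connection.
          from collections import defaultdict
          from pyVmomi import vim

          TIERS = ("Platinum", "Gold", "Silver", "Bronze")
          view = si.content.viewManager.CreateContainerView(
              si.content.rootFolder, [vim.Datastore], True)

          free_gb = defaultdict(float)
          for ds in view.view:
              tier = next((t for t in TIERS if ds.name.startswith(t)), "Untiered")
              free_gb[tier] += ds.summary.freeSpace / 1024**3

          for tier, free in sorted(free_gb.items()):
              print(f"{tier:10s} {free:8.1f} GB free")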
  14. Storage Anxieties…
      • Many organizational tenants sharing the SAME datastore
        - What about Site Recovery Manager?
        - What about performance? Capacity management isn't the issue
      • With array-based replication (ABR):
        - One failover to rule them all?
        - No per-vApp failover
        - No per-Organization failover
      • Solutions?
        - Platinum/Gold datastores per Organization
        - vSphere Replication
        - VMware vVols
  15. Network…
  16. Network Continued…
      • Goodbye Standard Switch
        - A struggle to provide redundancy/separation with the "combo approach"
        - Many of the advanced features of vCD require the Distributed vSwitch
      • Classical approach: two DvSwitches
        - One for internal vSphere networking (vMotion, IP storage, FT, management)
        - One for the virtual datacenter
        - Backed by two VMNICs each…
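      A small verification sketch (not from the deck), assuming the same pyVmomi connection, that lists each Distributed vSwitch with its uplink and portgroup counts, which is enough to confirm the two-VMNIC backing described above:

          # Minimal sketch (not from the deck): list Distributed vSwitches and check that
          # each is backed by the expected number of uplinks (two VMNICs each in this lab).
          from pyVmomi import vim

          view = si.content.viewManager.CreateContainerView(
              si.content.rootFolder, [vim.DistributedVirtualSwitch], True)

          for dvs in view.view:
              uplinks = dvs.config.uplinkPortPolicy.uplinkPortName
              print(f"{dvs.name}: {len(uplinks)} uplinks, {len(dvs.portgroup)} portgroups")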
  17. Network Anxieties…
      • All my Provider vDCs share the SAME DvSwitch
        - What about "fat finger syndrome"?
        - How realistic is that?
      • Time to re-examine "best practices"
        - Do best practices represent an ideal, or an ideal filtered through the limitations of a technology?
        - Provider vDCs in vCD 1.x - one cluster, no tiering of storage
        - Provider vDCs in vCD 5.x - many clusters, tiering of storage
  18. Lessons Learned
      • When thinking about a Provider vDC, all the resources matter: compute + storage + networking
        - By far the easiest for me was compute
        - But my "Gold" cluster has no FT support
        - Prepare to make compromises/trade-offs UNLESS all your hosts are the SAME
      • VXLAN needs enabling on Distributed Switches via the vSphere Client, prior to creating a Provider vDC
      • Watch out for VMs already on the cluster - the vCD ESX Agent
        - Running existing "infrastructure" VMs on a cluster stops the install of the vCD Agent
        - Has to be done on a per-ESX-host basis (easy) - see the sketch below
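      A minimal sketch (not from the deck) of that per-host check: list powered-on VMs per ESX host in a cluster so they can be moved off before the vCD agent is installed. The cluster name is a placeholder and "si" is the earlier pyVmomi connection:

          # Minimal sketch (not from the deck): before preparing hosts for vCD, list any
          # powered-on VMs per ESX host so they can be vMotioned off first.
          from pyVmomi import vim

          view = si.content.viewManager.CreateContainerView(
              si.content.rootFolder, [vim.ClusterComputeResource], True)
          cluster = next(c for c in view.view if c.name == "Gold-Cluster")   # placeholder name

          for host in cluster.host:
              running = [vm.name for vm in host.vm
                         if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
              print(host.name, "->", running or "no powered-on VMs")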
  19. More Lessons Learned…
      • Get your VLANs sorted BEFORE you use them in vCD…
      • Beware of orphaned VLAN references in the vCD databases
        - http://kb.vmware.com/kb/2003988
  20. Work out your IP plan before you start!
      • "Wrong"
        - 192.168.3.x - "External Network"
        - 172.168.x.x - "Organization Network"
        - 10.x.x.x - "vApp Network"
      • "Right"
        - 10.x.x.x - "External Network"
        - 172.168.x.x - "Organization Network"
        - 192.168.1.x - "vApp Network"
      • Keep it simple - whole ranges dedicated
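      A quick way to keep the dedicated ranges honest is a standard-library check like the one below (not from the deck). The prefixes follow the "Right" plan above; note that 172.168.x.x as written is public address space, the RFC 1918 block being 172.16.0.0/12.

          # Minimal sketch (not from the deck): check that the ranges dedicated to
          # external, Organization and vApp networks do not overlap.
          import itertools
          import ipaddress

          plan = {
              "External":     ipaddress.ip_network("10.0.0.0/8"),
              "Organization": ipaddress.ip_network("172.168.0.0/16"),   # as written on the slide
              "vApp":         ipaddress.ip_network("192.168.1.0/24"),
          }

          for (a, net_a), (b, net_b) in itertools.combinations(plan.items(), 2):
              if net_a.overlaps(net_b):
                  print(f"WARNING: {a} ({net_a}) overlaps {b} ({net_b})")
              else:
                  print(f"OK: {a} ({net_a}) and {b} ({net_b}) are separate")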
  21. IP ranges can be tricky to change
      • Even with vApps powered off, some options are unavailable:
        - Gateway address
        - Network mask
      • Resolution involves admin work:
        - Add a new vApp network
        - Remap all VMs to the new vApp network
        - Remove the old vApp network
  22. vApp Networks & Edge Gateways
      • Every vApp network you create:
        - Creates a vCNS Edge Gateway
        - Consumes resources
      • Solution: create two vApps per Organization
        - Type A: one on the Organization network
        - Type B: one on its own vApp network
        - Power off the Type B vApp to save resources
        - Beware of static MAC/IP on power-offs
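      A rough, hedged sketch (not from the deck) of powering off a Type B vApp through the vCloud REST API with the requests library; the cell name, credentials and vApp href are placeholders, and the headers should be checked against the vCloud API 5.1 documentation:

          # Rough sketch (not from the deck): power off a "Type B" vApp via the vCloud API.
          # Host, credentials and vApp id are placeholders.
          import requests

          VCD = "https://vcd.lab.local"                    # hypothetical vCD cell
          AUTH = ("administrator@System", "password")      # placeholder credentials

          s = requests.Session()
          s.verify = False                                 # lab-only: self-signed certs
          resp = s.post(VCD + "/api/sessions", auth=AUTH,
                        headers={"Accept": "application/*+xml;version=5.1"})
          resp.raise_for_status()
          s.headers.update({
              "Accept": "application/*+xml;version=5.1",
              "x-vcloud-authorization": resp.headers["x-vcloud-authorization"],
          })

          # Power off a vApp by href (look the href up via the query service or the UI).
          vapp_href = VCD + "/api/vApp/vapp-xxxxxxxx"      # placeholder id
          s.post(vapp_href + "/power/action/powerOff")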
  23. Establish a meaningful naming convention…
      • I know everyone says this, but in a home lab don't you cut corners sometimes?
      • <OrgName>-<NetworkType>-<Purpose>, for example:
        - CORPHQ-OrgNetCorp-EdgeGateway
        - CORPHQ-vAppNet-WebGateway
      • Makes screengrabs, documentation & troubleshooting so much easier…
      • Register Edge Gateway devices in DNS…
        - Helps with syslog - watch out for stale DNS records…
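      A trivial illustration (not from the deck) of generating names in that shape:

          # Trivial sketch (not from the deck): build <OrgName>-<NetworkType>-<Purpose> names.
          def vcd_name(org: str, network_type: str, purpose: str) -> str:
              return "-".join([org, network_type, purpose])

          print(vcd_name("CORPHQ", "OrgNetCorp", "EdgeGateway"))   # CORPHQ-OrgNetCorp-EdgeGateway
          print(vcd_name("CORPHQ", "vAppNet", "WebGateway"))       # CORPHQ-vAppNet-WebGateway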
  24. OVFs - Portable?
  25. OVFs - Portable?
  26. Your Lab?
      • Nested ESX
        - http://communities.vmware.com/community/vmtn/bestpractices/nested
        - vTARDIS? - http://vinf.net/2011/10/03/vtardis-5-at-a-vmug-near-you/
        - Workstation/Fusion - out-of-the-box nested ESX…
        - AutoLab - http://www.labguides.com/autolab/
      • Redundant Array of Inexpensive PCs (RAIPC)
      • Community hardware page:
        - http://communities.vmware.com/community/vmtn/bestpractices/crc
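      For nested ESX on vSphere 5.1, a minimal pyVmomi sketch (not from the deck) that exposes hardware virtualisation to a VM; the VM name is a placeholder, "si" is the earlier connection, and the physical host must itself support VHV:

          # Minimal sketch (not from the deck): enable nested hardware virtualisation on a VM
          # so it can run ESXi inside it (vSphere 5.1+ exposes nestedHVEnabled in the config spec).
          from pyVim.task import WaitForTask
          from pyVmomi import vim

          view = si.content.viewManager.CreateContainerView(
              si.content.rootFolder, [vim.VirtualMachine], True)
          vm = next(v for v in view.view if v.name == "nested-esxi-01")   # placeholder VM name

          spec = vim.vm.ConfigSpec(nestedHVEnabled=True)   # roughly the vhv.enable VMX setting
          WaitForTask(vm.ReconfigVM_Task(spec))
          print(vm.name, "nested HV:", vm.config.nestedHVEnabled)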
  27. vINCEPTION - Home Lab?
  28. vINCEPTION Levels
      • Level 0: physical ESX hosts, virtual everything else (DC, vCenter, vCD)
      • Level 1: a vApp of vSphere running under Level 0, including vCD and vCNS Manager
      • Level 2: vApps running under Level 1…
  29. vCloud vSphere vApp!
  30. Cloud Nesting
  31. (Image-only slide)
  32. Lab Future…
      • The DONE list:
        - Make my external Juniper firewall work with vShield
      • Need new servers? Dell?
  33. Follow my journey journal…
      • Text blog posts - follow my journey!
        - Search for "Mike Laverick - My vCloud Journey Journal"
      • Subscribe on iTunes:
        - http://tinyurl.com/audiowag (audio)
        - http://tinyurl.com/videowag (video)
  34. Questions (Welcomed) & (Attempts at) Answers
      Blog: www.mikelaverick.com
      Email: mike@mikelaverick.com
      Twitter: @mike_laverick
