VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices and Troubleshooting


VMworld 2013

Richard Cockett, VMware
Umesh Goyal, VMware Software India Pvt Ltd

Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare


Transcript

  • 1. vSphere Networking and vCloud Networking Suite Best Practices and Troubleshooting. Richard Cockett, VMware; Umesh Goyal, VMware Software India Pvt Ltd. VSVC5103 #VSVC5103
  • 2. Agenda – vSphere Networking
     Anatomy of Virtual Networking
     Basics of Virtual Networking
     Teaming – Redundancy and Load Balancing
     VLAN Implementation
     Distributed Virtual Network
     Network IO Control
     Configuration Best Practices
  • 3. Anatomy of Virtual Networking (diagram: VMs' virtual NICs (vnic) connect to port groups on a vSwitch inside the ESX/ESXi host; the vSwitch uses NIC teams of physical NICs (vmnic/pnic) as uplinks to the physical switch; the vmkernel attaches via a vmknic and the Service Console via a vswif)
  • 4. vNetwork Concepts
     Virtual Network Adapters
      • vNic – VM's interface to the network
      • vmknic – vSphere hypervisor's interface to the network (NFS, iSCSI, vMotion, FT, management)
      • vswif – interface for the Service Console (not present on ESXi)
     Physical Network Adapter
      • pNic – for communicating with entities outside the ESX/ESXi host
     Virtual Switch
      • vSwitch – forwards packets between vNics, vmknics, and pNics
     Port Group
      • group of ports sharing the same configuration (e.g. VLAN)
     Uplinks: connections to physical switches
     NIC Team: a group of pNics connected to the same physical network
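    To see this anatomy on a live host, the classic CLI can list every vSwitch with its port groups, VLANs, and uplinks; a sketch of the command with abridged, illustrative output:

        # List vSwitches, port groups, VLANs, and uplinks on an ESX/ESXi host
        esxcfg-vswitch -l
        Switch Name    Num Ports  Used Ports  MTU   Uplinks
        vSwitch0       128        6           1500  vmnic0,vmnic1
          PortGroup Name   VLAN ID  Used Ports  Uplinks
          VM Network       105      4           vmnic0,vmnic1
          Management       0        1           vmnic0,vmnic1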
  • 5. Three Types of Virtual Switches
     vNetwork Standard Switch (vSS)
      • Created and managed on a per-host basis
      • Supports basic features such as VLAN, NIC teaming, port security
     vNetwork Distributed Switch (vDS)
      • Created and managed at vSphere vCenter
      • Supports all vSS features and more (PVLAN, traffic management, etc.)
      • NOTE: vSS/vDS share the same etherswitch module; only the control path differs
     Cisco Nexus 1000v (N1K)
      • Created and managed by the VSM (either a VM or hardware/Nexus 1010)
      • Supports features typically available in Cisco hardware switches
  • 6. ESX/ESXi Network Traffic – Classification
     Virtual Machine Traffic
      • Traffic sourced and received from virtual machine(s)
      • Isolated from each other
     vMotion Traffic
      • Traffic sent when moving a virtual machine from one ESX/ESXi host to another
      • Must be dedicated and isolated
     Management Traffic
      • Should be isolated from VM traffic
      • If VMware HA is enabled, includes heartbeats
     IP Storage Traffic – NFS, iSCSI
      • If using the software iSCSI initiator
     FT Traffic
      • Should be isolated completely
      • Generally heavy I/O and low latency (< 1 ms)
  • 7. NIC Teaming
  • 8. Load Balancing – Originating Virtual Port ID Based
     Default mode; distributes load on a per-vnic basis
     Physical switches not aware/involved
    (diagram: VM ports mapped through virtual NICs to teamed physical NICs on the uplink ports)
  • 9. Load Balancing – MAC-Based Teaming
     Distributes load on a source MAC hash basis
     Physical switches not aware/involved
    (diagram: VM ports mapped through virtual NICs to teamed physical NICs on the uplink ports)
  • 10. Load Balancing – IP Hash Based
     Distributes load on a per SRC IP/DST IP basis (hash)
     Requires PortChannel/EtherChannel on the physical switches
    (diagram: VMs with source IPs "A", "B", "C" reaching physical machines PM0–PM2 with destination IPs "D", "E", "F", with flows hashed across the teamed uplinks)
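    To make the hashing concrete: the algorithm is commonly described as XORing the low-order bytes of the source and destination IP addresses and taking the result modulo the number of uplinks (addresses here are illustrative). With two teamed uplinks, a flow from 10.0.0.1 to 10.0.0.4 hashes to (1 XOR 4) mod 2 = 1 and takes the second uplink, while 10.0.0.1 to 10.0.0.5 hashes to (1 XOR 5) mod 2 = 0 and takes the first. A single VM talking to many peers can therefore spread across uplinks, which the port-ID and MAC-hash policies cannot do for one vNIC.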
  • 11. Load Based Teaming
     Introduced in vSphere 4.1
     The only traffic-load-aware teaming policy
     Supported only with the vNetwork Distributed Switch (vDS)
     Reshuffles the port binding dynamically
     Only moves a flow when the mean send or receive utilization on an uplink exceeds 75% of capacity
     Default evaluation interval is 30 seconds
     In combination with VMware Network IO Control (NetIOC), LBT offers a powerful solution
    Refer: http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html
  • 12. VLAN Implementation
  • 13. VLAN Tagging Options
     VST – Virtual Switch Tagging: VLAN tags applied in the vSwitch; port groups are assigned to a VLAN. VST is the preferred and most common method.
     VGT – Virtual Guest Tagging: VLAN tags applied in the guest; the port group is set to VLAN "4095".
     EST – External Switch Tagging: the external physical switch applies the VLAN tags.
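    As a minimal VST sketch on a standard vSwitch (port group names and VLAN ID are illustrative):

        # VST: tag port group "VM Network" on vSwitch0 with VLAN 105
        esxcfg-vswitch -p "VM Network" -v 105 vSwitch0
        # VGT: VLAN 4095 passes all tags up to the guest
        esxcfg-vswitch -p "Trunk-PG" -v 4095 vSwitch0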
  • 14. vNetwork Distributed Switch
  • 15. Distributed Virtual Network (vNetwork) (diagram: vCenter managing both standard vSwitches and a vNetwork Distributed Switch spanning hosts)
  • 16. vDistributed Switch Architecture
     Control Plane (CP) and Data Plane (I/O Plane) are separated
      • The CP, responsible for configuring dvSwitches, dvPortgroups, dvPorts, uplinks, NIC teaming and so on, and for coordinating the migration of ports, runs on vCenter
      • The DP, responsible for performing the forwarding, runs inside the VMkernel of the ESX/ESXi host (vSwitch)
    (diagram: the vCenter control plane spans a Distributed vSwitch whose I/O plane is the vSwitch on each ESX host)
  • 17. vSwitch vs. dvSwitch vs. Cisco N1K

    Capabilities                  | vSwitch              | dvSwitch                | Cisco N1K
    L2 Switch                     | Yes                  | Yes                     | Yes
    VLAN Segmentation             | Yes                  | Yes                     | Yes
    802.1Q Tagging                | Yes                  | Yes                     | Yes
    Link Aggregation              | Static               | Static & LACP           | Static & LACP
    TX Rate Limiting              | Yes                  | Yes                     | Yes
    RX Rate Limiting              | No                   | Yes                     | Yes
    Unified Management Interface  | vSphere Client @Host | vSphere Client @vCenter | Cisco CLI
    PVLAN                         | No                   | Yes                     | Yes
    Network I/O Control           | No                   | Yes                     | Yes
    Port Mirroring                | No                   | Yes                     | Yes
    SNMP, NetFlow, etc.           | No                   | Yes                     | Yes
    Load Based Teaming            | No                   | Yes                     | No
  • 18. Network IO Control
  • 19. Introduction: vSphere Network IO Control prioritizes network access by continuously monitoring I/O load over the network and dynamically allocating available I/O resources according to specific business needs.
  • 20. NIOC at a Glance
    Improve and meet service levels for business-critical applications
     Reduces the amount of active performance management required
     Bridges virtual and physical infrastructure quality of service with per-resource 802.1p tagging
     Set, view and monitor network resource shares and limits
    Optimize your workloads
     Virtualize more types of workloads, including I/O-intensive business-critical applications
     Ensure that each cloud tenant gets their assigned share of I/O resources
     Set and enforce network priorities (per VM) across a cluster
    Increase flexibility and agility of your infrastructure
     Reduce your need for network interfaces dedicated to a single virtual machine or application
     Enable multi-tenancy deployments
  • 21. Features
     Isolation
     Shares
     Limits
     Load-Based Teaming
     IEEE 802.1p tagging
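    A quick illustration of how shares resolve under contention (numbers are illustrative): shares only matter when an uplink is saturated, and each active traffic class gets its shares divided by the total shares of all classes actively using that uplink. On a 10GbE uplink with VM traffic at 100 shares and vMotion at 50 shares, both busy, VM traffic is entitled to 100/150 × 10 ≈ 6.7 Gbps and vMotion to ≈ 3.3 Gbps; when vMotion finishes, VM traffic can again consume the full 10 Gbps, subject to any configured limit.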
  • 22. Network Traffic Classifications
     vMotion
     iSCSI
     FT logging
     Management
     NFS (Network File System)
     Virtual machine traffic
     vSphere Replication traffic
     User defined
  • 23. Configuration Best Practices
  • 24. Choosing the Type of Switch
     Size of your deployment
      • If you have a small deployment and need basic network connectivity, vSS should be sufficient
      • If you have a large deployment, consider vDS/N1K
     Organizational structure
      • If you have a group that controls both VM deployment and network provisioning, choose vSS/vDS (integrated control via the vSphere Client UI)
      • If you have a separate network admin group, trained on the Cisco IOS CLI, that wishes to maintain control over virtual and physical networking, choose N1K
     Other factors
      • Budget – vDS/N1K requires an Enterprise+ license
      • Features – vSS features are frozen, vDS features are evolving (ask Cisco about N1K)
  • 25. Configuration Best Practices: #1
     Enable on physical switch ports
      • Spanning Tree Protocol – loop-avoidance mechanism
      • PortFast – fast convergence after failure
      • Link State Tracking – detection of upstream ports (on Cisco switches)
      • Enable BPDU Guard
     Validate
      • Duplex settings
      • NIC hardware status
      • Link status
      • Switch port status
      • Switch port configuration
      • Jumbo frames configuration
     Ensure adequate CPU resources are available
      • Heavy gigabit networking loads are CPU-intensive, both native and virtualized
  • 26. Enabling Jumbo Frames
     Physical switches
      • Set the MTU to the desired value on all switches in the network
     Virtual switch
      • For vDS, set the MTU in the UI
      • For vSS, run esxcfg-vswitch -m
     Physical adapter
      • MTU is set automatically as part of the vSwitch setting. Check for errors!
     Virtual adapter
      • Change the vNic MTU inside the guest
      • Run esxcfg-vmknic -m to set the MTU of a vmknic
     Ping test
      • Make sure you specify the don't-fragment bit (see the sketch below)
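    A minimal host-side sequence tying the list above together (IPs, names, and the 9000-byte MTU are illustrative):

        # vSS: raise the vSwitch MTU (vDS MTU is set in the UI)
        esxcfg-vswitch -m 9000 vSwitch1
        # Create a jumbo-capable vmkernel NIC on an IP-storage port group
        esxcfg-vmknic -a -i 10.10.10.10 -n 255.255.255.0 -m 9000 "IPStorage"
        # Verify end to end: 8972-byte payload + 28 bytes of ICMP/IP headers = 9000;
        # -d sets don't-fragment, so any hop with a smaller MTU fails the ping
        vmkping -d -s 8972 10.10.10.20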
  • 27. Configuration Best Practices: #2
     Use separate networks to avoid contention
      • For Console OS (host management traffic), VMkernel (vMotion, iSCSI, NFS traffic), and VMs
      • For VMs running heavy networking workloads
      • Enable BPDU Guard
      • With explicit failover, set Failback = 'No' to avoid traffic flapping between two network adapters
     Tune VM-to-VM networking on the same host
      • Use the same virtual switch to connect communicating VMs
      • Avoid buffer overflow in the guest driver: tune receive/transmit buffers (refer KB: 1428)
     Use the vmxnet3 virtual device in the guest
      • The default 32-bit guest vNIC is vlance, but vmxnet3 performs better
      • Install VMware Tools to get the vmxnet3 driver
      • e1000 is the default for 64-bit guests
      • Enhanced vmxnet is available for several guest OSes
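    Where a guest defaults to another adapter type, the device is selected in the VM's configuration; a one-line .vmx sketch (device index illustrative; edit with the VM powered off):

        ethernet0.virtualDev = "vmxnet3"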
  • 28. Configuration Best Practices: #3
    Converge network and storage I/O onto 10GE
      • Reduce cabling requirements
      • Simplify management and reduce cost
    Tools for traffic management
    1. Traffic Shaping
      • Limits the amount of traffic a vNic may send/receive
    2. Network I/O Control (vDS + vSphere 4.1)
      • Isolates different traffic classes from each other
      • Each type of traffic is guaranteed a share of the pNic bandwidth
      • Unused bandwidth is automatically redistributed to other traffic types
  • 29. vCloud Networking and Security – Best Practices and Troubleshooting. Global Support Services
  • 30. Agenda
     Best practices for vCloud Networking and Security (vCNS)
     VXLAN
     When to use App vs. Edge, or both
     Troubleshooting vCNS
  • 31. Best Practices for vCNS Manager . . .
     Install on a dedicated management cluster
     Run on an ESX host unaffected by downtime
     Place network interfaces in a common network
     Back up regularly
     Ensure NTP is set up and working
  • 32. . . . Best Practices for vCNS Manager
     Change the admin password after install
     Create a new admin account for the CLI
     Prior to upgrade, back up the DB and clone/snapshot the Manager
  • 33. Best Practices for vCNS App FW Deployments
     Migrate vCenter Server / database VMs to an alternate ESX server
     Set a unique IP for the management port of each vShield App
     Install VMware Tools on each VM
     Use the System Status screen to monitor the health of an App FW
  • 34. App FW Policy Management . . .
     Use vCenter containers and security groups for enforcement
     Use service groups to reduce rules
     Know when to use General / Ethernet rules
     Set the Fail Safe Mode to Block
     Utilize Flow Monitoring
  • 35. . . . App FW Policy Management
     Create firewall rules allowing access to default services
     Use different syslog servers for different log levels
     Use the comments fields
     Use the Load History option to revert configuration
     Exclude machines when necessary
  • 36. Virtual eXtensible LAN
  • 37. VXLAN Setup – Physical Requirements
     DHCP available on the VXLAN transport VLANs
     Increased MTU needed to accommodate VXLAN encapsulation overhead (see the note below)
     Leverage 5-tuple hash distribution for uplink and inter-switch LACP
     Multicast routing enabled if traffic is traversing a router
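    On the MTU point: VXLAN encapsulation adds roughly 50 bytes (outer Ethernet, IP, UDP, and VXLAN headers), so to carry standard 1500-byte guest frames the transport network is typically configured with an MTU of at least 1600.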
  • 38. VXLAN Setup – Virtual Requirements
     vSphere 5.1
     vShield Manager 5.1
     vSphere Distributed Switch 5.1.0
     Virtual Tunnel End Point (VTEP)
  • 39. VXLAN Implementation (diagram)
  • 40. When to use vShield Edge or vShield App, or both? (diagram: DMZ, Development, and Finance zones)
  • 41. vShield App
  • 42. vShield Edge
  • 43. vShield App and Edge
  • 44. Use Case – Securing Business-Critical Applications (diagram: DMZ, Development, and Finance zones protected by vShield App + Edge)
    Requirements
      • Deploy production and development applications in a shared infrastructure with:
      • Traffic segmentation between applications
      • Authorized access to applications
      • Strict monitoring and enforcement of rules on inter-VM communications
      • Ability to maintain security policies with VM movement
      • Compliance with various audit requirements
    Solution – vShield App + Edge
      • Protect data and applications with a hypervisor-level firewall
      • Create and enforce security policies that persist through virtual machine migration
      • Facilitate compliance by monitoring all application traffic
      • Improve performance and scalability with a load balancer and a software-based solution
  • 45. vShield Edge – Secure the Edge of the Virtual Data Center
    Highlights
     Multiple sizes (Compact, Large, X-Large)
     Up to 10 interfaces per vShield Edge
     DHCP, NAT, and DNS relay
     Firewall support
     Load balancing
     IPsec and SSL VPN-Plus
     VXLAN Gateway
     Routing (static routes)
     High availability
     Flexible IP address management
     Intuitive deployment workflow
     CLI
    (diagram: per-tenant Edge appliances providing load balancer, firewall, and VPN services)
  • 46. Edge Scalability (diagram)
  • 47. vShield App – Application Protection from Network-Based Threats
    Features
      • Hypervisor-level firewall
      • Inbound/outbound connection control applied at the vNIC level
      • Elastic security groups that "stretch" as virtual machines migrate to new hosts
      • Robust flow monitoring
    Policy management
      • Simple and business-relevant policies
      • Managed through the UI or REST APIs
      • Logging and auditing based on the industry-standard syslog format
    (diagram: DMZ, PCI, and HIPAA zones)
  • 48. Troubleshooting – vCloud Networking and Security
  • 49. VXLAN Issue #1 – "Not Ready" Shown in the vCNS UI. Two common causes:
      • bypassVumEnabled is not set to "false" in EAM
      • Managed IP is not set in vCenter
  • 50. Verify VXLAN Agency Settings . . .
     Access the EAM managed object browser
     Verify the VXLAN agency has bypassVumEnabled set to FALSE
  • 51. . . . Verify VXLAN Agency Settings . . .
     Access the vCenter EAM Managed Object Browser: http://vcenter51.vmware.local/eam/mob/
  • 52–54. . . . Verify VXLAN Agency Settings (screenshots: navigating the EAM MOB to the VXLAN agency's bypassVumEnabled property)
  • 55. Alter the bypassVumEnabled Setting . . .
  • 56. . . . Alter the bypassVumEnabled Setting . . .
      • Visit the following URL: https://<VC-IP>/eam/mob/?moid=agency-0&method=Update
      • Set the value to the desired setting, "true" or "false"
      • Once the XML data is filled in, click the "Invoke Method" link
  • 57. . . . Alter the bypassVumEnabled Setting (screenshot)
  • 58. Managed IP Not Set in vCenter . . .
      • In eam.log you see an error similar to:
        <msg>('http://vCenter1.vmware.com:80/eam/vib?id=8e840536-1855-4c7e-81bd-8814b43f8ee0-0', '/tmp/tmpjzGgUU', '[Errno 4] IOError: &lt;urlopen error [Errno -2] Name or service not known&gt;')</msg>
      • The vCenter FQDN is being used in the VIB download URL; if the ESX host cannot resolve it, the VXLAN agent installation fails
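    The fix implied by the slide title is to populate vCenter's managed IP address (in the vSphere Client: Administration > vCenter Server Settings > Runtime Settings > vCenter Server Managed IP); with the IP set, EAM builds the VIB download URL from the IP instead of the FQDN the host cannot resolve.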
  • 59–62. . . . Managed IP Not Set in vCenter (screenshots)
  • 63. VXLAN Issue #2 – "class domain-cX already has been configured with mapping"
      • Download curl if needed
      • Run the following command on the command line:
        curl -i -k -H "Content-type: application/xml" -u admin:default -X DELETE https://<vsm-ip>/api/2.0/vdn/map/cluster/<domain-cXXX>/switches/dvs
  • 64. Edge/App Debug Packet
     Enable debug packet mode through the App/Edge CLI:
        debug packet display interface <interface> [expression]
    Example:
        vShield# debug packet display interface mgmt host_10.10.11.11_and_port_80
  • 65. Questions?
  • 66. Other VMware Activities Related to This Session
     HOL: HOL-SDC-1302 vSphere Distributed Switch from A to Z
     Group Discussions: VSVC1004-GD Top 10 Customer Support Issues with Josh Gray
  • 67. THANK YOU
  • 68. vSphere Networking and vCloud Networking Suite Best Practices and Troubleshooting. Richard Cockett, VMware; Umesh Goyal, VMware Software India Pvt Ltd. VSVC5103 #VSVC5103
