VMware ESXi - Intel and Qlogic NIC throughput difference v0.6 - David Pasek
We are observing different network throughput on Intel X710 NICs and QLogic FastLinQ QL41xxx NICs. ESXi supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb, and 100Gb adapters. The use of multiple hardware queues per NIC interface (vmnic) and multiple software threads in the ESXi VMkernel is depicted and documented in this paper, and may or may not be the root cause of the observed problem. The key objective of this document is to clearly document and collect NIC information on the two specific network adapters and compare them to find the difference, or at least a root-cause hypothesis, for further troubleshooting.
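The multi-queue behaviour mentioned above can be sketched with a toy model: receive-side scaling (RSS) hashes each flow's tuple to pick a hardware queue, so one flow stays in order on one queue while different flows spread across queues. Real NICs use a Toeplitz hash keyed by an RSS key; the `rss_queue` helper, `NUM_QUEUES` value, and CRC32 stand-in below are illustrative assumptions, not ESXi or driver internals.

```python
# Toy illustration (not ESXi internals) of RSS-style queue selection.
import zlib

NUM_QUEUES = 8  # hypothetical number of hardware RX queues on a vmnic

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a flow's 4-tuple to a queue index via a stable hash.
    Real NICs use a Toeplitz hash; CRC32 stands in here for simplicity."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_QUEUES

# All packets of one flow land on the same queue (preserving order),
# while different flows can spread across queues (parallelism).
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443)
assert q1 == q2  # same flow -> same queue
```

Comparing how many queues each driver actually configures (and how VMkernel threads are mapped to them) is one place the throughput difference between the two adapters could hide.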
Achieving the ultimate performance with KVM - ShapeBlue
This document summarizes a presentation about achieving ultimate performance with KVM. It discusses optimizing hardware, CPU, memory, networking, and storage for virtual machines. The goal is the lowest cost per delivered resource while meeting performance targets. Specific optimizations mentioned include CPU pinning, huge pages, SR-IOV networking, virtio drivers, and bypassing the host for storage. It cautions that many performance claims use unrealistic benchmarks and hardware configurations unlike real-world usage.
Hypervisors are becoming more and more widespread in embedded environments, from automotive to medical and avionics. Their use case is different from traditional server and desktop virtualization, and so are their requirements. This talk will explain why hypervisors are used in embedded, and the unique challenges posed by these environments to virtualization technologies.
Xen, a popular open source hypervisor, was born to virtualize x86 Linux systems for the data center. It is now the leading open source hypervisor for ARM embedded platforms. The presentation will show how the ARM port of Xen differs from its x86 counterpart. It will go through the fundamental design decisions that made Xen a good choice for ARM embedded virtualization. The talk will explain the implementation of key features such as device assignment and interrupt virtualization.
OpenFlow Switch Management using NETCONF and YANG - Tail-f Systems
The document discusses how the OpenFlow Configuration (OF-CONFIG) specification uses NETCONF and YANG to enable remote configuration of OpenFlow datapaths in a standardized way. Through a formal API and data models, this gives network managers benefits such as validation, rollback, and transactions. It also introduces Tail-f's NCS product, which can act as an OpenFlow switch manager using these technologies.
This presentation provides an overview of vSAN components and fault tolerance methods. It discusses how vSAN objects are divided into components that are placed across hosts. It covers the different states components can be in, such as active, degraded, absent, and how resync, rebuild, repair, and reconfiguration processes work. It also explains how vSAN uses voting and quorum to determine which cluster partition remains available in a network partition scenario.
The document discusses various data structures and functions related to network packet processing in the Linux kernel socket layer. It describes the sk_buff structure that is used to pass packets between layers. It also explains the net_device structure that represents a network interface in the kernel. When a packet is received, the interrupt handler will raise a soft IRQ for processing. The packet will then traverse various protocol layers like IP and TCP to be eventually delivered to a socket and read by a userspace application.
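The layered header handling that summary describes can be modeled in user space. The `SkBuff` class below is a hypothetical, simplified analogue of the kernel's sk_buff buffer operations (`skb_put`, `skb_push`, `skb_pull`), not the real kernel API: each layer prepends its header into reserved headroom on transmit and strips it again on receive, without copying the payload.

```python
# A toy, user-space model (not the real kernel API) of sk_buff-style
# buffers, where each layer pushes or pulls its header without copying.

class SkBuff:
    def __init__(self, size: int, headroom: int):
        self.buf = bytearray(size)
        self.head = headroom      # start of valid data
        self.tail = headroom      # end of valid data

    def put(self, payload: bytes):
        """Append payload at the tail (like skb_put)."""
        self.buf[self.tail:self.tail + len(payload)] = payload
        self.tail += len(payload)

    def push(self, header: bytes):
        """Prepend a header into the headroom (like skb_push); transmit path."""
        self.head -= len(header)
        self.buf[self.head:self.head + len(header)] = header

    def pull(self, n: int) -> bytes:
        """Strip n header bytes (like skb_pull); receive path."""
        hdr = bytes(self.buf[self.head:self.head + n])
        self.head += n
        return hdr

    def data(self) -> bytes:
        return bytes(self.buf[self.head:self.tail])

# Transmit: application data first, then TCP, IP, Ethernet headers pushed.
skb = SkBuff(size=256, headroom=64)
skb.put(b"hello")
for hdr in (b"TCP|", b"IP|", b"ETH|"):
    skb.push(hdr)
# Receive: each layer pulls its own header back off in turn.
assert skb.pull(4) == b"ETH|"
assert skb.pull(3) == b"IP|"
assert skb.pull(4) == b"TCP|"
assert skb.data() == b"hello"
```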
Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of Linux performance tools, touring common problems with system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. You might discover that tools you have been using for years are, in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many talks on tools that work, including giving the Linux Performance Tools talk originally at SCALE. This is an anti-version of that talk, focusing on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! The talk includes advice for verifying new performance tools, understanding how they work, and using them successfully.
6.
Virtualization - Hypervisor
• Hypervisor type (ESX/ESXi)
  - VMware ESXi
  - Microsoft Hyper-V
  - Citrix XenServer
  - Nutanix AHV
(Diagram: guest stacks such as Windows Server 2008 + MS SQL Server and Red Hat Linux + Apache + PHP run on virtual hardware, which the hypervisor presents on top of the underlying physical server hardware.)
18.
• CPU
  ▪ 64-bit x86 CPU only (released after Sep 2006)
  ▪ At least 2 CPU cores
  ▪ Enable the NX/XD bit for the CPU in the BIOS
  ▪ To support 64-bit virtual machines, hardware virtualization (Intel VT-x/AMD RVI) must be enabled on x64 CPUs.
• Memory
  ▪ Minimum 4GB of physical RAM (8GB recommended)
VMware ESXi 6 Hardware Installation Requirements
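On a Linux host, the CPU requirements above can be sanity-checked from the flags in `/proc/cpuinfo` ('lm' = 64-bit long mode, 'nx' = the NX/XD bit, 'vmx'/'svm' = Intel VT-x/AMD-V). The `check_esxi_cpu` helper below is a hypothetical illustration of that check, not a VMware tool:

```python
# Sketch: parse /proc/cpuinfo-style text and report the relevant CPU flags.
def check_esxi_cpu(cpuinfo: str) -> dict:
    flags = set()
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "64bit": "lm" in flags,     # 64-bit long mode
        "nx_xd": "nx" in flags,     # NX/XD bit enabled and visible
        "hw_virt": "vmx" in flags or "svm" in flags,  # VT-x or AMD-V
    }

sample = "flags\t: fpu lm nx vmx sse2"
print(check_esxi_cpu(sample))  # {'64bit': True, 'nx_xd': True, 'hw_virt': True}
```

Note that the virtualization flags only appear when the feature is enabled in the BIOS, which is exactly what the requirement asks for.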
19.
• Network
  ▪ One or more Gigabit or faster Ethernet controllers
  ▪ For security and performance, separate uplink physical adapters are recommended for:
    • the management network
    • the virtual machine network
• Disk storage
  ▪ A SCSI disk, Fibre Channel LUN, iSCSI disk, or RAID LUN: SATA/SCSI/SAS, USB media
VMware ESXi 6 Hardware Installation Requirements (continued)
20.
• Booting
  ▪ Supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI).
  ▪ With UEFI, you can boot systems from hard drives, CD-ROM drives, or USB media.
  ▪ ESXi can boot from a disk larger than 2TB.
VMware ESXi 6 Hardware Installation Requirements (continued)
36.
vCenter Server Architecture
(Diagram: multiple ESXi hosts are managed by vCenter Server and additional modules, backed by a database; the vSphere Web Client connects to vCenter Server, which joins an Active Directory domain and uses a Platform Services Controller with vCenter Single Sign-On.)
The Web Client can connect only to vCenter Server.
To connect directly to an ESXi host, use the Host Client (or the legacy vSphere Client).
37.
vCenter Server Services and Interfaces
(Diagram: vCenter Server hosts core services, distributed services, ESXi management, user access control, and the vSphere API; it connects to a database server and a Platform Services Controller (PSC). The vSphere Web Client and third-party plug-in applications consume the API. Additional services include vSphere Update Manager and vRealize Orchestrator.)
38.
• vCenter Server
  ▪ vCenter Management Server
  ▪ Platform Services Controller (PSC)
vCenter Server 6 Components
39.
• vCenter Server
  ▪ vCenter Management Server
    • vCenter Server
    • VMware vSphere® Web Client (server)
    • VMware Inventory Service
    • VMware vSphere® Auto Deploy™
    • VMware vSphere® ESXi™ Dump Collector
    • VMware vSphere® Syslog Collector
vCenter Server 6 Components (continued)
40.
• vCenter Server
  ▪ Platform Services Controller (PSC)
    • VMware vCenter™ Single Sign-On™
    • VMware License Server
    • Lookup Service
    • Certificate Authority
    • Certificate Store
    • VMware Directory Services
vCenter Server 6 Components (continued)
66.
▪ Port Group
  • Virtual machine networks
▪ VMkernel port
  • For the ESXi management network
  • For IP storage (iSCSI/NFS), vSphere HA, vMotion, Fault Tolerance, Virtual SAN, and Replication
Types of Virtual Switch Connections
(Diagram: a virtual switch with virtual machine port groups such as Production, TestDev, and DMZ, VMkernel ports for vSphere vMotion and Management, and uplink ports.)
68.
• Standard switches
  ▪ Virtual switch configuration for a single host
• Distributed switches
  ▪ Virtual switches that provide a consistent network configuration for virtual machines as they migrate across multiple hosts
Types of Virtual Switches
69.
Standard Switch Components
(Diagram: VM1-VM3 and VMkernel ports connect through vNICs to port groups on a standard switch - Test VLAN 101, Production VLAN 102, IP Storage VLAN 103, Management VLAN 104 - serving the management network and IP storage.)
• A standard switch provides connections for virtual machines to communicate with one another.
70.
Viewing the Standard Switch Configuration
(Screenshot callouts: delete the port group; display Cisco Discovery Protocol information; display port group properties.)
71.
VLANs in Virtual Switch
(Diagram: two VMs on VLANs 105 and 106 plus a VMkernel port on a virtual switch, connected through a physical NIC to a trunk port on the physical switch.)
• ESXi supports 802.1Q VLAN tagging.
• Virtual switch tagging policy:
  ▪ Packets from a virtual machine are tagged as they exit the virtual switch.
  ▪ Packets are untagged as they return to the virtual machine.
  ▪ The effect on performance is minimal.
• ESXi provides VLAN support by giving a port group a VLAN ID.
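The tag-on-exit/untag-on-return behaviour described above amounts to inserting, and later stripping, a 4-byte 802.1Q tag right after the two MAC addresses in the Ethernet frame. The sketch below illustrates that frame manipulation; the helper names are made up for illustration and this is not vSphere code:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q VLAN tag

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the MAC addresses (bytes 0-11),
    as virtual switch tagging does when a frame exits toward the uplink."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)  # priority bits + 12-bit VLAN ID
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

def untag_frame(frame: bytes) -> tuple[int, bytes]:
    """Strip the tag on the way back to the VM; return (vlan_id, frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID_8021Q, "not a tagged frame"
    return tci & 0x0FFF, frame[:12] + frame[16:]

# Round trip: dst MAC + src MAC (12 bytes), EtherType 0x0800, payload.
raw = bytes(range(12)) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(raw, vlan_id=105)
vid, restored = untag_frame(tagged)
assert vid == 105 and restored == raw
```

Because the tag is only 4 bytes and is added and removed in software at the switch boundary, the performance effect is minimal, which matches the slide's claim.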
94.
Reducing Host Downtime - Storage vMotion
(Diagram: running VMs on VMware ESX/ESXi with their disks migrating between datastores.)
• VMware Storage vMotion
  ▪ Performance tuning - migrate between different datastores
  ▪ Data migration - migrate between different storage devices
  ▪ Format change - choose the destination VMDK format (thick/thin)
95.
Reducing Host Downtime - Fault Tolerance
(Diagram: protected VMs running on vSphere hosts.)
• VMware FT (Fault Tolerance)
  ▪ The protected VM (primary) spawns a shadow VM (secondary) on another host, which is kept in sync with the primary at all times.
  ▪ After a failover, a new shadow VM is created automatically.
97.
Evolutions of VMware FT in vSphere 6
• Can protect any mission-critical OS (new)
• Supports SMP (up to 4 vCPUs) (new)
• Uses the new Fast Checkpointing technology
To enable FT, see "Validation Checks for Turning On Fault Tolerance".
99.
About Virtual Machines
Virtual Machine Components:
▪ Operating system
▪ VMware Tools™
▪ Virtual resources:
  • CPU and memory
  • Network adapters
  • Disk controllers
  • Parallel and serial ports
  • and so on…
128.
Shares, Limits, and Reservations
(Diagram: a vertical scale from 0 MHz/MB up to available capacity; shares are used to compete in the range between the reservation and the limit.)
• A virtual machine powers on only if its reservation can be guaranteed.
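A simplified model of this scheme - reservations granted first, the remaining capacity divided in proportion to shares, each VM capped by its limit - might look like the sketch below. This is an assumed approximation for illustration, not vSphere's actual scheduler:

```python
# Sketch (not vSphere's scheduler): reservations first, then share-
# proportional distribution of the spare capacity, capped by limits.
def allocate(capacity_mhz: int, vms: list[dict]) -> dict:
    """Each vm dict has 'name', 'reservation', 'limit', 'shares' (MHz)."""
    alloc = {vm["name"]: vm["reservation"] for vm in vms}
    spare = capacity_mhz - sum(alloc.values())
    # A VM powers on only if its reservation can be guaranteed.
    assert spare >= 0, "reservations exceed capacity"
    total_shares = sum(vm["shares"] for vm in vms)
    for vm in vms:
        extra = spare * vm["shares"] // total_shares
        # Shares compete only in the range between reservation and limit.
        alloc[vm["name"]] = min(vm["reservation"] + extra, vm["limit"])
    return alloc

vms = [
    {"name": "db",  "reservation": 1000, "limit": 4000, "shares": 2000},
    {"name": "web", "reservation": 500,  "limit": 2000, "shares": 1000},
]
print(allocate(4500, vms))  # {'db': 3000, 'web': 1500}
```

With 4500 MHz of capacity, 1500 MHz covers the reservations and the remaining 3000 MHz splits 2:1 by shares, so "db" ends up with 3000 MHz and "web" with 1500 MHz, both under their limits.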
129.
Resource Pool Attributes
• Shares: Low, Normal, High, or Custom
• Reservations: in MHz or GHz, MB or GB
• Limits:
  ▪ In MHz or GHz, MB or GB.
  ▪ Unlimited by default: access up to the maximum amount of the resource available.
• Reservation type:
  ▪ Expandable selected: virtual machines and subpools can draw from this pool's parent.
  ▪ Expandable deselected: virtual machines and subpools can draw only from this pool, even if its parent has free resources.
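The expandable-reservation behaviour described in the last bullet can be sketched as follows; the `ResourcePool` class and its semantics are an assumption for illustration, not vSphere's implementation:

```python
# Sketch (assumed semantics, per the slide): a pool satisfies a reservation
# from its own capacity, and draws from its parent only when expandable.
class ResourcePool:
    def __init__(self, name, reservation_mhz, expandable=False, parent=None):
        self.name = name
        self.free = reservation_mhz   # unreserved capacity in this pool
        self.expandable = expandable
        self.parent = parent

    def reserve(self, mhz: int) -> bool:
        if self.free >= mhz:
            self.free -= mhz
            return True
        if self.expandable and self.parent is not None:
            # Take the shortfall from the parent, keep what we have locally.
            shortfall = mhz - self.free
            if self.parent.reserve(shortfall):
                self.free = 0
                return True
        # Expandable deselected: this pool's own capacity is a hard cap.
        return False

root = ResourcePool("root", 10000)
child = ResourcePool("child", 2000, expandable=True, parent=root)
assert child.reserve(3000)        # 2000 local + 1000 drawn from root
assert root.free == 9000
fixed = ResourcePool("fixed", 2000, expandable=False, parent=root)
assert not fixed.reserve(3000)    # fails: cannot draw from the parent
```

The recursion also shows why expandable pools can cascade: a child may pull from its parent, which may in turn pull from the grandparent if it is expandable too.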