Hypervisors are becoming increasingly widespread in embedded environments, from automotive to medical and avionics. Their use cases differ from traditional server and desktop virtualization, and so do their requirements. This talk will explain why hypervisors are used in embedded systems and the unique challenges these environments pose to virtualization technologies.
Xen, a popular open-source hypervisor, was born to virtualize x86 Linux systems for the data center; it is now the leading open-source hypervisor for ARM embedded platforms. The presentation will show how the ARM port of Xen differs from its x86 counterpart, walk through the fundamental design decisions that made Xen a good choice for ARM embedded virtualization, and explain the implementation of key features such as device assignment and interrupt virtualization.
Ceph is an open-source distributed storage system that provides object, block, and file storage. The document discusses optimizing Ceph for all-flash configurations and analyzes the performance issues that arise when Ceph runs on all-flash storage. It describes SK Telecom's testing of Ceph performance on VMs backed by all-flash SSDs, compares the results against the community Ceph version, and presents SK Telecom's proposed all-flash Ceph solution with custom hardware configurations and monitoring software.
DigitalOcean uses Ceph as the block and object storage backend for its cloud services. It operates 37 production Ceph clusters running Nautilus and one on Luminous, storing over 54 PB of data across 21,500 OSDs. Clusters are deployed and managed with Ansible playbooks and containerized Ceph packages, and cluster health is monitored with Prometheus and Grafana dashboards. Upgrades can be challenging due to the issues they can uncover and the slow performance of HDD backends.
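Cluster-health monitoring of this kind can be scripted against the Prometheus HTTP API. Below is a minimal Python sketch, assuming a Prometheus server at a hypothetical prometheus.example.com that scrapes the Ceph mgr prometheus module, which exports the ceph_health_status metric (0 = OK, 1 = WARN, 2 = ERR):

```python
import requests

# Hypothetical Prometheus endpoint; replace with your own server.
PROM_URL = "http://prometheus.example.com:9090/api/v1/query"

def ceph_health() -> str:
    """Query the ceph_health_status metric exported by the Ceph mgr
    prometheus module (0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR)."""
    resp = requests.get(PROM_URL, params={"query": "ceph_health_status"})
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    if not results:
        return "no data"
    status = int(float(results[0]["value"][1]))
    return {0: "HEALTH_OK", 1: "HEALTH_WARN", 2: "HEALTH_ERR"}.get(status, "unknown")

if __name__ == "__main__":
    print(ceph_health())
```

The same query pattern extends to any other Ceph metric Prometheus scrapes, which is how a Grafana dashboard of the kind described above is typically fed.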
The document discusses IBM Power Systems and PowerHA SystemMirror V7 for IBM i. PowerHA SystemMirror provides high availability and disaster recovery clustering capabilities. It uses shared storage clustering technology designed for automation and minimal IT operations. Editions include Standard Edition for data center deployments and Enterprise Edition with additional features for multi-site deployments. The document reviews PowerHA concepts, editions, pricing, and strategy to provide resiliency without downtime through automation and continuous availability.
The document provides an overview of virtual networking concepts in VMware vSphere, including:
- Types of virtual switch connections like virtual machine port groups and VMkernel ports
- Standard switches and distributed switches
- VLAN configurations and tagging
- Network adapter and switch port policies for security, traffic shaping, and failover
- Troubleshooting tools such as esxcli, tcpdump-uw, and other networking commands (see the sketch below)
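As a rough illustration of the troubleshooting bullet above, here is a minimal Python sketch that shells out to two commands that exist on an ESXi host, esxcli and tcpdump-uw, to list VMkernel interfaces and capture a few packets. It assumes it runs in the ESXi shell with Python available; the interface name vmk0 is an example:

```python
import subprocess

def run(cmd):
    """Run a shell command on the ESXi host and return its output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# List VMkernel network interfaces (management, vMotion, storage, ...).
print(run(["esxcli", "network", "ip", "interface", "list"]))

# Capture 10 packets on the management interface vmk0 (example name).
print(run(["tcpdump-uw", "-i", "vmk0", "-c", "10"]))
```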
Best practices for optimizing Red Hat platforms for large scale datacenter de... (Jeremy Eder)
This presentation is from NVIDIA GTC DC on Oct 23, 2018:
https://youtu.be/z5gEUL6dJRI
Corresponding Press Release: https://www.redhat.com/en/about/press-releases/red-hat-nvidia-align-open-source-solutions-fuel-emerging-workloads
Blog: https://www.redhat.com/en/blog/red-hat-and-nvidia-positioning-red-hat-enterprise-linux-and-openshift-primary-platforms-artificial-intelligence-and-other-gpu-accelerated-workloads
Demo Video:
https://www.youtube.com/watch?v=9iVYjA_WJgU
1. The document discusses Linux kernel page reclamation.
2. In direct reclaim, the task that is allocating memory performs the reclamation itself; in daemon reclaim, the kswapd kernel threads do it in the background.
3. In daemon reclaim, kswapd wakes up and calls kswapd_shrink_zone() to reclaim pages until every zone is above its high watermark, which balances memory usage across zones.
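To make the watermark logic concrete, here is a toy Python model of the daemon-reclaim loop, not kernel code: a kswapd-like function keeps shrinking each zone until every zone's free-page count sits above its high watermark, the condition that lets the real kswapd go back to sleep. The zone sizes and batch size are invented for illustration:

```python
# Toy model of daemon (kswapd-style) reclaim; the real logic lives in
# mm/vmscan.c and is far more involved (LRU lists, priorities, writeback).
zones = {
    "DMA":    {"free": 100, "high_watermark": 128},
    "Normal": {"free": 900, "high_watermark": 1024},
}

RECLAIM_BATCH = 32  # pages reclaimed per pass, analogous to one shrink step

def kswapd_balance(zones):
    """Keep shrinking each zone until all zones are above their high
    watermark, balancing memory usage across zones."""
    while any(z["free"] <= z["high_watermark"] for z in zones.values()):
        for z in zones.values():
            if z["free"] <= z["high_watermark"]:
                z["free"] += RECLAIM_BATCH  # pretend we reclaimed a batch

kswapd_balance(zones)
print(zones)  # every zone now above its high watermark
```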
This presentation provides an overview of Dell PowerEdge R730xd server performance results with Red Hat Ceph Storage. It covers the advantages of running Red Hat Ceph Storage on Dell servers, whose proven hardware components provide high scalability, improved ROI, and support for unstructured data.
Ceph Benchmarking Tool (CBT) is a Python framework for benchmarking Ceph clusters. It has client and monitor personalities for generating load and setting up the cluster. CBT includes benchmarks for RADOS operations, librbd, KRBD on EXT4, KVM with RBD volumes, and COSBench tests against RGW. Test plans are defined in YAML files and results are archived for later analysis using tools like awk, grep, and gnuplot.
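As a sketch of what driving CBT-style test plans from Python might look like, the snippet below loads a YAML plan and iterates over its benchmark entries. The plan structure shown (a cluster section plus a benchmarks section) is an illustrative simplification, not the authoritative CBT schema; consult the CBT documentation for the real format:

```python
import yaml  # PyYAML

# Illustrative CBT-style test plan; field names are assumptions.
PLAN = """
cluster:
  user: ceph
  head: head-node
benchmarks:
  radosbench:
    op_size: [4096, 65536]
    time: 60
"""

plan = yaml.safe_load(PLAN)
for name, params in plan["benchmarks"].items():
    print(f"would run benchmark {name!r} with parameters {params}")
```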
VMware ESXi - Intel and QLogic NIC throughput difference v0.6 (David Pasek)
We are observing different network throughput on Intel X710 NICs and QLogic FastLinQ QL41xxx NICs. ESXi supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb, and 100Gb adapters. The use of multiple hardware queues per NIC interface (vmnic) and multiple software threads in the ESXi VMkernel is depicted and documented in this paper, and may or may not be the root cause of the observed problem. The key objective of this document is to clearly document and collect NIC information on the two specific network adapters and compare them, in order to find the difference or at least form a root-cause hypothesis for further troubleshooting.
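The kind of NIC inventory the paper describes can be collected on an ESXi host with esxcli. A minimal Python sketch, assuming Python is available in the ESXi shell; the vmnic names are examples:

```python
import subprocess

def esxcli(*args):
    """Invoke esxcli on the ESXi host and return its text output."""
    out = subprocess.run(["esxcli", *args],
                         capture_output=True, text=True, check=True)
    return out.stdout

# Enumerate physical NICs (driver, link speed, MAC) ...
print(esxcli("network", "nic", "list"))

# ... then dump detailed properties and packet counters for the two
# adapters under comparison (vmnic0/vmnic1 are example names).
for vmnic in ("vmnic0", "vmnic1"):
    print(esxcli("network", "nic", "get", "-n", vmnic))
    print(esxcli("network", "nic", "stats", "get", "-n", vmnic))
```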
Metro Cluster High Availability or SRM Disaster Recovery? (David Pasek)
The presentation explains the difference between multi-site high availability (aka a metro cluster) and disaster recovery. The general concepts are similar for any product, but the presentation is tailored to VMware technologies.
Updated lifecycle management, improved analytics and support, and the option of Kubernetes — VMware vSphere® 7 is the biggest re-platform of vSphere in years. Learn more about the most significant vSphere evolution in a decade.
Learn more: http://ms.spr.ly/6005TmX9B
This presentation provides an overview of vSAN components and fault-tolerance methods. It discusses how vSAN objects are divided into components that are placed across hosts. It covers the different states components can be in, such as active, degraded, and absent, and how the resync, rebuild, repair, and reconfiguration processes work. It also explains how vSAN uses voting and quorum to determine which cluster partition remains available in a network-partition scenario.
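The voting rule can be illustrated with a small Python sketch: each component (witness included) holds votes, and after a network partition an object stays available only in a partition holding a strict majority of all votes. This is a simplified model of the documented vSAN behavior, not vSAN code:

```python
# Simplified model of vSAN quorum: an object remains available in a
# partition only if that partition holds > 50% of all votes.
components = {           # component -> votes (witness included)
    "replica_a": 1,
    "replica_b": 1,
    "witness":   1,
}

def partition_available(partition, components):
    total = sum(components.values())
    held = sum(components[c] for c in partition)
    return held * 2 > total   # strict majority

# A partition splits replica_a away from {replica_b, witness}:
print(partition_available({"replica_a"}, components))             # False
print(partition_available({"replica_b", "witness"}, components))  # True
```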
Testing Persistent Storage Performance in Kubernetes with Sherlock (ScyllaDB)
Understanding your Kubernetes storage capabilities is important for running a proper production cluster. In this session I will demonstrate how to use Sherlock, an open-source platform written to test persistent NVMe/TCP storage in Kubernetes, either via synthetic workloads or via a variety of databases, all easily run and summarized to give you an estimate of the IOPS, latency, and throughput your storage can provide to the Kubernetes cluster.
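This is not Sherlock's own code, but a synthetic workload of the kind it drives can be sketched with fio from Python. The snippet below runs a small random-read test against a mounted persistent volume (the path /mnt/pv-under-test is hypothetical) and pulls IOPS, latency, and bandwidth out of fio's JSON output:

```python
import json
import subprocess

# Hypothetical mount point of the persistent volume under test.
TARGET = "/mnt/pv-under-test/fio.dat"

result = subprocess.run(
    ["fio", "--name=randread", "--filename=" + TARGET, "--rw=randread",
     "--bs=4k", "--iodepth=32", "--size=1G", "--runtime=30",
     "--time_based", "--ioengine=libaio", "--direct=1",
     "--output-format=json"],
    capture_output=True, text=True, check=True)

# fio's JSON output reports per-job read/write statistics.
job = json.loads(result.stdout)["jobs"][0]["read"]
print("IOPS:      ", job["iops"])
print("BW (KiB/s):", job["bw"])
print("lat (ns):  ", job["lat_ns"]["mean"])
```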
The document discusses the benefits of using Veritas Cluster Server (VCS) 5 for VMware ESX Server. VCS 5 provides high availability and disaster recovery for virtual machines and applications, protecting against failures at every level, from physical servers to individual applications. It also provides granular management of virtual environments, similar to physical servers, and supports configurations such as M+N clusters spanning multiple data centers for disaster recovery.
16. Virtualization - Host OS
• Host OS (hosted) type (Workstation/Player)
  ▪ The virtualization software (VMware Workstation, VMware Player, Oracle VirtualBox) runs as an application on a host OS such as Windows 7, Windows 10, Ubuntu, or CentOS, which itself runs on the underlying hardware (PC/notebook).
  ▪ Guest VMs, for example Windows Server 2008 + MS SQL Server or Red Hat Linux + Apache + PHP, run inside the virtualization software.
19. Virtualization - Hypervisor
• Hypervisor (bare-metal) type (ESX/ESXi)
  ▪ The hypervisor (VMware ESXi, Microsoft Hyper-V, Citrix XenServer, Nutanix AHV) runs directly on the underlying hardware (server) and presents virtual hardware to the guests.
  ▪ Guest VMs, for example Windows Server 2008 + MS SQL Server or Red Hat Linux + Apache + PHP, run on that virtual hardware.
60. Hypervisor free version
• Register - register a My VMware account
• Activate - activate your account
• Download - download the ESXi ISO
• Install - install ESXi on your server
• License key - assign the hypervisor license
88. Virtual Switch Connections (1)
• Port Group
  ▪ Virtual machine network
[Diagram: a virtual switch connecting virtual machine port groups (Production, Dev-Test, DMZ) to the uplink ports]
89. Virtual Switch Connections (2)
• VMkernel Ports
  ▪ For the ESXi management network
  ▪ For IP storage (iSCSI/NFS), vSphere HA, vMotion, Fault Tolerance, Virtual SAN, and Replication
[Diagram: a virtual switch connecting VMkernel ports (Management, vSphere vMotion) to the uplink ports]
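Port groups and VMkernel ports like those on the two slides above can also be enumerated programmatically with pyVmomi. A minimal sketch, assuming an ESXi host at a hypothetical esxi.example.com and example credentials:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical host and credentials; certificate checking disabled for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        for pg in net.portgroup:        # virtual machine port groups
            print("port group:", pg.spec.name, "vlan:", pg.spec.vlanId)
        for vnic in net.vnic:           # VMkernel ports (vmk0, vmk1, ...)
            print("vmkernel:", vnic.device, vnic.spec.ip.ipAddress)
finally:
    Disconnect(si)
```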