Version History

Date         Rev.  Author       Description                                   Reviewers
2020 Oct 29  0.1   David Pasek  Initial draft. Simple tests. More complex
                                tests and multiple test combinations can be
                                tested if no performance issues are seen.
2020 Nov 06  0.2   David Pasek  NUTTCP test method fixed. Added in-guest
                                (vm2vm) iperf TCP test, in-guest (vm2vm)
                                iperf UDP test, and two load-balancer
                                simulation tests (without RSS, with RSS).
2020 Nov 24  0.3   David Pasek  Published as open source.
Contents

1. Overview
2. Requirements, Constraints and Assumptions
   2.1 Requirements
   2.2 Constraints
   2.3 Assumptions
3. Test Lab Environments
   3.1 ESXi hosts - Hardware Specifications
   3.2 Virtual Machines - Hardware and App Specifications
   3.3 Lab Architecture
4. Test Plan
   4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles
   4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
   4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
   4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
   4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
   4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts
   4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts
   4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS)
   4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
5. Appendixes
   5.1 Useful commands and tools for test cases
   5.2 Diagnostic commands
   5.3 ESX commands to manage NIC Offloading Capabilities
1. Overview

This document contains testing procedures to verify that the implemented design successfully addresses customer requirements and expectations.
This document assumes that the person performing these tests has a basic understanding of VMware vSphere and is familiar with vSphere lab design and environment. This document is not intended for administrators or testers who have no prior knowledge of VMware vSphere concepts and terminology.
2. Requirements, Constraints and Assumptions
2.1 Requirements
1. VM to VM Network TCP communication across two ESXi hosts has to achieve at least ~5 Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps receive).
2. VM to VM Network HTTP communication across two ESXi hosts has to achieve at least 5 Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps receive).
3. VM to VM Network HTTPS communication across two ESXi hosts has to achieve at least 5 Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps receive).
2.2 Constraints
1. Hardware
a. 4x HPE DL560 Gen10 (BIOS: U34)
b. 4x Intel NIC X710 (Firmware Version: 10.51.5, Driver Version: 1.9.5)
c. 4x Qlogic FastLinQ QL41xxx
2. People
a. Testers having access to lab environment
3. Processes
a. VPN access to lab environment
2.3 Assumptions
1. Hardware
a. We will have 2 ESXi hosts with Intel X710 NIC
b. We will have 2 ESXi hosts with QLogic FastLinQ QL41xxx NIC
2. We will get VPN access to lab environment
3. We will be able to use Linux operating systems as Guest OS within VM and install testing software (nuttcp, iperf, nginx, wrk, iftop)
3. Test Lab Environments

3.1 ESXi hosts - Hardware Specifications
All ESXi hosts should run on the following platform:
Server Platform: HPE ProLiant DL560 Gen10
BIOS: U34 | Date (ISO-8601): 2020-04-08
OS/Hypervisor: VMware ESXi 6.7.0 build-16075168 (6.7 U3)
The following four ESXi host specifications are used for testing.
1. ESX01-INTEL
a. CPU: TBD
b. RAM: TBD
c. NIC: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
d. STORAGE: any storage for test VMs
2. ESX02-INTEL
a. CPU: TBD
b. RAM: TBD
c. NIC: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
d. STORAGE: any storage for test VMs
3. ESX01-QLOGIC
a. CPU: TBD
b. RAM: TBD
c. NIC: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm 8.38.2.0
d. STORAGE: any storage for test VMs
4. ESX02-QLOGIC
a. CPU: TBD
b. RAM: TBD
c. NIC: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm 8.38.2.0
d. STORAGE: any storage for test VMs
Note: The four ESXi hosts above can be consolidated into two ESXi hosts, where each ESXi host has both an Intel and a QLogic NIC.
3.2 Virtual Machines - Hardware and App Specifications
1. APP-SERVER-01 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
2. APP-SERVER-02 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
3. APP-SERVER-03 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
4. APP-SERVER-04 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
5. APP-CLIENT-01 - Application Client
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, WRK, TASKSET
6. APP-LB-01 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
3.3 Lab Architecture
vSphere Cluster DRS Rules are used to pin Server, Client, and LoadBalancer VMs to particular ESXi hosts.
4. Test Plan

All tests in this test plan should be run with different NIC hardware offload methods enabled and disabled; example toggle commands follow this list. These methods are:
Enable/Disable LRO
Enable/Disable RSS
Enable/Disable NetQueue
All performance results should be analyzed and discussed within the expert group.
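As a sketch of how these toggles can be flipped from the ESXi shell (the LRO and NetQueue settings below are documented ESXCLI settings; RSS itself is driver-specific and is covered in section 5.1.15, and the NetQueue change requires a host reboot):
# Disable software LRO for the default TCP/IP stack (use -i 1 to re-enable)
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
# Disable NetQueue at the VMkernel level (host reboot required)
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"
Verify the resulting state with the diagnostic commands in section 5.2 before running each test variant.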
4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles
Test Name VMKernel - TCP (iperf) communication between two ESXi host consoles
Success Criteria At least 5 Gbps (~500 MB/s) throughput
Test scenario /
runbook
Run IPERF listener on APP-SERVER-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [APP-SERVER-01 vMotion IP]
Run IPERF traffic generator on APP-CLIENT-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -t 300 -c [APP-SERVER-01 vMotion IP]
Use VM network monitoring in vSphere Client to see network throughput.
Write down achieved results from the iperf utility into the Result row below.
After the test, re-enable the firewall:
esxcli network firewall set --enabled true
Tester
Results 5.91 Gbits/sec
Comments Test duration: 5 minutes
Succeed Yes | No | Partially
4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Test Name VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run TCP listener on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run TCP traffic generator on APP-CLIENT-01:
nuttcp -t -N 4 -P 5000 -T 300 APP-SERVER-01
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down nuttcp reported results into test results below.
Tester
Results 9323.1294 Mbps = 9.1 Gbps
Comments
Succeed Yes | No | Partially
4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Test Name VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run TCP listener on APP-SERVER-01:
iperf3 -s
Run TCP traffic generator on APP-CLIENT-01:
iperf3 -t 300 -b 25g -P 4 -c [APP-SERVER-01]
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down iperf reported results into test results below.
Tester
Results
9.39 Gbps
Comments
Succeed Yes | No | Partially
4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Test Name VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run nuttcp server on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run UDP traffic generator on APP-CLIENT-01:
nuttcp -u -Ru -l65507 -N 4 -P 5000 -T 300 -i APP-SERVER-01
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down nuttcp reported results into test results below.
Tester
Results 9315.9779 Mbps = 9.1 Gbps
1116.6931 MB / 1.00 sec = 9367.3418 Mbps 0 / 17875 ~drop/pkt 0.00 ~%loss
1119.2545 MB / 1.00 sec = 9388.5554 Mbps 3 / 17919 ~drop/pkt 0.01674 ~%loss
333165.1325 MB / 300.00 sec = 9315.9779 Mbps 49 %TX 69 %RX 34816 / 5367818 drop/pkt 0.65 %loss
Comments
Succeed Yes | No | Partially
4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Test Name VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run iperf server on APP-SERVER-01:
iperf3 -s
Run UDP traffic generator on APP-CLIENT-01:
iperf3 -u -t 300 -b 25g -P 4 -l 65507 -c [APP-SERVER-01]
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down iperf reported results into test results below.
Tester
Results 7.91 Gbps
Comments [root@server-U19 /]# iperf3 -c 10.226.97.155 -u -t 300 -b 25g -P 4 -l 65507
[SUM] 0.00-300.00 sec 276 GBytes 7.91 Gbits/sec 0.062 ms 24936/4527389 (0.55%)
Succeed Yes | No | Partially
4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts
Test Name VM - HTTP (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Install and run Nginx (see section 5.1.11) on APP-SERVER-01
Create test files on APP-SERVER-01
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14)
Run wrk on APP-CLIENT-01 to generate traffic from APP-SERVER-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 660.16MB = 5252.28 Mbps = 5.15 Gbps
Comments [root@server-U19 /]# taskset -c 0-8 /bin/wrk -t 8 -c 8 -d 300s http://10.226.97.155/1M.txt
198016 requests in 5.00m, 193.43GB read
Requests/sec: 659.98
Transfer/sec: 660.16MB
Advanced testing
Test is uni-directional. A bi-directional test would require a Lua script for wrk (see section 5.1.13).
We do not use the taskset utility, which can be used to pin threads to logical CPUs.
We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts
Test Name VM – HTTPS (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in a pure software stack. The Intel AES-NI CPU instructions accelerate SSL, so the encryption penalty should be mitigated.
Test scenario /
runbook
Install and run Nginx (see section 5.1.11) on APP-SERVER-01
Create test files on APP-SERVER-01
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14)
Run wrk on APP-CLIENT-01 to generate traffic from APP-SERVER-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s https://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 1.09 GB = 8.72 Gbps
Comments 335140 requests in 5.00m, 327.36GB read
Requests/sec: 1116.99
Transfer/sec: 1.09GB
Succeed Yes | No | Partially
4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS)
Test Name VM - HTTP communication across two ESXi hosts via LoadBalancer no RSS
Success Criteria At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Prepare test environment as depicted in section 3.3
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Configure DRS rules to keep servers and client on one ESXi host and the load balancer on another one.
Create test files on APP-SERVERs
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14)
Install HTTP L7 Load Balancer APP-LB-01 (see 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Do NOT enable RSS (this is the default config) in virtual machine APP-LB-01. See section 5.1.15
Run wrk on APP-CLIENT-01 to generate traffic from APP-LB-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 635.97 MBps = 5087 Mbps = 4.97 Gbps
Comments Advanced testing
Test is uni-directional. A bi-directional test would require a Lua script for wrk (see section 5.1.13).
We do not use the taskset utility, which can be used to pin threads to logical CPUs.
We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
Test Name VM - HTTP communication across two ESXi hosts via LoadBalancer
Success Criteria At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Prepare test environment as depicted in section 3.3
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Configure DRS rules to keep servers and client on one ESXi host and the load balancer on another one.
Create test files on APP-SERVERs
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14)
Install HTTP L7 Load Balancer APP-LB-01 (see 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Enable RSS in virtual machine APP-LB-01. See section 5.1.15
Run wrk on APP-CLIENT-01 to generate traffic from APP-LB-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 664 MBps = 5312 Mbps = 5.18 Gbps
Comments Test duration: 5 minutes
Advanced testing
Test is uni-directional. A bi-directional test would require a Lua script for wrk (see section 5.1.13).
We do not use the taskset utility, which can be used to pin threads to logical CPUs.
We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
5. Appendixes

5.1 Useful commands and tools for test cases
This section documents procedures and commands used for the test cases. All commands target RedHat 7 (or CentOS 7), a standard Linux distribution.
5.1.1 Network settings
Information source: https://wiki.centos.org/FAQ/CentOS7
System files with network settings
/etc/hostname
/etc/resolv.conf
/etc/sysconfig/network
o Common network settings
GATEWAY=10.16.1.1
DNS1=10.20.30.10
DNS2=10.20.40.10
/etc/sysconfig/network-scripts/ifcfg-eth0
o IP settings for interface eth0
DHCP
BOOTPROTO=dhcp
Static IP
BOOTPROTO=static
IPADDR=10.16.1.106
IPADDR1=10.16.1.107 Alias IP
IPADDR2=10.16.1.108 Alias IP
NETMASK=255.255.255.0
/etc/hosts
o local hostname ip resolution
To apply network settings, use the following command:
systemctl restart network
5.1.2 NTP
To install and configure NTPD, use the following commands:
yum install ntp
systemctl start ntpd
systemctl enable ntpd
To set the timezone, create a symbolic link from /etc/localtime to /usr/share/zoneinfo/…
ln -s /usr/share/zoneinfo/Europe/Prague /etc/localtime
To check the current timezone setting, just list the symlink:
ls -la /etc/localtime
5.1.3 Open-VM-Tools
VMware tools are usually installed in CentOS 7 by default, but just in case, here is the install procedure.
sudo yum install open-vm-tools
sudo systemctl start vmtoolsd
sudo systemctl status vmtoolsd
sudo systemctl enable vmtoolsd
5.1.4 Firewall
To disable firewall services on RedHat Linux, use the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
To check the firewall status, use:
systemctl status firewalld.service
5.1.5 SElinux
To disable SELinux on RedHat Linux, edit the file /etc/selinux/config, change the SELINUX parameter to disabled, and restart the system.
vi /etc/selinux/config
SELINUX=disabled
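If the system cannot be restarted immediately, SELinux can also be switched to permissive mode at runtime (a quick sketch; the config file change above is still needed to persist across reboots):
setenforce 0
getenforce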
5.1.6 EPEL
Configure CentOS or Red Hat Enterprise Linux (RHEL) version 7.x to use the Fedora Extra Packages for Enterprise Linux (EPEL) repository.
yum install -y epel-release
5.1.7 Open-vm-tools
Check if open-vm-tools are installed
yum list installed | grep open-vm
If VMware Tools are not installed, install them:
yum install open-vm-tools
5.1.8 NUTTCP performance test tool
NUTTCP is a network performance measurement tool intended for use by network and system managers. Its most basic usage is to determine the raw TCP (or UDP) network layer throughput by transferring memory buffers from a source system across an interconnecting network to a destination system, either transferring data for a specified time interval, or alternatively transferring a specified number of bytes. In addition to reporting the achieved network throughput in Mbps, nuttcp also provides additional useful information related to the data transfer such as user, system, and wall-clock time, transmitter and receiver CPU utilization, and loss percentage (for UDP transfers).
Assumptions
EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL nuttcp
Installation on CENTOS 7
yum install -y epel-release
yum install -y nuttcp
Usage …
Server part is started by the following command:
nuttcp -S -N 12
Client part is started by one of the following commands:
nuttcp -t -N 12 czchoapint092
cat /dev/zero | nuttcp -t -s -N 12 czchoapint092
cat /dev/urandom | nuttcp -t -s -N 12 czchoapint092
5.1.9 IPERF performance test tool
iperf3 is a tool for performing network throughput measurements. It can test either TCP or UDP
throughput. To perform an iperf3 test the user must establish both a server and a client.
Assumptions
EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iperf3
Installation on CENTOS 7
yum install -y epel-release
yum install -y iperf3
Usage …
Server part is started by the following command:
iperf3 -s
Client part is started by one of the following commands:
iperf3 -c 192.168.11.51 -u -t 300 -b 25g -P 4
Parameters -P and -b can be tuned to achieve minimal packet loss.
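Note on bi-directionality: the requirements in section 2.1 call for simultaneous transmit and receive throughput, while the runbooks in section 4 are uni-directional. As a sketch (assuming a recent iperf3 build; --bidir requires iperf3 3.7 or later, while -R is available in older versions):
# Reverse mode: the server transmits, the client receives
iperf3 -c [APP-SERVER-01] -t 300 -P 4 -R
# Simultaneous bi-directional test (iperf3 3.7+)
iperf3 -c [APP-SERVER-01] -t 300 -P 4 --bidir
On older versions, running a normal test and a -R test in two parallel sessions is an approximation.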
5.1.10 IFTOP - performance monitoring tool
iftop - display bandwidth usage on an interface by host
Assumptions
EPEL repository is accessible
Installation on Centos 7
yum install -y iftop
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iftop
Usage …
# show interfaces
ip link
# use desired interface for iftop
iftop -i <INTERFACE>
5.1.11 NGINX – http/https server, load balancer
Centos 7 Nginx install procedure is based on tutorials at
https://phoenixnap.com/kb/how-to-install-nginx-on-centos-7
and
https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-centos-7
Other resources for NGINX performance tuning
https://www.nginx.com/blog/performance-tuning-tips-tricks/
https://www.tweaked.io/guide/nginx-proxying/
Assumptions
Firewall is disabled
SElinux is disabled
Sudo or root privileges
Installation instructions …
sudo yum -y update
sudo yum install -y epel-release
sudo yum install -y nginx
sudo systemctl start nginx
sudo systemctl status nginx
sudo systemctl enable nginx
Website content (default server root) is in the directory /usr/share/nginx/html
Default server block configuration file, located at /etc/nginx/conf.d/default.conf
Global configuration is in /etc/nginx/nginx.conf
Configure SSL Certificate and enable HTTPS
mkdir /etc/ssl/private
chmod 700 /etc/ssl/private
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
vi /etc/nginx/conf.d/ssl.conf
server {
listen 443 http2 ssl;
listen [::]:443 http2 ssl;
server_name server_IP_address;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
root /usr/share/nginx/html;
location / {
autoindex on;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
Test if Nginx syntax is correct
nginx -t
Restart Nginx
systemctl restart nginx
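As a quick sanity check before running wrk (a minimal sketch using curl, which is installed on CentOS 7 by default), verify that both endpoints serve the 1 MB test file:
# Expect "200 1048576" from both endpoints (-k accepts the self-signed certificate)
curl -o /dev/null -sw "%{http_code} %{size_download}\n" http://localhost/1M.txt
curl -k -o /dev/null -sw "%{http_code} %{size_download}\n" https://localhost/1M.txt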
5.1.12 NGINX – http/https L7 load balancer (reverse proxy)
Install the NGINX package as documented in the previous section. The load balancer function can be configured in the NGINX global configuration file /etc/nginx/nginx.conf.
Use the simplest configuration for load balancing with nginx, like the following example:
http {
upstream myapp1 {
server srv1.example.com;
server srv2.example.com;
server srv3.example.com;
}
server {
listen 80;
location / {
proxy_pass http://myapp1;
}
}
}
Source: http://nginx.org/en/docs/http/load_balancing.html
5.1.13 WRK – http benchmarking
wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a
single multi-core CPU. It combines a multithreaded design with scalable event notification
systems such as epoll and kqueue.
https://github.com/wg/wrk
https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux
Install procedure on Centos
sudo yum -y update
yum groupinstall 'Development Tools'
yum install -y openssl-devel git
git clone https://github.com/wg/wrk.git wrk
cd wrk
make
cp wrk /somewhere/in/your/PATH
Advanced benchmarking with wrk
WRK supports Lua scripts for more advanced benchmarking.
Resources
Quick Start for the http Pressure Tool wrk
o https://programmer.ink/think/quick-start-for-the-http-pressure-tool-wrk.html
POST request with wrk?
o https://stackoverflow.com/questions/15261612/post-request-with-wrk
Benchmark testing of OSS with ab and wrk tools
o https://www.alibabacloud.com/forum/read-497
Intelligent benchmark with wrk
o https://medium.com/@felipedutratine/intelligent-benchmark-with-wrk-163986c1587f
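As referenced from sections 4.6, 4.8, and 4.9, a bi-directional HTTP test requires a Lua script. Below is a minimal sketch (the file name upload.lua and the /upload URI are hypothetical, and the target server must be configured to accept POST requests) that turns wrk into an upload generator by POSTing a 1 MB body on every request:
-- upload.lua: POST a 1 MB payload on every request
wrk.method = "POST"
wrk.body = string.rep("x", 1024 * 1024)
wrk.headers["Content-Type"] = "application/octet-stream"
The script is passed to wrk with the -s option, for example:
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s -s upload.lua http://[APP-SERVER-01]/upload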
5.1.14 TASKSET
The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set
the processor affinity of a running process, or launch a process with a specified processor
affinity.
Install procedure on Centos
sudo yum -y update
yum install -y util-linux
5.1.15 VMware Virtual Machine RSS configuration
To enable RSS in a virtual machine, you have to configure its advanced settings. Additional advanced settings must be added to the .vmx file (or the advanced configuration) of the particular VM to enable multi-queue support.
Below are the VM advanced settings:
ethernetX.pnicFeatures = "4" <<< Enable multi-queue (NetQueue RSS) in particular VM
ethernetX.ctxPerDev = "3" <<< Allow multiple TX threads for particular VM
ethernetX.udpRSS = "1" <<< Receive Side Scaling (RSS) for UDP
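For example, for the first vNIC (ethernet0) the .vmx entries would look like this (a direct application of the settings above):
ethernet0.pnicFeatures = "4"
ethernet0.ctxPerDev = "3"
ethernet0.udpRSS = "1"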
Note 1: RSS has to be enabled end to end. NIC Driver (driver specific) -> VMkernel (enabled
by default) -> Virtual Machine Advanced Settings (disabled by default) -> Guest OS vNIC
(enabled by default on vmxnet3 with open-vm-tools).
The following command validates that RSS is enabled in the VMkernel and the driver for a particular physical NIC (vmnic1):
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Note 2: You have to enable and configure RSS in the guest OS in addition to the VMkernel driver module. Multi-queuing is enabled by default in a Linux guest OS when the latest VMware Tools version (1.0.24.0 or later) is installed or when Linux VMXNET3 driver version 1.0.16.0-k or later is used. Prior to these versions, you had to enable multi-queue or RSS support manually. Be sure to check the driver and version used to verify whether your Linux OS has RSS support enabled by default.
The guest OS driver version within the Linux OS can be checked with the following command:
# modinfo vmxnet3
You can determine the number of Tx and Rx queues allocated to the VMXNET3 driver by running the ethtool console command in the Linux guest operating system:
ethtool -S ens192
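As an additional sanity check inside the guest, the per-queue interrupt vectors allocated by the vmxnet3 driver are typically visible in /proc/interrupts (a minimal check; the interface name ens192 is an example):
grep ens192 /proc/interrupts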
5.2 Diagnostic commands
In this section we document the diagnostic commands that should be run on each system to understand the implementation details of NIC offload capabilities and network traffic queueing.
ESXCLI commands are described in the ESXCLI command reference:
https://code.vmware.com/docs/11743/esxi-7-0-esxcli-command-reference/namespace/esxcli_network.html
For further detail about the diagnostic commands, you can watch the VMkernel log while executing the commands below, as the NIC driver can produce interesting output there.
tail -f /var/log/vmkernel.log
5.2.1 ESXi Inventory
Collect hardware and ESXi inventory details.
esxcli system version get
esxcli hardware platform get
esxcli hardware cpu global get
smbiosDump
Web browser: https://192.168.4.121/cgi-bin/esxcfg-info.cgi
5.2.2 Driver information
NIC inventory
esxcli network nic get -n <VMNIC>
NIC device info
vmkchdev -l | grep vmnic
Document VID:DID:SVID:SDID
To list all VIB modules and understand which drivers are "Inbox" (i.e., native VMware) or "Async" (from partners like Intel or Marvell/QLogic):
esxcli software vib list
5.2.3 Driver module settings
Identify NIC driver module name
esxcli network nic get -n vmnic0
Show driver module parameters
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
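For example, if vmnic0 reports the i40en driver (as on the systems discussed later in this document), the module parameters can be listed with:
esxcli system module parameters list -m i40en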
5.2.4 TSO
To verify whether your pNIC supports TSO and whether it is enabled on your ESXi host:
esxcli network nic tso get
5.2.5 LRO
To display the current LRO configuration values
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
Check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO,
software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
5.2.6 CSO (Checksum Offload)
To verify that your pNIC supports Checksum Offload (CSO) on your ESXi host
esxcli network nic cso get
5.2.7 Net Queue Count
Get netqueue count on a nic
esxcli network nic queue count get
5.2.8 Net Filter Classes
List the netqueue supported filterclass of all physical NICs currently installed and loaded on the
system.
esxcli network nic queue filterclass list
5.2.9 List the load balancer settings
List the load balancer settings of all the installed and loaded physical NICs. (S:supported,
U:unsupported, N:not-applicable, A:allowed, D:disallowed).
esxcli network nic queue loadbalancer list
5.2.10 Details of netqueue balancer plugins
Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the
system
esxcli network nic queue loadbalancer plugin list
5.2.11 Net Queue balancer state
Netqueue balancer state of all physical NICs currently installed and loaded on the system
esxcli network nic queue loadbalancer state list
5.2.12 RX/TX ring buffer current parameters
Get current RX/TX ring buffer parameters of a NIC
esxcli network nic ring current get -n vmnic0
5.2.13 RX/TX ring buffer parameters max values
Get preset maximums for RX/TX ring buffer parameters of a NIC.
esxcli network nic ring preset get -n vmnic0
5.2.14 SG (Scatter and Gather)
Scatter and Gather (Vectored I/O) is a concept that was primarily used in hard disks and it
enhances large I/O request performance, if supported by the hardware.
esxcli network nic sg get
5.2.15 List software simulation settings
List software simulation settings of physical NICs currently installed and loaded on the system.
esxcli network nic software list
5.2.16 RSS
We do not see any RSS-related driver parameters; therefore, driver i40en 1.9.5 does not support RSS. In addition, VMware Engineering has confirmed to us that the inbox driver i40en 1.9.5 does not support RSS.
5.2.17 VMkernel software threads per VMNIC
Show the number of VMkernel software threads per VMNIC
net-stats -A -t vW
vsish
/> cat /world/<WORLD-ID-1-IN-VMNIC>/name
/> cat /world/<WORLD-ID-2-IN-VMNIC>/name
/> cat /world/<WORLD-ID-3-IN-VMNIC>/name
…
/> cat /world/<WORLD-ID-n-IN-VMNIC>/name
5.3 ESX commands to manage NIC Offloading Capabilities
5.3.1 LRO in the ESXi host
By default, LRO is enabled on an ESXi host for the default TCP/IP stack.
To check the LRO configuration for the default TCP/IP stack on the ESXi host, execute the
following command to display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
You can check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
The LRO features are functional for the guest OS when the VMXNET3 virtual adapter is used.
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO,
software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
You can disable LRO for all VMkernel adapters on a host with:
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
and re-enable LRO with:
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
5.3.2 Netqueue and RSS
5.3.2.1. How to validate RSS is enabled in VMkernel
On a running system, you can check the status of RSS with the following command from the ESXi shell:
vsish -e cat /net/pNics/vmnic1/rxqueues/info
In the figure below, you can see the command output for a 1 Gb Intel NIC that does not support NetQueue; consequently, RSS is not supported either.
Figure 1 Command to validate if RSS is enabled in VMkernel
It seems that some drivers enable RSS by default and others do not.
5.3.2.2. How to explicitly enable Netqueue RSS
The procedure to enable RSS always depends on the specific driver, because driver-specific parameters have to be passed to the driver module. How to enable RSS for a particular driver should be described in the NIC vendor's documentation.
Example for Intel ixgbe driver:
vmkload_mod ixgbe RSS="4"
To enable the feature on multiple Intel 82599EB SFI/SFP+ 10Gb/s NICs, include another
comma-separated 4 for each additional NIC (for example, to enable the feature on three such
NICs, you'd run vmkload_mod ixgbe RSS="4,4,4").
Example for Mellanox nmlx4_en driver:
For Mellanox adapters, the RSS feature can be turned on by reloading the driver with
num_rings_per_rss_queue=4.
vmkload_mod nmlx4_en num_rings_per_rss_queue=4
NOTE: After loading the driver with vmkload_mod, you should make vmkdevmgr rediscover the
NICs with the following command:
kill -HUP ID … where ID is the process ID of the vmkdevmgr process
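A minimal sketch of that sequence from the ESXi shell (assuming vmkdevmgr appears in the ps output):
ps | grep vmkdevmgr
kill -HUP <ID>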
5.3.2.3. How to disable Netqueue RSS for particular driver
Disabling Netqueue RSS is also driver specific. It can be done using driver module parameter as
shown below. The example assumes there are four qedentv (QLogic NIC) instances.
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = ''
[root@host:~] esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = 'num_queues=0,0,0,0 RSS=0,0,0,0'
Reboot the system for the settings to take effect; they will apply to all NICs managed by the qedentv driver.
Source: https://kb.vmware.com/s/article/68147
5.3.2.4. How to disable Netqueue in VMkernel
Netqueue can also be disabled entirely in the VMkernel