The goal of this test plan is to evaluate the performance impact of the Spectre and Meltdown mitigations on Intel CPUs. We will run CPU-intensive workloads in virtual machines on an unpatched and a patched ESXi host and observe the performance difference.
We will test the impact on network, storage, and memory performance, because these I/O-intensive workloads rely on CPU caching, which is affected by the vulnerability remediations.
Performance qualification is a very specific and difficult subject. The performance impact varies across hardware and software configurations. However, the tests performed are described in detail in this document, so the reader can understand all conditions of the tests and the observed results, and can also repeat the tests on their own hardware and software configuration.
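As an illustration of the kind of measurement this plan calls for, here is a minimal sketch (our own assumption, not part of the original plan; the workload, function names, and iteration counts are arbitrary) that times a syscall-heavy loop, the class of work most taxed by the kernel-side Spectre/Meltdown mitigations. Running the same script inside a guest on the unpatched and the patched host and comparing the medians gives a first-order estimate of the overhead.

```python
import os
import statistics
import time

def syscall_ops_per_sec(iterations=200_000, repeats=5):
    """Time a syscall-heavy loop and return the median rate.

    Meltdown/Spectre mitigations mostly tax kernel entry/exit,
    so a tight syscall loop is a sensitive (if crude) probe.
    """
    rates = []
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(iterations):
            os.getpid()  # a cheap syscall on each iteration
        elapsed = time.perf_counter() - start
        rates.append(iterations / elapsed)
    return statistics.median(rates)

if __name__ == "__main__":
    print(f"{syscall_ops_per_sec():,.0f} getpid() calls/sec")
```

Any syscall-heavy or cache-sensitive benchmark can be substituted; the point is to keep the workload and the VM configuration identical across the two hosts so that only the mitigation state differs.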
Jean-Ian Boutin, ESET
Frédéric Vachon, ESET
BIOS rootkits have been researched and discussed heavily in the past few years, but sparse evidence has been presented of real campaigns actively trying to compromise a system at this level. Our talk will reveal such a campaign successfully executed by STRONTIUM.
Earlier this year, there was a public report stating that the infamous Sofacy/APT28/Sednit APT group successfully trojanized a userland LoJack agent and used it against their targets. LoJack, a controversial anti-theft software, was scrutinized by security researchers in the past because of its unusual persistence method: a module preinstalled in many computers' UEFI/BIOS software. Several security risks were found through the years in their product, but no large in-the-wild activity was ever detected until the discovery of the STRONTIUM group leveraging some of these vulnerabilities affecting the userland agent. However, through our research, we now know that they did not stop there: they also tried, and succeeded, in installing a custom UEFI module directly in the systems' SPI flash memory.
In this talk, we will detail the full infection chain showing how STRONTIUM was able to install their custom UEFI module on key targets' computers.
Additionally, we will provide an in-depth analysis of their UEFI module and the associated trojanized LoJack agent.
Implementing SR-IOV Failover for Windows Guests During Live Migration - Yan Vugenfirer
Presentation from KVM Forum 2020.
In the past, there were several attempts to enable live migration for VMs that use SR-IOV NICs. We are going to discuss the recent development based on the SR-IOV failover feature in the virtio specification and its implementation for Windows guests. In this session, Annie Li and Yan Vugenfirer will provide an overview of the failover feature and discuss specifics of the Windows guest implementation.
Presentation from 2008. Compares Lighttpd vs. Apache for static content. Discovery session for scaling http://www.imagesocket.com during its peak popularity.
This is really old and /outdated/ at this point.
HCK-CI: Enabling CI for Windows Guest Paravirtualized Drivers (KVM Forum 2021) - Kostiantyn Kostiuk, Yan Vugenfirer
To accept contributions from different developers to the virtio-win project (http://github.com/virtio-win/kvm-guest-drivers-windows/), there is a need to ensure that those contributions do not break the ability of different members of the ecosystem to certify the virtio-win drivers. The HCK-CI test framework was therefore created to enable CI for all types of virtio-win drivers on a wide range of Windows OS versions. The framework automates setup creation (VM and network orchestration), uses the HLK/HCK tool kit APIs to run Microsoft WHQL certification tests, and publishes the results in human-readable form. During the presentation, Kostiantyn will review the history of the project, explain the architecture of HCK-CI, demonstrate how you can deploy it in your development setup, and talk about the future of the project.
OSDC 2014: Nat Morris - Open Network Install Environment
ONIE defines an open source "install environment" that runs on a switch's management subsystem, utilizing facilities in a Linux/BusyBox environment. This environment allows end users and channel partners to install the target network OS as part of data center provisioning, in the same fashion that servers are provisioned.
ONIE enables switch hardware suppliers, distributors, and resellers to manage their operations based on a small number of hardware SKUs. This in turn creates economies of scale in manufacturing, distribution, stocking, and RMA, enabling a thriving ecosystem of both network hardware and operating system alternatives.
Android 5.0 Lollipop brings huge changes compared to earlier releases.
This report includes statistics derived from the source code, along with hidden features uncovered through source code and git log investigation.
QEMU Development and Testing Automation Using MS HCK - Anton Nayshtut and Yan Vugenfirer
The Windows Hardware Certification Kit (HCK) is a set of tools, processes, and tests for certifying hardware devices, device drivers, and systems. While it is a great test environment for QEMU devices, Windows guest device drivers, and related host subsystems, it is still daunting due to its deployment complexity. We'll share a way to deploy HCK setup(s) on top of QEMU VMs in just a few minutes.
XPDS14 - Zero-Footprint Guest Memory Introspection from Xen - Mihai Dontu, Bi...
This presentation will detail a practical approach to memory introspection of virtual machines running on the Xen hypervisor with no in-guest footprint. The functionality makes use of the mem-event API with a number of improvements which enable the proper tracking of guest OS activity. The technology created on top of this Xen API opens the door for several immediate applications, including: rootkit detection and prevention, detection and action on several categories of malware, and event source information for low-level post-event forensics and correlation based on real event data during events.
Migration of virtual machines without guest downtime is a key feature for hypervisors. Sadly, not all hardware is the same, and keeping guests running in a heterogeneous environment takes a lot of care. Normally, features are advertised via the CPUID instruction, but life is never as simple as we would like. Andrew will discuss what information needs to be controlled, what information can and can't be controlled, and how it applies to Xen guests.
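As a rough sketch of the feature-levelling idea this abstract discusses, the hypothetical helpers below parse the feature flags that the Linux kernel derives from CPUID (as exposed in /proc/cpuinfo) and intersect them across hosts; a migration pool can only safely advertise the common subset to guests. The function names and the use of /proc/cpuinfo instead of raw CPUID are our assumptions for illustration, not the speaker's tooling.

```python
def parse_cpu_flags(cpuinfo_text):
    """Extract the feature-flag set for the first logical CPU
    listed in /proc/cpuinfo-style content."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def common_flags(hosts_flags):
    """Lowest common denominator across hosts: the feature set a
    heterogeneous migration pool could safely expose to guests."""
    if not hosts_flags:
        return set()
    return set.intersection(*hosts_flags)
```

For example, intersecting the flags of an AVX2-capable host with an AVX-only host yields a set without `avx2`, so a guest levelled to that pool must not be shown the AVX2 bit.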
Intel's Out of the Box Network Developers Ireland Meetup on March 29, 2017 - Haidee McMahon
For details on Intel's Out of the Box Network Developers Ireland meetup, go to https://www.meetup.com/Out-of-the-Box-Network-Developers-Ireland/events/237726826/
Intel Talk: Enhanced Platform Awareness for OpenStack to Increase NFV Performance
By Andrew Duignan
Bio: Andrew Duignan is an Electronic Engineering graduate of University College Dublin, Ireland. He has worked as a software engineer at Motorola and now at Intel Corporation. He is currently in a Platform Applications Engineering role, supporting technologies such as DPDK and virtualization on Intel CPUs. He is based at the Intel Shannon site in Ireland.
Hands-on Lab: How to Unleash Your Storage Performance by Using NVM Express™ B... - Odinot Stanislas
An excellent document that explains, step by step, how to install, monitor, and above all properly benchmark PCIe/NVMe SSDs (not as simple as it sounds). Another key topic: how to analyze the I/O load of real applications. How many read and write IOPS, what block sizes and bandwidth, and above all, what is the impact on SSD endurance and real-world lifetime? A must-read, and a huge thanks to my colleague Andrey Kudryavtsev.
Authors:
Andrey Kudryavtsev, SSD Solution Architect, Intel Corporation
Zhdan Bybin, Application Engineer, Intel Corporation
CTX138217 - IntelliCache Reduction in IOPS: XenDesktop 5.6 FP1 on XenServer 6.1 - Citrix Knowledge Center http://ow.ly/o3Ma4
The purpose of this document is to provide testing results for MCS-delivered streamed virtual desktops leveraging IntelliCache.
Presentation delivered at LinuxCon China 2017
Real-Time is used for deadline-oriented applications and time-sensitive workloads. Real-Time KVM extends KVM (the Linux kernel-based virtual machine) to allow virtual machines (VMs) to run a truly real-time operating system. Users sometimes need to run low-latency applications (such as audio/video streaming, highly interactive systems, etc.) in clouds to meet their requirements. NFV is a new networking concept that uses virtualization and software instead of dedicated network appliances. For some telecommunications use cases, network latency must stay within a certain range of values. Real-Time KVM can help NFV meet these requirements.
In this presentation, Pei Zhang will talk about:
(1) Real-Time KVM introduction
(2) Real-Time cloud building
(3) Real-Time KVM in NFV: VMs with Open vSwitch, DPDK, and QEMU's vhost-user
(4) Performance testing results
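For readers who want to reproduce the kind of numbers shown in point (4), here is a minimal sketch (our assumption of the metric, not Pei Zhang's actual tooling) of summarizing wakeup-latency samples, e.g. collected with a cyclictest-style loop inside a real-time guest. For real-time workloads the max and high percentiles matter far more than the mean.

```python
def latency_stats(samples_us):
    """Summarize wakeup-latency samples (in microseconds).

    Returns min, nearest-rank p99, and max; for real-time KVM the
    worst case (max) is the figure of merit, not the average.
    """
    ordered = sorted(samples_us)
    n = len(ordered)

    def pct(p):
        # nearest-rank percentile, clamped to valid indices
        idx = max(0, min(n - 1, int(p / 100 * n + 0.5) - 1))
        return ordered[idx]

    return {"min": ordered[0], "p99": pct(99), "max": ordered[-1]}
```

Feeding in, say, one minute of 100 µs-period timer-wakeup deltas and checking that `max` stays under the deadline is the basic pass/fail test for a real-time guest configuration.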
Accelerating Virtual Machine Access with the Storage Performance Development ... - Michelle Holley
Abstract: Although new non-volatile media inherently offers very low latency, remote access using protocols such as NVMe-oF and presenting the data to VMs via virtualized interfaces such as virtio adds considerable software overhead. One way to reduce the overhead is to use the Storage Performance Development Kit (SPDK), an open-source software project that provides building blocks for scalable and efficient storage applications with breakthrough performance. Comparing the software paths for virtualizing block storage I/O illustrates the advantages of the SPDK-based approach. Empirical data shows that using SPDK can improve CPU efficiency by up to 10x and reduce latency by up to 50% over existing methods. Future enhancements for SPDK will make its advantages even greater.
Speaker Bio: Anu Rao is a product line manager for storage software in the Data Center Group. She helps customers ease into and adopt open-source storage software like the Storage Performance Development Kit (SPDK) and the Intelligent Storage Acceleration Library (ISA-L).
Install FD.io VPP on Intel® Architecture & Test with TRex* - Michelle Holley
This demo/lab will guide you through installing and configuring FD.io Vector Packet Processing (VPP) on an Intel® Architecture (IA) server. You will also learn to install TRex* on another IA server to send packets to VPP, and use some VPP commands to forward packets back to TRex*.
Speaker: Loc Nguyen. Loc is a Software Application Engineer in the Data Center Scale Engineering team. Loc joined Intel in 2005 and has worked on various projects. Before joining the network group, Loc worked in the high-performance computing area and supported the Intel® Xeon Phi™ product family. His interests include computer graphics, parallel computing, and computer networking.
Nagios Conference 2012 - Dan Wittenberg - Case Study: Scaling Nagios Core at ...
Dan Wittenberg's presentation on using Nagios at a Fortune 50 Company
The presentation was given during the Nagios World Conference North America held Sept 25-28th, 2012 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
POLYTEDA LLC, a provider of semiconductor design software and PV services, announced the general availability of PowerDRC/LVS version 2.2.
This release delivers fill layer generation in multi-CPU mode, new KLayout integration functionality, and other significant improvements for multi-CPU mode.
This document explains the step-by-step procedure to upgrade PowerVC from 1.3.0.2 to 1.3.2.0. I've added useful information to the document.
Intro to Open Source Telemetry (LinuxCon 2016) - Matthew Broberg
Abstract
As part of the team delivering Snap, an open telemetry framework, I've run through dozens of use cases where gathering disparate metrics from services can roll up into meaningful diagrams for operations engineers and developers alike. We will use Snap's plugin model to collect, process and publish these measurements into meaningful graphs using open source tools. By joining this session, you can follow along and install industry-standard open source projects, deploy them and then use Snap to collect, process and visualize these metrics.
Audience
Anyone with an operations background (or such a future ahead of them) who wants to see the breadth of available open source tooling around telemetry. This proposal is designed for the hands-on user who is comfortable running containers or virtual machines locally.
Experience Level
Intermediate
Benefits to the Ecosystem
By joining this session, you can follow along and install industry-standard open source projects, deploy them and then use Snap to collect, process and visualize these metrics. This empowers users within the Linux ecosystem to see their knowledge as powerful when visualized next to other layers of the datacenter.
Flex Cloud - Conceptual Design - ver 0.2 - David Pasek
The Energy
=========
The cost of energy is increasing. A significant part of the cost of electrical energy is the cost of distribution. That is why small home solar systems are growing in popularity: they are a way to generate and consume electricity locally and be independent of the distribution network. However, we have a problem. "Green energy" from solar, wind, and hydroelectric power stations is difficult to distribute via the electrical grid. Energy accumulation (batteries, pumped-storage power plants, etc.) is costly, and it is very difficult for the traditional electrical grid to automatically manage the distribution of so many energy sources.
Cloud Computing
===============
The demand for cloud (computing and storage) capacity is increasing year by year. Internet bandwidth increases and its cost decreases every year. 5G networks and SD-WANs are on the radar. Cloud computing is operated in data centers, and a significant part of data center cost is the cost of energy.
The potential synergy between the energy sector and cloud computing
===================================================================
The solution is to consume electricity in the proximity of green power generators. Excess electricity is accumulated in batteries, but battery capacity is limited. We should treat batteries like a cache or buffer to bridge the times when green sources generate no energy but local demand remains. However, when we have excess electricity and the battery (cache/buffer) is full, then instead of feeding the energy into the electrical grid, the excess electricity can be consumed by a computer system providing compute resources to cloud computing consumers over the internet. This is a form of Distributed Cloud Computing.
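The battery-as-cache dispatch described above can be sketched as a simple priority rule. This is a toy model with made-up function names and units, not part of the original design: surplus energy fills the battery (the cache) first, and only the remainder powers "spot" compute capacity.

```python
def dispatch_excess(generation_kw, demand_kw, battery_kw, battery_capacity_kw):
    """Decide where surplus green energy goes: battery (cache) first,
    then local compute ("spot" cloud capacity)."""
    surplus = generation_kw - demand_kw
    if surplus <= 0:
        # No excess: nothing to cache, nothing to burn on compute.
        return {"to_battery": 0.0, "to_compute": 0.0}
    headroom = battery_capacity_kw - battery_kw
    to_battery = min(surplus, headroom)   # fill the cache first
    to_compute = surplus - to_battery     # overflow powers spot compute
    return {"to_battery": to_battery, "to_compute": to_compute}
```

For example, with 10 kW generated, 4 kW of local demand, and only 2 kW of battery headroom, 2 kW charges the battery and the remaining 4 kW can power a spot compute resource pool instead of being exported to the grid.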
Cloud-Native Applications
====================
So, let's assume we will have Distributed Cloud Computing with so-called "Spot Compute Resource Pools". Spot Compute Resource Pools are computing resources that can appear or disappear within hours or minutes. This is not an optimal IT infrastructure for traditional software applications, which are not infrastructure aware. For such distributed cloud computing, software applications must be designed and developed with the ephemerality of infrastructure resources in mind. In other words, Cloud-Native Applications must be able to leverage ephemeral compute resource pools and know how to use "Spot Compute Resource Pools".
This publication is an up-to-date guide to the world of ICT applications in tourism, intended to give the reader a nearly complete overview of the possibilities of ICT applications in tourism, with a reasonably detailed and reasonably technical explanation of the technologies used, with a number of examples of ICT applications in the area described, and with an emphasis on trends and the use of ICT in tourism in the near future (although it is worth stressing in this context that "the future is all around us", which underlines the dynamism and progressiveness of ICT adoption in tourism). The word "up-to-date" is emphasized above to point out that the state described refers to July 2008; the authors chose a compromise between the partial "timelessness" of the publication and its clarity, concreteness, and inclusion of many examples, which, however, quickly become outdated. Another emphasized expression is "nearly complete": the otherwise very extensive area of e-business in e-tourism is mentioned only very briefly, as it is the topic of a publication being written in parallel, and is therefore not described in more detail. The text is supplemented with references to a large body of literature to support any more detailed study of the topic. The publication also includes a fairly extensive glossary, which serves as an aid for navigating the many terms in use and "still being born".
Architektura a implementace digitálních knihoven v prostředí sítě Internet (Architecture and Implementation of Digital Libraries in the Internet Environment) - David Pasek
The aim of this thesis is to characterize the current state of digital library development worldwide, focusing on their architecture and implementation in the Internet environment. The work is devoted mainly to the important components of digital libraries and their specification. The important components include the digital object, the repository, the catalog, and the user interface. The thesis also describes digital object signatures, which are important especially with regard to security, content immutability, and the authenticity of the digital object's author. Attention is also paid to metadata standards and specifications, which are very important for building a catalog and finding relevant digital objects. The contribution of the thesis is a proposal for an optimal digital library model and demonstrations of practical applications of the proposed model.
VMware ESXi - Intel and QLogic NIC throughput difference v0.6 - David Pasek
We are observing different network throughputs on Intel X710 NICs and QLogic FastLinQ QL41xxx NICs. ESXi supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb, and 100Gb network adapters. The use of multiple hardware queues per NIC interface (vmnic) and multiple software threads in the ESXi VMkernel is depicted and documented in this paper, and may or may not be the root cause of the observed problem. The key objective of this document is to clearly document and collect NIC information on the two specific network adapters and compare them to find the difference, or at least a root-cause hypothesis for further troubleshooting.
Brief introduction and overview of the online reservation system FlexBook.
We are a startup project still in stealth mode.
For further information, please send an email to info@flexbook.cz
This document explains the dual node VLT deployment strategies with its associated network reference architecture. Various VLT deployment topologies are also explained with emphasis on best practices and recommendations for some of the network scenarios. This document also covers the configuration and troubleshooting of VLT using relevant show commands and different outputs.
Metro Cluster High Availability or SRM Disaster Recovery? - David Pasek
This presentation explains the difference between multi-site high availability (aka metro cluster) and disaster recovery. The general concepts are similar for any product, but the presentation is tailored to VMware technologies.
A tale of scale & speed: How the US Navy is enabling software delivery from l...
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing toolchains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues into the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
David Pasek – dpasek@vmware.com
VMware – TAM Program
SPECTRE AND MELTDOWN
PERFORMANCE IMPACT TESTS
March 14, 2018, Document version: 0.3
Purpose of this test plan
The goal of this test plan is to measure the performance impact of SPECTRE and MELTDOWN remediations on Intel CPUs. We will run CPU-intensive workloads in Virtual Machine(s) on non-patched and patched ESXi hosts and observe the performance impact.
We will test the impact on network, storage, and memory performance, because these I/O-intensive workloads rely on CPU caching, which is affected by the vulnerability remediations.
Performance qualification is a very specific and difficult subject, and the impact varies across hardware and software configurations. However, the tests performed are described in detail in this document, so the reader can understand all test conditions and the observed results, and can repeat the tests on their own specific hardware and software configuration.
Specifications
Hypervisors (ESXi) Hardware and Software Specifications
ESX01
ESX01 - without any Spectre Patches
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
ESX02
ESX02 - with Spectre Patches for Hypervisor-Specific Remediation
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
ESX03
ESX03 - with Spectre Patches for Hypervisor-Specific and Hypervisor-Assisted
Guest Remediation
• Intel NUC D54250WYKH
• 1 x CPU i5-4250U @ 1.30 GHz
• 1 x 2 Cores / 4 logical CPU with Hyper-threading
• ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
VM Hardware and Software Specifications
MS-Windows
MS-VM01
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.36
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
• IP address: 192.168.5.37
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
• IP address: 192.168.5.46
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
MS-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
• IP address: 192.168.5.47
• Software
▪ CPU-Z
▪ IOmeter
▪ nuttcp
▪ Redis 3.0.503
Linux
LIN-VM01
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• OS – CentOS 7 – without Spectre/Meltdown updates
▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.31
• Software
▪ Redis
▪ nuttcp
▪ iftop
▪ bc
LIN-VM02
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – without Spectre/Meltdown updates
▪ Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.32
• Software
▪ Redis
▪ nuttcp
▪ iftop
▪ bc
LIN-VM11
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – with Spectre/Meltdown updates
▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.41
• Software
▪ Redis
▪ nuttcp
▪ iftop
▪ bc
LIN-VM12
• 4 vCPU
• 4 GB RAM
• VM Hardware 11
• NIC (VMXNET3) MTU 1500
• 1x SCSI Controller – LSI Logic SAS
▪ 40 GB Disk (OS) – Thick, eager-zeroed
• 1x SCSI Controller – VMware Paravirtual
▪ 5 GB Disk (DATA) – Thick, eager-zeroed
• OS – CentOS 7 – with Spectre/Meltdown updates
▪ Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
• IP address: 192.168.5.42
• Software
▪ Redis
▪ nuttcp
▪ iftop
▪ bc
Performance Testing tools
CPU-Z - https://www.cpuid.com/softwares/cpu-z.html
Download: https://www.cpuid.com/downloads/cpu-z/cpu-z_1.83-en.exe
CPU-Z is freeware that gathers information on some of the main devices of your system.
IOMETER - http://www.iometer.org/
Download: http://www.iometer.org/doc/downloads.html
IOmeter is an I/O subsystem measurement and characterization tool for single and clustered
systems.
NUTTCP
Install:
RedHat 7: yum install --enablerepo=Unsupported_EPEL nuttcp
CentOS 7: yum install epel-release nuttcp
MS-Windows: http://nuttcp.net/nuttcp/latest/binaries/nuttcp-8.1.4.win64.zip
NUTTCP is a client/server network performance measurement tool derived from TTCP (Test TCP).
Usage …
Server part is started by following command
nuttcp -S -N 100
Client part is started by following command
cat /dev/zero | nuttcp -t -s -N 100 czchoapint092
Other nuttcp examples:
Server and Client
nuttcp -r -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -s -N 20 -P 5000 czchoapint094
Larger buffers
nuttcp -r -l 8972 -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -l 8972 -s -N 20 -P 5000 czchoapint094
UDP traffic
nuttcp -r -u -l 8972 -w4m -S -P 5000 -N 20
cat /dev/zero | nuttcp -t -u -l 8972 -w4m -s -N 20 -P 5000 czchoapint094
REDIS - https://redis.io/
Install:
CentOS 7: yum install redis
MS-Windows: https://dingyuliang.me/redis-3-2-install-redis-windows/
Download: https://github.com/MicrosoftArchive/redis/releases/download/win-3.2.100/Redis-x64-3.2.100.zip
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
Spectre/Meltdown OS remediations
ESXi
Use VMware Update Manager and patches based on VMSA-2018-02 and VMSA-2018-04.
MS-Windows
To protect MS-Windows, apply the updates available here:
http://www.catalog.update.microsoft.com/Search.aspx?q=KB4056898
To enable the fix, change the following Registry settings:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f
Restart the server for the changes to take effect.
Linux / CentOS
Use “yum update” and apply the latest OS updates.
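After updating, the running kernel (uname -r) can be compared against the kernel version that ships the fixes (3.10.0-693.17.1.el7, per the VM specifications above). A minimal sketch using sort -V; the is_patched helper is illustrative and not part of any standard tooling:

```shell
# Kernel version that first includes the Spectre/Meltdown fixes for CentOS 7 (from the VM specs above)
required="3.10.0-693.17.1.el7.x86_64"

is_patched() {
  # The kernel is considered patched when the required version is not newer than it,
  # i.e. the required version sorts first (or the two are equal) under version sort.
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

is_patched "3.10.0-514.el7.x86_64" && echo "patched" || echo "not patched"        # pre-update kernel
is_patched "3.10.0-693.17.1.el7.x86_64" && echo "patched" || echo "not patched"   # post-update kernel
```

In practice the argument would be "$(uname -r)" on the guest being checked.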
Spectre/Meltdown remediation checkers
ESXi
ESXi command to check whether the Intel security microcode is updated:
if [ `vsish -e get /hardware/msr/pcpu/0/addr/0x00000048 > /dev/null 2>&1; echo $?` -eq 0 ]; then echo -e "\nIntel Security Microcode Updated\n"; else echo -e "\nIntel Security Microcode NOT Updated\n"; fi
MS-Windows
MS-Windows test tool for SPECTRE/MELTDOWN remediation
Installation
• Article: https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in
• PowerShell 5.0 is required
Install-Module SpeculationControl
Vulnerability Check (PowerShell commands)
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-SpeculationControlSettings
Linux / Centos
Linux test tool for SPECTRE/MELTDOWN remediation
Installation
• Blog: https://www.cyberciti.biz/faq/check-linux-server-for-spectre-meltdown-vulnerability/
• Tool:
cd /root
wget -O spectre-meltdown-checker.sh https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh
chmod 755 ./spectre-meltdown-checker.sh
Vulnerability Check (Shell command)
/root/spectre-meltdown-checker.sh
Spectre/Meltdown remediation status of VMs on ESXi hosts
MS-Windows
MS-VM01 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM01 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM01 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM02 on ESX01
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM02 on ESX02
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM02 on ESX03
VM Guest OS – MS Windows 2012 R2 – without Spectre/Meltdown updates
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM11 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM11 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM11 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
MS-VM12 on ESX01
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
MS-VM12 on ESX02
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
MS-VM12 on ESX03
VM Guest OS – MS Windows 2012 R2 – with Spectre/Meltdown updates (MS KB 4056898)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Linux / CentOS
LIN-VM01 on ESX01
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM01 on ESX02
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM01 on ESX03
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM02 on ESX01
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM02 on ESX02
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM02 on ESX03
VM Guest OS – CentOS 7 – without Spectre/Meltdown updates (Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM11 on ESX01
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM11 on ESX02
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM11 on ESX03
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
LIN-VM12 on ESX01
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Update 1 (Build 5969303) - 2017-07-27
LIN-VM12 on ESX02
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7388607) - 2017-12-19
LIN-VM12 on ESX03
VM Guest OS – CentOS 7 – with Spectre/Meltdown updates (Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux)
ESXi 6.5 Patch 02 (Build 7526125) - 2018-01-09
Performance tests
MS Windows OS
CPU performance (Win/CPU-Z) - single VM on top of ESXi host
Verification type: Design
Test type: Performance
Tested area: CPU
Test name: CPU performance (Win/CPU-Z) of VM on top of ESXi host
Test description: Verification of Spectre/Meltdown security patches impact on CPU performance
Tasks:
Step 1/ Generate CPU workload leveraging the CPU-Z benchmarking tool. Run CPU-Z on the MS-VM.
Step 2/ Note CPU performance (CPU-Z benchmark single thread and multi thread).
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower CPU performance on systems with security patches.
Test tools: CPU-Z
Test result: passed
Test notes:
Test results

OS without security patches (MS-VM01)
ESXi host without security patches (ESX01)
Single Thread: 233.9, 235.4, 237.1, 236.7, 236.7 – AVG = 236.3
Multi Thread: 624.3, 624.5, 624.2, 621.8, 624.9 – AVG = 624.3
ESXi host with Hypervisor-Specific Remediation security patches (ESX02)
Single Thread: 236.0, 233.8, 233.3, 233.4, 233.9 – AVG = 233.7
Multi Thread: 616.6, 623.4, 622.4, 624.6, 624.1 – AVG = 623.3
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches (ESX03)
Single Thread: 233.8, 253.6, 233.0, 234.6, 235.1 – AVG = 234.5
Multi Thread: 623.7, 623.3, 624.5, 625.0, 623.8 – AVG = 624

OS with security patches (MS-VM11)
ESXi host without security patches (ESX01)
Single Thread: 234.5, 235.3, 232.0, 225.8, 236.5 – AVG = 231.9
Multi Thread: 622.2, 619.1, 621.0, 612.8, 621.2 – AVG = 620.4
ESXi host with Hypervisor-Specific Remediation security patches (ESX02)
Single Thread: 232.1, 234.3, 233.4, 234.7, 234.1 – AVG = 233.9
Multi Thread: 622.2, 623.6, 621.0, 620.9, 622.3 – AVG = 621.8
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches (ESX03)
Single Thread: 208.1, 207.4, 202.9, 206.1, 209.7 – AVG = 207.2
Multi Thread: 604.7, 597.5, 602.3, 609.5, 610.6 – AVG = 605.5
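To quantify the impact, the relative drop between the unpatched baseline (MS-VM01 on ESX01) and the fully remediated combination (MS-VM11 on ESX03) can be computed from the AVG values above, for example with awk:

```shell
# Relative CPU-Z benchmark drop, fully patched stack vs. unpatched baseline
awk 'BEGIN {
  st_base = 236.3; st_patched = 207.2   # single-thread AVGs (MS-VM01/ESX01 vs MS-VM11/ESX03)
  mt_base = 624.3; mt_patched = 605.5   # multi-thread AVGs
  printf "single-thread drop: %.1f%%\n", (st_base - st_patched) / st_base * 100
  printf "multi-thread drop: %.1f%%\n", (mt_base - mt_patched) / mt_base * 100
}'
```

On this setup, that is roughly a 12% single-thread and 3% multi-thread reduction.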
Storage performance (Win/IOmeter) – single VM storage performance to local disk
Verification type: Design
Test type: Performance
Tested area: Storage
Test name: Storage performance (Win/IOmeter) – single VM storage performance to local disk
Test description: Verification of CPU performance impact to storage performance
Tasks:
Step 1/ Run IOmeter GUI on MS-VM01.
Step 2/ Run disk IO testing tools (VM01 with IOmeter GUI and dynamo) and generate load to disk on shared storage.
I/O workload patterns for tests:
• 512B, 100% Random, 50% Write
• 64kB, 100% Random, 50% Write
Multi-threading configuration:
• 4 Workers / 1 Outstanding IO
Disk size: 10 GB (20000000 sectors)
Step 3/ Note storage performance (I/O per second = IOPS), data throughput (MB/s), response time (ms) and CPU load.
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower storage performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
Network performance (Win/IOmeter) between two VMs within the same ESXi host
Verification type: Design
Test type: Performance
Tested area: Network
Test name: Network performance (Win/IOmeter) between two VMs within the same ESXi host
Test description: Verification of CPU performance impact to network performance
Tasks:
Step 1/ Run IOmeter GUI on MS-VM01.
Step 2/ Remove all storage workers.
Step 3/ Run IOmeter dynamo on MS-VM02 connected to IOmeter host <hostname of VM01> … dynamo.exe -i MS-VM01 -m MS-VM02
Step 4/ Create 8 network workers. Assign the specification I/O Size 512B, 100% Read to all network workers. Set test duration to 30 seconds.
Step 5/ Generate network workload between the two MS-VMs on the same ESXi host.
Step 6/ Note network performance (packets per second), throughput (MB/s), response time (ms) and CPU load (%).
Test combinations:
• ESXi host without security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX01)
• ESXi host without security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (MS-VM01 and MS-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (MS-VM11 and MS-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower network performance on systems with security patches.
Test tools: IOmeter
Test result: passed
Test notes:
In-Memory database performance (Win/Redis) - single VM on top of ESXi host
Verification type: Design
Test type: Performance
Tested area: Database
Test name: Database performance from VM to In-Memory DB (Redis)
Test description: Verification of CPU performance impact to in-memory database performance
Tasks:
Step 1/ Install and run Redis DB on MS-VM01.
Step 2/ Run redis-benchmark
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load (%).
Test combinations:
• ESXi host without security patches – OS without security patches (VM01 on ESX01)
• ESXi host without security patches – OS with security patches (VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower in-memory database performance on systems with security patches.
Test tools: Redis (redis-benchmark)
Test result: passed
Test notes:
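When recording results, the transactions-per-second figures can be pulled out of the redis-benchmark summary. A sketch, assuming summary lines of the form shown in the here-document below (the exact redis-benchmark output format varies by version, so the sample values are illustrative only):

```shell
# Hypothetical sample of redis-benchmark summary lines; real output may differ by version
cat <<'EOF' > /tmp/redis_bench_sample.txt
SET: 95238.10 requests per second
GET: 103092.78 requests per second
EOF

# Keep only the operation name and its requests-per-second figure
awk '/requests per second/ { printf "%s %s rps\n", $1, $2 }' /tmp/redis_bench_sample.txt
```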
Linux OS
Network performance (Linux/NUTTCP) between two VMs within the same ESXi host
Verification type: Design
Test type: Performance
Tested area: Network
Test name: Network performance (Linux/NUTTCP) between two VMs within the same ESXi host
Test description: Verification of CPU performance impact to network performance
Tasks:
Step 1/ Run
nuttcp -r -S -P 5501
nuttcp -r -S -P 5502
nuttcp -r -S -P 5503
nuttcp -r -S -P 5504
nuttcp -r -S -P 5505
nuttcp -r -S -P 5506
nuttcp -r -S -P 5507
nuttcp -r -S -P 5508
on LIN-VM01.
Step 2/ Run
iftop -F 192.168.4.32/32
on LIN-VM01 to monitor traffic.
Step 3/ Change the IP address below to VM01's address and run the following script (/tmp/run.sh) on LIN-VM02 to generate workload.
#!/bin/bash
PORT_START=5501
LOGDIR="/tmp"
IP="192.168.4.31"
for i in `seq 1 8`;
do
echo "Process $i"
port=$(expr $PORT_START + $i - 1)
echo " port $port"
logfile="$LOGDIR/job$i.log"
echo " logfile $logfile"
echo " target IP address $IP"
( /usr/bin/nuttcp -t -b -P $port -T 30 $IP > $logfile ) &
sleep 0.1
done
Step 4/ Note the network throughput (Mbps) of each process and calculate the sum.
SHOW RESULTS: cat /tmp/job*
SUM: cat /tmp/job* | cut -c29-38 | paste -s -d+ | bc
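The cut -c29-38 extraction depends on nuttcp's exact column alignment. A field-based alternative sums the number immediately preceding "Mbps" regardless of spacing; the sample log lines below are illustrative only (actual nuttcp output varies by version):

```shell
# Illustrative nuttcp result lines (format approximated)
cat <<'EOF' > /tmp/job_sample.log
 3598.1250 MB /  30.00 sec = 1006.1230 Mbps 5 %TX 12 %RX
 3401.0000 MB /  30.00 sec =  951.0000 Mbps 4 %TX 11 %RX
EOF

# Sum the field right before "Mbps" on every line
awk '{ for (i = 2; i <= NF; i++) if ($i == "Mbps") sum += $(i-1) } END { printf "%.4f\n", sum }' /tmp/job_sample.log
```

Against the real logs this would be run as: awk '…' /tmp/job*.log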
Test combinations
• ESXi host without security patches – OS without security patches (LIN-VM01 and LIN-VM02 on ESX01)
• ESXi host without security patches – OS with security patches (LIN-VM11 and LIN-VM12 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (LIN-VM01 and LIN-VM02 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (LIN-VM11 and LIN-VM12 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (LIN-VM11 and LIN-VM12 on ESX03)
Compare results and quantify impact.
Expected results: Lower network performance on systems with security patches.
Test tools: NUTTCP, iftop
Test result: passed
Test notes:
Test results (sum of the 8 parallel nuttcp streams, Mbps; five runs per combination):

ESXi host without security patches (ESX01)
  OS without security patches (LIN-VM01, LIN-VM02):
    10497.4184, 10625.7293, 10290.1794, 10048.5660, 9479.2741 – AVG: 10278.7213
  OS with security patches (LIN-VM11, LIN-VM12):
    9592.9527, 9761.0624, 10655.8847, 10283.9328, 9630.8711 – AVG: 9891.9554

ESXi host with Hypervisor-Specific Remediation security patches (ESX02)
  OS without security patches (LIN-VM01, LIN-VM02):
    10421.9657, 10157.4198, 10673.1855, 10052.0098, 10610.0493 – AVG: 10396.4783
  OS with security patches (LIN-VM11, LIN-VM12):
    10626.2253, 10390.4692, 9941.6684, 10011.0204, 10373.6655 – AVG: 10258.2286

ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches (ESX03)
  OS without security patches (LIN-VM01, LIN-VM02):
    9338.4298, 9395.8587, 10205.4507, 9680.9801, 8942.6938 – AVG: 9471.7562
  OS with security patches (LIN-VM11, LIN-VM12):
    7794.9779, 8703.3505, 8383.7530, 7298.3641, 7165.8537 – AVG: 7825.6983
In-Memory database performance (Linux/Redis) - single VM on top of ESXi host
Verification type: Design
Test type: Performance
Tested area: Database
Test name: Database performance from VM to In-Memory DB (Redis)
Test description: Verification of CPU performance impact on in-memory database performance
Tasks
Step 1/ Install and run Redis DB on LIN-VM01 (a Linux or FreeBSD OS is required)
Step 2/ Run redis-benchmark
redis-benchmark -t get,set -n 1000000 -c 8
Step 3/ Note DB performance (transactions per second) and CPU load (%)
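The TPS figures noted in Step 3 can also be captured from redis-benchmark's CSV mode rather than read off the interactive output. A sketch, assuming a Redis server is already listening on the default localhost:6379; note that the CSV column layout varies between Redis versions (older releases print just the test name and requests per second):

```shell
# Run the same benchmark in CSV mode and print the requests/sec for
# SET and GET; modifying $0 with gsub makes awk re-split the fields.
redis-benchmark -t get,set -n 1000000 -c 8 --csv \
  | awk -F, '/"SET"|"GET"/ { gsub(/"/, ""); print $1 ": " $2 " requests per second" }'
```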
Test combinations
• ESXi host without security patches – OS without security patches (VM01 on ESX01)
• ESXi host without security patches – OS with security patches (VM11 on ESX01)
• ESXi host with Hypervisor-Specific Remediation security patches – OS without security patches (VM01 on ESX02)
• ESXi host with Hypervisor-Specific Remediation security patches – OS with security patches (VM11 on ESX02)
• ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches – OS with security patches (VM11 on ESX03)
Compare results and quantify impact.
Expected results: Lower memory performance on systems with security patches.
Test tools: RedisDB
Test result: passed
Test notes:
Findings

CPU Performance on MS Windows

ESXi host without security patches
  MS Windows 2012 R2 without security patches – Single Thread: 236.3, Multi Thread: 624.3
  MS Windows 2012 R2 with security patches – Single Thread: 231.9, Multi Thread: 620.4
ESXi host with Hypervisor-Specific Remediation security patches
  MS Windows 2012 R2 without security patches – Single Thread: 233.7, Multi Thread: 623.3
  MS Windows 2012 R2 with security patches – Single Thread: 233.9, Multi Thread: 621.8
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  MS Windows 2012 R2 without security patches – Single Thread: 234.5, Multi Thread: 624
  MS Windows 2012 R2 with security patches – Single Thread: 207.2, Multi Thread: 605.5
Secured system performance impact:
CPU Single Thread ~ -12%
<< this is probably because the ESXi hardware has just 2 CPU cores (Intel NUC) and the ESXi VMkernel is probably using more CPU resources on CPU core 0. Such a performance impact was not observed on enterprise server hardware, where the impact on a single CPU thread was negligible.
CPU Multi Thread ~ -3%
Storage performance (Win/IOmeter) – I/O size 512B

ESXi host without security patches
  MS Windows 2012 R2 without security patches – 20.25 MB/s, 39550 IOPS
  MS Windows 2012 R2 with security patches – 19.47 MB/s, 38027 IOPS
ESXi host with Hypervisor-Specific Remediation security patches
  MS Windows 2012 R2 without security patches – 19.91 MB/s, 38887 IOPS
  MS Windows 2012 R2 with security patches – 17.77 MB/s, 34707 IOPS
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  MS Windows 2012 R2 without security patches – 20.13 MB/s, 39316 IOPS
  MS Windows 2012 R2 with security patches – 15.63 MB/s, 30527 IOPS
Secured system storage performance impact is ~ -23%
Storage performance (Win/IOmeter) – I/O size 64kB

ESXi host without security patches
  MS Windows 2012 R2 without security patches – 499.36 MB/s, 7619 IOPS
  MS Windows 2012 R2 with security patches – 497.82 MB/s, 7596 IOPS
ESXi host with Hypervisor-Specific Remediation security patches
  MS Windows 2012 R2 without security patches – 498.8 MB/s, 7611 IOPS
  MS Windows 2012 R2 with security patches – 499.15 MB/s, 7616 IOPS
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  MS Windows 2012 R2 without security patches – 501.76 MB/s, 7656 IOPS
  MS Windows 2012 R2 with security patches – 495.68 MB/s, 7563 IOPS
Secured system performance impact is ~ 1%, which is negligible. In other words, no negative performance impact was observed for the larger I/O size.
Network performance (Win/IOmeter) – I/O size 512B

ESXi host without security patches
  MS Windows 2012 R2 without security patches – 463.59 MB/s
  MS Windows 2012 R2 with security patches – 450.01 MB/s
ESXi host with Hypervisor-Specific Remediation security patches
  MS Windows 2012 R2 without security patches – 445.37 MB/s
  MS Windows 2012 R2 with security patches – 381.92 MB/s
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  MS Windows 2012 R2 without security patches – 425.22 MB/s
  MS Windows 2012 R2 with security patches – 212.83 MB/s
Secured system network performance impact is ~ -54%
<< An even bigger impact (~ 60%) was observed on enterprise server hardware
In-Memory database performance (Win/Redis)

ESXi host without security patches
  MS Windows 2012 R2 without security patches – TPS set, get: 140067, 149696
  MS Windows 2012 R2 with security patches – TPS set, get: 142081, 145618
ESXi host with Hypervisor-Specific Remediation security patches
  MS Windows 2012 R2 without security patches – TPS set, get: 141180, 148153
  MS Windows 2012 R2 with security patches – TPS set, get: 139777, 145926
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  MS Windows 2012 R2 without security patches – TPS set, get: 143787, 147371
  MS Windows 2012 R2 with security patches – TPS set, get: 82531, 84119
Secured system memory performance impact is ~ -42%
<< A similar impact (~ 40%) was observed for the set transaction, but an even bigger impact (~ 50%) was observed for the get transaction on enterprise server hardware
Network performance (Linux/NUTTCP)

ESXi host without security patches
  CentOS 7 without security patches – 10278.72 Mbps
  CentOS 7 with security patches – 9891.96 Mbps
ESXi host with Hypervisor-Specific Remediation security patches
  CentOS 7 without security patches – 10396.48 Mbps
  CentOS 7 with security patches – 10258.23 Mbps
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  CentOS 7 without security patches – 9471.76 Mbps
  CentOS 7 with security patches – 7825.7 Mbps
Secured system network performance impact is ~ -24%
<< Less impact (~ 9%) was observed on enterprise server hardware
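If the headline figure is read as the fully patched stack (ESX03 host, patched CentOS) against the fully unpatched baseline (ESX01 host, unpatched CentOS) – an assumption, but the arithmetic reproduces the stated ~ -24%:

```shell
# Relative impact = (patched - baseline) / baseline, values from the
# NUTTCP table above (7825.70 Mbps vs. 10278.72 Mbps).
awk 'BEGIN { printf "%.1f%%\n", (7825.70 - 10278.72) / 10278.72 * 100 }'
```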
In-Memory database performance (Linux/Redis)

ESXi host without security patches
  CentOS 7 without security patches – TPS set, get: 156633, 150487
  CentOS 7 with security patches – TPS set, get: 106133, 104803
ESXi host with Hypervisor-Specific Remediation security patches
  CentOS 7 without security patches – TPS set, get: 153840, 149583
  CentOS 7 with security patches – TPS set, get: 105516, 106492
ESXi host with Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches
  CentOS 7 without security patches – TPS set, get: 153623, 154284
  CentOS 7 with security patches – TPS set, get: 43956, 43843
Secured system memory performance impact is ~ -70%
<< A similar impact was observed on enterprise server hardware
Conclusion
Qualifying performance is a very specific and difficult subject. The performance impact varies across different hardware and software configurations. However, the performed tests are described in detail in this document, so the reader can understand all conditions of the tests and the observed results. Readers can also repeat the tests on their own specific hardware and software configurations.
The tests in this document focus on CPU, Memory, Storage and Network. It is worth mentioning that these tests are synthetic, created to measure the impact on a specific infrastructure component. Real workloads are usually a mix of CPU, Memory, Storage and Network, therefore the real impact is a combination of the extreme impacts seen in these synthetic tests.
The performance impact of VMware ESXi patches
We did not observe a performance penalty after application of the ESXi patches (Hypervisor-Specific and Hypervisor-Assisted Guest Remediation security patches). The performance penalty on CPU, Memory and Storage was observed only after application of security patches to the Guest Operating Systems and the CPU microcode. The only exception is the Network performance tests, where we observed up to an 8% performance penalty after application of the ESXi patches even though the Guest OS was still unpatched.
The performance impact of Guest OS and CPU Microcode patches
After application of all security remediations for Windows 2012 R2 and ESXi 6.5 we observed the following performance impacts:
• CPU
o ~ 12% negative performance impact on single-thread CPU performance
o ~ 3% (negligible) negative performance impact on multi-thread CPU performance
• Memory
o ~ 42% negative performance impact on memory performance
• Storage
o ~ 23% negative performance impact on storage performance with small I/O size (512B)
o No performance impact on storage performance with 64kB I/O size
• Network
o ~ 54% negative performance impact on network performance with small I/O size (512B)
After application of all security remediations for CentOS 7 and ESXi 6.5 we observed the following performance impacts:
• Memory
o ~ 70% negative performance impact on memory performance
• Network
o ~ 24% negative performance impact on network performance