The need for immediate responsiveness of VMs in virtualized environments has been on the rise. Several services at SKT also require soft real-time support for virtual machines that substitute for physical machines, in order to achieve high utilization and adaptability. However, consolidating multiple OSes, together with irregular external events, can cause the hypervisor to infringe on a VM's promptness. As a solution to this problem, we are improving Xen's credit scheduler by introducing an RT_PRIORITY class that guarantees a VM can run at any given point in time as long as it has credits left to burn. This increases quality of service and makes a VM's behavior predictable in a consolidated environment. In addition, we extend our proposal to multi-core environments, and even to a large number of physical machines, by using live migration.
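The RT_PRIORITY idea can be sketched as a toy model (the names VCpu and pick_next below are ours for illustration, not Xen's actual credit-scheduler code): an RT-priority vCPU always runs ahead of normal vCPUs while it still has credits to burn, after which ordinary credit ordering resumes.

```python
# Toy model of a credit scheduler with an RT priority class.
# RT vCPUs run first whenever they still have credits to burn;
# otherwise normal credit ordering (most credits first) applies.
RT_PRIORITY, NORMAL_PRIORITY = 0, 1

class VCpu:
    def __init__(self, name, priority, credits):
        self.name, self.priority, self.credits = name, priority, credits

def pick_next(runqueue):
    """Pick the next vCPU: RT first (while credits remain), then by credits."""
    runnable = [v for v in runqueue if v.credits > 0]
    if not runnable:
        return None
    # Key (priority, -credits): RT_PRIORITY (0) sorts before NORMAL (1).
    return min(runnable, key=lambda v: (v.priority, -v.credits))

rq = [VCpu("dom1", NORMAL_PRIORITY, 300),
      VCpu("rt-dom", RT_PRIORITY, 100),
      VCpu("dom2", NORMAL_PRIORITY, 200)]

assert pick_next(rq).name == "rt-dom"  # RT vCPU runs despite fewer credits
rq[1].credits = 0                      # RT domain exhausted its credits
assert pick_next(rq).name == "dom1"    # falls back to highest-credit normal vCPU
```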
Application Live Migration in LAN/WAN Environment - Mahendra Kutare
Evaluation of VM live migration policies on VMware, Xen, IBM System P, and Hyper-V. Examination of the critical stages of a VM live migration policy as a state machine, and of steps to optimize and reduce service disruption time.
This is my POC report to our customer, AWB, in which I present how to fully automate their testing.
We implement link virtualization based on Xen. Link virtualization is a basic building block of network virtualization that allows the co-existence of different Internet protocols. To minimize virtualization overhead, we use SR-IOV with the Intel 82576 controller.
Traditionally, Linux has run on Xen either as a pure PV guest or as a virtualization-unaware guest in an HVM domain. Recently, under the name "PV on HVM", a series of changes has been made to make Linux aware that it is running on Xen and to enable as many PV interfaces as possible even when running in an HVM container. After enabling the basic PV network and disk drivers, some other more interesting optimizations were implemented: in particular, remapping legacy interrupts and MSIs onto event channels. This talk will explain the idea behind the feature, the reason why avoiding interactions with the local APIC is a good idea, and some implementation details.
Service Assurance for Virtual Network Functions in Cloud-Native Environments - Nikos Anastopoulos
Network functions virtualization (NFV) is transforming the telecommunications industry by changing the way networks are built and operated. Standalone network functions like soft switching, edge routing, and firewalling, as well as composite network services like evolved packet core and IP multimedia subsystem are increasingly being migrated to software-based implementations.
Performance is a key factor for NFV adoption: first, virtual network functions (VNFs) must offer performance that is competitive with legacy, fixed-function alternatives. In addition, VNFs must feature predictable performance so that the communications service providers can confidently plan on certain service level objectives (SLOs). Both of these requirements translate into a problem of optimal scheduling of hardware resources for a virtual network service in a way that guarantees sufficient amount of them for every VNF that makes it up, and at the same time protection from noisy neighbors. This problem becomes hard as NFV matures from monolithic network services with few VNFs to microservice-type services with many VNFs, and even harder as multiple independent network services need to colocate without performance losses.
In this talk we will present Intracom Telecom’s NFV Service Assurance Platform to address these challenges across a broad spectrum of virtualization options, including KVM virtual machines, native Linux applications, Docker containers and Kubernetes pods. We will describe the key technologies used, like programmable allocation of processor caches and memory bandwidth, and the methodologies we employ to discover ideal VNF resource allocations, like exhaustive stress testing and VNF profiling.
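As a minimal sketch of the noisy-neighbor protection described above, assuming a CAT-style exclusive partitioning of processor cache ways (the function name and interface below are hypothetical, not the platform's actual API):

```python
# Toy model of exclusive cache-way allocation (in the spirit of Intel CAT):
# each VNF gets a dedicated, non-overlapping bitmask of cache ways, so a
# noisy neighbor cannot evict another VNF's working set.
def allocate_ways(total_ways, demands):
    """demands: {vnf_name: ways_needed}. Returns {vnf: bitmask} or raises."""
    if sum(demands.values()) > total_ways:
        raise ValueError("cache ways oversubscribed")
    masks, next_way = {}, 0
    for vnf, need in demands.items():
        masks[vnf] = ((1 << need) - 1) << next_way  # contiguous run of ways
        next_way += need
    return masks

masks = allocate_ways(11, {"vEPC": 4, "vFW": 3, "vRouter": 2})
# No two VNFs share a cache way:
assert masks["vEPC"] & masks["vFW"] == 0
assert masks["vFW"] & masks["vRouter"] == 0
```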
Linux Foundation Collaboration Summit 13: 10 Years of Xen and Beyond - The Linux Foundation
In 2013, the Xen hypervisor will be 10 years old: when Xen was designed, we anticipated a world that is now known as cloud computing. Today, Xen powers the largest clouds in production and is the basis for several commercial virtualization products. In this talk we will give an overview of Xen and related projects, cover hot developments in the Xen community, and outline what comes next.
The talk is intended for users and developers that are familiar with virtualization: no deep knowledge is required. We will start with an architectural overview and cover topics such as: Xen and Linux, how to secure your cloud using disaggregation, SELinux and XSM/FLASK, the evolution of Paravirtualization, Xen on ARM and common challenges for open source hypervisors. We will explore the potential of Open Mirage for testing hypervisors. The talk will conclude with an outlook to the future of Xen.
Xen has been very successful on servers, and yet there are substantial areas where Xen can evolve further. In this talk Jun will discuss a compelling area where the Xen technologies can be applied to -- Mobile virtualization. Using Android as an example, the talk will explore two types of usage models, 1) Android as a guest, 2) Android as the host, showing the benefits of using the Xen technologies.
Introduction to cloud computing data center and network issues, presented to the Internet Research Lab at NTU, Taiwan. Covers a definition of cloud computing and a comparison of the traditional IT warehouse with the current cloud data center (PPT slides available for download). Takes an open-source data center management OS, OpenStack, as an example, and discusses the underlying network issues inside a cloud DC.
Power management has become increasingly important in large-scale datacenters, to address costs and limitations in cooling and power delivery, and it is even more critical on mobile clients, where battery life is considered one of the critical characteristics of the platform of choice. Good power management helps achieve high energy efficiency. Virtualization imposes additional power-management challenges, because it involves multiple software layers: VMM, OS, and applications. For example, a well-behaved OS software stack may still result in bad power consumption if the hypervisor does not cooperate, e.g. by leaving periodic timers unaligned.
In this session, we will introduce what we did to achieve better power efficiency in both server and client virtualization environments.
On the server side, we will introduce additional optimization technologies (e.g., eliminating unnecessary activities and aligning periodic timers to create long idle periods) that improve package C6 residency to within 5% of native. On the client side, we will share our client power optimization technologies (e.g., graphics, ATA, and wireless), which successfully reduce XenClient's idle power overhead to within 5% of native.
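The timer-alignment optimization mentioned above can be illustrated with a small sketch (the function is ours for illustration, not Xen's code): rounding each periodic timer's expiry up to a shared boundary batches wakeups together, creating longer idle windows in which the package can stay in a deep C-state.

```python
# Sketch of periodic-timer alignment: rounding each timer's next expiry up
# to a shared boundary coalesces wakeups, lengthening idle periods.
def align_expiry(expiry_ns, boundary_ns):
    """Round an absolute expiry time up to the next alignment boundary."""
    return -(-expiry_ns // boundary_ns) * boundary_ns  # ceiling division

timers = [1_200_000, 3_900_000, 4_100_000]           # unaligned expiries (ns)
aligned = [align_expiry(t, 4_000_000) for t in timers]
assert aligned == [4_000_000, 4_000_000, 8_000_000]  # 3 wakeups become 2
```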
XPDDS18: Real Time in XEN on ARM - Andrii Anisov, EPAM Systems Inc.
Currently, several initiatives promote the Xen hypervisor in the automotive area as the base of complex virtualized systems. To support those initiatives and plunge into the automotive world, Xen should meet at least two requirements: it should be appropriately certified, and it should be able to host a security domain. Leaving aside the certification topic, here we focus on Xen's security-domain hosting capability, particularly on keeping real-time guarantees for that specific domain.
This talk presents an investigation into the Xen hypervisor's applicability to building a multi-OS system in which real-time guarantees are kept for one of the hosted OSes.
During this presentation, the following topics will be outlined:
- experimental setup
- experimental use-cases and their motivation
- received results and discovered issues
- solutions and mitigation measures for discovered issues
COLO: COarse-grain LOck-stepping Virtual Machines for Non-stop Service
Virtual machine (VM) replication (replicating the state of a primary VM running on a primary node to a secondary VM running on a secondary node) is a well known technique for providing application-agnostic, non-stop service. Unfortunately, existing VM replication approaches suffer from excessive replication overhead and, for client-server systems, there is really no need for the secondary VM to match its machine state with the primary VM at all times.
In this paper, we propose COLO (COarse-grain LOck-stepping virtual machines for non-stop service), a generic and highly efficient non-stop service solution based on on-demand VM replication. COLO monitors the output responses of the primary and secondary VMs: it considers the secondary VM a valid replica of the primary as long as the network responses generated by the secondary match those of the primary. The primary VM's state is propagated to the secondary VM if and only if the outputs from the two servers no longer match.
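The comparison loop at the heart of COLO can be sketched roughly as follows (a toy model, not the actual implementation): outputs are released as long as they match, and a divergence forces a checkpoint of primary state to the secondary.

```python
# Toy model of COLO's output comparison: the secondary stays a valid replica
# as long as its network responses match the primary's; a mismatch forces a
# checkpoint (propagating primary VM state to the secondary).
def colo_step(primary_resp, secondary_resp, checkpoints):
    """Return (response to release, updated checkpoint count)."""
    if primary_resp == secondary_resp:
        return primary_resp, checkpoints      # matching: release, no sync needed
    return primary_resp, checkpoints + 1      # diverged: checkpoint, then release

cp = 0
_, cp = colo_step(b"HTTP/1.1 200 OK", b"HTTP/1.1 200 OK", cp)
assert cp == 0                                # matching responses: no replication
_, cp = colo_step(b"seq=42", b"seq=41", cp)
assert cp == 1                                # divergence triggers a checkpoint
```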
VMworld 2013: Silent Killer: How Latency Destroys Performance... And What to D... - VMworld
VMworld 2013
Bhavesh Davda, VMware
Josh Simons, VMware
Short presentation I gave at the UKCMG one-day mini-conference on 15 October in London.
Covers 2 main aspects of Parallel Sysplex performance, both in the CPU area:
1) Comparing Type 70 view of CPU to Type 74-4.
2) Type 74-4 Structure-Level CPU and its role in Capacity Planning and Performance.
VM live migration from one physical server to another is a key advantage of virtualization. It is used widely in scenarios such as load balancing and power-consumption optimization inside a cluster, host maintenance, etc. Being able to do VM live migration as quickly as possible, with no service interruption, is regarded as a key competitive advantage for a virtualization platform.
Xen has supported live migration for many years. However, our recent study shows that Xen still has lots of room for improvement in terms of live migration elapsed time, service downtime, and the number of concurrent migration instances. Several experimental enhancements have been added, and the initial results look pretty good. For instance, merely using memory comparison before migration can shorten the elapsed time by more than 2X in some cases, per our evaluation. A policy to balance CPU utilization against compression ratio is also considered.
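The memory-comparison optimization mentioned above can be sketched as follows (a toy model with hypothetical names; real migration code tracks dirty bitmaps, but the idea of skipping pages the destination already holds is the same):

```python
# Sketch of the "memory comparison" optimization: before transmitting a page,
# compare its hash with what the destination already holds and skip duplicates
# (e.g. zero pages or pages unchanged since a previous pre-copy round).
import hashlib

def pages_to_send(src_pages, dst_hashes):
    """Return indices of pages that actually need to go over the wire."""
    send = []
    for i, page in enumerate(src_pages):
        h = hashlib.sha256(page).digest()
        if dst_hashes.get(i) != h:
            send.append(i)
            dst_hashes[i] = h        # destination now holds this content
    return send

zero = bytes(4096)
src = [zero, b"A" * 4096, zero, b"B" * 4096]
dst = {0: hashlib.sha256(zero).digest(), 2: hashlib.sha256(zero).digest()}
assert pages_to_send(src, dst) == [1, 3]   # both zero pages are skipped
```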
Static partitioning is used to split an embedded system into multiple domains, each of them having access only to a portion of the hardware on the SoC. It is key to enable mixed-criticality scenarios, where a critical application, often based on a small RTOS, runs alongside a larger non-critical app, typically based on Linux. The two domains cannot interfere with each other.
This talk will explain how to use Xen for static partitioning. It will introduce dom0-less, a new Xen feature written for the purpose. Dom0-less allows multiple VMs to start at boot time directly from the Xen hypervisor, decreasing boot times drastically. It makes it very easy to partition the system without virtualization overhead. Dom0 becomes unnecessary.
This presentation will go into details on how to setup a Xen dom0-less system. It will show configuration examples and explain device assignment. The talk will discuss its implications for latency-sensitive and safety-critical environments.
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
TrenchBoot is a cross-community OSS integration project for hardware-rooted, late launch integrity of open and proprietary systems. It provides a general purpose, open-source DRTM kernel for measured system launch and attestation of device integrity to trust-centric access infrastructure. TrenchBoot closes the UEFI Measurement Gap and reduces the need to trust system firmware. This talk will introduce TrenchBoot architecture and a recent collaboration with Oracle to launch the Linux kernel directly with Intel TXT or AMD SVM Secure Launch. It will propose mechanisms for integrating the Xen hypervisor into a TrenchBoot system launch. DRTM-enabled capabilities for client, server and embedded platforms will be presented for consideration by the Xen community.
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
Artem will briefly cover what has been done since the first talk on Xen in the automotive domain back in 2013, what is going on now, and what is still missing for broad adoption of Xen in vehicles. The following topics will be covered:
Embedded/automotive features of Xen
Collaboration with AGL and GENIVI organizations for standardization
Efforts on Functional Safety compliance
Artem will also go over typical automotive usage scenarios for Xen, which may not be the same as generic computing uses of a hypervisor.
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
In this keynote talk, we will give an overview of the state of the Xen Project, trends that impact the project, see whether challenges that surfaced last year have been addressed and how we did it, and highlight new challenges and solutions for the coming year.
In recent years, unikernels have shown immense performance potential (e.g., boot times of only a few ms and image sizes of only hundreds of KBs). The fundamental drawback of unikernels is that they require applications to be manually ported to the underlying minimalistic OS, needing both expert work and often a considerable amount of time.
The Unikraft project provides a unikernel code base and build system that significantly simplifies the building of unikernels. In addition to support for a number of CPU architectures, languages, and frameworks, Unikraft provides debugging and tracing features that are generally sorely missing from unikernel projects. In this talk we will cover these features, show a set of preliminary performance numbers, and provide a roadmap for the project's future.
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
The idea of making Xen secret-free has been floating since Spectre and Meltdown came into light. In this talk we will discuss what is being done and what needs to be done next.
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
This talk will introduce Dom0-less: a new way of using Xen to build mixed-criticality solutions. Dom0-less is a Xen feature that adds a novel approach to static partitioning based on virtualization. It allows multiple domains to start at boot time directly from the Xen hypervisor, decreasing boot times dramatically. Xen userspace tools, such as xl and libvirt, become optional.
Dom0-less extends the existing device tree based Xen boot protocol to cover information required by additional domains. Binaries, such as kernels and ramdisks, are loaded by the bootloader (u-boot) and advertised to Xen via new device tree bindings.
The audience will learn how to use Dom0-less to partition the system. Uboot and device tree configuration details will be explained to enable the audience to get the most out of this feature. The talk will include a status update and details on future plans.
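As a rough illustration of the extended boot protocol, each additional domain is described by a node under /chosen in the host device tree. The fragment below is a sketch with illustrative addresses and sizes; the exact property set and binding names should be checked against the device-tree booting documentation in the Xen source tree:

```dts
/chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        cpus = <1>;            /* number of vCPUs */
        memory = <0 0x20000>;  /* in KB: 128 MB */

        module@42000000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x42000000 0x600000>; /* where u-boot loaded the kernel */
            bootargs = "console=ttyAMA0";
        };
    };
};
```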
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
As the number of contributions grows, reviewer bandwidth becomes a bottleneck, and maintainers are always asking for more help. However, ultimately maintainers must at least Ack every patch that goes in; so if you're not a maintainer, how can you contribute? Why should anyone care about your opinion?
This talk will try to lay out some advice and guidelines for non-maintainers, for how they can do code review in a way which will effectively reduce the load on maintainers when they do come to review a patch.
This talk is a follow-up to our Summit 2017 presentation in which we covered our plans for Intel VMFUNC and #VE, as well as related use-cases. This year, we will provide a report on what we have accomplished in Xen 4.12, and what remains to be addressed. We will also give a brief status update of VMI on AMD hardware. The session will end with some real-world numbers of the Hypervisor Introspection solution running on Citrix Hypervisor 8.0 with #VE enabled.
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng...
Safety certification is one of the essential requirements for software to be used in highly regulated industries. Besides technical and compliance issues (such as ISO 26262 vs. IEC 61508), transitioning an existing project to become more easily safety-certifiable requires significant changes to development practices within an open source project.
In this session, we will lay out some challenges of making safety certification achievable in open source and in the Xen Project. We will outline the process the Xen Project has followed thus far and highlight lessons learned along the way. The talk will primarily focus on the necessary process and tooling changes, and on the community challenges that can prevent progress. We will offer an in-depth review of how the Xen Project is approaching this challenging goal and try to derive lessons for other projects and contributors.
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
Safety certification is one of the essential requirements for software to be used in highly regulated industries. The Xen Project, a secure and stable hypervisor that is used in many different markets, has been exploring the feasibility of building safety certified products on top of Xen for a year, looking at key aspects of its code base and development practices.
In this session, we will lay out the motivation and challenges of making safety certification achievable in open source and in the Xen Project. We will outline the process the project has followed thus far and highlight lessons learned along the way. The talk will cover technical enablers, necessary process and tooling changes, and community challenges, offering an in-depth review of how the Xen Project is approaching this exciting and challenging goal.
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
2018 saw fundamental shifts in security boundaries which were previously taken for granted. A lot of work has been done in the past 2 years, and largely in secret under embargo, but there is plenty more work to be done to strengthen the existing mitigations and to try to recover some performance without reopening security holes.
This talk will look at speculative execution sidechannels, the work which has already been done to mitigate the security holes, and future work which hopes to bring some improvements.
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
The Arm architecture provides a set of guidelines that any software should abide by when accessing memory with the MMU off and when updating page-tables. Failing to do so may result in TLB conflicts or broken coherency.
In a previous talk ("Keeping coherency on Arm"), we focused on updating safely the stage-2 (aka P2M) page-tables. This talk will focus on the boot code and Xen memory management.
During this session, we will introduce some of the guidelines and when they apply. We will also discuss how the Xen boot sequence needs to be reworked to avoid breaking them.
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
For many years the QEMU codebase has contained PV backends for Xen guests, giving them paravirtual access to storage, network, keyboard, mouse, etc. However, these backends have not been configurable as QEMU devices, as their implementation did not fully adhere to the QEMU Object Model (QOM).
In particular, because the PV storage backend did not use proper QOM devices, or qdevs, the QEMU block layer had to maintain legacy code that was cluttering up the source. This caused push-back from the maintainers, who did not want to accept any patches relating to that Xen backend until it was 'qdevified'.
In this talk, I'll explain the modifications I made to QEMU to achieve 'qdevification' of the PV storage backend, how compatibility with the libxl toolstack was maintained, and what the next steps in both QEMU and libxl development should be.
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
PCI is a local computer bus for attaching hardware devices to a computer, and it is the main peripheral bus on modern x86 systems. As such, having a proper way to emulate it is crucial for Xen to be able to expose both fully emulated devices and passthrough devices to guests.
This talk will focus on the current status of PCI emulation in Xen: how and where it is used, what its main limitations are, and future plans to improve it and make it more robust and modular.
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
Volodymyr will speak about TEE mediators, a new feature in Xen that allows multiple virtual machines to interact with the Trusted Execution Environment available on the platform. He developed a mediator for one of the TEEs, namely OP-TEE.
He will give background information on why TEE is needed at all and share some implementation details.
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
Xen is a very powerful hypervisor with a talented and diverse developer community. Despite the fact that it's almost everywhere (from the cloud to the embedded world), it can be difficult to set up and manage as a system administrator. General-purpose distros have Xen packages, but that's just the start of your Xen journey: you need some tooling and knowledge to have a working and scalable platform.
XCP-ng was built to overcome those issues by bringing Xen to the masses as a fully turnkey distro with Xen at its core. It's the logical sequel to the XCP project, with a community focus from the start. We'll see how it happened, what we did, and what's next. Finally, we'll see the impact of XCP-ng on the Xen Project.
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
Doug has long advocated for the Xen Project to adopt more CI/CD (Continuous Integration / Continuous Delivery) processes, from Travis CI to, now, GitLab CI. This talk proposes ideas for building upon the existing process and transforming development so that each Xen Project release delivers higher quality to users.
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
High level toolstacks for server and cloud virtualization are very mature with large communities using and supporting them. Client virtualization is a much more niche community with unique requirements when compared to those found in the server space. In this talk, we’ll introduce a client virtualization toolstack for Xen (redctl) that we are using in Redfield, a new open-source client virtualization distribution that builds upon the work done by the greater virtualization and Linux communities. We will present a case for maturing libxl’s Go bindings and discuss what advantages Go has to offer for high level toolstacks, including in the server space.
Today, Xen schedules guest virtual CPUs on all available physical CPUs independently of each other. Recent security issues in modern processors (e.g., L1TF) require turning off hyperthreading for best security, in order to avoid leaking information from one hyperthread to the other. One way to avoid having to turn off hyperthreading is to only ever schedule virtual CPUs of the same guest on one physical core at the same time. This is called core scheduling.
This presentation shows results from the effort to implement core scheduling in the Xen hypervisor. The basic modifications in Xen are presented and performance numbers with core scheduling active are shown.
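The core-scheduling invariant can be sketched as a toy model (the names and interface below are ours, not the hypervisor's): vCPUs are grouped per domain into sibling-sized slates, and a slate's unused sibling idles rather than running another guest.

```python
# Toy model of core scheduling: both hyperthreads of a core may only run
# vCPUs belonging to the same guest, so no guest ever shares a core's
# siblings with another guest at the same time.
def schedule_core(siblings, runnable):
    """siblings: hyperthreads per core; runnable: list of (vcpu, domain).
    Returns per-core slates of same-domain vCPUs, padded with 'idle'."""
    by_dom = {}
    for vcpu, dom in runnable:
        by_dom.setdefault(dom, []).append(vcpu)
    cores = []
    for dom, vcpus in by_dom.items():
        for i in range(0, len(vcpus), siblings):
            slate = vcpus[i:i + siblings]
            # Pad with idle so the sibling never runs another guest's vCPU.
            slate += ["idle"] * (siblings - len(slate))
            cores.append((dom, slate))
    return cores

cores = schedule_core(2, [("v0", "domA"), ("v1", "domA"), ("v2", "domB")])
assert ("domA", ["v0", "v1"]) in cores
assert ("domB", ["v2", "idle"]) in cores   # domB's sibling idles, never domA
```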
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
2. I/O Latency Issues in Xen-ARM
• Guaranteed I/O latency is essential to Xen-ARM
– For mobile communication,
• Broken/missing phone calls, long network detection
• AP, CP are used for guaranteed communication
• Virtualization promises transparent execution
– Not only instruction execution; access isolation
– But also performance; performance isolation
• In practice, I/O latency is still an obstacle
– Lack of scheduler support
• Hierarchical scheduling nature
i.e., the hypervisor cannot control task scheduling inside a guest OS
• The credit scheduler does not suit time-sensitive applications
– Latency due to the split driver model
• Enhances reliability, but degrades performance (w.r.t. I/O latency)
• Inter-domain communication between IDD and user domain
• We investigate these issues, and present possible remedies
Operating Systems Lab. http://os.korea.ac.kr 2
3. Related Work
• Credit schedulers in Xen
– Task-aware scheduler (Hwanjoo et al., VEE '08)
• Inspection of task execution at the guest OS
• Adaptively uses the boost mechanism according to VM workload characteristics
– Laxity-based soft RT scheduler (Lee Min et al., VEE '10)
• Laxity calculation from execution profiles
• Assigns priority based on the remaining time to deadline (laxity)
– Dynamic core allocation (Y. Hu et al., HPDC '10)
• Core allocation by workload characteristics (driver core, fast/slow-tick cores)
• Mobile/embedded hypervisors
– OKL4 microvisor: microkernel-based hypervisor
• Has a verified kernel – from seL4
• Presents good performance – commercially successful
• Real-time optimizations – slice donation, direct switching, reflective scheduling, threaded interrupt handling, etc.
– VirtualLogix VLX: mobile hypervisor with real-time support
• Shared driver model – device sharing among an RT guest OS and non-RT guest OSes
• Good PV performance
4. Background – the Credit in Xen
• Weighted round-robin based fair scheduler
• Priority – BOOST, UNDER, OVER
– Basic principle: preserve fairness among all VCPUs
• Each vcpu gets credit periodically
• Credit is debited as vcpu consumes execution time
– Priority assignment
• Remaining credit <= 0 → OVER (lowest)
• Remaining credit > 0 → UNDER
• Event-pending VCPU → BOOST (highest)
– BOOST: for providing low response time
• Allows immediate preemption of the current vcpu
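The priority rules above can be condensed into a small C sketch (illustrative only: the names prio_t and assign_prio are not Xen's, and the real csched code tracks credits per accounting period):

```c
#include <stdbool.h>

/* Toy model of the credit scheduler's priority assignment described
 * above; names (prio_t, assign_prio) are illustrative, not Xen's.
 * Lower enum value = higher scheduling priority. */
typedef enum { PRIO_BOOST, PRIO_UNDER, PRIO_OVER } prio_t;

prio_t assign_prio(int credit, bool event_pending)
{
    if (event_pending && credit > 0)
        return PRIO_BOOST;  /* woken by an event: may preempt the current vcpu */
    if (credit > 0)
        return PRIO_UNDER;  /* credit remains in this period */
    return PRIO_OVER;       /* credit exhausted: lowest priority */
}
```

Under this model a CPU-bound vcpu oscillates between UNDER and OVER, while an I/O-bound vcpu that blocks and is woken by events spends most of its time in BOOST.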
5. Fallacies for the BOOST in the Credit Scheduler
• Fallacy 1) VCPU is always boosted by I/O event
– In fact, the BOOST is sometimes ignored, because a VCPU is boosted only when boosting does not break fairness
• 'Not-boosted vcpu's are observed when the vcpu is already in execution
– Example 1)
• If a user domain has a CPU job and is waiting for execution,
• it is not boosted, since it will run soon anyway and a tentative BOOST would easily break fairness
• Fallacy 2) BOOST always prioritizes the VCPU
– In fact, the BOOST is easily negated, because multiple vcpus can be boosted simultaneously
• 'Multi-boost' happens quite often in the split driver model
– Example 1)
• The driver domain has to be boosted, and then
• the user domain also needs to be boosted
– Example 2)
• The driver domain has multiple packets destined for multiple user domains, so
• all the designated user domains are boosted
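Fallacy 2 can be made concrete with a tiny C sketch: BOOST confers an advantage only while it is unique. The helper below (hypothetical, not Xen code) checks whether a vcpu's priority strictly beats every other runnable vcpu's; once the driver domain and a user domain are both boosted, the check fails and the BOOST is effectively negated.

```c
#include <stdbool.h>

/* Illustration of multi-boost; names are hypothetical, not Xen's.
 * Lower enum value = higher scheduling priority. */
typedef enum { PRIO_BOOST, PRIO_UNDER, PRIO_OVER } prio_t;

/* Does 'mine' strictly outrank every other runnable vcpu's priority? */
bool strictly_outranks(prio_t mine, const prio_t *others, int n)
{
    for (int i = 0; i < n; i++)
        if (others[i] <= mine)  /* an equal priority already ties */
            return false;
    return true;
}
```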
6. Xen-ARM's Latency Characteristic
• I/O latency measured throughout the interrupt path
– Preemption latency: until code is preemptible
– VCPU dispatch latency: until the designated VCPU is scheduled
– Intra-VM latency: until I/O completion
[Figure: interrupt path – physical interrupt → Xen-ARM interrupt handler → Dom0 (I/O task) → DomU (I/O task), with the preemption, vcpu dispatch, and intra-VM latencies marked at each stage]
7. Observed Latency thru intr. path
• I/O latency measured throughout interrupt path
– Send ping request from external server to dom1
– VM settings
• Dom0 : the driver domain
• Dom1 : 20% cpu load + ping recv.
• Dom2 : 100% cpu load (CPU-burning workload)
– Xen–netback latency
• Large worst-case latency
• Dom0 vcpu dispatch lat. + intra-dom0 lat.
– Netback–domU latency
• Large average latency
• Dom1 vcpu dispatch lat.
8. Observed VCPU Dispatch Latency: Not-boosted VCPU
• Experimental setting
– Dom1: varying CPU workload
– Dom2: burning CPU workload
– Another external host sends ping to Dom1
• Not-boosted vcpus affect the I/O latency distribution
– Dom1 CPU workload 20%: almost 90% of ping requests are handled within 1ms
– Dom1 CPU workload 40%: 75% of ping requests are handled within 1ms
– Dom1 CPU workload 60%: 65% of ping requests are handled within 1ms
– Dom1 CPU workload 80%: only 60% of ping requests are handled within 1ms
[Figure: cumulative latency distribution (%) vs. latency (ms), one curve per Dom1 CPU load of 20/40/60/80%]
• When Dom1 has more CPU load → larger I/O latency (self-disturbance by the not-boosted vcpu)
9. Observed Multi-boost
• At the hypervisor, we counted the number of "schedule out"s of the driver domain:

  vcpu state   priority   sched. out count
  Blocked      BOOST          275
  Blocked      UNDER           13
  Unblocked    BOOST      664,190
  Unblocked    UNDER           49
• A multi-boost is counted specifically when
– the current VCPU is in the BOOST state and is unblocked when scheduled out
– implying that there is another BOOST vcpu that displaced the current one
• A large number of multi-boosts is observed
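The classification used for this count can be expressed as a one-line predicate (a hypothetical reconstruction of the measurement logic, not the actual instrumentation code):

```c
#include <stdbool.h>

/* Hypothetical reconstruction of the counting rule on this slide:
 * a schedule-out of the driver domain is a multi-boost when the
 * outgoing vcpu is still unblocked (runnable) yet holds BOOST --
 * only another BOOST vcpu could have displaced it. */
typedef enum { PRIO_BOOST, PRIO_UNDER, PRIO_OVER } prio_t;

bool is_multi_boost_schedout(prio_t prio, bool unblocked)
{
    return unblocked && prio == PRIO_BOOST;
}
```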
10. Intra-VM latency
• Latency from usb_irq to netback
– Schedule-outs occur during dom0 execution
[Figure: cumulative distribution of Xen–usb_irq and Xen–netback (@ dom0) and Xen–icmp (@ dom1) latencies; Dom0: IDD, Dom1: CPU 20% + ping recv, Dom2: CPU 100%]
• Reasons
– Dom0 is not always the highest priority
– Asynchronous I/O handling: bottom halves, softirqs, tasklets, etc.
[Figure: the same measurement with Dom1 at CPU 80% + ping recv]
11. Virtual Interrupt Preemption @ Guest OS
• Interrupt enable/disable is not physically operated within a guest OS
– local_irq_disable() disables only the virtual interrupt
• A physical interrupt can still occur and only sets a pending bit (since the virtual interrupt is disabled)
– and it might trigger inter-VM scheduling, so the driver domain is scheduled out with a pending IRQ

  <At driver domain>
  void default_idle(void) {
      local_irq_disable();     /* disables only the virtual interrupt */
      if (!need_resched())
          arch_idle();         /* blocks this domain */
      local_irq_enable();
  }

• Virtual interrupt preemption
– Similar to lock-holder preemption
– The guest performs driver functions with (virtual) interrupts disabled
– A physical timer interrupt triggers inter-VM scheduling
– The virtual interrupt can be received only after the domain is scheduled again (as large as tens of ms)
Note that the driver domain performs extensive I/O operations that disable interrupts
12. Resolving I/O Latency Problems for Xen-ARM
1. Fixed priority assignment
– Let the driver domain always run with
DRIVER_BOOST, which is the highest priority
• regardless of the CPU workload
• Resolves non-boosted VCPU and multi-boost
– RT_BOOST: a BOOST-level priority for the real-time I/O domain
2. Virtual FIQ support
– ARM-specific interrupt optimization
– Higher priority than normal IRQ interrupt
– vPSR (Program status register) usage
3. Do not schedule out the driver domain while it has virtual interrupts disabled
– It will finish soon, and the hypervisor should give the driver domain a chance to run
• Resolves virtual interrupt preemption
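The first and third fixes can be sketched together as a scheduling predicate (a simplified model under two assumptions: DRIVER_BOOST sits above all existing priorities, and the hypervisor can read the guest's virtual-interrupt-disable flag; names are illustrative, not the actual patch):

```c
#include <stdbool.h>

/* Simplified model of the proposed scheduler changes; illustrative only.
 * Lower enum value = higher scheduling priority. */
typedef enum { PRIO_DRIVER_BOOST, PRIO_RT_BOOST, PRIO_BOOST,
               PRIO_UNDER, PRIO_OVER } prio_t;

/* May the scheduler switch from the current vcpu to 'next'? */
bool may_schedule_out(prio_t cur, bool cur_is_driver,
                      bool cur_virq_disabled, prio_t next)
{
    /* Fix 3: never deschedule the driver domain while its virtual
     * interrupts are disabled -- it will re-enable them shortly. */
    if (cur_is_driver && cur_virq_disabled)
        return false;
    /* Fix 1: the driver domain always runs at DRIVER_BOOST, the top
     * priority, so only a strictly higher priority may preempt. */
    return next < cur;
}
```

Because DRIVER_BOOST is fixed and highest, neither the not-boosted-vcpu case nor a multi-boost can displace the driver domain.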
14. Enhanced Interrupt Latency @ Driver Domain
• Vcpu dispatch latency
– From the intr. handler @ Xen to the hard-irq handler @ IDD
– Fixed-priority dom0 vcpu
– Virtual FIQ
– No latency is observed!
• Intra-VM latency
– From the ISR to netback @ IDD
– No virt. intr. preemption (dom0 has the highest prio.)
• Among 13M interrupts,
– 56K were caught where virt. intr. preemption happened
– 8.5M preemptions occurred with the FIQ optimization
[Figure: cumulative distribution of Xen–usb_irq and Xen–netback latencies, original vs. new scheduler; Dom0: IDD, Dom1: CPU 80% + ping recv, Dom2: CPU 100%]
15. Enhanced End-user Latency: Overall Result
• Over 1 million ping tests,
– the fixed priority makes the driver domain run without additional latency (from inter-VM scheduling)
• Largely reduces the overall latency
– 99% of interrupts are handled within 1ms
[Figure: cumulative latency distribution (%), Xen-domU original vs. enhanced, showing the reduced latency]
16. Conclusion and Possible Future Work
• We analyzed I/O latency in Xen-ARM virtual machine
– Throughout the interrupt handling path in split driver model
• Two main reasons for long latency
– Limitation of ‘BOOST’ in the Xen-ARM’s Credit scheduler
• Not-boosted vcpu
• Multi-boost
– Driver domain’s virtualized interrupt handling
• Virtual interrupt preemption (analogous to lock-holder preemption)
• Achieved sub-millisecond latency for 99% of network packet interrupts
– DRIVER_BOOST; the highest priority for driver domain
– Modify the scheduler so that the driver domain is not scheduled out while its virtual interrupts are disabled
– Further optimizations (incl. virtual FIQ mode, softirq-awareness)
• Possible future work
– Multi-core consideration/extension (core allocation, etc.)
• Other scheduler integration
– Tight latency guarantee for real-time guest OS
• The remaining 1% holds the key
17. Thanks for your attention
Credits to OSvirtual @ oslab, KU
18. Appendix. Native Comparison
• Comparison with the native system – no CPU workload
– Slightly reduced handling latency, largely reduced maximum
(latency in us)   Native   Orig. sched.   Orig. sched.   New sched.   New sched.
                           dom0           domU           Dom0         DomU
Min                  375      459            575            444          584
Avg               532.52   821.48         912.42         576.54       736.06
Max               107456   100782         100964           1883         2208
Stdev            1792.34  4656.26        4009.78          41.84        45.95
[Figure: cumulative distribution of latency (us), 300–1000 us, for native ping (eth0), orig. dom0/domU, and new sched. dom0/domU]
19. Appendix. Fairness?
• Fairness is still good, and the scheduler achieves high utilization
[Figure: CPU-burning jobs' utilization for dom1 and dom2 under NO I/O (ideal case), the orig. credit, and the new scheduler]
* Setting
– Dom1: 20, 40, 60, 80, 100% CPU load + ping receiver
– Dom2: 100% CPU load
Note that the credit scheduler is work-conserving
[Figure: normalized throughput (= measured throughput / ideal throughput), orig. credit vs. new scheduler]