The Turtles Project developed a technique called nested virtualization to run multiple hypervisors and their associated virtual machines simultaneously on x86 hardware, which does not natively support nested virtualization. It implemented nested virtualization by having the host hypervisor (L0) multiplex hardware access between the first-level hypervisor (L1) and the second-level hypervisor (L2), running both as L0 guests without their awareness. It optimized this approach through techniques such as merging virtual machine control structures and reducing the number of exits needed to handle a trap from L2. This allowed full hardware virtualization of multiple guest hypervisors and VMs on x86 for the first time.
Describes what lightweight virtualization and containers are, and the low-level mechanisms in the Linux kernel that they rely on: namespaces and cgroups. It also gives details on AUFS. Together, those components are the key to understanding how modern systems like Docker (http://www.docker.io/) work.
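As a minimal illustration of the namespace mechanism, every process's namespace memberships can be inspected under /proc. The sketch below (assuming a Linux host; the `namespaces` helper is purely illustrative) maps each namespace type to its kernel identifier:

```python
import os

def namespaces(pid="self"):
    """Map each namespace type (mnt, uts, ipc, net, pid, ...) of a process
    to its kernel identifier. Two processes are in the same namespace
    exactly when their links resolve to the same identifier."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for name, ident in namespaces().items():
        print(f"{name:10} {ident}")
```

A container runtime gives each container fresh identifiers for these entries, which is what makes its processes, mounts, and network stack appear isolated while still running on the shared host kernel.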
Docker, Linux Containers, and Security: Does It Add Up? by Jérôme Petazzoni
Containers are becoming increasingly popular. They have many advantages over virtual machines: they boot faster, have less performance overhead, and use fewer resources. However, those advantages also stem from the fact that containers share the kernel of their host instead of abstracting a new independent environment. This sharing has significant security implications, as kernel exploits can now lead to host-wide escalations.
In this presentation, we will:
- Review the actual security risks, in particular for multi-tenant environments running arbitrary applications and code
- Discuss how to mitigate those risks
- Focus on containers as implemented by Docker and the libcontainer project, though the discussion also applies to plain containers as implemented by LXC
Today, virtualization is one of the most popular approaches to deploying web servers. This technology lowers costs for small businesses. Virtualization is also a key aspect of delivering cloud services, and it remains highly attractive even for large enterprises.
In this talk we will look at features such as Control Groups and Containers, which have been implemented in newer versions of the Linux kernel. Although these features do not provide full virtualization, they deliver many of its benefits with very low overhead at the kernel level. Solutions such as LXC and Docker, built on top of these features, have achieved strong results that are both commercially noteworthy and have security implications and applications.
Virtual machines are generally considered secure. At least, secure enough to power highly multi-tenant, large-scale public clouds, where a single physical machine can host a large number of virtual instances belonging to different customers. Containers have many advantages over virtual machines: they boot faster, have less performance overhead, and use fewer resources. However, those advantages also stem from the fact that containers share the kernel of their host instead of abstracting a new independent environment. This sharing has significant security implications, as kernel exploits can now lead to host-wide escalations.
We will show techniques to harden Linux containers, including kernel capabilities, mandatory access control, hardened kernels, user namespaces, and more, and discuss the remaining attack surface.
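One of these hardening levers, kernel capabilities, can be observed directly from a process. A minimal sketch (Linux only; the capability bit numbers are taken from <linux/capability.h>, and the helper name is ours) reads the effective capability bitmask that container runtimes restrict:

```python
def effective_caps(status_path="/proc/self/status"):
    # CapEff is a 64-bit hex bitmask; bit n set means capability n is
    # effective for the process. Container runtimes drop bits from this
    # mask so that root inside a container is weaker than host root.
    with open(status_path) as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapEff not found")

# Capability numbers from <linux/capability.h>
CAP_NET_RAW, CAP_SYS_ADMIN = 13, 21

if __name__ == "__main__":
    caps = effective_caps()
    for name, bit in (("CAP_NET_RAW", CAP_NET_RAW),
                      ("CAP_SYS_ADMIN", CAP_SYS_ADMIN)):
        print(name, "set" if caps >> bit & 1 else "dropped")
```

Running the same check inside a default container versus on the host makes the reduced privilege set visible at a glance.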
Linux Containers (LXC) seem to be the preferred technology for deploying Platform as a Service (PaaS) in the cloud: partly because they are easy to install on top of existing virtualization platforms (KVM, VMware, VirtualBox), and partly because they are a lightweight way to provide separation and process allocation between containers running under a single kernel.
In this talk we will take a look at LXC and explain how to combine it with mandatory access control (MAC) mechanisms in the Linux kernel to provide secure separation between different users and applications.
Christian Kniep from Docker Inc. gave this talk at the Stanford HPC Conference.
"This talk will recap the history of and what constitutes Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale. In conclusion, attendees will get an update on what problems still hinder the adoption of containers for distributed high performance workloads and how Docker is addressing these issues."
Christian Kniep is a Technical Account Manager at Docker, Inc. With a 10-year journey rooted in the HPC corners of the German automotive industry, Christian started out supporting CAE applications and VR installations. When told at a conference that HPC cannot learn anything from the emerging Cloud and BigData companies, he became curious and went on to lead the containerization effort of the cloud stack at PlayStation Now. Christian joined Docker Inc. in 2017 to help push adoption forward and be part of the innovation rather than an external bystander. During the day he helps Docker customers in the EMEA region fully utilize the power of containers; at night he likes to explore emerging trends by containerizing them first and seeking applications in the nebulous world of DevOps.
Watch the video: https://wp.me/p3RLHQ-i4X
Learn more: http://docker.com
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com
Running Applications on the NetBSD Rump Kernel by Justin Cormack (EuroBSDcon)
Abstract
The NetBSD rump kernel has been developed for some years now, allowing NetBSD kernel drivers to be used unmodified in many environments, for example as userspace code. However, it is only since last year that it has become possible to easily run unmodified applications on the rump kernel, initially with the rump-kernel-on-Xen port, and then with the rumprun tools to run them in userspace on Linux, FreeBSD, and NetBSD. This talk will explain how this is achieved and examine use cases, including kernel driver development and lightweight process virtualization.
Speaker bio
Justin Cormack has been a Unix user, developer and sysadmin since the early 1990s. He is based in London and works on open source cloud applications, Lua, and the NetBSD rump kernel project. He has been a NetBSD developer since early 2014.
Introduction to Docker, December 2014 "Tour de France" EditionJérôme Petazzoni
Docker, the open-source container engine, lets you build, ship, and run any app, anywhere.
This is the presentation which was shown in December 2014 for the "Tour de France" in Paris, Lille, Lyon, Nice...
Windows Internals for Linux Kernel Developers (Kernel TLV)
Agenda:
The Windows kernel has an honorable history of more than a quarter of a century. Since its inception in 1989, Windows NT has supported a variety of modern OS features -- symmetric multiprocessing, interrupt prioritization, virtual memory, deferred interrupt processing, and many others. In this talk, targeted at Linux kernel developers, we will highlight the key features of the Windows NT kernel that are interesting or different from a Linux perspective. We will begin with a brief overview of processes, threads, and virtual memory on Windows. Next, we will talk about interrupt handling, interrupt priorities (IRQLs), bottom-half processing (DPC, APC, kernel worker threads, kernel thread pool), and I/O request flow. Among other things, we will look at device driver structure on Windows, application-to-driver communication (handles, IOCTLs), and the logical \DosDevices filesystem. Finally, we will discuss some features introduced in newer Windows versions, such as user-mode drivers (UMDF).
Speaker:
Sasha is the CTO of Sela Group, a training and consulting company based in Israel that employs over 400 developers worldwide. Most of Sasha's work revolves around performance optimization, production debugging, and low-level system diagnostics, but he also dabbles in mobile application development on iOS and Android. Sasha is the author of two books and three Pluralsight courses, and a contributor to multiple open-source projects. He blogs at http://blog.sashag.net.
"Lightweight Virtualization with Linux Containers and Docker" by Jerome Petazzo... (Yandex)
"Lightweight virtualization", also called "OS-level virtualization", is not new. On Linux it evolved from VServer to OpenVZ and, more recently, to Linux Containers (LXC). It is not Linux-specific; on FreeBSD it's called "Jails", while on Solaris it's "Zones". Some of these have been available for a decade and are widely used to provide VPS (Virtual Private Servers), cheaper alternatives to virtual machines or physical servers. But containers have other purposes and are increasingly popular as the core components of public and private Platform-as-a-Service (PaaS), among others.
Just like a virtual machine, a Linux Container can run (almost) anywhere. But containers have many advantages over VMs: they are lightweight and easier to manage. After operating a large-scale PaaS for a few years, dotCloud realized that with those advantages, containers could become the perfect format for software delivery, since that is how dotCloud delivers from its build system to its hosts. To make it happen everywhere, dotCloud open-sourced Docker, the next generation of the container engine powering its PaaS. Docker has been extremely successful so far, being adopted by many projects in various fields: PaaS, of course, but also continuous integration, testing, and more.
Introduction to Docker (as presented at December 2013 Global Hackathon)Jérôme Petazzoni
Not on board the Docker ship yet? This presentation will get you up to speed and explain everything you want to know about Linux Containers and Docker, including the new features of the latest 0.7 version (which brings support for all Linux distros and kernels).
TryStack.cn was announced at the OpenStack Summit San Diego 2012 (www.slideshare.net/openstack/trystack-introfinalpdf). It is a non-profit OpenStack community project.
By Stackers, for Stackers. Experience the latest OpenStack features.
Contributions and feedback welcome; join the fun!
Luke Jennings, Countercept
Attackers have been avoiding disk and staying memory-resident for over a decade, and this has traditionally proven an Achilles heel for security products and the teams that operate them. The boom in both EDR products and memory forensics toolkits in more recent years has helped defenders fight back, but attackers are already adapting their approaches.
This talk will cover both classic and modern techniques for injecting code into legitimate processes on Microsoft Windows systems, as well as several techniques for detecting them. This will include both system tracing methods, good for proactive detection, and memory analysis techniques that have the added benefit of allowing detection of pre-existing compromises in real-world incident response scenarios, with a brief case study example. As part of this, practical examples will be given showing how Microsoft's ATP and Sysmon help in this area, along with other techniques. Finally, the future of this area will be considered, including how the .NET runtime already complicates detection techniques and how this will likely become increasingly challenging as more attackers discover and exploit it.
By the end of the talk, the audience should understand the importance of code injection in the context of memory-resident implants, the key techniques for performing it and detecting it and the challenges of achieving this in the real-world at enterprise scale.
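The core idea behind the memory-analysis detections mentioned above is to hunt for executable memory that is not backed by any file on disk, since that is where memory-resident implants typically live. A minimal sketch of that scan (using the Linux /proc/<pid>/maps interface as a stand-in for the Windows VAD walk a real tool would perform; the function name is ours):

```python
def anonymous_executable_regions(pid="self"):
    """List memory regions that are executable but not backed by a file.
    Injected, memory-resident code usually ends up in such anonymous
    executable mappings, so enumerating them is the first step of many
    memory-analysis detections."""
    regions = []
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            perms = fields[1]
            backing = fields[5] if len(fields) > 5 else ""
            if "x" in perms and backing == "":
                regions.append(fields[0])  # address range "start-end"
    return regions
```

Legitimate runtimes (JITs, the .NET CLR) also create such regions, which is exactly why the talk notes that managed runtimes complicate this class of detection.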
Docker is an open-source implementation of a deployment engine.
- No guest OS
- Rides on the already existing host kernel
- Uses Linux Containers (LXC) running in the host OS
- Ships only the container, with the apps inside it
Virtualization can be described generically as the separation of a service request from the underlying physical delivery of that service. In computer virtualization, an additional layer called the hypervisor is typically added between the hardware and the operating system. The hypervisor layer is responsible both for sharing hardware resources and for enforcing mandatory access control rules based on the available hardware resources.
There are three types of virtualization: full virtualization, para-virtualization, and operating-system-level (OS-level) virtualization.
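The distinction is even visible from inside a guest: under full or para-virtualization the kernel sees the CPUID "hypervisor" bit, while under OS-level virtualization a container simply sees whatever the shared host kernel sees. A rough check (Linux on x86 only; merely illustrative, the helper name is ours):

```python
def running_under_hypervisor(cpuinfo="/proc/cpuinfo"):
    # The kernel exports the CPUID "hypervisor" bit in the flags line.
    # Containers share the host kernel, so they report whatever the host
    # (or the VM the host itself runs in) reports; on non-x86 systems
    # without a flags line this simply returns False.
    with open(cpuinfo) as f:
        return any(line.startswith("flags") and " hypervisor" in line
                   for line in f)
```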
With the Grizzly release of OpenStack comes many new features for Hyper-V and Windows platforms.
This covers the work that was done for the Grizzly release, and offers an early preview before the Havana Summit in Portland.
Come and see Grizzly running on Hyper-V and supporting new features such as:
Quantum
Quantum Agent for Hyper-V
VLAN and Routing Support
Cinder
Windows as a Storage Server
Nova
Resize/Cold Migration
HTML5 Canvas/RDP Gateway
Cloudinit functionality for Windows guests.
These slides focus on virtualization concepts, types of virtualization, hypervisors, the evolution of virtualization towards the cloud, and the QEMU-KVM architecture.
Large Language Models and the End of Programming by Matt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... (Globus)
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code by Aftab Hussain
Understanding variable roles in code has been found to help students learn programming -- could variable roles also help deep neural models perform coding tasks? We present an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
1. The Turtles Project:
Design and Implementation of Nested Virtualization
Muli Ben-Yehuda† Michael D. Day‡ Zvi Dubitzky† Michael Factor†
Nadav Har'El† Abel Gordon† Anthony Liguori‡ Orit Wasserman†
Ben-Ami Yassour†
†IBM Research – Haifa
‡IBM Linux Technology Center
Ben-Yehuda et al. (IBM Research) The Turtles Project: Nested Virtualization OSDI ’10 1 / 22
2. What is nested x86 virtualization?
Running multiple unmodified hypervisors
With their associated unmodified VMs
Simultaneously
On the x86 architecture
Which does not support nesting in hardware. . .
. . . but does support a single level of virtualization
[Figure: a hypervisor on the hardware running guest OSes, one of which is itself a hypervisor with its own guest]
3. Why?
Operating systems are already hypervisors (Windows 7 with XP mode, Linux/KVM)
To be able to run other hypervisors in clouds
Security (e.g., hypervisor-level rootkits)
Co-design of x86 hardware and system software
Testing, demonstrating, debugging, live migration of hypervisors
4. Related work
First models for nested virtualization [PopekGoldberg74, BelpaireHsu75, LauerWyeth73]
First implementation in the IBM z/VM; relies on architectural support for nested virtualization (SIE)
Microkernels meet recursive VMs [FordHibler96]: assumes we can modify software at all levels
x86 software-based approaches (slow!) [Berghmans10]
KVM [KivityKamay07] with AMD SVM [RoedelGraf09]
Early Xen prototype [He09]
Blue Pill rootkit hiding from other hypervisors [Rutkowska06]
5. What is the Turtles project?
Efficient nested virtualization for Intel x86 based on KVM
Multiple guest hypervisors and VMs: VMware, Windows, . . .
Code publicly available
6. What is the Turtles project? (cont.)
Nested VMX virtualization for nested CPU virtualization
Multi-dimensional paging for nested MMU virtualization
Multi-level device assignment for nested I/O virtualization
Micro-optimizations to make it go fast (see paper)
7. Theory of nested CPU virtualization
Single-level architectural support (x86) vs. multi-level architectural support (e.g., z/VM)
Single level ⇒ one hypervisor, many guests
Turtles approach: L0 multiplexes the hardware between L1 and L2, running both as guests of L0, without either being aware of it
(Scheme generalized for n levels; our focus is n=2)
[Figure: multiple logical levels (hardware runs L0, L0 runs L1, L1 runs L2) vs. multiplexed on a single level (L0 runs both the L1 guest hypervisor and the L2 guest as its own guests)]
11. Nested VMX virtualization: flow
L0 runs L1 with VMCS0→1
L1 prepares VMCS1→2 and executes vmlaunch
vmlaunch traps to L0
L0 merges VMCSs: VMCS0→1 merged with VMCS1→2 gives VMCS0→2
L0 launches L2
L2 causes a trap
L0 handles the trap itself or forwards it to L1
. . .
Eventually, L0 resumes L2
Repeat
[Figure: L0 keeps the 0→1, 1→2, and 0→2 VMCS and memory-table state for the guest hypervisor (L1) and guest OS (L2)]
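The merge step above can be sketched in a few lines. This is a minimal illustration, not KVM's actual implementation; the field names and values are made up, and a real VMCS has many more fields:

```python
# Sketch of the VMCS merge L0 performs when L1's vmlaunch traps.
# Field names and values are illustrative only.

def merge_vmcs(vmcs01, vmcs12):
    """Build VMCS0->2 from VMCS0->1 and VMCS1->2."""
    return {
        # Guest-state fields describe L2, so they come from VMCS1->2.
        "guest_state": dict(vmcs12["guest_state"]),
        # Every exit must land in L0 (L1 never regains control directly),
        # so host-state fields come from VMCS0->1.
        "host_state": dict(vmcs01["host_state"]),
        # Control fields reflect both hypervisors: trap an exception if
        # either L0 or L1 asked for it.
        "exception_bitmap": vmcs01["exception_bitmap"] | vmcs12["exception_bitmap"],
    }

vmcs01 = {"guest_state": {"rip": 0x1000},      # L1's saved state
          "host_state": {"rip": 0xFFFF0000},   # L0's exit-handler entry
          "exception_bitmap": 0b0100}
vmcs12 = {"guest_state": {"rip": 0x2000},      # L2's state, set up by L1
          "host_state": {"rip": 0x1500},       # L1's handler (used when L0 forwards)
          "exception_bitmap": 0b0011}
vmcs02 = merge_vmcs(vmcs01, vmcs12)
# L2 runs with its own guest state, every exit reaches L0, and traps
# requested by either hypervisor are caught.
```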
21. Exit multiplication makes angry turtle angry
To handle a single L2 exit, L1 does many things: read and write the VMCS, disable interrupts, . . .
Those operations can trap, leading to exit multiplication
Exit multiplication: a single L2 exit can cause 40-50 L1 exits!
Optimize: make a single exit fast and reduce the frequency of exits
[Figure: exit chains for a single level, two levels, and three levels (L0 through L3)]
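The fan-out above can be modeled in one line. The per-handler operation count below is illustrative; the slide's 40-50 figure gives the measured order of magnitude:

```python
# Toy model of exit multiplication on single-level hardware.

def l0_exits(l2_exits, trapping_ops_per_handler):
    """Each L2 exit first reaches L0, and every privileged operation L1's
    handler then performs (vmread, vmwrite, interrupt masking, vmresume)
    traps to L0 again, since x86 virtualizes only a single level in hardware."""
    return l2_exits * (1 + trapping_ops_per_handler)

baseline = l0_exits(1000, 45)   # 46,000 L0 exits for 1,000 L2 exits
optimized = l0_exits(1000, 4)   # if, say, direct vmread/vmwrite stops most ops trapping
```

This is why the optimizations target both directions: making each L0 exit cheaper, and shrinking the number of L1 operations that trap at all.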
25. MMU virtualization via multi-dimensional paging
Three logical translations: L2 virtual → L2 physical, L2 → L1, L1 → L0
Only two tables in hardware with EPT: virtual → physical and guest physical → host physical
L0 compresses the three logical translations onto the two hardware tables
[Figure: three schemes for mapping L2 virtual down to L0 physical: shadow-on-shadow (GPT and SPT1→2 collapsed into SPT0→2, baseline), shadow-on-EPT (SPT1→2 on top of EPT0→1, better), and multi-dimensional paging (GPT plus EPT0→2 built from EPT1→2 and EPT0→1, best)]
26. Baseline: shadow-on-shadow
Assume no EPT table; all hypervisors use shadow paging
Useful for old machines and as a baseline
Maintaining shadow page tables is expensive
Compress: three logical translations ⇒ one table in hardware
[Figure: GPT and SPT1→2 collapsed into a single shadow table SPT0→2 mapping L2 virtual to L0 physical]
27. Better: shadow-on-EPT
Instead of one hardware table we have two
Compress: three logical translations ⇒ two in hardware
Simple approach: L0 uses EPT, L1 uses shadow paging for L2
Every L2 page fault leads to multiple L1 exits
[Figure: SPT1→2 maps L2 virtual to L1 physical; EPT0→1 maps L1 physical to L0 physical]
28. Best: multi-dimensional paging
The EPT table rarely changes; the guest page table changes a lot
Again, compress three logical translations ⇒ two in hardware
L0 emulates EPT for L1
L0 uses EPT0→1 and EPT1→2 to construct EPT0→2
End result: far fewer exits!
[Figure: GPT maps L2 virtual to L2 physical; EPT0→2, built from EPT1→2 and EPT0→1, maps L2 physical to L0 physical]
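The EPT0→2 construction can be sketched as lazy table composition: on an EPT violation from L2, L0 walks the two constituent translations and installs the composed entry. Real code walks multi-level page tables with permissions and page sizes; the flat dicts and addresses here are illustrative:

```python
# Sketch of lazily building EPT0->2 on an EPT violation from L2.

def handle_ept_violation(l2_gpa, ept12, ept01, ept02):
    l1_pa = ept12[l2_gpa]   # L2 guest-physical -> L1 physical (L1's table, emulated by L0)
    l0_pa = ept01[l1_pa]    # L1 physical -> machine physical (L0's own table)
    ept02[l2_gpa] = l0_pa   # install the composed translation for the hardware
    return l0_pa

ept12 = {0x3000: 0x7000}    # single-page illustrative tables
ept01 = {0x7000: 0xB000}
ept02 = {}
handle_ept_violation(0x3000, ept12, ept01, ept02)
# Later L2 accesses to guest-physical 0x3000 hit EPT0->2 in hardware: no exit.
# L2's own page-table updates never trap, which is where the win comes from.
```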
33. Introduction to I/O virtualization
Device emulation [Sugerman01]: the guest's device driver talks to a device model emulated by the host
Para-virtualized drivers [Barham03, Russell08]: a front-end driver in the guest talks to a back-end driver in the host
Direct device assignment [LeVasseur04, Yassour08]: the guest's device driver programs the physical device directly
Direct assignment is the best performing option
Direct assignment requires an IOMMU for safe DMA bypass
[Figure: data paths for device emulation, para-virtualized drivers, and direct device assignment]
34. Multi-level device assignment
With nesting: 3×3 options for I/O virtualization (L2, L1, L0)
Multi-level device assignment means giving an L2 guest direct access to L0's devices, safely bypassing both L0 and L1
How? L0 emulates an IOMMU for L1 [Amit10]
L0 compresses the multiple IOMMU translations onto the single hardware IOMMU page table
L2 programs the device directly
The device DMAs into L2's memory space directly
[Figure: L2's device driver issues MMIOs and PIOs to the physical device; device DMA goes through the platform IOMMU, which L0 has programmed with the compressed L1/L0 translation]
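The IOMMU compression is the same composition trick as multi-dimensional paging, applied to DMA addresses. A minimal sketch under illustrative assumptions (flat dict "tables", made-up addresses):

```python
# Sketch of the single hardware IOMMU table under multi-level device
# assignment. L1 programs its emulated IOMMU with mappings for L2; L0
# composes them with its own L1->machine translation and installs the
# result in the real IOMMU, so device DMA needs no exits at all.

def build_hw_iommu(emulated_iommu_l1, ept01):
    """Compose L1's (emulated) IOMMU table with L0's L1->machine map."""
    return {dma_addr: ept01[l1_pa]
            for dma_addr, l1_pa in emulated_iommu_l1.items()}

emulated_iommu_l1 = {0x4000: 0x8000}   # DMA addr (an L2 address) -> L1 physical
ept01 = {0x8000: 0xC000}               # L1 physical -> machine physical
hw_iommu = build_hw_iommu(emulated_iommu_l1, ept01)
# The device, programmed directly by L2, DMAs to 0x4000; the platform
# IOMMU translates it to machine address 0xC000 (L2's page) in hardware.
```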
40. Experimental setup
Running Linux, Windows, KVM, VMware, SMP, . . .
Macro workloads: kernbench, SPECjbb, netperf
Questions: Multi-dimensional paging? Multi-level device assignment? KVM as L1 vs. VMware as L1?
See the paper for full experimental details, more benchmarks, and analysis, including a worst-case synthetic micro-benchmark
41. Macro: SPECjbb and kernbench

kernbench
                         Host     Guest    Nested   NestedDRW
Run time                 324.3    355      406.3    391.5
% overhead vs. host      -        9.5      25.3     20.7
% overhead vs. guest     -        -        14.5     10.3

SPECjbb
                         Host     Guest    Nested   NestedDRW
Score                    90493    83599    77065    78347
% degradation vs. host   -        7.6      14.8     13.4
% degradation vs. guest  -        -        7.8      6.3

Table: kernbench and SPECjbb results
Exit multiplication effect not as bad as we feared
Direct vmread and vmwrite (DRW) give an immediate boost
Take-away: each level of virtualization adds approximately the same overhead!
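As a sanity check, the kernbench overhead percentages follow directly from the run times in the table:

```python
# Recompute the kernbench overhead rows from the raw run times.
host, guest, nested, nested_drw = 324.3, 355.0, 406.3, 391.5

def pct_over(x, base):
    """Overhead of x relative to base, as a percentage rounded to 0.1."""
    return round(100 * (x - base) / base, 1)

print(pct_over(guest, host))        # 9.5  (one level of virtualization)
print(pct_over(nested, host))       # 25.3
print(pct_over(nested_drw, host))   # 20.7
print(pct_over(nested, guest))      # 14.5 (the second level, vs. the first)
print(pct_over(nested_drw, guest))  # 10.3
```

The 9.5% (host→guest) and 10.3% (guest→nested, with DRW) figures are what back the take-away that each level adds roughly the same overhead.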
42. Macro: multi-dimensional paging
[Figure: improvement ratio of multi-dimensional paging over shadow-on-EPT for kernbench, SPECjbb, and netperf, on a scale of 0 to 3.5]
Impact of multi-dimensional paging depends on the rate of page faults
Shadow-on-EPT: every L2 page fault causes multiple L1 exits
Multi-dimensional paging: only EPT violations cause L1 exits
The EPT table rarely changes: #(EPT violations) << #(page faults)
Multi-dimensional paging is a huge win for the page-fault-intensive kernbench
43. Macro: multi-level device assignment
Benchmark: netperf TCP_STREAM (transmit)
Multi-level device assignment is the best performing option
But: native uses 20% CPU, multi-level device assignment 60% (3×!)
Interrupts considered harmful; they cause exit multiplication
[Figure: throughput (Mbps, up to 1,000) and %CPU for native, single-level guest (emulation, virtio, direct access), and nested guest (emulation/emulation, virtio/emulation, virtio/virtio, direct/virtio, direct/direct)]
44. Macro: multi-level device assignment (sans interrupts)
What if we could deliver device interrupts directly to L2?
Only a 7% difference between native and nested guest!
[Figure: netperf throughput (Mbps, up to 1,000) vs. message size (netperf -m, 16 to 512) for L0 (bare metal), L2 (direct/direct), and L2 (direct/virtio)]
45. Conclusions
Efficient nested x86 virtualization is challenging but feasible
A whole new ballpark, opening up many exciting applications: security, cloud, architecture, . . .
Current overhead of 6-14%
Negligible for some workloads, not yet for others
Work in progress: expect at most 5% eventually
Code is available
It's turtles all the way down
50. Questions?