The document summarizes the evolution of power management in Xen: idle power management through Xen cpuidle and run-time power management through Xen cpufreq. It provides experimental data showing improvements in SPECpower scores and reductions in idle power consumption when these features are enabled, and discusses how guest virtual machines affect power and how "green" guest OS designs, with features like smaller tick frequencies, can reduce power consumption.
Static partitioning is used to split an embedded system into multiple domains, each of them having access only to a portion of the hardware on the SoC. It is key to enable mixed-criticality scenarios, where a critical application, often based on a small RTOS, runs alongside a larger non-critical app, typically based on Linux. The two domains cannot interfere with each other.
This talk will explain how to use Xen for static partitioning. It will introduce dom0-less, a new Xen feature written for the purpose. Dom0-less allows multiple VMs to start at boot time directly from the Xen hypervisor, decreasing boot times drastically. It makes it very easy to partition the system without virtualization overhead. Dom0 becomes unnecessary.
This presentation will go into detail on how to set up a Xen dom0-less system. It will show configuration examples and explain device assignment. The talk will discuss its implications for latency-sensitive and safety-critical environments.
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ... (The Linux Foundation)
TrenchBoot is a cross-community OSS integration project for hardware-rooted, late launch integrity of open and proprietary systems. It provides a general purpose, open-source DRTM kernel for measured system launch and attestation of device integrity to trust-centric access infrastructure. TrenchBoot closes the UEFI Measurement Gap and reduces the need to trust system firmware. This talk will introduce TrenchBoot architecture and a recent collaboration with Oracle to launch the Linux kernel directly with Intel TXT or AMD SVM Secure Launch. It will propose mechanisms for integrating the Xen hypervisor into a TrenchBoot system launch. DRTM-enabled capabilities for client, server and embedded platforms will be presented for consideration by the Xen community.
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu... (The Linux Foundation)
Artem will briefly cover what has been done since the first talk on Xen in the automotive domain back in 2013, what is going on now, and what is still missing for broad adoption of Xen in vehicles. The following topics will be covered:
Embedded/automotive features of Xen
Collaboration with AGL and GENIVI organizations for standardization
Efforts on Functional Safety compliance
Artem will also go over typical automotive use scenarios for Xen, which may not be the same as generic computing uses of a hypervisor.
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op... (The Linux Foundation)
In this keynote talk, we will give an overview of the state of the Xen Project and the trends that impact it, review whether the challenges that surfaced last year have been addressed and how, and highlight new challenges and solutions for the coming year.
In recent years unikernels have shown immense performance potential (e.g., boot times of only a few ms, image sizes of only hundreds of KBs). The fundamental drawback of unikernels is that applications must be manually ported to the underlying minimalistic OS, requiring both expert work and often a considerable amount of time.
The Unikraft project provides a unikernel code base and build system that significantly simplifies the building of unikernels. In addition to support for a number of CPU architectures, languages and frameworks, Unikraft provides debugging and tracing features that are generally sorely missing from unikernel projects. In this talk we will present these features, show a set of preliminary performance numbers, and provide a roadmap for the project's future.
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E... (The Linux Foundation)
The idea of making Xen secret-free has been floating around since Spectre and Meltdown came to light. In this talk we will discuss what is being done and what needs to be done next.
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx (The Linux Foundation)
This talk will introduce Dom0-less: a new way of using Xen to build mixed-criticality solutions. Dom0-less is a Xen feature that adds a novel approach to static partitioning based on virtualization. It allows multiple domains to start at boot time directly from the Xen hypervisor, decreasing boot times dramatically. Xen userspace tools, such as xl and libvirt, become optional.
Dom0-less extends the existing device tree based Xen boot protocol to cover information required by additional domains. Binaries, such as kernels and ramdisks, are loaded by the bootloader (u-boot) and advertised to Xen via new device tree bindings.
The audience will learn how to use Dom0-less to partition the system. Uboot and device tree configuration details will be explained to enable the audience to get the most out of this feature. The talk will include a status update and details on future plans.
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys... (The Linux Foundation)
As the number of contributions grows, reviewer bandwidth becomes a bottleneck, and maintainers are always asking for more help. However, maintainers must ultimately Ack every patch that goes in; so if you're not a maintainer, how can you contribute? Why should anyone care about your opinion?
This talk will try to lay out some advice and guidelines for non-maintainers, for how they can do code review in a way which will effectively reduce the load on maintainers when they do come to review a patch.
This talk is a follow-up to our Summit 2017 presentation in which we covered our plans for Intel VMFUNC and #VE, as well as related use-cases. This year, we will provide a report on what we have accomplished in Xen 4.12, and what remains to be addressed. We will also give a brief status update of VMI on AMD hardware. The session will end with some real-world numbers of the Hypervisor Introspection solution running on Citrix Hypervisor 8.0 with #VE enabled.
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng... (The Linux Foundation)
Safety certification is one of the essential requirements for software to be used in highly regulated industries. Besides technical and compliance issues (such as ISO 26262 vs IEC 61508), transitioning an existing project to become more easily safety certifiable requires significant changes to development practices within an open source project.
In this session, we will lay out some challenges of making safety certification achievable in open source and the Xen Project. We will outline the process the Xen Project has followed thus far and highlight lessons learned along the way. The talk will primarily focus on necessary process, tooling changes and community challenges that can prevent progress. We will be offering an in-depth review of how Xen Project is approaching this challenging goal and try to derive lessons for other projects and contributors.
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making... (The Linux Foundation)
Safety certification is one of the essential requirements for software to be used in highly regulated industries. The Xen Project, a secure and stable hypervisor that is used in many different markets, has been exploring the feasibility of building safety certified products on top of Xen for a year, looking at key aspects of its code base and development practices.
In this session, we will lay out the motivation and challenges of making safety certification achievable in open source and the Xen Project. We will outline the process the project has followed thus far and highlight lessons learned along the way. The talk will cover technical enablers, necessary process and tooling changes, and community challenges, offering an in-depth review of how the Xen Project is approaching this exciting and challenging goal.
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix (The Linux Foundation)
2018 saw fundamental shifts in security boundaries which were previously taken for granted. A lot of work has been done in the past 2 years, and largely in secret under embargo, but there is plenty more work to be done to strengthen the existing mitigations and to try to recover some performance without reopening security holes.
This talk will look at speculative execution sidechannels, the work which has already been done to mitigate the security holes, and future work which hopes to bring some improvements.
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm Ltd (The Linux Foundation)
The Arm architecture provides a set of guidelines that any software should abide by when accessing memory with the MMU off and when updating page-tables. Failing to do so may result in TLB conflicts or broken coherency.
In a previous talk ("Keeping coherency on Arm"), we focused on safely updating the stage-2 (aka P2M) page-tables. This talk will focus on the boot code and Xen memory management.
During this session, we will introduce some of the guidelines and when they apply. We will also discuss how the Xen boot sequence needs to be reworked to avoid violating them.
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant... (The Linux Foundation)
For many years the QEMU codebase has contained PV backends for Xen guests, giving them paravirtual access to storage, network, keyboard, mouse, etc. However, these backends have not been configurable as QEMU devices because their implementation did not fully adhere to the QEMU Object Model (QOM).
In particular, because the PV storage backend did not use proper QOM devices, or qdevs, the QEMU block layer needed to maintain legacy code that was cluttering up the source. This caused push-back from the maintainers, who did not want to accept any patches relating to that Xen backend until it was 'qdevified'.
In this talk, I'll explain the modifications I made to QEMU to achieve 'qdevification' of the PV storage backend, how compatibility with the libxl toolstack was maintained, and what the next steps in both QEMU and libxl development should be.
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D (The Linux Foundation)
PCI is a local computer bus for attaching hardware devices, and is the main peripheral bus on modern x86 systems. As such, proper emulation of it is crucial for Xen to be able to expose both fully emulated and passthrough devices to guests.
This talk will cover the current status of PCI emulation in Xen: how and where it is used, its main limitations, and future plans to make it more robust and modular.
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems (The Linux Foundation)
Volodymyr will speak about TEE mediators, a new feature in Xen which allows multiple virtual machines to interact with the Trusted Execution Environment available on a platform. He developed the mediator for one particular TEE, namely OP-TEE.
He will give background information on why a TEE is needed at all and share some implementation details.
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven... (The Linux Foundation)
Xen is a very powerful hypervisor with a talented and diverse developer community. Despite the fact that it's almost everywhere (from the cloud to the embedded world), it can be difficult to set up and manage as a system administrator. General purpose distros have Xen packages, but that's just the start of your Xen journey: you need tooling and knowledge to have a working and scalable platform.
XCP-ng was built to overcome those issues by bringing Xen to the masses as a fully turnkey distro with Xen at its core. It's the logical sequel to the XCP project, with a community focus from the start. We'll see how it happened, what we did, and what's next. Finally, we'll look at the impact of XCP-ng on the Xen Project.
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib... (The Linux Foundation)
Doug has long advocated for the Xen Project to adopt more CI/CD (Continuous Integration / Continuous Delivery) processes, from the use of Travis CI to, now, GitLab CI. This talk proposes ideas for building upon the existing process and transforming development to deliver higher quality to users with each Xen Project release.
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr... (The Linux Foundation)
High level toolstacks for server and cloud virtualization are very mature with large communities using and supporting them. Client virtualization is a much more niche community with unique requirements when compared to those found in the server space. In this talk, we’ll introduce a client virtualization toolstack for Xen (redctl) that we are using in Redfield, a new open-source client virtualization distribution that builds upon the work done by the greater virtualization and Linux communities. We will present a case for maturing libxl’s Go bindings and discuss what advantages Go has to offer for high level toolstacks, including in the server space.
Today Xen schedules guest virtual CPUs on all available physical CPUs independently of each other. Recent security issues on modern processors (e.g. L1TF) require turning off hyperthreading for best security, in order to avoid leaking information from one hyperthread to the other. One way to avoid turning off hyperthreading is to only ever schedule virtual CPUs of the same guest on one physical core at the same time. This is called core scheduling.
This presentation shows results from the effort to implement core scheduling in the Xen hypervisor. The basic modifications to Xen are presented, along with performance numbers with core scheduling active.
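The core idea can be illustrated with a toy scheduler that only ever fills the two hyperthreads of a core with vCPUs from the same guest (a hypothetical sketch, not Xen's implementation; all names are made up):

```python
def core_schedule(runqueue, cores):
    """Toy core scheduler: assign vCPUs to hyperthread pairs so that both
    siblings of a physical core only ever run vCPUs of the same guest.

    runqueue: list of (guest_id, vcpu_id) tuples ready to run
    cores:    list of (thread0, thread1) sibling-thread pairs
    Returns a {thread: vcpu} assignment; an odd vCPU leaves its sibling idle.
    """
    by_guest = {}
    for vcpu in runqueue:                     # group runnable vCPUs by guest
        by_guest.setdefault(vcpu[0], []).append(vcpu)

    assignment = {}
    core_iter = iter(cores)
    for vcpus in by_guest.values():
        # hand out sibling pairs guest by guest, two vCPUs per core
        for i in range(0, len(vcpus), 2):
            try:
                core = next(core_iter)
            except StopIteration:
                return assignment             # out of cores; rest must wait
            for thread, vcpu in zip(core, vcpus[i:i + 2]):
                assignment[thread] = vcpu
    return assignment
```

With guests A (two vCPUs) and B (one vCPU) on two cores, A's vCPUs share one core and B's single vCPU leaves its sibling thread idle rather than mixing guests, which is exactly the property that defeats cross-guest hyperthread leaks.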
XS Oracle 2009 CVF
1. The On-going Evolutions of Power Management in Xen
Kevin Tian
Open Source Technology Center (OTC)
Intel Corporation
2. Agenda
• Brief history
• Evolutions of idle power management
• Evolutions of run-time power management
• Tools
• Experimental data about Xen power efficiency
• Power impact from VM
Intel Confidential
3. Brief History
[Timeline diagram, reconstructed:
• May 2007: Xen 3.1 released, with Dom0-controlled frequency/voltage scaling
• Jul./Sep. 2007: preliminary cpufreq and cpuidle support in Xen
• Jan. 2008: Xen 3.2 released
• Jun. 2008: mature deep C-states support; host S3
• Aug. 2008: Xen 3.3 released
Overall arc: better usability, then improved stability, then enhanced green computing.]
5. Xen Summit Boston 2008
[Architecture diagram: Dom0 holds the ACPI parser and external control interface, and registers C-state data with Xen via a registration hypercall. Inside Xen, the schedulers invoke the cpuidle driver on "enter idle"; the ladder governor chooses between halt (C1) and deeper states (C2) entered via mwait/IO reads, with dynamic-tick timekeeping.]
6. Enhanced C-states support
[Bar chart comparing three configurations on idle power (Watt) and SPECpower score, normalized to 'noPM' = 100%. 'noPM' has both cpuidle and cpufreq disabled, and vice versa for the other two cases; compared to 'C3', 'noC3' has the maximum C-state limited to C2. Reconstructed mapping: idle watt is 91.4% for noC3 and 82.3% for C3; SPECpower score is 102.6% for noC3 and 107.7% for C3. For idle watt, a lower value means greener; for SPECpower score, a higher value indicates more power efficiency. The block diagram gains deep C-states (including C1E), a timer component, and xentrace support.]
7. TSC freeze
In deep C-states the TSC freezes while the ideal TSC keeps counting, so per-CPU TSCs drift out of sync. Symptoms: "time went backwards" warnings, lots of lost ticks, a fluctuating TSC scale factor, faster time-of-day, and Xen system time skew.
[Diagram: software compensation restores the TSC from the elapsed count of an always-running platform counter, in one of three variants:
• restore TSC from the platform counter elapsed since the current C-state entry (per-CPU platform-to-TSC scale)
• restore TSC from the platform counter elapsed since the last calibration (per-CPU scale)
• restore TSC from the platform counter elapsed since power-on (global scale)
Hardware enhancement: an always-running TSC that is never stopped in deep C-states (e.g. Intel Core i7).]
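All three compensation variants reduce to the same arithmetic: scale the elapsed count of the always-running platform counter into TSC ticks. A minimal sketch (hypothetical names and frequencies, not Xen's actual code):

```python
def restore_tsc(saved_tsc, saved_platform, platform_now, scale):
    """Reconstruct the TSC after a deep C-state exit: the platform counter
    kept running while the TSC was frozen, so convert its elapsed count
    into TSC ticks using a platform-to-TSC scale factor."""
    elapsed = platform_now - saved_platform       # ticks the platform timer saw
    return saved_tsc + int(elapsed * scale)       # TSC ticks missed while frozen

# Example: a 14.318 MHz HPET feeding a 2.4 GHz TSC (illustrative frequencies)
scale = 2_400_000_000 / 14_318_000
tsc_now = restore_tsc(saved_tsc=1_000_000, saved_platform=0,
                      platform_now=14_318, scale=scale)
```

The three variants on the slide differ only in when `saved_tsc`/`saved_platform` are snapshotted (C-state entry, last calibration, or power-on) and in whether `scale` is per-CPU or global.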
8. APIC timer freeze
[Diagram: Xen keeps a per-CPU timer heap; the nearest deadline reprograms the local APIC count-down timer, which interrupts when it counts down to zero, and the timer softirq handler then scans and executes expired timers. In deep C-states the APIC timer freezes, so the interrupt is delayed.]
Solution: use a platform timer (PIT/HPET) to carry the nearest deadline when the APIC timer is halted in deep C-states:
• Broadcast is required, since the number of platform timer sources is less than the number of CPUs
• MSI-based HPET interrupts will come soon, providing more platform timer sources with reduced broadcast traffic
• An always-running APIC timer will be supported in new CPUs soon
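The broadcast scheme can be sketched as a shared heap of deadlines: CPUs entering deep C-states park their nearest deadline there, the platform timer is programmed to the earliest one, and its interrupt wakes every CPU whose deadline has passed (hypothetical code, not Xen's implementation):

```python
import heapq

class BroadcastTimer:
    """Sketch of PIT/HPET broadcast: one platform timer stands in for the
    per-CPU APIC timers that freeze in deep C-states."""

    def __init__(self):
        self.heap = []                # (deadline, cpu) for CPUs in deep C-states

    def enter_deep_c(self, cpu, deadline):
        """CPU enters a deep C-state; returns the deadline the platform
        timer should be (re)programmed to: the earliest one pending."""
        heapq.heappush(self.heap, (deadline, cpu))
        return self.heap[0][0]

    def platform_tick(self, now):
        """Platform timer fired: wake (IPI) every CPU whose deadline passed."""
        woken = []
        while self.heap and self.heap[0][0] <= now:
            _, cpu = heapq.heappop(self.heap)
            woken.append(cpu)
        return woken
```

This also shows why broadcast traffic grows with CPU count: a single platform interrupt may have to wake several CPUs at once, which the MSI-based HPET work mentioned above alleviates by adding more independent timer sources.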
9. Menu governor
The ladder governor promotes to a deeper C-state after N continuous residencies exceed the expected minimal residency, and demotes when the current residency falls below expectation; this is inefficient.
[Diagram: the menu governor instead PICKs a C-state directly, based on the nearest timer deadline, the last unexpected break event, and the latency/power requirement of each state (illustrative figures on the slide: C1 1ns/20w, C2 10ns/15w, C3 100ns/5w).]
Result with an HVM WinXPsp1 guest: less idle watt consumed (-5.2%) and a higher SPECpower score (+1.6%). To be further tuned!
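The menu policy can be sketched as a direct lookup: predict the idle interval (e.g. from the nearest timer deadline and the last unexpected break event) and pick the deepest state whose break-even residency and exit latency both fit. The residency thresholds below are made-up illustrations, not Xen's tables:

```python
def menu_select(states, predicted_idle_ns, latency_req_ns):
    """Pick the deepest C-state whose minimum worthwhile residency fits the
    predicted idle interval and whose exit latency meets the requirement.

    states: (name, min_residency_ns, exit_latency_ns) tuples, shallow to deep.
    """
    chosen = states[0][0]                          # shallowest state always safe
    for name, min_residency_ns, exit_latency_ns in states[1:]:
        if (predicted_idle_ns >= min_residency_ns
                and exit_latency_ns <= latency_req_ns):
            chosen = name                          # a deeper state still pays off
    return chosen

# (name, min residency, exit latency) -- hypothetical threshold values
STATES = [("C1", 0, 1), ("C2", 500, 10), ("C3", 5000, 100)]
```

Unlike the ladder, this reaches the deepest profitable state in one step instead of climbing through intermediate states over several idle periods, which is where the -5.2% idle power gain comes from.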
10. Range timer
Frequent C-state entry/exit may itself consume more power. Each timer now accepts a range for expiration: [expiration, expiration + timer_slop] (default 50us for timer_slop). Overlapping ranges can be merged to reduce the timer interrupt count.
[Diagram: on a 0-6 ms axis, point timers each force a separate C0/Cn transition; with ranges, overlapping windows are served by a single wakeup so the CPU stays in Cn longer.]
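Merging overlapping expiry windows is a simple sweep over sorted expirations: a timer joins the current group if its expiry still falls inside the group's shared window, otherwise it starts a new group and a new interrupt. A sketch, assuming the 50us default slop:

```python
def count_wakeups(expirations_ns, slop_ns=50_000):
    """Count timer interrupts needed when each timer may fire anywhere in
    [expiry, expiry + slop] and overlapping windows share one interrupt."""
    window_end = None
    wakeups = 0
    for expiry in sorted(expirations_ns):
        if window_end is None or expiry > window_end:
            wakeups += 1                    # new group: one more interrupt
            window_end = expiry + slop_ns
        else:
            # joining timer may shrink the shared window so all members
            # still fire within their own [expiry, expiry + slop] range
            window_end = min(window_end, expiry + slop_ns)
    return wakeups
```

For example, four timers 10us apart collapse into a single wakeup with a 50us slop, while the same timers 200us apart still need one interrupt each; widening the slop (as in the 1ms experiment on the next slide's data) merges more of them.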
11. Range timer effect
[Charts, collected on a two-core mobile platform: for one UP HVM RHEL5u1 guest, raising the range from 50us to 1ms reduces idle power by 7.5% (92.5% vs 100%) and improves the SPECpower score by 1.2% (101.2% vs 100%). For 1, 2 and 4 idle UP HVM RHEL5u1 guests, timer interrupts/second (0-10000 scale) drop substantially with range=1ms versus range=50us.]
12. Current picture
[Block diagram: the idle power management stack now comprises, in Xen, both ladder and menu governors in the cpuidle driver, halt (C1) plus deep C-states (C2, C1E and beyond via mwait/IO), range timers, PIT/HPET broadcast, TSC save/restore, always-running TSC support, dynamic-tick timekeeping, statistics export, and power-aware schedulers; Dom0 provides the ACPI parser, external control, the xenpm tool, and xentrace.]
14. Xen Summit Boston 2008
[Block diagram of the original cpufreq split, reconstructed: Dom0 runs the Linux cpufreq core with its governors (ondemand, userspace, others) and drivers (ACPI cpufreq, PowerNow!-K8), driven by user-space Linux PM tools, while querying idle state from Xen via the registration interface; Xen contains a tiny cpufreq core with an ondemand governor and an ACPI cpufreq driver (IA32), and enables MSR access permission for Dom0.]
15. Current picture
[Block diagram: the in-Xen cpufreq stack has grown to include more governors (ondemand, userspace, performance, powersave), turbo mode support, an enhanced user control interface, statistics, CPU offline/online, and drivers for ACPI cpufreq (IA32 and IA64) and PowerNow! K8, with more drivers to come. In Dom0, xenpm takes over from the Linux PM tools, whose continued role is marked with a "?".]
17. Tools
• Xenpm (Dom0): retrieves run-time statistics about Xen cpuidle and cpufreq, and applies user policy on the exposed control knobs of Xen cpufreq (governor, set freq, etc.); more capabilities to be added later, e.g. profiling.
• Xentrace: logs every state change for Xen cpuidle and cpufreq:
CPU0 391365842416 (+ 21204) cpu_idle_entry [ idle to state 2 ]
CPU0 391375951050 (+10108634) cpu_idle_exit [ return from state 2 ]
Raw data could be further processed by other scripts.
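As a small example of such post-processing, the entry/exit lines above can be paired per CPU to compute C-state residencies (a sketch against the line format shown on the slide; the script itself is hypothetical):

```python
import re

# Matches the cpu_idle_entry / cpu_idle_exit lines as printed above
LINE = re.compile(r"(CPU\d+)\s+(\d+)\s+\(\s*\+\s*\d+\)\s+cpu_idle_(entry|exit)")

def residencies(log_lines):
    """Pair entry/exit events per CPU; return (cpu, residency) in the
    trace's timestamp units."""
    entered, out = {}, []
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        cpu, ts, kind = m.group(1), int(m.group(2)), m.group(3)
        if kind == "entry":
            entered[cpu] = ts
        elif cpu in entered:
            out.append((cpu, ts - entered.pop(cpu)))
    return out

log = ["CPU0 391365842416 (+ 21204) cpu_idle_entry [ idle to state 2 ]",
       "CPU0 391375951050 (+10108634) cpu_idle_exit [ return from state 2 ]"]
residencies(log)   # -> [('CPU0', 10108634)]
```

Note the computed residency matches the (+10108634) delta printed on the exit line itself.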
19. • All data shown in this section:
– For reference only and not guaranteed
– Collected on a two-core mobile platform, with one HVM guest created
• Server consolidation effects with multiple VMs/workloads are in progress
• Improvement when Xen cpuidle and cpufreq are enabled:
– SPECpower score is normalized (the 100% noPM score is 1000 ssj_ops/watt)
– Similarly, consumed watt is also normalized (idle noPM watt is 10w)
[Chart: normalized score (ssj_ops/watt, 0-1500) and normalized power (watt, 0-25) across SPECpower workload levels 0-100%; the PM-enabled curves show reduced power and improved efficiency versus noPM at every load level.]
20. • Below is a more attractive comparison:
– Native WinXPsp1's idle watt and SPECpower score are both normalized to 100% as the base
[Chart comparing native xpsp1, native rhel5u1, xen xpsp1 hvm, kvm xpsp1 hvm, xen rhel5u1 hvm, and kvm rhel5u1 hvm on idle (Watt) and SPECpower (ssj_ops/watt), roughly in the 50-140% range. Annotations: HZ=1000 in RHEL5u1 incurs high timer interrupts; Xen and KVM show similar idle power consumption; Xen is slightly more power efficient than KVM.]
22. • A VMM shouldn't be blamed as the only reason for high power consumption!
• A 'bad' VM can eat power
– Just like a 'bad' application can in a native OS
– It causes frequent break events (e.g. timer interrupts) with short C-state residency
• Which parts of a VM can draw high power?
– 'Bad' applications hurt just as they do on native
– How the guest OS is implemented also matters:
• Periodic tick frequency (HZ)
• Timer usage in drivers
• Time sub-system implementation
• …
• A green guest OS wins!
– Smaller HZ, tickless idle or fully dynamic tick, range timers, etc.
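The tick-frequency point is easy to quantify: with a periodic tick, a fully idle guest still wakes HZ times per second, so its C-state residency can never exceed one tick period, 1000/HZ ms. A quick back-of-envelope check (the helper name is ours, not from the deck):

```python
def max_tick_residency_ms(hz):
    """Upper bound on idle C-state residency under a periodic guest tick:
    the vCPU is interrupted every 1000/HZ milliseconds even when fully idle."""
    return 1000.0 / hz

for hz in (1000, 250, 100):
    print(hz, max_tick_residency_ms(hz))   # 1000 -> 1.0 ms, 250 -> 4.0 ms, 100 -> 10.0 ms
```

So an HZ=1000 guest caps residency at 1 ms, while HZ=100 allows 10 ms, an order of magnitude more time in deep C-states; only a tickless guest removes the cap entirely.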
23. Idle power consumption
[Chart: normalized idle watt (80-120%, left axis) and average C-state residency (0-4 ms, right axis) for bare metal, Dom0, PV, HVM WinXP, and guests with HZ=1000, HZ=250, HZ=100 and tickless kernels. The 'bad' guest (HZ=1000) draws the most idle power with the shortest residency; 'green' configurations (low HZ, tickless) show longer residency and idle power close to bare metal.]