"Controlling a laser with Linux is crazy, but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT." -- Linus Torvalds
Kernel Recipes 2015: Solving the Linux storage scalability bottlenecks - Anne Nicolas
Flash devices introduced a sudden shift in the performance profile of direct attached storage. With IOPS rates orders of magnitude higher than rotating storage, it became clear that Linux needed a redesign of its storage stack to properly support and get the most out of these new devices.
This talk will detail the architecture of blk-mq, the redesign of the core of the Linux storage stack, and the later set of changes made to adapt the SCSI stack to this new queuing model. Early results of running Facebook infrastructure production workloads on top of the new stack will also be shared.
Jens Axboe, Facebook
In this talk Liran will discuss interrupt management in Linux, effective handling, how to defer work using tasklets, workqueues and timers. We'll learn how to handle interrupts in userspace and talk about the performance and latency aspects of each method as well as look at some examples from the kernel source.
Liran is the CTO at Mabel technology and co-founder of DiscoverSDK - Software Libraries directory and DiscoverCloud - Business Apps directory.
More than 20 years of training experience including courses in: Linux, Android, Real-time and Embedded systems, and many more.
Agenda:
The Linux kernel has multiple "tracers" built-in, with various degrees of support for aggregation, dynamic probes, parameter processing, filtering, histograms, and other features. Starting from the venerable ftrace, introduced in kernel 2.6, all the way through eBPF, which is still under development, there are many options to choose from when you need to statically instrument your software with probes, or diagnose issues in the field using the system's dynamic probes. Modern tools include SystemTap, Sysdig, ktap, perf, bcc, and others. In this talk, we will begin by reviewing the modern tracing landscape -- ftrace, perf_events, kprobes, uprobes, eBPF -- and what insight into system activity these tools can offer. Then, we will look at specific examples of using tracing tools for diagnostics: tracing a memory leak using low-overhead kmalloc/kfree instrumentation, diagnosing a CPU caching issue using perf stat, probing network and block I/O latency distributions under load, or merely snooping user activities by capturing terminal input and output.
Speaker:
Sasha is the CTO of Sela Group, a training and consulting company based in Israel that employs over 400 developers world-wide. Most of Sasha's work revolves around performance optimization, production debugging, and low-level system diagnostics, but he also dabbles in mobile application development on iOS and Android. Sasha is the author of two books and three Pluralsight courses, and a contributor to multiple open-source projects. He blogs at http://blog.sashag.net.
Kernel Recipes 2015: Kernel packet capture technologiesAnne Nicolas
Sniffing through the ages
Capturing packets on the wire and sending them to analysis software seems at first sight a simple task. But one must not forget that on current networks this can mean capturing 30M packets per second. The objective of this talk is to show which methods and techniques have been implemented in Linux and how they have evolved over time.
The talk will cover AF_PACKET capture as well as PF_RING, DPDK and netmap. It will show how the various evolutions of hardware and software have had an impact on the design of these technologies. On the software side, a special focus will be put on the Suricata IDS, which implements most of these capture methods.
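To put the 30M packets per second figure in perspective, a quick back-of-the-envelope calculation (not from the talk itself) shows the per-packet time budget a capture path has to work with:

```python
# Back-of-the-envelope per-packet time budget at the capture rate cited
# in the abstract (illustrative arithmetic, not from the talk).
PPS = 30_000_000              # 30M packets per second
NS_PER_SEC = 1_000_000_000

budget_ns = NS_PER_SEC / PPS
print(f"{budget_ns:.1f} ns per packet")   # roughly 33 ns of CPU budget
```

At roughly 33 ns per packet, even a single cache miss or lock acquisition per packet dominates the budget, which is what motivates techniques like PF_RING, DPDK and netmap.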
Eric Leblond, Stamus Networks
This presentation covers the general concepts about real-time systems, how Linux kernel works for preemption, the latency in Linux, rt-preempt, and Xenomai, the real-time extension as the dual kernel approach.
The Linux Scheduler: a Decade of Wasted Cores - yeokm1
The talk I gave at Papers We Love #20 (Singapore) about the academic paper "The Linux Scheduler: a Decade of Wasted Cores" by a group of researchers.
The video of this talk can be found here: https://engineers.sg/v/758
Here are some relevant links:
Paper: http://www.ece.ubc.ca/~sasha/papers/eurosys16-final29.pdf
Reference Slides: http://www.i3s.unice.fr/~jplozi/wastedcores/files/extended_talk.pdf
Reference summary: https://blog.acolyer.org/2016/04/26/the-linux-scheduler-a-decade-of-wasted-cores/
Kernel Recipes 2015 - So you want to write a Linux driver framework - Anne Nicolas
Writing a new driver framework in Linux is hard. There are many pitfalls along the way; this talk hopes to point out some of those pitfalls and hard lessons learned through examples, advice and humorous anecdotes, in the hope that it will aid those adventurous enough to take on the task of writing a new driver framework. The scope of the talk includes internal framework design as well as the external API design exposed to drivers and consumers of the framework. This presentation pulls directly from Michael Turquette's experience authoring the Common Clock Framework and maintaining that code for the last four years.
Additionally Mike has solicited tips and advice from other subsystem maintainers, for a well-rounded overview. Be prepared to learn some winning design patterns and hear some embarrassing stories of framework design gone wrong.
Mike Turquette, BayLibre
Windows Internals for Linux Kernel Developers - Kernel TLV
Agenda:
The Windows kernel has an honorable history of more than a quarter of a century. Since its inception in 1989, Windows NT supported a variety of modern OS features -- symmetric multiprocessing, interrupt prioritization, virtual memory, deferred interrupt processing, and many others. In this talk, targeted for Linux kernel developers, we will highlight the key features of the Windows NT kernel that are interesting or different from Linux's perspective. We will begin with a brief overview of processes, threads, and virtual memory on Windows. Next, we will talk about interrupt handling, interrupt priorities (IRQLs), bottom-half processing (DPC, APC, kernel worker threads, kernel thread pool), and I/O request flow. Among other things, we will look at device driver structure on Windows, application to driver communication (handles, IOCTLs), and the logical \DosDevices filesystem. Finally, we will discuss some features introduced in newer Windows versions, such as user-mode drivers (UMDF).
Speaker:
Sasha is the CTO of Sela Group, a training and consulting company based in Israel that employs over 400 developers world-wide. Most of Sasha's work revolves around performance optimization, production debugging, and low-level system diagnostics, but he also dabbles in mobile application development on iOS and Android. Sasha is the author of two books and three Pluralsight courses, and a contributor to multiple open-source projects. He blogs at http://blog.sashag.net.
Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of Linux performance tools, touring common problems with system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. You might discover that tools you have been using for years are, in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many talks on tools that work, including the Linux Performance Tools talk originally at SCALE. This is an anti-version of that talk, focusing on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice for verifying new performance tools, understanding how they work, and using them successfully.
High Performance Storage Devices in the Linux Kernel - Kernel TLV
Agenda:
In this talk we will present the Linux kernel storage layers and dive into blk-mq, a scalable, parallel block layer for high performance block devices, and how it is used to unleash the performance of NVMe, flash and beyond.
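The core idea of blk-mq, per-CPU software queues feeding a smaller set of hardware dispatch queues, can be illustrated with a toy model. This is only a sketch of the concept; the class and all names below are invented for illustration and are not kernel code:

```python
from collections import deque

class ToyBlkMq:
    """Toy model of blk-mq's two-level queuing: each CPU submits I/O to
    its own software queue, and software queues map onto hardware
    dispatch queues. All names here are invented for illustration."""

    def __init__(self, nr_cpus, nr_hw_queues):
        self.sw_queues = [deque() for _ in range(nr_cpus)]
        self.hw_queues = [deque() for _ in range(nr_hw_queues)]
        # Static CPU -> hardware-queue mapping, a simplification of what
        # the kernel derives from the device's queue count and topology.
        self.cpu_to_hw = [cpu % nr_hw_queues for cpu in range(nr_cpus)]

    def submit(self, cpu, request):
        # Per-CPU submission avoids the single shared lock of the old
        # request-queue design.
        self.sw_queues[cpu].append(request)

    def dispatch(self, cpu):
        # Drain this CPU's software queue into its hardware queue.
        hw = self.hw_queues[self.cpu_to_hw[cpu]]
        while self.sw_queues[cpu]:
            hw.append(self.sw_queues[cpu].popleft())
        return hw

mq = ToyBlkMq(nr_cpus=4, nr_hw_queues=2)
mq.submit(0, "read sector 42")
mq.submit(2, "write sector 7")
mq.dispatch(0)
print(list(mq.hw_queues[0]))  # CPU 0's request landed on hardware queue 0
```

The point of the two levels is that submission is contention-free per CPU, while the number of hardware queues matches what the device actually exposes (e.g. NVMe submission queues).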
Speaker:
Evgeny Budilovsky, Kernel Developer at E8 Storage
https://www.linkedin.com/company/e8-storage
HKG15-305: Real Time processing comparing the RT patch vs Core isolation - Linaro
---------------------------------------------------
Speaker: Gary Robertson
Date: February 11, 2015
---------------------------------------------------
★ Session Summary ★
Gives a high-level overview of the components involved in a DRM/Secure Playback use case. The presentation discusses how a client device obtains license keys using the W3C EME implementation of a particular DRM such as Widevine, how content is decrypted, decoded and rendered, and how buffers are allocated, secured and shared among the various elements in the secure playback chain.
--------------------------------------------------
★ Resources ★
Pathable: https://hkg15.pathable.com/meetings/250810
Video: https://www.youtube.com/watch?v=zC3E9xizkoY
Etherpad: http://pad.linaro.org/p/hkg15-305
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2015 - #HKG15
February 9-13th, 2015
Regal Airport Hotel Hong Kong Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
Introduction to DTrace (Dynamic Tracing), written by Brendan Gregg and delivered in 2007. While aimed at a Solaris-based audience, this introduction is still largely relevant today (2012). Since then, DTrace has appeared in other operating systems (Mac OS X, FreeBSD, and it is being ported to Linux), and many user-level providers have been developed to aid tracing of other languages.
The Linux Block Layer - Built for Fast Storage - Kernel TLV
The arrival of flash storage introduced a radical change in the performance profiles of direct attached devices. At the time, it was obvious that the Linux I/O stack needed to be redesigned in order to support devices capable of millions of IOPS with extremely low latency.
In this talk we revisit the changes to the Linux block layer over the last decade or so that made it what it is today: a performant, scalable, robust and NUMA-aware subsystem. In addition, we cover the new NVMe over Fabrics support in Linux.
Sagi Grimberg
Sagi is Principal Architect and co-founder at LightBits Labs.
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements that have been added to the Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user space-defined programs. It is finding uses in security auditing and enforcement, networking enhancements (including eXpress Data Path), and performance observability and troubleshooting. Many new open source performance analysis tools that use BPF have been written in the past 12 months. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
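The in-kernel summarization mentioned above usually takes the form of power-of-two (log2) histograms, as printed by bcc tools. The bucketing logic can be sketched in user space (illustrative only; real BPF programs do this in kernel context, and the sample values below are hypothetical):

```python
from collections import Counter

def log2_bucket(value):
    """Power-of-two bucket index for a sample, mirroring the log2
    histograms printed by bcc tools (user-space illustration only)."""
    bucket = 0
    while (1 << (bucket + 1)) <= value:
        bucket += 1
    return bucket

samples_ns = [3, 5, 9, 17, 1000]          # hypothetical latency samples
hist = Counter(log2_bucket(v) for v in samples_ns)
for b in sorted(hist):
    low, high = 1 << b, (1 << (b + 1)) - 1
    print(f"{low:>5} -> {high:>5} ns : {'*' * hist[b]}")
```

Aggregating into fixed buckets like this is why BPF tracing is low-overhead: only the small histogram array crosses the kernel/user boundary, not every sample.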
Today Xen schedules guest virtual CPUs on all available physical CPUs independently of each other. Recent security issues on modern processors (e.g. L1TF) require turning off hyperthreading for best security, in order to avoid leaking information from one hyperthread to the other. One way to avoid having to turn off hyperthreading is to only ever schedule virtual CPUs of the same guest on one physical core at the same time. This is called core scheduling.
This presentation shows results from the effort to implement core scheduling in the Xen hypervisor. The basic modifications in Xen are presented and performance numbers with core scheduling active are shown.
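The invariant that core scheduling enforces (hyperthread siblings on a physical core only ever run vCPUs of the same guest) can be sketched as a toy placement check; the names below are invented for illustration and this is not Xen code:

```python
def core_schedulable(core_assignment):
    """Check the core-scheduling invariant: all vCPUs running on one
    physical core's hyperthreads belong to the same guest (or the core
    is idle). Invented names; not Xen code."""
    for vcpus in core_assignment.values():
        guests = {guest for guest, _vcpu in vcpus}
        if len(guests) > 1:
            return False
    return True

# core -> (guest, vcpu) pairs currently running on that core's threads
ok  = {"core0": [("guestA", 0), ("guestA", 1)], "core1": [("guestB", 0)]}
bad = {"core0": [("guestA", 0), ("guestB", 0)]}
print(core_schedulable(ok), core_schedulable(bad))  # True False
```

With the invariant held, an L1TF-style leak between siblings can only expose data to the same guest, which is why hyperthreading can stay enabled.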
Operating System 28: Fundamentals of Scheduling - Vaibhav Khanna
The objective of multiprogramming is to have some process running at all times to maximize CPU utilization.
The objective of time-sharing system is to switch the CPU among processes so frequently that users can interact with each program while it is running.
For a uniprocessor system, there will never be more than one running process.
If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
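The single-running-process constraint above can be sketched as a toy FCFS dispatcher; this is a minimal illustration with invented names, not code from the slides:

```python
from collections import deque

# Toy uniprocessor dispatcher: at most one process runs at a time; the
# rest wait in a FIFO ready queue until the CPU is free.
ready = deque(["P1", "P2", "P3"])
running = None
completed = []

while ready or running is not None:
    if running is None:
        running = ready.popleft()   # CPU is free: dispatch next process
    completed.append(running)       # run it to completion (pure FCFS)
    running = None

print(completed)  # processes finish in arrival order
```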
Training Slides: Intermediate 202: Performing Cluster Maintenance with Zero-D... - Continuent
Join us for this intermediate training session as we explore how to leverage the power of Tungsten Clustering to perform database and OS maintenance with zero downtime. This training is for anyone new to Continuent without prior experience, but will also serve as a wonderful refresher for current users. Basic MySQL knowledge is assumed.
AGENDA
- Review the cluster architecture
- Describe the rolling maintenance process
- Explore what happens during a master switch
- Discuss cluster states
- Demonstrate rolling maintenance
- Re-cap commands and resources used during the demo
The objectives of Multithreaded Programming in Operating Systems are:
- To introduce the notion of a thread—a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.
- To discuss the APIs for the Pthreads, Windows, and Java thread libraries
- To explore several strategies that provide implicit threading.
- To examine issues related to multithreaded programming.
- To cover operating system support for threads in Windows and Linux.
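As a minimal illustration of the thread concept (the slides cover the Pthreads, Windows and Java APIs; Python's threading module is used here only because it is concise), threads of one process share an address space, so updates to shared state need synchronization:

```python
import threading

# Threads of one process share an address space, so the shared counter
# needs a lock to serialize the read-modify-write (concept sketch only).
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:              # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 threads x 1000 increments each
```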
I used these slides last year to introduce RTAI and Earliest Deadline First for the course "Real-Time Operating Systems" (in English), here at University of Bologna. They include an architectural overview of RTAI, some scheduling algorithms including EDF, and instructions to install and use RTAI.
CIS*3110 Winter 2016
CIS*3110 (Operating Systems)
Assignment 2: CPU Simulation
Due Date: Sunday, March 6, 2016 at 23:59.
Requirements and Specifications
Objective
The goal of this assignment is to develop a CPU scheduling algorithm that will complete the
execution of a group of multi-threaded processes in an OS that understands threads (kernel
threads). Since a real implementation of your scheduling algorithm is not feasible, you will
implement a simulation of your CPU scheduling algorithm. Each process will have 1-50 threads;
each of the threads has its own CPU and I/O requirements. The simulated scheduling policy is on
the thread level. While your simulation will have access to all the details of the processes that need
to execute, your CPU scheduling algorithm CANNOT take advantage of future knowledge.
Specification
Given a set of processes to execute with CPU and I/O requirements, your CPU simulator will
simulate the execution of the threads based on your developed CPU scheduling policies (FCFS
and RR). Your simulation will collect the following statistics:
• the total time required to execute all the threads in all the processes
• the CPU utilization (NOT the CPU efficiency)
• the average turnaround time for all the processes
• the service time (or CPU time), I/O time and turnaround time for each individual thread
Your simulation structure should be a next event simulation. The next event approach to simulation
is the most common simulation model. At any given time, the simulation is in a single state. The
simulation state can only change at event times, where an event is defined as an occurrence that
may change the state of the system.
Events in this CPU simulation are the following:
• thread arrival
• the transition of a thread state (e.g. when an interrupt occurs due to a time slice, the thread
moves from running state to ready state).
Each event occurs at a specified time. Since the simulation state only changes at an event, the
clock can be advanced to the next most recently scheduled event (the meaning of next event
simulation model).
Events are scheduled via an event queue. The event queue is a sorted queue which contains
"future" events; the queue is sorted by the time of these "future" events. The event queue is
initialized to contain the arrival of all threads. The main loop of the simulation consists of processing
the next event, perhaps adding more future events in the queue as a result, advancing the clock,
and so on until all threads have terminated.
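The next-event loop described above can be sketched with a heap-based event queue. This is a minimal skeleton under the stated model, not a full solution to the assignment:

```python
import heapq

# Minimal next-event skeleton: the event queue is a heap ordered by
# event time, and the clock jumps straight to the next scheduled event.
events = []                                   # (time, description)
for t, desc in [(5, "T2 arrives"),
                (0, "T1 arrives"),
                (3, "T1 quantum expires")]:
    heapq.heappush(events, (t, desc))

clock = 0
log = []
while events:
    clock, desc = heapq.heappop(events)       # advance to next event time
    log.append((clock, desc))
    # a real simulator would update thread state here and may push
    # new future events onto the heap

print(log)  # events are processed in time order
```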
Simulation Execution
Your simulation program will be invoked as:
simcpu [-d] [-v] [-r quantum] < input_file
where
• -d stands for detailed information
• -v stands for verbose mode
• -r indicates Round Robin scheduling with the given quantum (an integer).
You can assume only these flags will be used with your program, and that they will appear in the
order listed. The output for the de ...
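One possible way to parse the invocation described above is with Python's argparse; this is only a sketch, and how you handle the flags is up to your implementation:

```python
import argparse

# Sketch of parsing: simcpu [-d] [-v] [-r quantum] < input_file
parser = argparse.ArgumentParser(prog="simcpu")
parser.add_argument("-d", action="store_true", help="detailed information")
parser.add_argument("-v", action="store_true", help="verbose mode")
parser.add_argument("-r", type=int, metavar="quantum",
                    help="Round Robin scheduling with the given quantum")

args = parser.parse_args(["-d", "-r", "50"])
print(args.d, args.v, args.r)  # True False 50
```

When `-r` is absent, `args.r` is `None`, which can serve as the switch between FCFS and RR scheduling.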
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop in which participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Welcome to ViralQR, your best QR code generator - ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through QR technology. Whether you run a small business or a huge enterprise, our easy-to-use platform provides multiple options that can be tailored to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, enhancing customer interaction and making business more fluid. We strongly believe in the ability of QR codes to change how businesses interact with their customers, and we are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
5. Introduction
1 An operating system runs more processes than it has processors
2 It needs some plan to time-share the processors among the processes
3 A common approach is to provide each process with a virtual processor – the illusion that it has exclusive access to the processor
4 It is then the job of the OS to multiplex these virtual processors onto the underlying physical processors
7. Scheduling triggers
1 A process does I/O: put it to sleep and schedule another process
2 Use timer interrupts to stop a process running on a processor after a fixed time quantum (e.g. 100 msec)
15. Context switching
• Used to achieve multiplexing
• Internally, two low-level context switches are performed:
1 From the process’s kernel thread to the current CPU’s scheduler thread
2 From the scheduler’s thread to a process’s kernel thread
• No direct switching from one user-space process to another
• Each process has its own kernel stack and register set (its context)
• Each CPU has its own scheduler thread
• A context switch involves saving the old thread’s CPU registers and restoring the previously-saved registers of the new thread (enabled by swtch)
24. swtch
• Saves and restores contexts
• Takes two arguments: struct context **old and struct context *new
• Replaces the former with the latter
• Each time a process has to give up the CPU, its kernel thread invokes swtch to save its own context and switch to the scheduler context
• Flow in case of an interrupt:
1 trap handles the interrupt and then calls yield
2 yield makes a call to sched
3 sched invokes swtch(&proc->context, cpu->scheduler)
4 Control returns to the scheduler thread
31. Scheduling mechanism
• Each process that wants to give up the processor:
1 Acquires ptable.lock (the process table lock)
2 Releases any other locks that it is holding
3 Updates proc->state (its own state)
4 Calls sched
• This mechanism is followed by yield, sleep and exit
• sched ensures that these steps are followed
35. Scheduling mechanism (2)
• Why must a process acquire ptable.lock before a call to swtch?
• It breaks the convention that the thread that acquires a lock is also responsible for releasing the lock
• Without holding ptable.lock, two CPUs might decide to schedule the same process, since both can access ptable
45. scheduler
• Simple loop: find a process to run, run it until it stops, repeat
• Acquires and releases ptable.lock, and enables interrupts, on every iteration. Why?
• If the CPU is idle (no RUNNABLE process):
1 Idle looping while holding the lock would not allow any other CPU to access the process table
2 Idle looping (all processes are waiting for I/O) while interrupts are disabled would not allow any I/O to arrive
• The first process with p->state == RUNNABLE is selected
• The process is assigned to the per-CPU proc
• The process’s page table is switched to via switchuvm
• The process is marked RUNNING
• swtch is called to start running it
50. Sleep and wakeup
• sleep and wakeup provide an IPC mechanism
• They enable sequence coordination, or conditional synchronization
• sleep allows one process to sleep waiting for an event
• wakeup allows another process to wake up processes sleeping on that event
58. Today’s Task
• ptable.lock is a very coarse-grained lock which protects the entire process table
• Design a mechanism (in terms of pseudocode) that splits it up into multiple locks
• Explain why your solution will improve performance while ensuring protection
59. Reading(s)
• Chapter 5, “Scheduling”, from “xv6: a simple, Unix-like teaching operating system”