Starting in Linux version 3.14, a new scheduling class was introduced. This class is called SCHED_DEADLINE. It implements Earliest Deadline First (EDF) along with a Constant Bandwidth Server (CBS) that gives applications a guaranteed amount of CPU time within a periodic time frame. This type of scheduling is advantageous for robotics, media players and recorders, as well as virtual machine guest management. This talk will explain the history of SCHED_DEADLINE and compare it with various other methods of dealing with periodic deadlines. It will also discuss some of the issues with the current Linux implementation and some of the improvements that are being worked on.
Since its inception, SCHED_DEADLINE has seen very little use. I know this because as soon as I started using it, I discovered several bugs in the code (the fixes have since been upstreamed). The fact that these bugs had been in the kernel for so long tells me that this wonderful feature is not well known. This talk is meant to help spread the word, so that more people can take advantage of SCHED_DEADLINE's power.
Steven Rostedt, VMware
flow shop sequencing, job shop sequencing
Types of sequencing problems:
Case 1: Processing of n jobs through one machine
Case 2: Processing of n jobs through two machines A and B, all jobs processed in the order AB
Case 3: Processing of n jobs through three machines A, B and C, all jobs processed in the order ABC
Case 4: Processing of n jobs through m machines
Case 5: Processing of 2 jobs through m machines
As the leap second approaches, there is no better time to reflect on our misconceptions about time and numerals, past catastrophes and possible mitigation techniques.
Modern operating systems are complex beasts, responsible for sharing hardware resources between many competing programs. For low-latency systems, it is sometimes necessary to subvert the OS to grab back the resources your program needs. In this talk, we will explore what is actually going on when you run a program, how much time it actually gets on the CPU, and strategies to help make your code run as fast as possible. By the end of this talk, you will know how to tune your software and the Linux kernel to get the most from your hardware, and more importantly, how to validate that your changes have worked.
This algorithm performs better than the round-robin algorithm in most cases; however, there are a few scenarios in which it performs similarly to round-robin.
Operating Systems, ch. 05 (CPU Scheduling), 3rd level, College of Computers, Seiyun University. Operating systems for third-level students at the College of Computers, Seiyun University, lecture 05.
Optimizing Parallel Reduction in CUDA: NOTES - Subhajit Sahu
Highlighted notes on Optimizing Parallel Reduction in CUDA.
Written while doing research work under Prof. Dip Banerjee and Prof. Kishore Kothapalli.
Interesting optimizations; I should try these soon, as PageRank is basically lots of sums.
Introduction
Network analysis is a technique used for planning and scheduling large projects in fields such as construction, maintenance, fabrication, purchasing, computer system installation, and research and development planning. There is a multitude of operations research situations that can be modeled and solved as networks. Some recent surveys report that as much as 70% of real-world mathematical programming problems can be represented by network-related models. Network analysis is known by many names: PERT (Programme Evaluation and Review Technique), CPM (Critical Path Method), PEP (Programme Evaluation Procedure), LCES (Least Cost Estimating and Scheduling), SCANS (Scheduling and Control by Automated Network System), etc.
This chapter will present three of these algorithms:
1. PERT & CPM
2. Shortest-route algorithms
3. Maximum-flow algorithms
Kernel Recipes 2019 - Driving the industry toward upstream first - Anne Nicolas
Wanting to avoid the Android experience, Google developers always aimed to make their Chrome OS Linux kernels as close to mainline as possible. However, when Chromebooks were first created, Google was left with no choice: the mainline kernel, in some subsystems, still did not have all the functionality needed by Chromebooks. Hence, similarly to Android, Chrome OS had to develop its own out-of-tree code for the kernel and maintain it across a few different kernel versions.
Luckily, over the last few years a strong and consistent effort has been happening to bring Chromebook devices closer to mainline. It has led to significant improvements that now make it possible to run mainline on Chrome OS devices. And not only Chromebooks, as these significant strides are also improving Arm-based SOCs and other key components of the rich Chromebook hardware ecosystem. In this talk, we will look at how and why upstream support for Chromebooks improved, the current status of various models, and what we expect in the future.
Enric Balletbò i Serra
Kernel Recipes 2019 - No NMI? No Problem! – Implementing Arm64 Pseudo-NMI - Anne Nicolas
As the name would suggest, a Non-Maskable Interrupt (NMI) is an interrupt-like feature that is unaffected by the disabling of classic interrupts. In Linux, NMIs are involved in some features such as performance event monitoring, hard-lockup detector, on demand state dumping, etc… Their potential to fire when least expected can fill the most seasoned kernel hackers with dread.
AArch64 (aka arm64 in the Linux tree) does not provide architected NMIs, a consequence being that features benefiting from NMIs see their use limited on AArch64. However, the Arm Generic Interrupt Controller (GIC) supports interrupt prioritization and masking, which, among other things, provides a way to control whether or not a set of interrupts can be signaled to a CPU.
This talk will cover how, using the GIC interrupt priorities, we provide a way to configure some interrupts to behave in an NMI-like manner on AArch64. We’ll discuss the implementation, some of the complications that ensued and also some of the benefits obtained from it.
Julien Thierry
More Related Content
Similar to Embedded Recipes 2017 - Understanding SCHED_DEADLINE - Steven Rostedt
Kernel Recipes 2019 - Hunting and fixing bugs all over the Linux kernel - Anne Nicolas
At a rate of almost 9 changes per hour (24/7), the Linux kernel is definitely a scary beast. Bugs are introduced on a daily basis and, through the use of multiple code analyzers, *some* of them are detected and fixed before they hit mainline. Over the course of the last few years, Gustavo has been fixing such bugs and many different issues in every corner of the Linux kernel. Recently, he was in charge of leading the efforts to globally enable -Wimplicit-fallthrough, which is enabled by default in Linux v5.3. This presentation is a report on all the stuff Gustavo has found and fixed in the kernel with the support of the Core Infrastructure Initiative.
Gustavo A.R. Silva
Kernel Recipes 2019 - Metrics are money - Anne Nicolas
In I.T. we all use all kinds of metrics. Operations teams rely heavily on these, especially when things go south. These metrics are sometimes overrated. Let’s dive into a few real life stories together.
Aurélien Rougemont
Kernel Recipes 2019 - Kernel documentation: past, present, and future - Anne Nicolas
The Linux kernel project includes a huge amount of documentation, but that information has seen little in the way of care over the years. The amount of care has increased significantly recently, though, and things are improving quickly. Listen as the kernel’s documentation maintainer discusses the current state of the kernel’s docs, how we got here, where we’re trying to go, and how you can help.
Jonathan Corbet
Embedded Recipes 2019 - Knowing your ARM from your ARSE: wading through the t... - Anne Nicolas
Modern SoC designs incorporate technologies from numerous vendors, each with their own inconsistent, confusing, undocumented and even contradictory terminology. The result is a mess of acronyms and product names which have a surprising impact on the ability to develop reusable, modular code thanks to the nature of the underlying IP being obscured.
This presentation will dive into some of the misnomers plaguing the Arm ecosystem, with the aim of explaining why things are like they are, how they fit together under the architectural umbrella and how you, as a developer, can decipher the baffling ingredients list of your next SoC design!
Will Deacon
Kernel Recipes 2019 - GNU poke, an extensible editor for structured binary data - Anne Nicolas
GNU poke is a new interactive editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them. Once a user has defined a structure for binary data (usually matching some file format) she can search, inspect, create, shuffle and modify abstract entities such as ELF relocations, MP3 tags, DWARF expressions, partition table entries, and so on, with primitives resembling simple editing of bits and bytes. The program comes with a library of already written descriptions (or “pickles” in poke parlance) for many binary formats.
GNU poke is useful in many domains. It is very well suited to aid in the development of programs that operate on binary files, such as assemblers and linkers. This was in fact the primary inspiration that brought me to write it: easily injecting flaws into ELF files in order to reproduce toolchain bugs. Also, due to its flexibility, poke is also very useful for reverse engineering, where the real structure of the data being edited is discovered by experiment, interactively. It is also good for the fast development of prototypes for programs like linkers, compressors or filters, and it provides a convenient foundation to write other utilities such as diff and patch tools for binary files.
This talk (unlike Gaul) is divided into four parts. First I will introduce the program and show what it does: from simple bits/bytes editing to user-defined structures. Then I will show some of the internals, and how poke is implemented. The third block will cover the way of using Poke to describe user data, which is to say the art of writing “pickles”. The presentation ends with a status of the project, a call for hackers, and a hint at future works.
Jose E. Marchesi
Kernel Recipes 2019 - Analyzing changes to the binary interface exposed by th... - Anne Nicolas
Operating system distributors often face challenges that are somewhat different from that of upstream kernel developers. For instance, some kernel updates often need to stay at least binary compatible with modules that might be “out of tree” for some time.
In that context, being able to automatically detect and analyze changes to the binary interface exposed by the kernel to its module does have some noticeable value.
The Libabigail framework is capable of analyzing ELF binaries along with their accompanying debug info in the DWARF format, and of detecting and reporting changes in types, functions, variables and ELF symbols. It has historically supported this for user-space shared libraries and applications, so we worked to make it understand Linux kernel binaries.
In this presentation, we are going to present the current support of ABI analysis for Linux Kernel binaries, the challenges we face, how we address them and the plans we have for the future.
Dodji Seketeli, Jessica Yu, Matthias Männich
Embedded Recipes 2019 - Remote update adventures with RAUC, Yocto and Barebox - Anne Nicolas
Different upgrade and update strategies exist when it comes to embedded Linux systems. If none of these strategies has been chosen at development time, adding one afterwards can be a tedious task.
It gets even harder when the system is already deployed in the field and only accessible via a 3G connection.
This talk is a developer's experience of putting exactly that in place, giving feedback on one way of doing it on a system running Barebox and a Yocto-based distribution.
Patrick Boettcher
Embedded Recipes 2019 - Making embedded graphics less special - Anne Nicolas
Traditionally graphics drivers were one of the last hold-outs of proprietary software in an embedded Linux system. This situation is changing with open-source graphics drivers showing up for almost all of the graphics acceleration peripherals on the market right now. This talk will show how open-source graphics drivers are making embedded systems less special, as well as trying to provide an overview of the Linux graphics stack, de-mystifying what is often seen as black magic GPU stuff from outside observers.
Lucas Stach
Embedded Recipes 2019 - Linux on Open Source Hardware and Libre Silicon - Anne Nicolas
This talk will explore Open Source Hardware projects relevant to Linux, including boards like BeagleBone, Olimex OLinuXino, Giant board and more. Looking at the benefits and challenges of designing Open Source Hardware for a Linux system, along with BeagleBoard.org’s experience of working with community, manufacturers, and distributors to create an Open Source Hardware platform. In closing also looking at the future, Libre Silicon like RISC-V designs, and where this might take Linux.
Drew Fustini
Embedded Recipes 2019 - From maintaining I2C to the big (embedded) picture - Anne Nicolas
The I2C subsystem is not the shiniest part of the Linux Kernel. For embedded devices, though, it is one of the many puzzle pieces which just have to work. Wolfram Sang has the experience of maintaining this subsystem for nearly 7 years now. This talk gives a short overview of how maintaining works in general and specifically in this subsystem. But mainly, it will highlight noteworthy points in the timeline and lessons learnt from that. It will present trends, not so much regarding I2C but more the Linux Kernel and the embedded ecosystem in general. And of course, there will be plenty of anecdotes and bits from behind the scenes for your entertainment.
Wolfram Sang
Embedded Recipes 2019 - Testing firmware the devops way - Anne Nicolas
ITRenew is selling recertified OCP servers under the Sesame brand, those servers come either with their original UEFI BIOS or with LinuxBoot. The LinuxBoot project is pushing the Linux kernel inside bios flash and using userland programs as bootloader.
To achieve quality in our software stack, as in any project, we need to test it. Traditional BIOSes are tested by hand, but this is 2019: we need to do it automatically! We already presented the hardware setup behind the LinuxBoot CI; this talk will focus on the software.
We use u-root for our userland bootloader; this software is written in Go so we naturally choose to use Go for our testing too. We will present how we are using and extending the Go native test framework `go test` for testing embedded systems (serial console) and improving the report format for integration to a CI.
Julien Viard de Galbert
Embedded Recipes 2019 - Herd your SoCs, become a matchmaker - Anne Nicolas
About 60% of the Linux kernel source tree is devoted to drivers for a large variety of supported hardware components. Especially in the embedded world, the number of different SoC families, versions, and revisions, integrating a myriad of “IP cores”, keeps on growing.
In this presentation, Geert will explain how to match drivers against hardware, and how to support a wide variety of (dis)similar devices, without turning platform and driver code into an entangled bowl of spaghetti.
Starting with a brief history of driver matching in Linux, he will fast-forward to device-tree based matching. He will discuss ways to handle slight variations of the same hardware devices, and different SoC revisions, each with their own quirks and bugs. Finally, Geert will show best practices for evolving device drivers in a maintainable way, based on his experiences as an embedded Linux kernel developer and maintainer.
Geert Uytterhoeven
Embedded Recipes 2019 - LLVM / Clang integration - Anne Nicolas
Buildroot is a popular and easy-to-use embedded Linux build system. It generates, in a few minutes, lightweight and customized Linux systems, including the cross-compilation toolchain, kernel and bootloader images, as well as a wide variety of userspace libraries and programs.
This talk is about the integration of LLVM/clang into Buildroot.
In 2018, Valentin Korenblit, supervised by Romain Naour, worked on this topic during his internship at Smile ECS. After a short introduction to llvm/clang and Buildroot, this talk will go through the numerous issues discovered while adding llvm/clang components and how these issues were fixed. Romain will also detail the work in progress and the work to be done based on llvm/clang libraries (OpenCL, Compiler-rt, BCC, Chromium, lld).
Romain Naour
Embedded Recipes 2019 - Introduction to JTAG debugging - Anne Nicolas
This talk introduces JTAG debugging capabilities, both for debugging hardware and software. Marek first explains what the JTAG stands for and explains the operation of the JTAG state machine. This is followed by an introduction to free software JTAG tools, OpenOCD and urJTAG. Marek shortly explains how to debug software using those tools and how that ties into the JTAG state machine. However, JTAG was designed for testing hardware. Marek explains what boundary scan testing (BST) is, what are BSDL files and their format, and practically demonstrates how to blink an LED using BST and only free software tools.
Marek Vasut
Embedded Recipes 2019 - PipeWire: a new foundation for embedded multimedia - Anne Nicolas
PipeWire is an open source project that aims to greatly improve audio and video handling under Linux. Utilising a fresh design, it bridges use cases that have previously been addressed by different tools, or not addressed at all, providing ground for building complex, yet secure and efficient, multimedia systems.
In this talk, Julien is going to present the PipeWire project and the concepts that make up its design. In addition, he is going to give an update of the current and future work going on around PipeWire, both upstream and in Automotive Grade Linux, an early adopter that Julien is actively working on.
Julian Bouzas
Kernel Recipes 2019 - ftrace: Where modifying a running kernel all started - Anne Nicolas
Ftrace’s most powerful feature is the function tracer (and function graph tracer which is built from it). But to have this enabled on production systems, it had to have its overhead be negligible when disabled. As the function tracer uses gcc’s profiling mechanism, which adds a call to “mcount” (or more recently fentry, don’t worry if you don’t know what this is, it will all be explained) at the start of almost all functions, it had to do something about the overhead that causes. The solution was to turn those calls into “nops” (an instruction that the CPU simply ignores). But this was no easy feat. It took a lot to come up with a solution (and also turning a few network cards into bricks). This talk will explain the history of how ftrace came about implementing the function tracer, and brought with it the possibility of static branches and soon static calls!
Steven Rostedt
Kernel Recipes 2019 - Suricata and XDP - Anne Nicolas
Suricata is a network threat detection engine that uses network packet capture to reconstruct the traffic up to the application layer and find threats on the network using rules that define the behavior to detect. This task is really CPU intensive, and discarding non-interesting traffic is one solution to enable scaling Suricata to 40Gbps and beyond.
This talk will present the latest evolution of Suricata, which now uses eBPF and XDP to bypass traffic. Suricata 5.0 supports hardware XDP to provide bypass with network cards such as Netronome. It also takes advantage of pinned maps to get persistence of the bypassed flows. This talk will cover the different uses of XDP and eBPF in Suricata and show how they impact performance and usability. If development time permits, the talk will also cover AF_XDP and the impact of this new capture method on Suricata.
Eric Leblond
Kernel Recipes 2019 - Marvels of Memory Auto-configuration (SPD) - Anne Nicolas
System memory configuration is a transparent operation nowadays, something that we all came to expect to just work out of the box. Still, it does happen behind the scenes every single time we boot our computers. This requires the cooperation of hardware components on the mainboard and on memory modules themselves, as well as firmware code to drive these. While it is possible to just let it happen, having a deeper understanding of how it works makes it possible to access valuable information from the operating system at run-time.
I will take you through the history of system memory configuration from the mid 70s to now. We will explore the different types of memory modules, how their configuration data is stored and how the firmware can access them. We will see which problems had to be solved along the way and how they were solved. Lastly we will see how Linux supports reading the memory configuration information and what you can do with that information.
Jean Delvare
2. What is SCHED_DEADLINE?
● A new scheduling class (well, since v3.14)
– others are: SCHED_OTHER/NORMAL, SCHED_FIFO, SCHED_RR
● SCHED_IDLE, SCHED_BATCH (out of scope for today)
● Constant Bandwidth Server (CBS)
● Earliest Deadline First (EDF)
3. Other Schedulers
● SCHED_OTHER / SCHED_NORMAL
– Completely Fair Scheduler (CFS)
– Uses “nice” priority
– Each task gets a fair share of the CPU bandwidth
● SCHED_FIFO
– First in, first out
– Each task runs till it gives up the CPU or a higher priority task preempts it
● SCHED_RR
– Like SCHED_FIFO, but tasks of the same priority share time slices of the CPU
4. Priorities
● You have two programs running on the same CPU
– One runs a nuclear power plant
● Requires 1/2 second out of every second of the CPU (50% of the CPU)
– The other runs a washing machine
● Requires 50 milliseconds out of every 200 milliseconds (25% of the CPU)
– Which one gets the higher priority?
8. Rate Monotonic Scheduling (RMS)
● Computational time vs Period
● Can be implemented by SCHED_FIFO
● Smallest period gets highest priority
● Compute computation time (C)
● Compute period time (T)
U = Σ_{i=1}^{n} C_i/T_i
9. Rate Monotonic Scheduling (RMS)
● Add a Dishwasher to the mix...
● Nuclear Power Plant: C = 500ms, T = 1000ms (50% of the CPU)
● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU)
● Washing Machine: C = 100ms, T = 800ms (12.5% of the CPU)
U = 500/1000 + 300/900 + 100/800 = 0.958333
18. Rate Monotonic Scheduling (RMS)
● Computational time vs Period
● Can be implemented by SCHED_FIFO
● Smallest period gets highest priority
● Compute computation time (C)
● Compute period time (T)
U = Σ_{i=1}^{n} C_i/T_i ≤ n(2^{1/n} − 1)
20. Rate Monotonic Scheduling (RMS)
● Add a Dishwasher to the mix...
● Nuclear Power Plant: C = 500ms, T = 1000ms (50% of the CPU)
● Dishwasher: C = 300ms, T = 900ms (33.3333% of the CPU)
● Washing Machine: C = 100ms, T = 800ms (12.5% of the CPU)
U = 500/1000 + 300/900 + 100/800 = 0.958333
U ≤ n(2^{1/n} − 1) = 3(2^{1/3} − 1) = 0.77976
● 0.958333 > 0.77976: the RMS bound cannot guarantee this task set
29. Setting an RT priority
sched_setscheduler(pid_t pid, int policy, const struct sched_param *param)
struct sched_param {
	int sched_priority;
};
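For comparison, a minimal sketch of switching the current thread to SCHED_FIFO with this interface (the `set_fifo()` helper is an illustrative name; succeeding requires root or CAP_SYS_NICE):

```c
#include <sched.h>
#include <string.h>

/* Put the calling thread into SCHED_FIFO at the given priority
 * (1 = lowest RT, 99 = highest on Linux).  Returns 0 on success,
 * -1 with errno set (EPERM without privilege, EINVAL for a bad
 * priority). */
static int set_fifo(int prio)
{
	struct sched_param param;

	memset(&param, 0, sizeof(param));
	param.sched_priority = prio;
	return sched_setscheduler(0, SCHED_FIFO, &param);
}
```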
30. Implementing SCHED_DEADLINE in Linux
Two new syscalls:
sched_getattr(pid_t pid, struct sched_attr *attr, unsigned int size, unsigned int flags)
(similar to sched_getparam(pid_t pid, struct sched_param *param))
sched_setattr(pid_t pid, struct sched_attr *attr, unsigned int flags)
(similar to sched_setparam(pid_t pid, struct sched_param *param))
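glibc has historically shipped no wrappers (or struct definition) for these two syscalls, so user code typically declares the structure itself and calls syscall(2) directly. A sketch, with the field layout copied from the kernel's uapi header (very recent glibc versions finally add their own wrappers, in which case these local definitions would clash):

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE	6
#endif

/* Layout of the kernel's struct sched_attr (since v3.14). */
struct sched_attr {
	uint32_t size;		/* sizeof(struct sched_attr) */
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;	/* SCHED_OTHER / SCHED_BATCH */
	uint32_t sched_priority;	/* SCHED_FIFO / SCHED_RR */
	uint64_t sched_runtime;	/* SCHED_DEADLINE, in ns */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

static int sched_setattr(pid_t pid, struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

static int sched_getattr(pid_t pid, struct sched_attr *attr,
			 unsigned int size, unsigned int flags)
{
	return syscall(SYS_sched_getattr, pid, attr, size, flags);
}
```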
32. Implementing SCHED_DEADLINE
struct sched_attr attr;
int ret;

/* Start from the task's current attributes */
ret = sched_getattr(0, &attr, sizeof(attr), 0);
if (ret < 0)
	error();

attr.sched_policy = SCHED_DEADLINE;
attr.sched_runtime = runtime_ns;	/* budget per period */
attr.sched_deadline = deadline_ns;	/* relative deadline */
/* sched_period of 0 defaults to the deadline */

ret = sched_setattr(0, &attr, 0);
if (ret < 0)
	error();
33. sched_yield()
● Most use cases are buggy
– Most tasks will not give up the CPU
● SCHED_OTHER
– Gives up the current CPU time slice
● SCHED_FIFO / SCHED_RR
– Gives up the CPU to a task of the SAME PRIORITY
– Voluntary scheduling among same priority tasks
35. sched_yield()
● What you want for SCHED_DEADLINE!
● Tells the kernel the task is done with the current period
● Used to relinquish the rest of the runtime budget
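Putting the two slides together, the skeleton of a periodic SCHED_DEADLINE worker might look like this (a sketch; `periodic_loop()` and `do_work()` are illustrative names, and the task is assumed to already be under SCHED_DEADLINE via sched_setattr()):

```c
#include <sched.h>

/* One period's worth of work; stands in for the real job, which
 * must complete within the task's runtime budget. */
static int do_work(void)
{
	return 1;
}

/* Run do_work() once per period for the given number of periods.
 * Under SCHED_DEADLINE, sched_yield() throws away the remainder of
 * this period's runtime and sleeps the task until the next period. */
static int periodic_loop(int periods)
{
	int done = 0;

	for (int i = 0; i < periods; i++) {
		done += do_work();
		sched_yield();
	}
	return done;
}
```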
36. Constant Bandwidth Server
On wakeup, keep the current deadline and remaining runtime unless that would exceed the reserved bandwidth:
if
	remaining runtime / (scheduling deadline − current time) > runtime / period
then replenish:
	scheduling deadline = current time + deadline
	remaining runtime = runtime
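The wakeup rule above can be written out as a small helper. A sketch (`cbs_wakeup()` is an illustrative name; times are in nanoseconds, and the comparison is cross-multiplied so it stays in integer math, as the kernel does):

```c
#include <stdint.h>

/* CBS wakeup rule (sketch): keep the task's current scheduling
 * deadline and remaining runtime unless that would exceed the
 * reserved bandwidth, in which case replenish.  The check
 *   remaining / (sched_deadline - now) > runtime / period
 * is cross-multiplied to avoid division. */
static void cbs_wakeup(uint64_t now, uint64_t runtime, uint64_t deadline,
		       uint64_t period, uint64_t *sched_deadline,
		       uint64_t *remaining)
{
	if (*sched_deadline <= now ||
	    *remaining * period > runtime * (*sched_deadline - now)) {
		/* Bandwidth would be exceeded: start a new period. */
		*sched_deadline = now + deadline;
		*remaining = runtime;
	}
}
```

With the 3-units-out-of-9 task of the following slides: a task waking at time 0 with 2 units left and its deadline at 3 is replenished (2/3 > 3/9), while the same task with its deadline at 6 keeps its budget (2/6 ≤ 3/9).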
37. Self sleeping tasks
Courtesy of Daniel Bristot de Oliveira
[timeline 0–18: task with runtime 3, period 9; U = 3/9]
38. Self sleeping tasks
[timeline 0–18; U = 3/9]
● Remainder = 2/3 > 3/9
39. Self sleeping tasks
[timeline 0–18; U = 3/9]
● Deadline = current time + new deadline
40. Self sleeping tasks
[timeline 0–18; U = 3/9]
● Remaining Runtime = Runtime (3 units)
41. Self sleeping tasks
[timeline 0–18; U = 3/9]
● Another Deadline task?
43. Self sleeping tasks
[timeline 0–18; U = 3/9; original deadline marked]
● Only ran 2 units in the original 9
45. Deadline vs Period
● Can't have offset holes in our donuts
● Have a specific deadline to make within a period
	runtime <= deadline <= period
● But is this too constrained?
U = Σ_{i=1}^{n} C_i/D_i = 1
46. Self sleeping constrained tasks
Courtesy of Daniel Bristot de Oliveira
[timeline 0–18: runtime 2, deadline 4, period 10]
47. Self sleeping constrained tasks
[timeline 0–18: runtime 2, deadline 4, period 10]
● 1/1 > 2/4
48. Self sleeping constrained tasks
[timeline 0–18: runtime 2, deadline 4, period 10]
● Move deadline from 4 to 7 (period from 10 to 13)
49. Self sleeping constrained tasks
[timeline 0–18: runtime 2, deadline 4, period 10]
● Runs for 1 and sleeps again
50. Self sleeping constrained tasks
[timeline 0–18: runtime 2, deadline 4, period 10]
● Wakes up again with 1 to go (moves deadline to 10, period to 16)
51. Self sleeping constrained tasks
[timeline 0–18: runtime 2, deadline 4, period 10]
● 4 out of 10! Instead of 2 out of 4 in 10
53. Multi processors! (Dhall's Effect)
● M CPUs
● M+1 tasks
● One task with runtime 999ms out of 1000ms
● M tasks with a runtime of 10ms out of 999ms
● All start at the same time
● The M tasks have a shorter deadline
● All M tasks run on all CPUs for 10ms
● That one task now has only 990ms left to run 999ms
999/1000 + M(10/999) = 0.999 + 0.01001·M < M
M = 2: 999/1000 + 2(10/999) = 0.999 + 2·0.01001 = 1.01902 < 2
56. Multi processors!
● EDF cannot give you better than U = 1
– No matter how many processors you have
– Full utilization would be U = N CPUs
● Two methods
– Partitioned (bind each task to a CPU)
– Global (let all tasks migrate wherever)
– Neither gives better than U = 1 guarantees
57. Multi processors!
● Partitioned EDF cannot always be used:
– U_t1 = 0.6
– U_t2 = 0.6
– U_t3 = 0.5
– On two CPUs no partitioning works: any pair of these sums to more than 1
● The above set would need special scheduling to work anyway
● Finding the best utilization is the bin packing problem
– Sorry folks, it's NP-complete
– Don't even bother trying
58. Multi processors!
● Global Earliest Deadline First (gEDF)
● Cannot guarantee deadlines for U > 1 in all cases
● But special cases can be satisfied for U > 1
With D_i = P_i and U_max = max{C_i/P_i}:
U = Σ_{i=1}^{n} C_i/P_i ≤ M − (M−1)·U_max
59. Multi processors!
● M = 8
● U_max = 0.5
U = Σ_{i=1}^{n} C_i/P_i ≤ M − (M−1)·U_max = 8 − 7·0.5 = 4.5
60. Multi processors!
● M = 2
● U_max = 999/1000
U = Σ_{i=1}^{n} C_i/P_i ≤ M − (M−1)·U_max = 2 − 1·0.999 = 1.001
61. The limits of SCHED_DEADLINE
● Runs on all CPUs (well, sorta)
– No limited sched affinity allowed
– Global EDF is the default
– Must account for sched migration overheads
● Cannot have children (no forking)
– Your SCHED_DEADLINE tasks have been fixed
● Calculating Worst Case Execution Time (WCET)
– If you get it wrong, SCHED_DEADLINE may throttle your task before it finishes
62. Giving SCHED_DEADLINE Affinity
● Setting task affinity on SCHED_DEADLINE is not allowed
● But you can limit them by creating new sched domains
– CPU sets
– Implementing Partitioned EDF
76. Giving SCHED_DEADLINE Affinity
# Move every existing task into the other cpuset,
# then place the deadline task alone in its own set
cat tasks | while read task; do
	echo $task > other_set/tasks
done
echo $sched_deadline_task > my_set/tasks
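For context, the `tasks` files above live in a cpuset hierarchy. A sketch of setting one up (a configuration fragment; the mount point, set names, and CPU numbers are illustrative, file names vary slightly by kernel, and it must run as root):

```shell
# Mount the cpuset filesystem (cgroup v1 style) and create two sets.
mkdir -p /dev/cpuset
mount -t cpuset none /dev/cpuset
cd /dev/cpuset

mkdir my_set other_set

# Reserve CPU 0 for the deadline task, give the rest to everyone else.
echo 0   > my_set/cpuset.cpus
echo 0   > my_set/cpuset.mems
echo 1-3 > other_set/cpuset.cpus
echo 0   > other_set/cpuset.mems

# Keep the scheduler domains separate so this acts as partitioned EDF.
echo 1 > my_set/cpuset.cpu_exclusive
```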
77. Calculating WCET
● Today's hardware is extremely unpredictable
● Worst Case Execution Time is impossible to know
● Allocate too much bandwidth instead
● Need something between RMS and CBS
78. GRUB (not the boot loader)
● Greedy Reclaim of Unused Bandwidth
● Allows SCHED_DEADLINE tasks to use up the unused utilization of the CPU (or part of it)
● Allows tasks to handle a WCET a bit larger than calculated
● Just went into mainline (v4.13)