Rust promises developers the execution speed of non-managed languages like C++, with the safety guarantees of managed languages like Go. Its rapid rise in popularity suggests this promise has largely been upheld.
However, the situation is a bit muddier for the newer asynchronous extensions. This talk will explore some of the pitfalls that users may face while developing asynchronous Rust applications, pitfalls that have direct consequences on their ability to hit that sweet low p99. We will see how the Glommio asynchronous executor tries to deal with some of those problems, and what the future holds.
High-Performance Networking Using eBPF, XDP, and io_uring (ScyllaDB)
In the networking world there are a number of ways to increase performance over naive use of basic Berkeley sockets. These techniques range from polling blocking sockets, through non-blocking sockets controlled by epoll, all the way to completely bypassing the Linux kernel for maximum network performance by talking directly to the network interface card with something like DPDK or Netmap. All these tools have their place, and generally occupy a spectrum from convenience to performance. But in recent years, that landscape has changed massively. The tools available to the average Linux systems developer have improved, from the creation of io_uring to the expansion of BPF from a simple filtering language into a full programming environment embedded directly in the kernel. Along with that came XDP (eXpress Data Path), the Linux kernel's answer to kernel-bypass networking. AF_XDP is the new socket type created by this feature, and it generally works very similarly to something like DPDK. History lessons out of the way, this talk will look into and discuss the merits of this technology, its place in the broader ecosystem, and how it can be used to attain the highest level of performance possible. The talk will dive into crucial details, such as how AF_XDP works, how it can be integrated into a larger system, and finally more advanced topics such as request sharding/load balancing. There will be a detailed look at the design of AF_XDP, the eBPF code used, and the userspace code required to drive it all, along with performance numbers from this setup compared to regular kernel networking. And most importantly, it will show how to put all this together to handle as much data as possible on a single modern multi-core system.
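As a small illustration of the middle rung of that convenience-to-performance ladder (readiness-based non-blocking sockets, not AF_XDP itself), a minimal sketch in Python might look like this. `selectors.DefaultSelector` uses epoll on Linux behind the same register/poll pattern:

```python
import selectors
import socket

# Minimal sketch of readiness-based non-blocking I/O: register a socket,
# wait for it to become readable, then read without blocking.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

sel = selectors.DefaultSelector()          # epoll on Linux, kqueue on BSD
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")                            # make b readable
received = b""
for key, mask in sel.select(timeout=1.0):  # wait up to 1s for readiness
    if key.fileobj is b and mask & selectors.EVENT_READ:
        received = key.fileobj.recv(4096)

sel.close()
a.close()
b.close()
```

Kernel-bypass approaches like AF_XDP or DPDK replace exactly this readiness/copy machinery with shared memory rings between the application and the NIC driver.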
Three engineers, at various points, each take their own approach to adding Rust to a C codebase, each more ambitious than the last. I initially just wanted to replace the server’s networking and event loop with an equally fast Rust implementation. We’d reuse many core components that were in C and just call into them from Rust. Surely it wouldn’t be that much code…
Pelikan is Twitter’s open source and modular framework for in-memory caching, allowing us to replace Memcached and Redis forks with a single codebase and achieve better performance. At Twitter, we operate hundreds of cache clusters storing hundreds of terabytes of small objects in memory. In-memory caching is critical, and demands performance, reliability, and efficiency.
In this talk, I’ll share my adventures in working on Pelikan and how rewriting it in Rust can be more than just a meme.
Continuous Performance Regression Testing with JfrUnit (ScyllaDB)
Gunnar Morling is a software engineer and open-source enthusiast at heart. He is leading the Debezium project, a platform for change data capture (CDC). He is a Java Champion, the spec lead for Bean Validation 2.0 (JSR 380) and has founded multiple open source projects such as Deptective and MapStruct. Prior to joining Red Hat, Gunnar worked on a wide range of Java EE projects in the logistics and retail industries. He's based in Hamburg, Germany.
Rust has something unique to offer that languages in that space have never had before: a degree of safety that languages like C and C++ cannot provide. Rust promises to deliver equivalent or better performance and greater productivity, with guaranteed memory safety and data race freedom, while allowing complete and direct control over memory.
This video will cover:
What is Rust?
Benefits of Rust
Rust Ecosystem
Popular Applications in Rust
P99CONF — What We Need to Unlearn About Persistent Storage (ScyllaDB)
System software engineers have long been taught that disks are slow and sequential I/O is key to performance. With SSD drives, I/O got much faster, but not simpler. In this brave new world of rocket-speed throughputs, an engineer has to distinguish sustained workloads from bursts, (still) take care of I/O buffer sizes, account for disks’ internal parallelism, and study mixed I/O characteristics in advance. In this talk we will share some key performance measurements we’re taking at ScyllaDB of modern hardware, and our opinion about the implications for database and system software design.
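As an illustrative micro-measurement in the spirit of the talk (not ScyllaDB's actual benchmark harness), timing the same volume of writes issued with different buffer sizes shows why buffer sizing still matters even on fast drives:

```python
import os
import tempfile
import time

# Time writing the same 4 MiB using different buffer sizes.
# All names and sizes here are illustrative.
def time_writes(buf_size, total=4 * 1024 * 1024):
    payload = b"\0" * buf_size
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        written = 0
        while written < total:
            written += os.write(fd, payload)
        os.fsync(fd)                 # force the data to the device
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

for size in (4096, 65536, 1 << 20):
    t = time_writes(size)
    print(f"{size:>8} B buffers: {t * 1000:.1f} ms")
```

A serious version would also separate sustained from burst behavior and exercise mixed read/write patterns, as the abstract notes.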
Keeping Latency Low and Throughput High with Application-level Priority Manag... (ScyllaDB)
Throughput and latency are at a constant tension. ScyllaDB CTO and co-founder Avi Kivity will show how high throughput and low latency can both be achieved in a single application by using application-level priority scheduling.
This presentation is for Go developers and operators of Go applications who are interested in reducing costs and latency, or debugging problems such as memory leaks, infinite loops, performance regressions, etc. of such applications. We'll start with a brief description of the unique aspects of the Go runtime, and then take a look at the builtin profilers as well as Go's execution tracer. Additionally we'll look at the interoperability with popular observability tools such as Linux perf and bpftrace. After this presentation you should have a good idea of the various tools you can use, and which ones might be the most useful to you in a production environment.
Seastore: Next Generation Backing Store for CephScyllaDB
Ceph is an open source distributed storage system addressing file, block, and object storage use cases. Next generation storage devices require a change in strategy, so the community has been developing crimson-osd, an eventual replacement for ceph-osd intended to minimize CPU overhead and improve throughput and latency. Seastore is a new backing store for crimson-osd targeted at emerging storage technologies including persistent memory and ZNS devices.
In file systems, large sequential writes are more beneficial than small random writes, and hence many storage systems implement a log-structured file system. In the same way, the cloud favors large objects over small objects. Cloud providers place throttling limits on PUTs and GETs, so it takes significantly longer to upload a bunch of small objects than a single large object of the aggregate size. Moreover, there are per-PUT costs associated with uploading smaller objects.
At Netflix, a large volume of media assets and their associated metadata is generated and pushed to the cloud.
We would like to propose a strategy to compact these small objects into larger blobs before uploading them to the cloud. We will discuss how to select the relevant smaller objects and manage the indexing of these objects within the blob, along with the accompanying modifications to reads, overwrites, and deletes.
Finally, we will showcase the potential impact of such a strategy on Netflix assets in terms of cost and performance.
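A hypothetical sketch of the packing-and-indexing idea (object names and formats are made up for illustration): pack small objects into one blob, keep an index of (offset, length) pairs, and individual objects stay addressable after upload via ranged reads.

```python
# Pack small objects into one blob with a per-object (offset, length) index.
def pack(objects):
    """objects: {name: bytes} -> (blob, index)"""
    blob = bytearray()
    index = {}
    for name, data in objects.items():
        index[name] = (len(blob), len(data))
        blob.extend(data)
    return bytes(blob), index

def read_object(blob, index, name):
    offset, length = index[name]
    return blob[offset:offset + length]   # maps to an HTTP range GET

blob, index = pack({"a.jpg": b"AAAA", "b.srt": b"BB", "c.xml": b"CCCCCC"})
print(len(blob), read_object(blob, index, "b.srt"))
```

Overwrites and deletes are the hard part the talk addresses: in a scheme like this they require either rewriting the blob or tombstoning index entries and compacting later.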
Data Structures for High Resolution, Real-time Telemetry at Scale (ScyllaDB)
The challenge with telemetry in real-time systems is that you need as many sources of telemetry as possible (throughput, latency, errors, CPU, and many more...) but you can't pay for extra overhead when your users are expecting sub-ms ops that scale to millions of transactions per second.
In this talk, we'll describe how we're using and improving several OSS data structures to incorporate telemetry features at scale, and showcase why they matter in scenarios where we face performance, security, and ops issues.
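One plausible example of such a structure, sketched here with log2 buckets (inspired by HDR-style histograms, not necessarily what the speakers use): recording is O(1) and allocation-free, cheap enough for sub-millisecond hot paths, at the cost of bounded relative error on percentiles.

```python
# Fixed log2-bucket latency histogram: record() is O(1); percentile()
# returns the lower bound of the bucket containing the target rank.
class Log2Histogram:
    def __init__(self, max_bucket=64):
        self.buckets = [0] * max_bucket

    def record(self, value_ns):
        self.buckets[max(0, value_ns.bit_length() - 1)] += 1

    def percentile(self, p):
        total = sum(self.buckets)
        target = total * p / 100.0
        seen = 0
        for i, count in enumerate(self.buckets):
            seen += count
            if seen >= target:
                return 1 << i          # lower bound of the bucket
        return 1 << (len(self.buckets) - 1)

h = Log2Histogram()
for v in [100, 200, 400, 800, 100_000]:
    h.record(v)
print(h.percentile(50), h.percentile(99))  # 256 65536
```

The trade-off is resolution: each bucket spans a power of two, so reported percentiles are accurate only to within a factor of two, which finer bucket schemes (e.g. sub-buckets per power of two) improve.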
Using eBPF to Measure the k8s Cluster Health (ScyllaDB)
As a k8s cluster-admin, your app teams expect your cluster to be available for deploying services at any time without problems. While there is no shortage of metrics in k8s, it's important to have the right metrics to alert on issues and give you enough data to react to potential availability problems. Prometheus has become a standard and sheds light on the inner behaviour of Kubernetes clusters and workloads. Many KPIs (CPU, I/O, network, etc.) that are precise in an on-premises environment become less precise when we move to the cloud. eBPF is the perfect technology to fill that gap, as it gives us information down to the kernel level. In 2018, Cloudflare shared an open-source project to expose custom eBPF metrics in Prometheus. Join this session and learn about: • What is eBPF? • What types of metrics can we collect? • How to expose those metrics in a k8s environment. This session will deliver a step-by-step guide on how to take advantage of the eBPF exporter.
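As a small illustration of the exposition step only (the metric name below is made up): a Prometheus scrape endpoint is just plain text in a well-known format, which is what the eBPF exporter ultimately produces from kernel-level counters.

```python
# Render counters in the Prometheus text exposition format.
def render_metrics(metrics):
    """metrics: {name: (help_text, value)} -> exposition-format text."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

page = render_metrics({
    "ebpf_sched_latency_total": ("Run-queue latency events observed.", 42),
})
print(page)
```

Serving this text over HTTP at `/metrics` is all Prometheus needs to scrape it.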
OSNoise Tracer: Who Is Stealing My CPU Time? (ScyllaDB)
In the context of high-performance computing (HPC), the Operating System Noise (osnoise) refers to the interference experienced by an application due to activities inside the operating system. In the context of Linux, NMIs, IRQs, softirqs, and any other system thread can cause noise to the application. Moreover, hardware-related jobs can also cause noise, for example, via SMIs.
HPC users and developers who care about every microsecond stolen by the OS need not only a precise way to measure osnoise but, more importantly, a way to figure out who is stealing CPU time so that they can pursue the perfect tuning of the system. These users and developers are the inspiration for Linux's osnoise tracer.
The osnoise tracer runs an in-kernel loop measuring how much CPU time is available. It does so with preemption, softirqs, and IRQs enabled, thus allowing all sources of osnoise during its execution. The osnoise tracer takes note of the entry and exit points of any source of interference. When noise happens without any interference at the operating system level, the tracer can safely point to hardware-related noise. In this way, osnoise can account for any source of interference. The osnoise tracer also adds new kernel tracepoints that help the user point to the culprits of the noise in a precise and intuitive way.
At the end of a period, the osnoise tracer prints the sum of all noise, the max single noise, the percentage of CPU available for the thread, and the counters for the noise sources, serving as a benchmark tool.
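A crude userspace analogue of that in-kernel loop can be sketched as follows (illustrative only; the real tracer runs inside the kernel and can attribute each gap to a specific source): spin reading the clock and treat any gap above a threshold as noise, i.e. time during which something else stole the CPU.

```python
import time

# Spin on the clock; gaps larger than the threshold are "noise" events
# where the thread was not running. Threshold and duration are arbitrary.
def measure_noise(duration_s=0.2, threshold_ns=50_000):
    noise_events = []
    end = time.perf_counter_ns() + int(duration_s * 1e9)
    last = time.perf_counter_ns()
    while last < end:
        now = time.perf_counter_ns()
        gap = now - last
        if gap > threshold_ns:
            noise_events.append(gap)
        last = now
    return noise_events

events = measure_noise()
print(f"{len(events)} noise events, max {max(events, default=0)} ns")
```

Unlike this sketch, the osnoise tracer can distinguish NMI, IRQ, softirq, and thread interference from the residual hardware noise, which is exactly what makes it useful for hunting culprits.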
DB Latency Using DRAM + PMem in App Direct & Memory Modes (ScyllaDB)
How does the latency of DDR4 DRAM compare to Intel Optane Persistent Memory when used in both App Direct and Memory Modes for In-Memory database access?
This talk is about the latency benchmarks that I performed by adding gettimeofday() calls around critical DB kernel operations.
This talk covers the technology, cache hit ratios, lots of histograms and lessons learned.
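The measurement approach can be sketched like this, with Python's clock standing in for gettimeofday() and a toy computation standing in for a DB kernel operation (both are stand-ins, not the speaker's actual code):

```python
import time

# Bracket a critical operation with clock reads and collect per-call
# latencies, then report percentiles from the sorted samples.
def timed(op, samples):
    t0 = time.perf_counter_ns()
    op()
    samples.append(time.perf_counter_ns() - t0)

samples = []
for _ in range(1000):
    timed(lambda: sum(range(100)), samples)   # stand-in for a DB kernel op

samples.sort()
print("p50:", samples[len(samples) // 2], "ns")
print("p99:", samples[int(len(samples) * 0.99)], "ns")
```

Note that the clock reads themselves add overhead to every call, which is one reason this kind of instrumentation is usually compiled in only for benchmark builds.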
We describe a modification to the Linux kernel which gives an SRE control over the combined bandwidth of logging on a node of a distributed system, while providing a way for the logging source owner (container or service) to control what happens when the bandwidth limit is hit.
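The policy can be modeled in userspace as follows (the actual work is a kernel modification; class names and rates here are illustrative): a token bucket caps log bandwidth, and the source owner picks what happens on overflow.

```python
# Token-bucket limiter for log writes: tokens refill at a fixed rate up
# to a burst cap; a write that exceeds available tokens triggers the
# owner-chosen overflow behavior (here: drop and count).
class LogLimiter:
    def __init__(self, bytes_per_sec, burst):
        self.rate = bytes_per_sec
        self.tokens = self.burst = burst
        self.dropped = 0              # overflow policy: drop and count

    def refill(self, elapsed_s):
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def submit(self, line):
        if len(line) <= self.tokens:
            self.tokens -= len(line)
            return True               # line goes to the log
        self.dropped += 1
        return False

lim = LogLimiter(bytes_per_sec=1024, burst=2048)
ok = [lim.submit(b"x" * 512) for _ in range(6)]
print(ok, "dropped:", lim.dropped)
```

Other owner policies fit the same shape: block the writer until `refill` frees tokens, or sample every Nth line instead of dropping outright.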
Modern systems are large and complicated, and it is often difficult to account precisely for where CPU cycles are spent in production.
Once you begin measuring, you will find all sorts of strange surprises - like cleaning out strange objects from an attic that has accumulated stuff for decades.
This talk discusses surprising places where we found CPU waste in real-world production environments: From Kubelet consuming multiple percent of whole-cluster CPU, via popular machine learning libraries spending their time juggling exceptions instead of classifying, to EC2 time sources being much slower than necessary. CPU cycles are being lost in surprising places, and often it isn't in your own code.
Kernel Recipes 2016 - Speeding up development by setting up a kernel build farm (Anne Nicolas)
Building a full kernel takes time but is often necessary during development or when backporting patches. The nature of the kernel makes it easy to distribute its build on multiple cheap machines. This presentation will explain how to set up a build farm based on cost, size, and performance.
Willy Tarreau, HAProxy
Scylla Summit 2018: Rebuilding the Ceph Distributed Storage Solution with Sea... (ScyllaDB)
Red Hat built a distributed object storage solution named Ceph, which first debuted ten years ago. Now we are seeing rapid developments in the industry and we want to take advantage of them. In this talk, we will briefly introduce Ceph, revisit the problems we are seeing when profiling its I/O performance with flash devices, and explain why we want to embrace the future by switching to Seastar. We’ll share with the audience our experiences of how and when we are porting our software to this framework.
Measuring latency for monitoring and benchmarking purposes is notoriously difficult. There are many pitfalls in collecting, aggregating, and analyzing latency data.
In this talk, we will approach the topic from a top-down perspective and compile known complications along with best-practice approaches for avoiding them. This will include:
Measurement Overhead
Queuing effects – Coordinated omission
Histograms for Aggregation and Visualization
Percentile aggregation
Latency bands and burn-down charts
Latency comparison methods (QQ Plots, KS-Distance)
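To make the percentile-aggregation pitfall from the list above concrete (synthetic data, for illustration only): the mean of per-shard p99s is not the p99 of the combined population, and the gap can be large when shards have different latency profiles.

```python
import random

# Two shards with very different latency distributions (exponential,
# means 5 and 50). Compare averaged per-shard p99s vs the true p99.
random.seed(1)
shards = [
    [random.expovariate(1 / 5) for _ in range(10_000)],   # fast shard
    [random.expovariate(1 / 50) for _ in range(10_000)],  # slow shard
]

def p99(xs):
    xs = sorted(xs)
    return xs[int(len(xs) * 0.99)]

avg_of_p99s = sum(p99(s) for s in shards) / len(shards)
true_p99 = p99(shards[0] + shards[1])
print(f"mean of per-shard p99s: {avg_of_p99s:.1f}  true p99: {true_p99:.1f}")
```

This is why the standard advice is to aggregate histograms (which merge correctly) and compute percentiles from the merged histogram, never to average percentiles.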
Rust, Wright's Law, and the Future of Low-Latency Systems (ScyllaDB)
The coming decade will see two important changes with profound ramifications for low-latency systems: the rise of Rust-based systems, and the ceding of Moore's Law to Wright's Law. In this talk, we will discuss these two trends, and (especially) their confluence -- and explain why we believe that the future of low-latency systems will include Rust programs in some surprising places.
Kernel Recipes 2017 - What's new in the world of storage for Linux - Jens Axboe (Anne Nicolas)
Storage keeps moving forward, and so does the Linux IO stack. This talk will detail some of the recent additions and changes that have gone into the Linux kernel storage stack, helping Linux get the most out of industry innovations in that space.
Jens Axboe, Facebook
Performance optimization 101 - Erlang Factory SF 2014 (lpgauth)
In order to scale AdGear's real-time bidding (RTB) platform, we've been optimizing system and application performance. In this talk, I'll share all my findings, from basic tips to advanced topics. I'll cover coding patterns, metric collection, tracing and much more!
Talk objectives:
- Share basic tips, common pitfalls, efficient coding patterns
- Introduce tooling for metric collection (statsderl, vmstats, system-stats, riak_sysmon)
- Highlight erlang trace (fprof, flame graphs)
- Expose advanced topics (VM tuning, lock-counter, systemtap, cpuset)
Target audience:
- Anyone looking to improve the performance of their Erlang application.
Kernel Recipes 2016 - kernelci.org: 1.5 million kernel boots (and counting) (Anne Nicolas)
The kernelci.org project performs over 2000 kernel boot tests per day for upstream kernels on a wide variety of hardware. This talk will provide an overview of kernelci.org, how distributed board farms are used, how it is used by kernel maintainers and developers, and how you can make use of the reports, logs and pre-built images.
Kevin Hilman, BayLibre
Slides for my Perl Memory Use talk at YAPC::Asia in Tokyo, September 2012.
(This uploaded version includes quite a few slides from the OSCON version that I skipped at YAPC::Asia in order to have more time for a demo.)
Slides for my talk at the London Perl Workshop in Nov 2013, featuring the Devel::SizeMe perl module.
See also the screencast at https://archive.org/details/Perl-Memory-Profiling-LPW2013
Tips and Tricks for Increased Development Efficiency (Olivier Bourgeois)
Short presentation targeted at university students, showing tools and software that help development productivity but are usually not talked about in courses.
Customize and Secure the Runtime and Dependencies of Your Procedural Language... (VMware Tanzu)
Customize and Secure the Runtime and Dependencies of Your Procedural Languages Using PL/Container
Greenplum Summit at PostgresConf US 2018
Hubert Zhang and Jack Wu
Introduction to Docker (as presented at December 2013 Global Hackathon) - Jérôme Petazzoni
Not on board the Docker ship yet? This presentation will get you up to speed and explain everything you want to know about Linux Containers and Docker, including the new features of the latest 0.7 version (which brings support for all Linux distros and kernels).
"Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo... - Yandex
"Lightweight virtualization", also called "OS-level virtualization", is not new. On Linux it evolved from VServer to OpenVZ, and, more recently, to Linux Containers (LXC). It is not Linux-specific; on FreeBSD it's called "Jails", while on Solaris it's "Zones". Some of those have been available for a decade and are widely used to provide VPS (Virtual Private Servers), cheaper alternatives to virtual machines or physical servers. But containers have other purposes and are increasingly popular as the core components of public and private Platform-as-a-Service (PAAS), among others.
Just like a virtual machine, a Linux Container can run (almost) anywhere. But containers have many advantages over VMs: they are lightweight and easier to manage. After operating a large-scale PAAS for a few years, dotCloud realized that with those advantages, containers could become the perfect format for software delivery, since that is how dotCloud delivers from their build system to their hosts. To make it happen everywhere, dotCloud open-sourced Docker, the next generation of the containers engine powering its PAAS. Docker has been extremely successful so far, being adopted by many projects in various fields: PAAS, of course, but also continuous integration, testing, and more.
UCSD NANO 266 Quantum Mechanical Modelling of Materials and Nanostructures is a graduate class that provides students with a highly practical introduction to the application of first principles quantum mechanical simulations to model, understand and predict the properties of materials and nano-structures. The syllabus includes: a brief introduction to quantum mechanics and the Hartree-Fock and density functional theory (DFT) formulations; practical simulation considerations such as convergence, selection of the appropriate functional and parameters; interpretation of the results from simulations, including the limits of accuracy of each method. Several lab sessions provide students with hands-on experience in the conduct of simulations. A key aspect of the course is in the use of programming to facilitate calculations and analysis.
It’s important to be able to figure out what’s going on when things go wrong in your Node.js production application. Tools are needed to investigate memory leaks, crashes and other "interesting" events in production. The post-mortem community working group (https://github.com/nodejs/post-mortem) is working on these problems. Come and learn about the key issues being worked on, and the progress of the working group so far, illustrated through examples and code.
Working with software means working with bugs. Bugs in software, bugs in hardware; bugs in Open Source code, bugs in proprietary code. If software is eating the world, bugs might end up taking the first bite.
We will present a few typical bugs, some of them famous, some of them infamous (including bugs that actually killed people). Since one can never be too well-prepared to fend off the next infestation, we will give tools, tips, and best practices to fix bugs in Open Source software. We will give real world examples of Really Mysterious Bugs (sometimes nicknamed "Heisenbugs" because they tend to disappear when you try to observe them), and how they were fixed, in Node.js, Docker, and the Linux Kernel.
Introduction to Docker, December 2014 "Tour de France" Bordeaux Special Edition - Jérôme Petazzoni
Docker, the Open Source container Engine, lets you build, ship and run, any app, anywhere.
This is the presentation which was shown in December 2014 for the last stop of the "Tour de France" in Bordeaux. It is slightly different from the presentation which was shown in the other cities (http://www.slideshare.net/jpetazzo/introduction-to-docker-december-2014-tour-de-france-edition), and includes a detailed history of dotCloud and Docker and a few other differences.
Special thanks to https://twitter.com/LilliJane and https://twitter.com/zirkome, who gave me the necessary motivation to put together this slightly different presentation, since they had already seen the other presentation in Paris :-)
Linux Server Deep Dives (DrupalCon Amsterdam) - Amin Astaneh
Over the past few years the Linux kernel has gained features that allow us to learn more about what's really happening on our servers and the applications that run on them.
This talk will explore how these new features, particularly perf_events and ebpf, enable us to answer questions about what a Drupal site is doing in real time beyond what the standard logs, server performance tools, and even strace will reveal. Attendees will be provided a brief introduction to example uses of these tools to diagnose performance problems.
This talk is intended for attendees that are familiar with Linux, the command line, and have used host observability tools in the past (top, netstat, etc).
LinuxAlt 2013: Writing a driver for unknown USB device - Lubomir Rintel
Reverse engineering a USB Video Grabber device protocol and creating a Linux kernel driver.
Mostly the same talk as in OSSConf 2013, with Alice in Wonderland pictures.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We finished with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
13. Perl part: the virtual machine
● Interpreter instance (my_perl) holds current OP pointer
● OP points to a package it was compiled from
● Each OP is implemented by a Perl_pp_*() function
● runops() loop consumes OP tree, calling Perl_pp_*()
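The dispatch loop these bullets describe can be modeled outside of perl. The following Python sketch (hypothetical names, not the real perl C API) mirrors how runops() repeatedly calls the current OP's Perl_pp_*() implementation and follows the op chain:

```python
executed = []

class Op:
    """A toy model of a perl OP: it has a name and knows the next
    op in execution order (op_next)."""
    def __init__(self, name, op_next=None):
        self.name = name
        self.op_next = op_next

    def pp(self):
        """Stand-in for the Perl_pp_*() function implementing this op:
        do the op's work, then return the next op to execute."""
        executed.append("pp_" + self.name)
        return self.op_next

def runops(op):
    """Mimics perl's runops loop: keep dispatching through the current
    op's pp function until the chain runs out."""
    while op is not None:
        op = op.pp()

# An op tree flattened into execution order: pushmark -> const -> add
runops(Op("pushmark", Op("const", Op("add"))))
print(executed)  # ['pp_pushmark', 'pp_const', 'pp_add']
```

Because every op goes through a pp function, probing the Perl_pp_*() family (as the next slide does) observes every instruction the interpreter executes.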
14. /* The package a process is
    * executing */
global package;

/* A Perl instruction */
probe process ("/usr/lib/perl5/CORE/libperl.so").function ("Perl_pp_*")
{
    pid = pid ();
    package[pid] = user_string ($my_perl->Icurcop->cop_stashpv);
}
15. Kernel part: memory management
● User calls malloc()
● libc calls mmap2() to map anonymous memory
● kernel maps read-only shared zero-page
● write causes a page fault
● do_wp_page() copies on write
16. /* Allocations per package and pid */
global allocs;

/* Account for COW */
probe kernel.function ("do_wp_page")
{
    pid = pid ();
    pkg = package[pid];
    if (pkg != "")
        allocs[pkg, pid] <<< mem_page_size ();
}
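The write-fault path this probe counts can also be observed from userspace, without SystemTap. The following Linux-only Python sketch maps anonymous memory, reads every page (which the kernel can back with the shared read-only zero page), then writes every page, forcing the copy-on-write faults; it watches resident set size via /proc/self/statm, so it assumes a Linux /proc filesystem:

```python
import mmap
import os

PAGE = os.sysconf("SC_PAGE_SIZE")
NPAGES = 1024

def rss_pages():
    """Resident set size in pages, from /proc/self/statm (Linux only)."""
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1])

# Private anonymous mapping: no physical pages are committed yet.
buf = mmap.mmap(-1, NPAGES * PAGE,
                flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

before = rss_pages()

# Reads can be satisfied by the shared read-only zero page...
for i in range(NPAGES):
    _ = buf[i * PAGE]

# ...but the first write to each page takes a write-protection fault,
# and the kernel copies the page before letting the write proceed.
for i in range(NPAGES):
    buf[i * PAGE] = 1

grown = rss_pages() - before
print(f"RSS grew by roughly {grown} pages after writing {NPAGES} pages")
```

Running this while the probe above is active should show the allocations attributed to whatever package the traced process was executing.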
17. Output
● Do it as fast as we can
● We are resource-constrained for safety
● Process it in userspace later on
● Also, flush it upon end
18. /* Dump frequently, so that we
    * won't overflow */
probe timer.ms (100), end
{
    foreach ([pkg, pid] in allocs) {
        printf ("\"%d\",\"%s\"\n",
                @sum (allocs[pkg, pid]), pkg)
    }
    delete allocs;
}
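Each 100 ms dump emits one "bytes","package" CSV line per aggregate, so the same package shows up many times across dumps. A minimal userspace post-processor for that output (a hypothetical sketch, assuming the quoted-CSV format produced by the printf above) could be:

```python
import csv
import io
from collections import defaultdict

def aggregate(dump):
    """Sum per-package byte counts across repeated 100 ms dumps.

    Each input line is expected to look like: "4096","My::App"
    (bytes charged by copy-on-write faults, then the Perl package).
    """
    totals = defaultdict(int)
    for row in csv.reader(io.StringIO(dump)):
        if len(row) == 2:  # skip blank or partial lines
            nbytes, pkg = row
            totals[pkg] += int(nbytes)
    return dict(totals)

sample = '"4096","main"\n"8192","My::App"\n"4096","My::App"\n'
print(aggregate(sample))  # {'main': 4096, 'My::App': 12288}
```

This keeps the in-kernel side cheap (just the aggregate and a periodic flush) while the expensive per-package accounting happens offline, as slide 17 recommends.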