I'm curious. For the past few months, people@openvz.org have discovered (and fixed) an ongoing stream of obscure but serious and quite long-standing bugs.
How are you discovering these bugs?
Andrew added later:
hm, OK, I was visualizing some mysterious Russian bugfinding machine or something.
Don't stop ;)
On Monday this week, I was afforded the distinct privilege to deliver the opening keynote at the OpenZFS Developer Summit in San Francisco. It was a beautiful little event, with a full day of informative presentations and lots of networking during lunch and breaks.
Kernel Recipes 2015: The stable Linux Kernel Tree - 10 years of insanity (Anne Nicolas)
The Linux kernel gets a stable release about once every week.
This talk will go into the process of getting a patch accepted into the stable releases, how the release process works, and how Greg does a review and release cycle. It will consist of live examples of patches submitted to be added to the stable releases, as well as doing a release “live” on stage.
Greg KH, Linux Foundation
Kernel Recipes 2016 - The kernel report (Anne Nicolas)
The Linux kernel is at the core of any Linux system; the performance and capabilities of the kernel will, in the end, place an upper bound on what the system as a whole can do. This talk will review recent events in the kernel development community, discuss the current state of the kernel and the challenges it faces, and look forward to how the kernel may address those challenges. Attendees of any technical ability should gain a better understanding of how the kernel got to its current state and what can be expected in the near future.
Jonathan Corbet, LWN.net
Kernel Recipes 2016 - Patches carved into stone tablets... (Anne Nicolas)
Patches carved into stone tablets, why the Linux kernel developers rely on plain text email instead of using “modern” development tools.
With the wide variety of more “modern” development tools such as github, gerrit, and other methods of software development, why is the Linux kernel team still stuck in the 1990’s with ancient requirements of plain text email in order to get patches accepted? This talk will discuss just how the kernel development process works, why we rely on these “ancient” tools, and how they still work so much better than anything else.
Greg KH, The Linux Foundation
Let's face it: config management has grown up to the point where the problems slowing us down are mostly no longer technical. From common DevOps misconceptions to the way we pay down our technical debt, we can use config management and automation to genuinely improve, and to attract all the people who are not yet playing the game. This talk will highlight some great recent developments in this area, show that anything can now be automated properly, and then give some examples of how you can improve and shave the last remaining yaks.
"git bisect" is a command that is part of the Git distributed version control system. This command enables software users, developers and testers to easily find the commit that introduced a regression. This is done by performing a kind of binary search between a known good and a known bad commit. git bisect supports both a manual and an automated mode. The automated mode uses a test script or command. People are very happy with automated bisection, because it saves them a lot of time, it makes it easy and worthwhile for them to improve their test suite, and overall it efficiently improves software quality.
Testers, developers and advanced users, who have some basic knowledge of version control systems, will learn practical tips, techniques and strategies to efficiently debug software.
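The binary search that git bisect performs can be sketched in a few lines of Python (a simplified model for illustration, not git's actual implementation: history is flattened to a list, and `is_bad` stands in for the user's test script):

```python
def first_bad(commits, is_bad):
    """Return the first bad commit, assuming commits[0] is good,
    commits[-1] is bad, and history flips from good to bad exactly once."""
    lo, hi = 0, len(commits) - 1  # commits[lo] known good, commits[hi] known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid  # the regression is at mid or earlier
        else:
            lo = mid  # the regression is after mid
    return commits[hi]
```

Each probe halves the suspect range, so a regression hiding among a million commits is located in about 20 test runs, which is why automated bisection saves so much time.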
Automatic algorithm optimization using fast matrix exponentiation in ... (Alexander Borzunov)
A description of a Python decorator that automatically optimizes algorithms using fast matrix exponentiation.
For more details, see:
GitHub: https://github.com/borzunov/cpmoptimize
Habrahabr: http://habrahabr.ru/post/236689/
Python Package Index: https://pypi.python.org/pypi/cpmoptimize
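The core trick behind cpmoptimize, replacing a linear loop with matrix exponentiation in O(log n) multiplications, can be illustrated by hand with the classic Fibonacci example (an illustrative sketch of the technique, not the library's internals):

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """Raise a 2x2 matrix to the n-th power by repeated squaring."""
    result = [[1, 0], [0, 1]]  # identity matrix
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def fib(n):
    """F(n), using [[1,1],[1,0]]^n == [[F(n+1),F(n)],[F(n),F(n-1)]]."""
    return mat_pow([[1, 1], [1, 0]], n)[0][1]
```

Where a naive loop needs n additions, mat_pow needs only O(log n) matrix multiplications; this is the transformation the decorator applies automatically to eligible loops.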
A talk at Bitcoin Conference Russia 2015 about a new type of futures contract that turned out to be very successful for bitcoin traders.
A video of the talk, including the most interesting part, the questions and answers, is available on YouTube: https://youtu.be/2iCljuZWvdE
HaltDos is a high-throughput, high-performance software-based network appliance that can stay updated with evolving technology and threats without requiring hardware replacements. With its multi-layered, multi-vector approach, it can defend against a wide range of DDoS attacks within seconds to ensure high uptime of your website/web services.
Every developer sooner or later runs into domain-specific languages (DSLs). We will work out why we need DSLs and what problems they help us solve, and understand in which cases it is worth developing our own language and in which to use an existing one. We will try to draw the line and decide where we have merely a library and where a domain-specific language. We will invent our own DSL and compare different approaches to working with it in Python, and see how lexical and syntactic analyzers work. We will definitely talk about how to make life easier for the users of our language. How do we make error messages informative? How do we test scenarios written in our language? We will answer these questions.
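As a taste of the first stage the abstract mentions, lexical analysis, here is a minimal hand-rolled lexer for an invented assignment-expression DSL (the grammar and token names are made up for illustration):

```python
import re

# One named group per token kind; \s* skips whitespace between tokens.
TOKEN_RE = re.compile(r"\s*(?:(?P<NUMBER>\d+)|(?P<NAME>[A-Za-z_]\w*)|(?P<OP>[+\-*/=]))")

def tokenize(text):
    """Split source text into (kind, value) tokens; raise on anything unknown."""
    tokens, pos = [], 0
    text = text.strip()
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character {text[pos]!r} at position {pos}")
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens
```

For example, `tokenize("total = price * 2")` yields a stream of NAME/OP/NUMBER tokens; a parser would then turn that stream into a syntax tree, and the token positions are what make informative error messages possible.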
E-commerce: from Hadoop to Spark Scala (Roman Zykov)
How do you process a large volume of data quickly and at minimal cost? We managed to achieve this at RetailRocket. Data processing is our business! We have a lot of data: more than 100 TB, and more than 100 million events arrive for processing every day. Until recently, everything ran on a Hadoop cluster based on the relatively outdated Cloudera CDH 4.5 distribution, with code written in Pig, Hive, Python and Java. This caused a number of architectural and performance problems, and testing turned into a real headache. At the end of the summer, RetailRocket moved to YARN based on CDH 5.1.2, which opened the way to the more advanced technologies of the Spark family. We are now in the phase of a complete transition to Spark using the functional language Scala. This has allowed us to get rid of a zoo of technologies, simplifying the architecture of our solutions and automating testing. The first results were not long in coming: a three- to five-fold performance gain on the same hardware, which means we will invest less in expanding the cluster's server fleet. The talk will cover the problems we ran into and how we solved them, with examples of source code for optimizing performance and improving usability that we have committed to our public GitHub.
Value Objects, Full Throttle (to be updated for spring TC39 meetings), by Brendan Eich
Slides I prepared for the 29 January 2014 Ecma TC39 meeting, on Value Objects in JS, an ES7 proposal -- this one shotgunned the roadmap-space of declarative syntax, to find the right amount per TC39 (nearly zero, turns out).
Optimising Your Front End Workflow With Symfony, Twig, Bower and Gulp (Matthew Davis)
We take great care in our back end coding workflow, optimising, automating and abstracting as much as is possible. So why don't we do that with our front end code?
We'll take a look at some tools to help us take our front end workflow to the next level, and hopefully optimise our load times in the process!
We'll be looking at using Twig templates and optimising them for the different areas of your application, integrating Bower and Gulp for managing assets and processing our front-end code to avoid repetitive tasks - looking at how that impacts the typical Symfony workflow.
Series of Unfortunate Netflix Container Events - QConNYC17 (aspyker)
Project Titus is Netflix's container runtime on top of Amazon EC2. Titus powers algorithm research through massively parallel model training, media encoding, data research notebooks, ad hoc reporting, NodeJS UI services, stream processing and general micro-services. As an update from last year's talk, we will focus on the lessons learned operating one of the largest container runtimes on a public cloud. We'll cover the migration we've seen of applications and frameworks from VM's to containers. We will cover the operational issues with containers that only showed after we reached the large scale (1000's of container hosts, 100's of thousands of containers launched weekly) we are currently supporting. We'll touch base on the unique features we have added to help both batch and microservices run across a variety of runtimes (Java, R, NodeJS, Python, etc) and how higher level frameworks have taken advantage of Titus's scheduling capabilities.
This talk describes the current state of the Veil-Framework and the different tools included in it, such as Veil-Evasion, Veil-Catapult, Veil-Powerview, Veil-Pillage, and Veil-Ordnance.
Testing, CI Gating & Community Fast Feedback: The Challenge of Integration Pr... (OPNFV)
Jose Lausuch, Ericsson, Nikolas Hermanns, Ericsson
How can we make sure that new code in OPNFV does not break or stop CI?
How can we ensure quick feedback for each patch-set?
With the new way to snapshot a virtual deployment, it is now possible to get virtual clouds up and running in about 2 minutes. In addition, thanks to low disk/CPU consumption and isolated networking, a very high number of virtual deployments can coexist on the same bare-metal server.
My talk in Bessemer VP R&D / CTO yearly event (Jan 2020).
The presentation discusses major concepts in resilience testing and MyHeritage's path to Chaos Engineering.
The Future of Security and Productivity in Our Newly Remote World (DevOps.com)
Andy has made mistakes. He's seen even more. And in this talk he details the best and the worst of the container and Kubernetes security problems he's experienced, exploited, and remediated.
This talk details low level exploitable issues with container and Kubernetes deployments. We focus on lessons learned, and show attendees how to ensure that they do not fall victim to avoidable attacks.
See how to bypass security controls and exploit insecure defaults in this technical appraisal of the container and cluster security landscape.
Historically, sharing a Linux server entailed all kinds of untenable compromises. In addition to the security concerns, there was simply no good way to keep one application from hogging resources and messing with the others. The classic “noisy neighbor” problem made shared systems the bargain-basement slums of the Internet, suitable only for small or throwaway projects.
Serious use-cases traditionally demanded dedicated systems. Over the past decade virtualization (in conjunction with Moore’s law) has democratized the availability of what amount to dedicated systems, and the result is hundreds of thousands of websites and applications deployed into VPS or cloud instances. It’s a step in the right direction, but still has glaring flaws.
Most of these websites are just piles of code sitting on a server somewhere. How did that code get there? How can it be scaled? Secured? Maintained? It’s anybody’s guess. There simply isn’t enough SysAdmin talent in the world to meet the demands of managing all these apps with anything close to best practices without a better model.
Containers are a whole new ballgame. Unlike VMs, you skip the overhead of running an entire OS for every application environment. There’s also no need to provision a whole new machine to have a place to deploy, meaning you can spin up or scale your application with orders of magnitude more speed and accuracy.
The talk is about the operating system virtualization technology known as OpenVZ. This is an effective way of partitioning a Linux machine into multiple isolated Linux containers. All containers run on top of a single Linux kernel, which results in excellent density, performance and manageability. The talk gives an overall description of OpenVZ building blocks, such as namespaces, cgroups and various resource controllers. A few features, notably live migration and virtual swap, are described in greater detail. Results of some performance measurements against VMware, Xen and KVM are given. Finally, we will provide a status update on merging bits and pieces of the OpenVZ kernel into the upstream Linux kernel, and share our plans for the future.
Presenter: Max Moroz
An overview of ClusterFuzz, a system for testing the Chrome browser for vulnerabilities in real time and obtaining reproducible results for each individual crash. The advantages of using various sanitizers and libFuzzer, a library for guided fuzzing, will be demonstrated, along with detailed statistics on the kinds of vulnerabilities found in Chrome. Attendees will learn about the pitfalls of distributed fuzzing, and about how to run their own fuzzers on Google's infrastructure and receive rewards for the vulnerabilities they find.
Leveraging chaos mesh in Astra Serverless testing (Pierre Laporte)
A presentation at the Chaos Mesh community meeting on how DataStax implemented chaos testing on its cloud database offering, Astra Serverless, based on Apache Cassandra and Kubernetes.
● What is Unit Testing?
● Benefits
● What is Test Driven Development?
● What is Behavior Driven Development?
● Categories of (Unit) Tests / Software Testing Pyramid, Frameworks
● C++, Java, .NET, Perl, PHP frameworks
● Unit-testing Zend Framework application
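To make the outline concrete, here is a minimal example in Python's built-in unittest framework, which is xUnit-style like the C++, Java, .NET, Perl and PHP frameworks listed above (`slugify` is a made-up function under test):

```python
import unittest

def slugify(title):
    """Function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # One observable behavior per test, named after that behavior
    # (the BDD-flavored naming style).
    def test_lowercases_the_title(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Unit Testing Basics"), "unit-testing-basics")
```

Run it with `python -m unittest` from the file's directory. In the TDD cycle, a test like this is written first and fails; slugify is then implemented just far enough to make it pass.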
When Node.js Goes Wrong: Debugging Node in Production
The event-oriented approach underlying Node.js enables significant concurrency using a deceptively simple programming model, which has been an important factor in Node's growing popularity for building large scale web services. But what happens when these programs go sideways? Even in the best cases, when such issues are fatal, developers have historically been left with just a stack trace. Subtler issues, including latency spikes (which are just as bad as correctness bugs in the real-time domain where Node is especially popular) and other buggy behavior often leave even fewer clues to aid understanding. In this talk, we will discuss the issues we encountered in debugging Node.js in production, focusing upon the seemingly intractable challenge of extracting runtime state from the black hole that is a modern JIT'd VM.
We will describe the tools we've developed for examining this state, which operate on running programs (via DTrace), as well as VM core dumps (via a postmortem debugger). Finally, we will describe several nasty bugs we encountered in our own production environment: we were unable to understand these using existing tools, but we successfully root-caused them using these newfound abilities to introspect the JavaScript VM.
Kubernetes @ Squarespace (SRE Portland Meetup October 2017), by Kevin Lynch
In this presentation I talk about our motivation for converting our microservices to run on Kubernetes. I discuss many of the technical challenges we encountered along the way, including networking issues, Java issues, monitoring and alerting, and managing all of our resources!
Kubernetes is awesome! But what does it take for a Java developer to design, implement and run Cloud Native applications? In this session, we will look at Kubernetes from a user point of view and demonstrate how to consume it effectively. We will discover which concerns Kubernetes addresses and how it helps to develop highly scalable and resilient Java applications.
FOSDEM TALK: https://fosdem.org/2017/schedule/event/cnjavadev/
Practical RISC-V Random Test Generation using Constraint Programming (ed271828)
A proof-of-concept random test generator for RISC-V ISA is presented. The test generator uses constraint programming for specification of relationships between instructions and operands. Example scenarios to cover basic instruction randomization, data hazards, and non-sharing are presented. The tool integrates the RISC-V instruction set simulator to enable the generation of self-checking tests. The tool is implemented in Python using a freely-available constraint solver library. A summary of problems encountered is provided and next steps are discussed.
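A toy version of the idea, constraining random operand choices so that a read-after-write hazard is always created while destination registers are never shared, might look like this (a deliberate simplification using filtered random choice rather than a real constraint solver; register names and instruction format follow RV32I assembly):

```python
import random

REGS = list(range(1, 32))  # x1..x31; x0 is hardwired to zero

def gen_hazard_pair(rng):
    """Generate two ADD instructions where the second reads the first's
    destination (a RAW data hazard) and the two destinations differ
    (a non-sharing constraint)."""
    rd1 = rng.choice(REGS)
    rs2a = rd1  # constraint: second instruction's first source == first's dest
    rd2 = rng.choice([r for r in REGS if r != rd1])  # constraint: rd2 != rd1
    return (f"add x{rd1}, x{rng.choice(REGS)}, x{rng.choice(REGS)}",
            f"add x{rd2}, x{rs2a}, x{rng.choice(REGS)}")
```

A real constraint-programming approach declares the same relationships as solver constraints and lets the solver enumerate valid operand assignments, which scales much better once constraints start to interact.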
Similar to LinuxCon 2011: OpenVZ and Linux Kernel Testing (20)
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
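One of the Object Calisthenics rules, "wrap all primitives and strings", maps directly onto the tactical DDD Value Object pattern. A minimal Python sketch (an invented example, not taken from the talk):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutability: value objects have no identity, only value
class Money:
    amount_cents: int
    currency: str

    def __post_init__(self):
        # The invariant lives with the wrapped primitive, not scattered in callers.
        if self.amount_cents < 0:
            raise ValueError("amount must be non-negative")

    def add(self, other: "Money") -> "Money":
        # The domain rule (no cross-currency arithmetic) has a single home.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)
```

Compared with passing raw integers around, the wrapped type makes invalid states unrepresentable, which is exactly the kind of "mechanical" constraint the calisthenics rules provide.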
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Andrew Morton
I'm curious. For the past few months, people@openvz.org have discovered (and fixed) an ongoing stream of obscure but serious and quite long-standing bugs.
How are you discovering these bugs?
Andrew added later:
hm, OK, I was visualizing some mysterious Russian bugfinding machine or something.
Don't stop ;)
David Miller
This issue has existed since the very creation of the netlink code :-)
Linux Containers (LXC)
Many isolated environments on top of a single kernel
● Namespaces
● Resource accounting
OpenVZ Containers
● Better resource accounting
● Checkpointing and live migration
● Extra features: cpu limits, NFS inside CTs, etc
What makes a good test lab?
● Fully automated system with deployment service
● A web interface for test scheduling
● Standard test sets (“combo #3, make it large”)
● A web interface for test results (comparisons, graphs, logs)
● Integration with a bug tracking system
● Net or serial console to collect kernel oopses
● KVM, power switch, other goodies
How do we find bugs in the mainstream kernel
Containers help us find more bugs
● Independent life cycles
● Precise resource accounting
Containers allow us to
● Test initialization/finalization of kernel subsystems
● Test error paths
● Catch more leaks than the regular testing does
● Catch more race conditions by means of stress testing
Start/stop test
● Massive parallel start/stop and suspend/resume
● Random resource parameters
Helps to catch:
● Race conditions
● Bugs in error paths
● Memory leaks
What makes a good performance test?
● Effective load:
  ● Atomic (UnixBench)
  ● Complex (LAMP, SPEC-JBB, vConsolidate)
● Sane test environment (no random cron jobs etc.)
● Automation (minimize human interaction)
● Reproducible results, minimize variability
● Understand test results, even good ones
Density testing
● High density is an important feature of OpenVZ (vs VMs)
● Test measures response time on a number of CTs
● Increase the number of CTs until response time degrades
● It's not a stress test
● Produces a big resource overcommit
Other useful tests
● Week load test replays real httpd logs in real containers
● Feature tests: isolation, CPU scheduler, checkpointing, network virtualization, second-level quota, etc.
● Third-party tests: LTP, Connectathon, vSpecJBB, vConsolidate, UnixBench, sysbench, DVD Store, Netperf
(1) How a Russian bug finding machine works
● QA found a leak of 78 bytes of kernel memory
● Developer was unable to reproduce the bug
● He found that this is a leak of a 'struct user' object
● He audited kernel code which references this object
● Found one suspicious place
● Wrote demo code to trigger the bug, and a fix
● ...
● PROFIT!
(2) How resource controls prevented a DoS attack
A simple kernel attack using socketpair(), a.k.a. CVE-2010-4249

uid / resource  held     maxheld   barrier   limit     failcnt
numothersock    9        360       360       360       1

uid / resource  held     maxheld   barrier   limit     failcnt
kmemsize        1237973  14372344  14372700  14790164  80
numothersock    9        360       360       360       1
(3) How a guy measured netns performance
● It was a nice sunny day...
● 5 different configurations to test
● Unpredictable, random results
● CPU throttling caused by overheating; adding a case fan helped!
Conclusion
● Containers are good for kernel testing
● Resource limits (cgroups) are also helpful
● [most] performance tests are a hoax
My name is Andrey Vagin. I have been working on OpenVZ for the last 5 years. I started as a QA engineer, developing and running Linux kernel tests, and then moved to the Linux kernel team as a developer. This talk tries to summarize my experience and that of my colleagues at Parallels.
I want to tell you how we test the OpenVZ Linux kernel. I start by explaining what OpenVZ really is. Next, I share some thoughts about an ideal test lab. Then we'll see which testing techniques are good for kernel testing, and in particular why OpenVZ helps us find more bugs. I'd also like to say a few words about performance testing. Finally, a few anecdotal cases of bugs found will be presented.
We regularly find and fix bugs in different subsystems of the Linux kernel. Often these bugs are obscure, long-standing and hard to catch. Sometimes maintainers wonder how we find those bugs. Right now I want to reveal all of our deep secrets.
But before I start, I want to say a few words about Linux Containers and OpenVZ Containers. A container is an isolated environment. Each container has its own user, network, filesystem and other namespaces that virtualize various kernel subsystems. In addition, there are cgroups for resource accounting. All containers run on top of one single kernel – this is what makes them different from virtual machines. Containers do have some restrictions (for example, on a Linux machine we can only have Linux containers), but the technology is more efficient, because it avoids things such as emulating hardware devices or running multiple kernels. Compared to LXC, OpenVZ Containers have better resource accounting and some extra features such as cpu limits, checkpointing and live migration, NFS and FUSE inside containers, and so on.
Based on our experience, these are the requirements for a good test lab. First, the test system should be fully automated. It should include a deployment service, a results portal, many different server configurations, and additional hardware such as KVM switches, power switches and so on. All these components should be tightly integrated and work together smoothly, ideally controlled via a web interface. The test system should provide an easy way to schedule tests and to find or compare results.
A lot of people test the Linux kernel, but for us containers play a special role in the process. A container initializes many kernel subsystems on start and destroys them on stop. On a usual system such operations are only done at boot and shutdown; it is hard to perform them many times, and after all the deinit operations the system is shutting down anyway. Containers give us a way to perform multiple concurrent init/deinit sequences, which helps to find bugs such as failing to free a resource. In addition, we have per-container resource accounting, which helps in detecting memory leaks, and it lets us test various rarely exercised error paths by setting different limits on resources.
Now I want to describe one of our significant tests, the start/stop test. It starts/stops and suspends/resumes many containers simultaneously and sets random resource limits, just for some more fun. Can you imagine this test finding many bugs? You may doubt it, but it does, and it finds bugs not only in the OpenVZ kernel but in the mainstream kernel, too. It is effectively a stress test, since it generates a heavy load, and it executes many initializations and finalizations of kernel subsystems. In addition, the randomized resource limits force the kernel to execute error paths. On each iteration it does some sanity checks; for example, it checks that all resource usage counters are zero after a container is stopped. It catches leaks, race conditions, errors in subsystem finalization, and even leaks on error paths caused by race conditions.
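The shape of one iteration can be sketched roughly as follows. This is a simplified sketch, not our actual harness: the `ct_*` functions are hypothetical stand-ins, while the real test drives live containers and reads real beancounter values.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the real container operations;
 * the actual test operates on live OpenVZ containers. */
static int ct_start(int id, long kmem_limit) { (void)id; (void)kmem_limit; return 0; }
static int ct_stop(int id) { (void)id; return 0; }
static long ct_kmem_held(int id) { (void)id; return 0; } /* beancounter readback */

/* One iteration of the start/stop test for one container:
 * pick a random resource limit, start and stop the container,
 * then sanity-check that all resource usage dropped to zero. */
static int start_stop_cycle(int id)
{
    long limit = 1000000 + rand() % 1000000;  /* randomized limit */

    if (ct_start(id, limit) < 0)
        return -1;               /* start failed: an error path got exercised */
    if (ct_stop(id) < 0)
        return -1;
    return ct_kmem_held(id) == 0 ? 0 : -1;  /* nonzero usage means a leak */
}
```

The real test runs many such cycles in parallel, which is what turns it into a stress test as well.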
Performance testing is the most difficult part of testing. The results of these tests are published, and users look at the numbers when choosing a product, so test results should be comprehensible and reproducible. The main problem in creating a performance test is coming up with a useful workload. Performance tests can be divided into atomic tests and complex tests. Atomic tests exercise simple basic operations such as context switching, creating a file or forking a process. Users want to see the full picture, so they are more interested in complex tests, which simulate some real workload. What makes a good performance test? Ideally the test should be fully automated, to avoid human factors and ensure consistency: a person may forget to do something or may do it another way next time. If you can't automate the test, you should at least describe the process in great detail. You should avoid side effects such as cron jobs, extra daemons doing some work from time to time, database index rebuilds, CPU frequency scaling and other such stuff; you can't be too careful here. We have a special script which validates the test environment, and it is regularly updated when we find a new thing to check. The test should run several iterations and calculate statistical errors, to make sure the results are reproducible. Often the system requires some time for stabilization, so you can execute a few warm-up iterations and ignore their results. When performing a comparison test, all products should be configured in the same or a similar way; for example, when comparing network performance of virtualized systems, we should try to use the same networking setup (say, bridged networking). Finally, all test results, both good and bad, should be analyzed and explained. Analysis is usually done only for bad results, while good ones are taken for granted. The thing is, in some cases good results mean there's something wrong with the test itself.
If you can't explain your test results, they are totally useless, except maybe for marketing purposes.
Now let me show some results of our performance measurements. We compared Xen, ESXi, KVM and OpenVZ. I chose a LAMP test, because most of our customers are hosting providers. From the following results you can understand how well this type of workload runs in virtualized environments and how many web servers you can run on a single piece of hardware.
On this slide you can see how the number of virtual machines affects performance, measured as the number of serviced requests per second. With 20 VMs, all the products show very similar performance. With 40 VMs, the difference becomes more obvious. With 60 VMs, all products except OpenVZ perform worse than with 40 VMs, because the system is too small to handle that many VMs. With OpenVZ, containers are more lightweight, so you can run a greater number of containers than you could VMs. In other words, OpenVZ density is higher.
Indeed, high container density is an important feature of OpenVZ, so we regularly compare it to other products and try to improve it. For that, we have a special density test. The test simulates a typical web hosting workload: each container runs a web server, a mail server (with SpamAssassin and an anti-virus) and Parallels Plesk Panel, and the test sends requests to each service with a defined frequency. On each iteration we add some more containers and measure the service response time, making sure it stays below a certain limit. The test stops when the response time becomes bad; the result is the number of containers for which the response time is still good. As with every other test, if we see a regression we try to understand why it happened, and from time to time we find interesting things. For example, last time we found out that the directory entry cache shrinker was too aggressive in doing its work, slowing down the whole system.
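The core loop of such a density test might be sketched like this (a simplified sketch with hypothetical names; `resp_ms` stands in for the real response-time measurement against live containers):

```c
/* Keep adding containers in batches of 'step' until the measured
 * response time exceeds 'max_ms'; return the last acceptable count. */
static int density_limit(double (*resp_ms)(int ncts), int step, double max_ms)
{
    int n = 0;

    while (resp_ms(n + step) <= max_ms)
        n += step;          /* response time still good: keep adding CTs */
    return n;               /* highest count with acceptable response time */
}

/* Mock measurement for illustration only: response time grows
 * linearly with the number of containers. */
static double linear_resp(int ncts) { return (double)ncts; }
```

With the linear mock and a 55 ms threshold, the loop stops just before the batch that would cross the limit.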
One more good test is the week load test. It is one of the few tests that create a non-synthetic workload: it replays real users' Apache logs. We have many tests of our own for OpenVZ-specific features, and we use third-party test suites for other functionality.
Now I want to tell a real-life story of how one of my colleagues fixed a bug in the Linux kernel, prompting Andrew Morton's comment about a Russian bugfinding machine. In the course of OpenVZ kernel testing, our QA (Quality Assurance) team found a leak of 78 bytes of kernel memory. Who cares about 78 bytes, especially on a server with 16 gigabytes of RAM? We do. The beancounters debug information showed that one struct user object had leaked. The developer then tried to reproduce the leak, but with no luck; bugs that cannot be reproduced are hard. The only option left was to audit the kernel source code. That involved finding all the places where a struct user object is referenced and checking the code for correctness. It took him 4 hours to do the audit, and he found one place where the reference to the object might be lost. The bug was present not only in the OpenVZ kernel, but in the mainstream kernel too. In this case, once the problem was found, fixing it was pretty simple: he wrote a fix and demo code to trigger the bug, tested the fix and sent it to the Linux kernel mailing list. Why is this particular incident so important? It's the OpenVZ resource limiting code which detected the leak in the first place, as the bug is very hard to trigger and the leak is small enough that it might never have been discovered at all. This bug is in fact a security issue: an ordinary user could exploit it to eat all the kernel memory, bringing the whole system down, and worse scenarios could be possible as well. Incidentally, OpenVZ is protected from this security issue, because the kmemsize beancounter (which helped to find it) limits kernel memory usage per container.
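In miniature, this class of bug looks like the following contrived userspace sketch (not the actual kernel code): a reference is taken, but one return path forgets to drop it.

```c
/* Contrived model of a reference-counted object like 'struct user'. */
struct obj {
    int refcount;
};

static void get_ref(struct obj *o) { o->refcount++; }
static void put_ref(struct obj *o) { o->refcount--; } /* real code frees at zero */

/* Buggy function: takes a reference, but the error path returns
 * without dropping it, so the object can never be freed. */
static int do_work(struct obj *o, int fail)
{
    get_ref(o);
    if (fail)
        return -1;   /* BUG: missing put_ref(o) -> the 78-byte leak */
    put_ref(o);
    return 0;
}
```

Hitting the error path once leaves the refcount permanently elevated, which is exactly the kind of imbalance that per-container resource accounting made visible.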
About a year ago a DoS exploit which renders the system unresponsive was published, and it looks like most kernels are indeed vulnerable. The good news is that OpenVZ is not. Why? Because of user beancounters. The nature of the exploit is to create an unlimited number of sockets, rendering the whole system so unusable that you need to power-cycle it to bring it back to life. Now, if you run this exploit in an OpenVZ container, you will hit the numothersock beancounter limit pretty soon and the script will exit. I went further, set the numothersock limit to 'unlimited', and re-ran the exploit. The situation is much worse in that case and the system slows down considerably, but I was still able to log in to the physical server using ssh and kill the offending task from the host system using SIGTERM; this time another beancounter, kmemsize, was working to save the system. Of course, if you set all beancounters to unlimited, the exploit will work. So don't do that unless your CT is completely trusted. Those limits are there for a reason, you know.
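The heart of that exploit is essentially an unbounded loop over socketpair(); each pair pins some kernel memory. Here is a capped sketch of the pattern only, not the full exploit: the real thing has no cap, and inside a container the numothersock beancounter is what makes the call start failing early.

```c
#include <sys/socket.h>

/* Create up to 'max_pairs' AF_UNIX socket pairs; each one consumes
 * kernel memory. Returns how many pairs were actually created. */
static int eat_sockets(int max_pairs, int fds[][2])
{
    int n;

    for (n = 0; n < max_pairs; n++)
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds[n]) < 0)
            break;   /* in a container, the numothersock limit triggers this */
    return n;
}
```

Run uncapped and unconfined, this loop is what exhausts kernel memory on a vulnerable host.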
One of the OpenVZ team members, Kirill Kolyshkin, decided to suspend a container, but forgot to specify one parameter. vzctl returned an error saying that this parameter wasn't specified. When Kir re-ran vzctl with the correct parameters, it returned the error “No such container”. After a small investigation, he found that the container's config file had disappeared. Kir couldn't guess what the problem was right away, but then he understood how it could be reproduced and where the problem in the code was. Now look at this code: it allocates a variable on the stack, then validates a parameter, and only then initializes the variable. Nothing looks strange so far, but let's see what happens if the parameter is invalid. Oh no: the code in the error path uses the uninitialized variable, removing a file whose name is taken from it. By bad luck, this variable happened to contain the path to the container's config file. GCC doesn't report any warning in this case.
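The slide's code is not reproduced here, but the bug class can be sketched like this (hypothetical names, not the actual vzctl source):

```c
#include <stdio.h>
#include <unistd.h>

/* Sketch of the bug: the stack buffer is initialized only after
 * parameter validation, yet the error path uses it anyway. */
static int suspend_ct(const char *dumpfile)
{
    char path[256];      /* uninitialized stack garbage at this point */

    if (dumpfile == NULL)
        goto err;        /* error path taken before 'path' is set */

    snprintf(path, sizeof(path), "/var/tmp/%s", dumpfile);
    /* ... suspend the container, write the dump file ... */
    return 0;

err:
    /* BUG: 'path' was never initialized on this path; whatever the
     * stack happened to hold is passed to unlink(). In the real story
     * it happened to hold the container's config file path. */
    unlink(path);
    return -1;
}
```

The lack of a warning is plausible here because only the array's contents, not the pointer itself, are uninitialized; initializing the buffer at declaration, or not sharing the error path, avoids the problem.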
One hot summer day, a colleague of mine was measuring the performance of network namespaces. He got results that looked like a set of random data. These were not his first measurements, and the procedure was well tested, so where was the problem? The day was hot, and brains were not working well; as it turned out, not only brains. It took more than an hour before he noticed a message about CPU throttling due to overheating. The host had no case fan; after one was installed, the results stabilized. The conclusion of this story: make sure the results are reproducible, and remember about side effects.