GlusterFS is scale-out, software-defined storage. It was presented at LISA15 in Washington, D.C., November 8-13, 2015. The presentation covered GlusterFS installation; configuration of trusted storage pools; creating and managing distributed, replicated, and other volume types; expanding and shrinking volumes; self-healing; and accessing data using native GlusterFS clients, NFS, and SMB/CIFS. Configuration details for CTDB and sharing volumes over SMB were also provided.
The Linux Block Layer - Built for Fast Storage (Kernel TLV)
The arrival of flash storage introduced a radical change in the performance profiles of direct-attached devices. At the time, it was obvious that the Linux I/O stack needed to be redesigned to support devices capable of millions of IOPS with extremely low latency.
In this talk we revisit the changes to the Linux block layer over the last decade or so that made it what it is today - a performant, scalable, robust, and NUMA-aware subsystem. In addition, we cover the new NVMe over Fabrics support in Linux.
Sagi Grimberg
Sagi is Principal Architect and co-founder at LightBits Labs.
High-Performance Networking Using eBPF, XDP, and io_uring (ScyllaDB)
In the networking world there are a number of ways to increase performance over naive use of basic Berkeley sockets. These techniques range from polling blocking sockets, through non-blocking sockets driven by epoll, all the way to completely bypassing the Linux kernel for maximum network performance, talking directly to the network interface card with something like DPDK or Netmap. All these tools have their place, and generally occupy a spectrum from convenience to performance. But in recent years, that landscape has changed massively. The tools available to the average Linux systems developer have improved, from the creation of io_uring to the expansion of BPF from a simple filtering language into a full-on programming environment embedded directly in the kernel. Along with that came XDP (eXpress Data Path), the Linux kernel's answer to kernel-bypass networking. AF_XDP is the new socket type created by this feature, and it generally works very similarly to something like DPDK. History lessons out of the way, this talk will look into and discuss the merits of this technology, its place in the broader ecosystem, and how it can be used to attain the highest level of performance possible. The talk will dive into crucial details, such as how AF_XDP works, how it can be integrated into a larger system, and finally more advanced topics such as request sharding/load balancing. There will be a detailed look at the design of AF_XDP, the eBPF code used, and the userspace code required to drive it all. It will also include performance numbers from this setup compared to regular kernel networking, and, most importantly, how to put all this together to handle as much data as possible on a single modern multi-core system.
Kirill Tsym discusses Vector Packet Processing:
* Linux Kernel data path (in short), initial design, today's situation, optimization initiatives
* Brief overview of DPDK, Netmap, etc.
* Userspace Networking projects comparison: OpenFastPath, OpenSwitch, VPP.
* Introduction to VPP: architecture, capabilities and optimization techniques.
* Basic Data Flow and introduction to vectors.
* VPP Single and Multi-thread modes.
* Router and switch for namespaces example.
* VPP L4 protocol processing - Transport Layer Development Kit.
* VPP Plugins.
Kirill is a software developer at Check Point Software Technologies, part of the Next Generation Gateway and Architecture team, developing proofs of concept around DPDK and FD.io VPP. He has years of experience in software, Linux kernel, and networking development, and worked for Polycom, Broadcom and Qualcomm before joining Check Point.
Accelerating Envoy and Istio with Cilium and the Linux Kernel (Thomas Graf)
This talk will provide an introduction to injection options of Envoy and then deep dive into ongoing Linux kernel work that enables injecting Envoy while introducing as little latency as possible.
The service mesh and the sidecar proxy model are on a steep trajectory to redefine many networking and security use cases. This talk explains and demos a new socket-redirect Linux kernel technology that allows running Envoy with performance similar to the sidecar being linked to the application via a UNIX domain socket. The talk will also give an outlook on how Envoy can use the recently merged kernel TLS functionality to transparently access the cleartext payload of end-to-end encrypted applications, without having to decrypt and re-encrypt any data, further reducing overhead and latency.
In this talk we will discuss how to build and run containers without root privileges. As part of the discussion, we will introduce new programs like fuse-overlayfs and slirp4netns and explain how user namespaces make this possible. fuse-overlayfs makes it possible to use the same storage model as "root" containers, including layered images. slirp4netns emulates a TCP/IP stack in userspace, letting a container's network namespace access the outside world (with some limitations).
We will also introduce Usernetes, and show how to run Kubernetes in an unprivileged user namespace.
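As a small illustration of the user-namespace mechanism these tools build on (a sketch, not code from the talk; it requires unprivileged user namespaces to be enabled, which is the default on most modern distributions):

```shell
# Enter a new user namespace, mapping the current unprivileged UID to root.
# Inside, `id -u` reports 0, although no real privileges are gained on the host.
unshare --user --map-root-user sh -c 'id -u; cat /proc/self/uid_map'
```

The uid_map output shows how UID 0 inside the namespace maps back to the ordinary UID outside, which is what lets rootless runtimes act as "root" for tools like fuse-overlayfs.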
https://sched.co/Jcgg
Using eBPF for High-Performance Networking in Cilium (ScyllaDB)
The Cilium project is a popular networking solution for Kubernetes, based on eBPF. This talk uses eBPF code and demos to explore the basics of how Cilium makes network connections, and manipulates packets so that they can avoid traversing the kernel's built-in networking stack. You'll see how eBPF enables high-performance networking as well as deep network observability and security.
[KubeCon EU 2022] Running containerd and k3s on macOS (Akihiro Suda)
https://sched.co/ytpi
It has been very hard to use a Mac for developing containerized apps. A typical way is to use Docker for Mac, but it is not FLOSS. Another option is to install Docker and/or Kubernetes into VirtualBox, often via minikube, but that doesn't propagate localhost ports, and VirtualBox also doesn't support the ARM architecture. This session will show how to run containerd and k3s on macOS using Lima and Rancher Desktop. Lima wraps QEMU in a simple CLI, with neat features for container users such as filesystem sharing and automatic localhost port forwarding, as well as DNS and proxy propagation for enterprise networks. Rancher Desktop wraps Lima with k3s integration and a GUI.
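For reference, a typical Lima session looks roughly like this (a sketch using Lima's standard CLI; exact subcommands and template names may vary between versions):

```shell
# Install Lima (e.g. via Homebrew) and start the default Linux VM
brew install lima
limactl start default

# Run a containerd-based container inside the VM with nerdctl
lima nerdctl run -d --name web -p 8080:80 nginx:alpine

# Automatic localhost port forwarding makes the service reachable on the host
curl http://localhost:8080
```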
Webinar topic: Zabbix for Monitoring
Presenter: Achmad Mardiansyah
In this webinar series, we discuss how to use Zabbix for monitoring.
Please share your feedback or webinar ideas here: http://bit.ly/glcfeedback
Check our schedule for future events: https://www.glcnetworks.com/en/schedule/
Follow our social media for updates: Facebook, Instagram, YouTube, Telegram, and Discord.
Recording available on YouTube:
https://youtu.be/iUs2G_9FS-M
Talk for YOW! by Brendan Gregg. "Systems performance studies the performance of computing systems, including all physical components and the full software stack, to help you find performance wins for your application and kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc.), overviews of complex areas including profiling (perf_events) and tracing (ftrace, bcc/BPF, and bpftrace/BPF), advice about what is and isn't important to learn, and case studies to see how it is applied. This talk is aimed at everyone: developers, operations, sysadmins, etc., and at any environment running Linux, bare metal or the cloud."
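In the spirit of those recipes, a quick first-look checklist (a sketch along the lines of Gregg's well-known 60-second analysis; mpstat and iostat come from the sysstat package and may not be installed everywhere):

```shell
# System-wide first look: each command prints a short burst of samples
uptime                      # load averages: any saturation trend?
free -m                     # memory and swap usage in MB
vmstat 1 2                  # run queue, memory, swap, system-wide CPU
mpstat -P ALL 1 2 || true   # per-CPU balance (needs sysstat)
iostat -xz 1 2 || true      # per-device I/O latency/utilization (needs sysstat)
```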
GTIDs were introduced to solve replication problems and improve database consistency in MySQL database replication.
When transactions accidentally occur on a replica, this introduces GTIDs on that replica that don't exist on the master. If, on a master failover, this replica becomes the new master and the binlogs containing those errant GTIDs have already been purged, replication breaks on the replicas of the new master, because the missing GTIDs can't be retrieved from its binlogs.
This presentation will talk about GTIDs and how to detect errant GTIDs on a replica (before the corresponding binlogs are purged) and how to look at the corresponding transactions in the binlogs. I'll give some examples of transactions that could happen on a replica that didn't originate from a primary node, explain how this is possible and share some tips on how to avoid this.
Basic understanding of MySQL database replication is assumed.
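Conceptually, errant GTIDs are the set difference between the replica's executed GTID set and the master's, which MySQL computes with GTID_SUBTRACT(). A toy Python sketch of that set arithmetic (the simplified "uuid:lo-hi" parser and the example UUIDs are illustrative, not MySQL's actual implementation):

```python
def gtid_set(spec):
    """Expand a simplified GTID set like 'uuid:1-5,uuid:7' into
    a set of (uuid, transaction_number) pairs."""
    txns = set()
    for part in spec.split(","):
        uuid, _, rng = part.strip().partition(":")
        lo, _, hi = rng.partition("-")
        for n in range(int(lo), int(hi or lo) + 1):
            txns.add((uuid, n))
    return txns

def errant_gtids(replica_executed, master_executed):
    """GTIDs executed on the replica but unknown to the master -
    the ones that break replication after a failover."""
    return gtid_set(replica_executed) - gtid_set(master_executed)

master = "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5"
replica = ("3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5,"
           "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-2")
# The two transactions under the replica-local UUID are the errant ones
print(sorted(errant_gtids(replica, master)))
```

On a real server you would compare the `gtid_executed` values of replica and master the same way, before the master purges the corresponding binlogs.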
This presentation was at Percona Live 2019 in Austin, Texas.
https://www.percona.com/live/19/sessions/errant-gtids-breaking-replication-how-to-detect-and-avoid-them
Open vSwitch - Stateful Connection Tracking & Stateful NAT (Thomas Graf)
An update on the status of the connection tracking and stateful NAT additions to the Linux kernel datapath, followed by a discussion on the topic to collect ideas and come up with next steps.
LinuxCon 2015: Linux Kernel Networking Walkthrough (Thomas Graf)
This presentation features a walk through the Linux kernel networking stack for users and developers. It will cover insights into both existing essential networking features and recent developments, and will show how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as network namespaces, segmentation offloading, TCP small queues, and low-latency polling, and will discuss how to configure them.
Kubernetes Networking with Cilium - Deep Dive (Michal Rostecki)
Cilium is open source software for providing and transparently securing network connectivity and load balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services, as well as at Layer 7 to protect and secure the use of modern application protocols such as HTTP, gRPC and Kafka. The foundation of Cilium is the new Linux kernel technology BPF, which supports the dynamic insertion of BPF bytecode into the Linux kernel at various integration points. This presentation reveals the secrets of Kubernetes networking and gives you a deep dive into Cilium and why it is awesome!
Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon (Jérôme Petazzoni)
Containers are everywhere. But what exactly is a container? What are they made from? What's the difference between LXC, systemd-nspawn, Docker, and the other container systems out there? And why should we care about specific filesystems?
In this talk, Jérôme will show the individual roles and behaviors of the components making up a container: namespaces, control groups, and copy-on-write systems. Then, he will use them to assemble a container from scratch, and highlight the differences (and similarities) with existing container systems.
From KubeCon to ContainerDays, eBPF is trendy in the Cloud Native world. What is eBPF, why is it revolutionary, and what can it bring to you specifically?
Through concrete examples applied to observability, networking, and security, this talk will explain the principles of eBPF and its concrete advantages to connect and secure Cloud Native applications.
This talk will explain what eBPF is, why it is revolutionary in several fields, give examples of tools using eBPF and what they gain from it, and open up to the future of the technology.
DockerCon 2017 - Cilium - Network and Application Security with BPF and XDP (Thomas Graf)
This talk will start with a deep dive and hands-on examples of BPF, possibly the most promising low-level technology to address challenges in application and network security, tracing, and visibility. We will discuss how BPF evolved from a simple bytecode language for filtering raw sockets for tcpdump to a JITable virtual machine capable of universally extending and instrumenting both the Linux kernel and user space applications. The introduction is followed by a concrete example of how the Cilium open source project applies BPF to solve networking, security, and load balancing for highly distributed applications. We will discuss and demonstrate how Cilium, with the help of BPF, can be combined with distributed system orchestration such as Docker to simplify security, operations, and troubleshooting of distributed applications.
At the Public Sector Red Hat Storage Days on 1/20/16 and 1/21/16, Jason Calloway walked attendees through the basics of scalable POSIX file systems in the cloud.
This presentation talks about how to use GlusterFS in OpenShift to provide storage for application pods. If you need more details, please refer to http://humblec.com/persistent-volume-and-persistent-volume-claim-in-openshift-and-kubernetes-using-glusterfs-volume-plugin/
DCEU 18: Tips and Tricks of the Docker Captains (Docker, Inc.)
Brandon Mitchell - Solutions Architect, BoxBoat
Docker Captain Brandon Mitchell will help you accelerate your adoption of Docker containers by delivering tips and tricks on getting the most out of Docker. Topics include managing disk usage, preventing subnet collisions, debugging container networking, understanding image layers, getting more value out of the default volume driver, and solving the UID/GID permission issues with volumes in a way that allows images to be portable from any developer laptop and to production.
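A few of those housekeeping techniques map to standard Docker CLI commands (a sketch; the UID/GID mapping shown is one common approach to the volume-permission problem, not necessarily the exact one from the talk):

```shell
# See how much disk space images, containers, and volumes consume
docker system df

# Reclaim space held by stopped containers, dangling images, unused networks
docker system prune

# Inspect an image's layers to understand where its size comes from
docker history nginx:alpine

# Run as the host user's UID/GID so files written to a bind-mounted volume
# stay owned by the developer, keeping the image portable across machines
docker run --rm -u "$(id -u):$(id -g)" -v "$PWD:/work" -w /work alpine touch out.txt
```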
CONFidence 2017: Escaping the (sand)box: The promises and pitfalls of modern ... (PROIDEA)
Users of modern Linux containerization technologies are frequently at a loss as to what kind of security guarantees the tools they use deliver. Typical questions range from "Can these be used to isolate software with known security shortcomings and a rich history of security vulnerabilities?" to "Can I use such a technique to isolate user-generated and potentially hostile assembler payloads?"
The modern Linux OS code base, as well as independent authors, provides a plethora of options for those who want to make sure their computational loads are solidly confined. Potential users can choose from solutions ranging from Docker-like confinement projects, through Xen hypervisors, seccomp-bpf, and ptrace-based sandboxes, to isolation frameworks based on hardware virtualization (e.g. KVM).
The talk will discuss the techniques available today, with a focus on the (frequently overstated) promises regarding their strength. In the end, as they say: “Many speed bumps don’t make a wall.”
This talk will focus on a brief overview of Kubernetes, with a short demo, and then a more in-depth look at issues we've faced moving PHP projects into Docker and Kubernetes, like signal propagation, init systems, and logging.
Talk from Cape Town PHP meetup on Feb. 7, 2016:
https://www.meetup.com/Cape-Town-PHP-Group/events/237226310/
Code: https://github.com/zoidbergwill/kubernetes-php-examples
Slides as markdown: http://www.zoidbergwill.com/presentations/2017/kubernetes-php/index.md
Docker in Production: Reality, Not Hype
DramaFever uses AWS to power our streaming video platform. We've been running Docker in production since about October 2013 (well before it even went 1.0). This talk gives an overview of how we use it to make development more consistent and deployment more repeatable.
Kubernetes install and practice
* Environment (bare metal installation, not using cloud service)
- VM 1 : Master node, 30GB, 2 vCPU, 4GB Mem
- VM 2 : Worker node, 30GB, 2 vCPU, 4GB Mem
* Practice
- deploying a pod, creating a deployment and a service
- exposing the service using ingress (nginx-ingress)
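The practice steps above can be sketched with standard kubectl commands (the name `web` and host `web.example.com` are placeholders; the ingress rule assumes the nginx-ingress controller is installed, as in the environment described):

```shell
# Create a deployment and expose it as a ClusterIP service
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80

# Route external traffic to the service through the nginx ingress class
kubectl create ingress web --class=nginx --rule="web.example.com/*=web:80"

# Verify the created objects
kubectl get deployment,service,ingress web
```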
While probably the most prominent, Docker is not the only tool for building and managing containers. Originally meant to be a "chroot on steroids" to help debug systemd, systemd-nspawn provides a fairly uncomplicated approach to working with containers. Being part of systemd, it is available out-of-the-box on most recent distributions and requires no additional dependencies.
This deck will introduce a few concepts involved in containers and guide you through the steps of building a container from scratch. The payload will be a simple service, which systemd will automatically activate when the first request arrives.
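A minimal systemd-nspawn session, for orientation (a sketch assuming a Debian-based host with debootstrap installed; the directory name is arbitrary):

```shell
# Build a minimal Debian root filesystem into ./mycontainer
sudo debootstrap stable mycontainer

# Boot it as a container: -b runs systemd as PID 1 inside the namespace
sudo systemd-nspawn -b -D mycontainer

# Or run a single command in the container without booting it
sudo systemd-nspawn -D mycontainer ls /
```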
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and take you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Designing Great Products: The Power of Design and Leadership by Chief Designe...
LISA 2015 – GlusterFS Hands-on
1. November 8–13, 2015 | Washington, D.C.
www.usenix.org/lisa15 #lisa15
GlusterFS
A Scale-out Software Defined Storage
Rajesh Joseph
Poornima Gurusiddaiah
2. 05/17/16
Note
● This holds good for the 3.7 version of GlusterFS; other versions might have variations
● Commands shown here work on CentOS; other distributions might have different commands or options
● At the right corner of the slides, there is a link to the live demo
3.
GlusterFS Installation
Installation via Repo
Download latest repo file from download.gluster.org
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel7/x86_64/
Install GlusterFS
# yum install glusterfs-server
Installation via RPM
Download latest gluster RPMs from download.gluster.org
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel7/x86_64/
5.
Ports used by GlusterFS
UDP Ports
111 – RPC
963 – NFS lock manager (NLM)
TCP Ports
22 – For sshd used by geo-replication
111 – RPC
139 – netbios service
445 – CIFS protocol
965 – NLM
6.
Ports used by GlusterFS
TCP Ports
2049 – NFS exports
4379 – CTDB
24007 – GlusterFS Daemon (Management)
24008 – GlusterFS Daemon (RDMA port for Management)
24009 – Each brick of every volume on the node (GlusterFS version < 3.4)
49152 – Each brick of every volume on the node (GlusterFS version >= 3.4)
38465-38467 – GlusterFS NFS service
38468 – NFS Lock Manager (NLM)
38469 – NFS ACL Support
7.
Starting Gluster Server
Gluster server/service can be started by the following command
# systemctl start glusterd
Gluster server should be started on all the nodes
To automatically start GlusterFS on node start use
# systemctl enable glusterd
or
# chkconfig glusterd on
8.
Setting up Trusted Storage Pool
Use gluster peer probe command to include a new Node in the Trusted Storage Pool
# gluster peer probe <Node IP/Hostname of new Node>
Removing a Node from the Trusted Storage Pool
# gluster peer detach <Node IP/Hostname>
Verify the peer probe/detach succeeded by executing the following command on all the nodes
# gluster peer status
11. Distribute Volume
[Diagram: a client and three storage nodes with one brick each; each brick owns a hash range: [0, a], [a + 1, b], [b + 1, c]]
File1 Hash = x, where 0 <= x <= a, so File1 is placed on the first brick
File2 Hash = y, where b < y <= c, so File2 is placed on the third brick
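The placement rule above can be sketched in shell. This is illustrative only: real GlusterFS DHT computes a 32-bit elastic hash over the file name, and the 0–99 hash space, range boundaries, and `pick_brick` helper here are made-up values for the sketch.

```shell
# Toy model of distribute-volume placement: each brick owns a hash range
# (hypothetical ranges over a 0-99 space; real DHT uses 32-bit ranges).
pick_brick() {
  local hash=$1
  if   [ "$hash" -le 33 ]; then echo brick1   # range [0, 33]
  elif [ "$hash" -le 66 ]; then echo brick2   # range [34, 66]
  else                          echo brick3   # range [67, 99]
  fi
}
pick_brick 12   # a file hashing to 12 lands on brick1
pick_brick 80   # a file hashing to 80 lands on brick3
```

Lookup is purely arithmetic, which is why a distribute volume needs no central metadata server to locate a file.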
12.
Creating Volumes - Distribute
Distributed volumes distribute files throughout the bricks in the volume
It is advised to provide a nested directory in the brick mount point as the brick directory
If transport type is not specified 'tcp' is used as default
# gluster volume create <volume name> [transport <tcp|rdma|tcp,rdma>] <Node IP/hostname>:<brick path>... [force]
e.g.
# gluster volume create dist_vol host1:/mnt/brick1/data host2:/mnt/brick1/data
Demo
14.
Creating Volumes - Replicate
Replicated volumes provide file replication across n (replica) bricks
Number of bricks must be a multiple of the replica count
It is advised to have bricks on different servers
The replication is synchronous in nature, hence it is not advised to combine bricks in different geo locations as it may reduce the performance drastically
# gluster volume create <volume name> [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <Node IP/hostname>:<brick path>... [force]
e.g.
# gluster volume create repl_vol replica 3 host1:/mnt/brick1/data host2:/mnt/brick1/data host3:/mnt/brick1/data
16.
Creating Volumes – Distribute Replicate
Distributed replicated volumes distribute files across replicated bricks in the volume
Number of bricks must be a multiple of the replica count
Brick order decides replica set and distribution set
# gluster volume create <volume name> [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <Node IP/hostname>:<brick path>... [force]
e.g.
# gluster volume create repl_vol replica 3 host1:/mnt/brick1/data host2:/mnt/brick1/data host3:/mnt/brick1/data host1:/mnt/brick2/data host2:/mnt/brick2/data host3:/mnt/brick2/data
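The brick-count arithmetic behind the example can be checked quickly; this is plain shell arithmetic, nothing Gluster-specific:

```shell
# 6 bricks with replica 3 => 6 / 3 = 2 distribute sub-volumes.
# With the brick order in the example, host1..3:/mnt/brick1/data form
# replica set 1 and host1..3:/mnt/brick2/data form replica set 2.
bricks=6
replica=3
if [ $((bricks % replica)) -ne 0 ]; then
  echo "error: brick count must be a multiple of the replica count"
fi
subvols=$((bricks / replica))
echo "$subvols distribute sub-volumes of $replica-way replication"
```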
18.
Creating Volumes – Disperse
Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures
The data protection offered by erasure coding can be represented as n = k + m
n = total number of bricks, disperse count
k = total number of data bricks, disperse-data count
m = number of brick failures that can be tolerated, redundancy count
Any two of the three counts need to be specified while creating the volume
# gluster volume create <volume name> [disperse <COUNT>] [disperse-data <COUNT>] [redundancy <COUNT>] [transport tcp|rdma|tcp,rdma] <Node IP/hostname>:<brick path>... [force]
e.g. 6 = 4 + 2, i.e. a 10MB file is split into six 2.5MB fragments stored on all 6 bricks (15MB on disk) but can withstand failure of any 2 bricks
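The 6 = 4 + 2 example works out as follows (shell arithmetic only; sizes kept in KB so the divisions stay integral):

```shell
# Disperse volume n = k + m: each brick stores file_size / k,
# so raw on-disk usage is file_size / k * n.
file_kb=$((10 * 1024))   # 10 MB file
k=4                      # data bricks (disperse-data count)
m=2                      # tolerated brick failures (redundancy count)
n=$((k + m))             # total bricks = 6 (disperse count)
chunk_kb=$((file_kb / k))   # per-brick fragment: 2560 KB (2.5 MB)
raw_kb=$((chunk_kb * n))    # total on disk: 15360 KB (15 MB)
echo "n=$n chunk=${chunk_kb}KB raw=${raw_kb}KB"
```

The overhead factor is n/k = 1.5x, versus 3x for the replica 3 volume above, which is the space-efficiency trade the slide refers to.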
20.
Creating Volumes – Sharded
Sharded volume is similar to striped volume
Unlike other volume types, shard is a volume option which can be set on any volume
To disable sharding, it is advisable to create a new volume without sharding and copy the contents of this volume into the new volume
This feature is disabled by default, and is beta in the 3.7.4 release
# gluster volume set <volume name> features.shard on
21.
Starting Volumes
Volumes must be started before they can be mounted
Use the following command to start volume
# gluster volume start <volname>
e.g.
# gluster volume start dist_vol
22.
Configuring Volume Options
Current volume options can be viewed using
# gluster volume info
Volume options can be configured using the following command
# gluster volume set <volname> <option> <value>
23.
Expanding Volume
Volume can be expanded when the cluster is online and available
Add Node to the Trusted Storage Pool
# gluster peer probe <IP/hostname>
Add bricks to the volume
# gluster volume add-brick <VOLNAME> <Node IP/hostname>:<brick path>...
In case of replicate, the brick count should be a multiple of the replica count
24.
Expanding Volume
To change the replica count, the following command needs to be executed
# gluster volume add-brick <VOLNAME> replica <new count> <Node IP/hostname>:<brick path>...
Number of replica bricks to be added must be equal to the number of distribute sub-volumes
e.g. to change replica 2 distribute 3 to replica 3 distribute 3 for volume dist-repl
# gluster volume add-brick dist-repl replica 3 host1:/brick1/brick1 host2:/brick1/brick1 host3:/brick1/brick1
Rebalance the bricks
# gluster volume rebalance <volname> <start | status | stop>
25.
Shrinking Volume
Remove a brick using the following command
# gluster volume remove-brick <volname> BRICK start [force]
You can view the status of the remove brick operation using the following command
# gluster volume remove-brick <volname> BRICK status
After status shows complete run the following command to remove the brick
# gluster volume remove-brick <volname> BRICK commit
26.
Volume Self Healing
In a Replicate volume, when an offline brick comes online, the updates on the online bricks need to be synced to this brick – Self Healing
File is healed by
Self-Heal daemon (SHD)
On-access
On-demand
SHD automatically initiates heal every 10 minutes
SHD can be turned on/off by the following command
# gluster volume set <volname> cluster.self-heal-daemon <on | off>
27.
Volume Self Healing
On-demand healing can be done by
# gluster volume heal <volname>
# gluster volume heal <volname> full
# gluster volume heal <volname> info
To enable/disable healing when a file is accessed from the mount point
# gluster volume set <volname> cluster.data-self-heal off
# gluster volume set <volname> cluster.entry-self-heal off
# gluster volume set <volname> cluster.metadata-self-heal off
29.
Accessing Data
Volume can be mounted on local file-system
The following protocols are supported for accessing a volume
GlusterFS Native client
Filesystem in Userspace (FUSE)
NFS
NFS Ganesha
Gluster NFSv3
SMB / CIFS
30.
GlusterFS Native Client
Client machines should install GlusterFS client packages
Mount the started GlusterFS volume
Use any Node from Trusted Storage Pool to mount
# mount -t glusterfs host1:/distvol /mnt/glusterfs
Use /etc/fstab for automatic mount
e.g. to mount distvol append the following to /etc/fstab
host1:/distvol /mnt/glusterfs glusterfs defaults,_netdev,transport=tcp 0 0
Demo
31.
NFS Client
Install NFS client packages
Mount the started GlusterFS volume via NFS
Gluster NFS supports only version 3
# mount -t nfs -o vers=3 host1:/distvol /mnt/glusterfs
Use /etc/fstab for automatic mount
32.
SMB Client
For high availability and lock synchronization SMB uses CTDB
Install CTDB and GlusterFS Samba packages
GlusterFS Samba packages can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/samba/
Demo
33.
CTDB Setup
Create an n-way replicated CTDB volume
n – Number of nodes that will be used as Samba servers
# gluster volume create ctdb replica 4 host1:/mnt/brick1/ctdb host2:/mnt/brick1/ctdb host3:/mnt/brick1/ctdb host4:/mnt/brick1/ctdb
Replace META=all with META=ctdb in the below files on all the nodes
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDBteardown.sh
Start the ctdb volume
# gluster volume start ctdb
Demo
34.
CTDB Setup
On volume start the following entries are created in /etc/samba/smb.conf
clustering = yes
idmap backend = tdb2
The CTDB configuration file /etc/sysconfig/ctdb is stored on all the nodes used as Samba servers
Create the /etc/ctdb/nodes file on all the nodes that are used by the Samba server, e.g.
192.168.8.100
192.168.8.101
192.168.8.102
192.168.8.103
Demo
35.
CTDB Setup
For IP failover create the /etc/ctdb/public_addresses file on all the nodes
Add virtual IPs that CTDB should create in this file
<Virtual IP>/<routing prefix> <node interface>
e.g.
192.168.1.20/24 eth0
192.168.1.21/24 eth0
Demo
36.
Sharing Volumes over Samba
Set the following options on the gluster volume
# gluster volume set <volname> stat-prefetch off
# gluster volume set <volname> server.allow-insecure on
# gluster volume set <volname> storage.batch-fsync-delay-usec 0
Edit /etc/glusterfs/glusterd.vol on each Node and add the following
option rpc-auth-allow-insecure on
Restart glusterd service on each Node
Demo
37.
Sharing Volumes over Samba
On GlusterFS volume start the following entry will be added to /etc/samba/smb.conf
[gluster-VOLNAME]
comment = For samba share of volume VOLNAME
vfs objects = glusterfs
glusterfs:volume = VOLNAME
glusterfs:logfile = /var/log/samba/VOLNAME.log
glusterfs:loglevel = 7
path = /
read only = no
guest ok = yes
Start SMBD
# systemctl start smb
Specify the SMB password. This password is used during the SMB mount
# smbpasswd -a username
Demo
38.
Mounting Volumes using SMB
Mount from Windows system
# net use <drive letter> \\<virtual IP>\gluster-VOLNAME
e.g.
# net use Z: \\192.168.1.20\gluster-distvol
Mount from Linux system
# mount -t cifs //<virtual IP>/gluster-VOLNAME /mnt/cifs
e.g.
# mount -t cifs //192.168.1.20/gluster-distvol /mnt/cifs
Demo
39.
Troubleshooting
Log files
The following command will give you the log file location
# gluster --print-logdir
Log dir will contain logs for each GlusterFS process
glusterd - /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
Bricks - /var/log/glusterfs/bricks/<path extraction of brick path>.log
Cli - /var/log/glusterfs/cmd_history.log
Rebalance - /var/log/glusterfs/VOLNAME-rebalance.log
Self-Heal Daemon (SHD) - /var/log/glusterfs/glustershd.log
Quota - /var/log/glusterfs/quotad.log
40.
Troubleshooting
Log files
Log dir will contain logs for each GlusterFS process
NFS - /var/log/glusterfs/nfs.log
Samba - /var/log/samba/glusterfs-VOLNAME.<ClientIP>.log
NFS-Ganesha - /var/log/nfs-ganesha.log
Fuse Mount - /var/log/glusterfs/<mountpoint path extraction>.log
Geo-replication - /var/log/glusterfs/geo-replication/<master>
Volume status
# gluster volume status [volname]
42.
Troubleshooting – Split Brain
A scenario where, in a replicate volume, GlusterFS cannot determine the correct copy of a file
Three different types of split-brain
Data split-brain
Metadata split-brain
Entry split-brain
The only way to resolve split-brains is by manually inspecting the file contents from the backend and deciding which is the true copy
43.
Troubleshooting – Preventing Split Brain
Configuring Server-Side Quorum
Number of server failures that the trusted storage pool can sustain
Server quorum can be set by volume option
All bricks on the node are brought down in case server-side quorum is not met
# gluster volume set all cluster.server-quorum-ratio <Percentage>
e.g.
# gluster volume set all cluster.server-quorum-ratio 51%
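How the ratio plays out can be sketched with plain shell arithmetic; the `quorum_met` helper below is our own illustration, not a Gluster command:

```shell
# Server-side quorum: bricks stay up while
# live_nodes / total_nodes >= ratio (in percent).
quorum_met() {  # usage: quorum_met <live_nodes> <total_nodes> <ratio_percent>
  [ $(( $1 * 100 )) -ge $(( $2 * $3 )) ]
}
# With ratio 51% in a 4-node pool:
quorum_met 3 4 51 && echo "3/4 nodes up: quorum met"        # 75% >= 51%
quorum_met 2 4 51 || echo "2/4 nodes up: bricks go down"    # 50% <  51%
```

A ratio just above 50% is the usual choice: it prevents two halves of a partitioned pool from both staying writable.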
45.
Troubleshooting – Preventing Split Brain
Configuring Client-Side Quorum
Determines number of bricks that must be up for allowing data modification
Files will become read-only in case of quorum failure
Two types of client-side quorum
Fixed – fixed number of bricks should be up
Auto – Quorum conditions are determined by GlusterFS
# gluster volume set <volname> cluster.quorum-type <fixed | auto>
# gluster volume set <volname> cluster.quorum-count <count>
46.
Troubleshooting – Preventing Split Brain
Configuring Client-Side Quorum
Auto quorum type
At least n/2 bricks need to be up, where n is the replica count
If n is even and exactly n/2 bricks are up, then the first brick of the replica set should be up
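The two conditions above can be written down as a small shell predicate; `auto_quorum_met` is our own sketch of the rule, not part of Gluster:

```shell
# Auto client quorum for a replica-n set:
#   met if more than n/2 bricks are up, or, when n is even,
#   if exactly n/2 are up and the first brick of the set is among them.
auto_quorum_met() {  # usage: auto_quorum_met <bricks_up> <n> <first_brick_up 0|1>
  local up=$1 n=$2 first=$3
  if [ $((2 * up)) -gt "$n" ]; then return 0; fi
  [ $((n % 2)) -eq 0 ] && [ $((2 * up)) -eq "$n" ] && [ "$first" -eq 1 ]
}
auto_quorum_met 2 3 0 && echo "replica 3, 2 bricks up: writable"
auto_quorum_met 1 2 1 && echo "replica 2, only first brick up: writable"
auto_quorum_met 1 2 0 || echo "replica 2, only second brick up: read-only"
```

The first-brick tie-breaker is what keeps a 2-way replica from going writable on both sides of a network split.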
47.
Community
IRC channels:
#gluster – For any gluster usage or related discussions
#gluster-dev – For any gluster development related discussions
#gluster-meeting – To attend the weekly meeting and bug triage
Mailing lists:
gluster-users@gluster.org - For any user queries or related discussions
gluster-devel@gluster.org - For any gluster development related
queries/discussions
Repos for other major distributions will be available at http://download.gluster.org/pub/gluster/glusterfs/LATEST
Configure the firewall based on which features are being used by GlusterFS
Most of these commands have various options by which we can improve performance. Based on your workloads, carefully select those options.
When shrinking distributed replicated volumes, the number of bricks being removed must be a multiple of the replica count. For example, to shrink a distributed replicated volume with a replica count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, 8, etc.). In addition, the bricks you are removing must be from the same sub-volume (the same replica set). In a non-replicated volume, all bricks must be available in order to migrate data and perform the remove brick operation. In a replicated volume, at least one of the bricks in the replica must be available.
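The multiple-of-replica rule is easy to sanity-check before running remove-brick; this is plain shell arithmetic using the example's numbers:

```shell
# Shrinking a distributed replicated volume: the number of bricks removed
# must be a multiple of the replica count.
replica=2
remove=4
if [ $((remove % replica)) -eq 0 ]; then
  echo "ok: removing $remove bricks keeps whole replica sets"
else
  echo "refuse: remove bricks in multiples of $replica"
fi
```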
# gluster volume heal <VOLNAME>        # trigger self-healing only on the files which require healing
# gluster volume heal <VOLNAME> full   # trigger self-healing on all the files of a volume
# gluster volume heal <VOLNAME> info   # view the list of files that need healing
rpc-auth-allow-insecure allows SMBD to talk to gluster bricks on unprivileged ports
cluster.quorum-count – if quorum-type is "fixed", writes are allowed only if this many bricks are present