Ganeti is a robust cluster virtualization management software tool. It’s built on top of existing virtualization technologies such as Xen and KVM and other Open Source software. Its integration with various technologies such as DRBD and LVM results in a cheaper High Availability infrastructure and linear scaling.
This hands-on tutorial will cover a basic overview of Ganeti, the step-by-step install & setup of a single-node and multi-node Ganeti cluster, operating the cluster, and some best practices of Ganeti.
If attendees want to participate in the optional hands-on portions of the tutorial, there will be virtual machine images available online and at the tutorial itself. We’ll be using VirtualBox and Vagrant to deploy Ganeti on two to three Ubuntu nodes.
Hands on Virtualization with Ganeti (part 1) - LinuxCon 2012 - Lance Albertson
Ganeti is a cluster virtualization management software tool built on top of existing virtualization technologies such as Xen or KVM and other Open Source software. This hands-on tutorial will give an overview of Ganeti, how to install it, how to get started deploying VMs, and an administrative guide to Ganeti. The tutorial will also cover installing and using Ganeti Web Manager as a web front-end.
Ganeti Web Manager: Cluster Management Made Simple - OSCON Byrum
Looking for an easy, scalable way to manage your Ganeti-based clusters? Ganeti Web Manager provides admins an easy-to-deploy, Django-based GUI that effectively manages private clusters and works equally well for providing customers access. With a caching system designed to scale to thousands of virtual machines without decreasing performance, Ganeti Web Manager makes cluster management truly simple.
Introduction to Docker (and a bit more) at LSPE meetup Sunnyvale - Jérôme Petazzoni
What's Docker, why does it matter, how does it use Linux Containers, why should you use it, and how? You'll find answers to those questions (and a bit more) in this presentation, given February 20th 2014 at the Large Scale Production Engineering Meet-Up at Yahoo, in Sunnyvale.
Docker and Containers for Development and Deployment — SCALE12X - Jérôme Petazzoni
Docker is an Open Source engine to build, run, and manage containers. We'll explain what Linux Containers are, what powers them (under the hood), and what extra value Docker brings to the table. Then we'll see what the typical Docker workflow looks like from a developer point of view. We'll also give an Ops perspective, including deployment options. If you already saw a "Docker 101", consider this presentation as the February 2014 update! :-)
KVM (Kernel-based Virtual Machine) is a full virtualization solution built into the Linux kernel. OpenStack Foundation user surveys consistently indicate that KVM is the most commonly used hypervisor for OpenStack deployments, managed using the Libvirt driver for OpenStack Compute (Nova). Despite this sustained popularity, development of the driver, and indeed the underlying hypervisor itself, continues at a frantic pace.
This presentation will help you make sense of it all, starting with an overview of the way Nova, Libvirt, and KVM interact, before analysing progress made in Kilo on utilizing key Libvirt/KVM features in Nova, including:
Instance vCPU pinning
Huge page backed instances
Enhanced NUMA topology awareness
...and more! The session will close with a discussion of how, in addition to exposing existing Libvirt/KVM features, emerging OpenStack use cases, such as Network Function Virtualization (NFV) and High Performance Computing (HPC), are driving open innovation in the Libvirt, QEMU, and KVM projects themselves.
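Features like those listed above ultimately surface as elements in the libvirt domain XML that Nova generates for each guest. As a rough illustration (not Nova's actual code; the element names follow the libvirt domain format, while the vCPU pinning and NUMA cell values are invented examples), the fragments can be sketched with Python's standard library:

```python
import xml.etree.ElementTree as ET

def domain_fragments(pinning, numa_cells):
    """Build libvirt domain XML fragments for vCPU pinning, huge-page
    backing, and guest NUMA topology. `pinning` maps guest vCPU number
    to a host CPU set string (example values, not a recommendation)."""
    # vCPU pinning: <cputune><vcpupin vcpu="0" cpuset="2"/>...</cputune>
    cputune = ET.Element("cputune")
    for vcpu, cpuset in sorted(pinning.items()):
        ET.SubElement(cputune, "vcpupin", vcpu=str(vcpu), cpuset=cpuset)

    # Huge-page backing: <memoryBacking><hugepages/></memoryBacking>
    backing = ET.Element("memoryBacking")
    ET.SubElement(backing, "hugepages")

    # Guest NUMA topology: <cpu><numa><cell .../></numa></cpu>
    cpu = ET.Element("cpu")
    numa = ET.SubElement(cpu, "numa")
    for cell in numa_cells:
        ET.SubElement(numa, "cell", id=str(cell["id"]),
                      cpus=cell["cpus"], memory=str(cell["memory_kib"]))

    return [ET.tostring(e, encoding="unicode") for e in (cputune, backing, cpu)]

fragments = domain_fragments(
    pinning={0: "2", 1: "3"},
    numa_cells=[{"id": 0, "cpus": "0-1", "memory_kib": 1048576}],
)
for frag in fragments:
    print(frag)
```

In a real deployment these knobs are requested through Nova flavor extra specs rather than written by hand; the sketch only shows the XML shapes the abstract refers to.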
OpenNebulaConf2015 2.02 Backing up your VM’s with Bacula - Alberto García - OpenNebula Project
How to use Bacula and live snapshot capabilities on OpenNebula to back up your virtual machines and store the backups.
Author Biography
Automate all the things! I love using any tool to make things work automagically.
MongoDB World 2019: Terraform New Worlds on MongoDB Atlas - MongoDB
MongoDB Atlas, MongoDB's database-as-a-service platform, has made it faster and easier than ever to use MongoDB, and as teams find their Atlas "flow" they smartly want to automate it to increase developer velocity. Many are creating this kind of automation with HashiCorp's Terraform, so let's bring these two great platforms together! We'll look at the resources provided by the Atlas API, and then I'll show how to automate a flow securely with a Terraform Provider for Atlas. We will end by covering how MongoDB is making this experience even better going forward.
This is a presentation for TechNet 2015 in Korea.
I changed the format to pptx.
The table of contents is as follows:
Building the OpenStack infrastructure (4-node configuration) [30 min]
Creating VMs on OpenStack [20 min]
Docker setup basics [30 min]
Connecting Docker to OpenStack [30 min]
Building a web service with Docker [15 min]
Building a web service with Docker on OpenStack [15 min]
Implementing Jenkins with Docker [30 min]
Deploying MongoDB sharded clusters easily with Terraform and Ansible - All Things Open
Presented by: Ivan Groenewold
Presented at All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: Installing big clusters is always a challenge, and can be a very time-consuming task. At a high level, we need to provision the hardware, install the software, configure monitoring, and set up a backup process.
In this talk we will see how to develop a complete pipeline to deploy MongoDB sharded clusters at the push of a button, one that accomplishes all of these tasks for you.
By combining Terraform for the hardware provisioning, and Ansible for the software installation, we can completely automate the process, saving time and providing a standardized reusable solution.
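The hand-off between the two tools can be as small as a script that turns `terraform output -json` into an Ansible inventory. Here is a minimal sketch of that idea; the output names (`mongos`, `shard0`) and addresses are invented for illustration and are not taken from the talk:

```python
import json

def inventory_from_tf(tf_output_json):
    """Render `terraform output -json` as an INI-style Ansible inventory.
    Assumes each Terraform output is a list of host addresses and that
    its name doubles as the inventory group (an assumed convention)."""
    outputs = json.loads(tf_output_json)
    lines = []
    for group, output in sorted(outputs.items()):
        lines.append(f"[{group}]")        # inventory group header
        lines.extend(output["value"])     # one host address per line
        lines.append("")
    return "\n".join(lines)

# Hypothetical `terraform output -json` result for a small sharded cluster.
tf_json = json.dumps({
    "mongos": {"value": ["10.0.0.10"]},
    "shard0": {"value": ["10.0.1.10", "10.0.1.11", "10.0.1.12"]},
})
print(inventory_from_tf(tf_json))
```

In practice a dynamic inventory plugin or the Terraform inventory support in Ansible can replace this glue, but the sketch shows how little is needed to chain the provisioning and installation stages.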
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula - OpenNebula Project
In this 90-minute hands-on workshop, some of the key contributors to OpenNebula will walk attendees through the configuration and integration aspects of the storage subsystem in OpenNebula. The session will also include lightning talks by community members describing aspects related to Storage with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
Talk held by Javier Fontan at the CentOS Dojo in Paris, August 25th (http://wiki.centos.org/Events/Dojo/Paris2014)
In this talk we discuss OpenNebula from the perspective of CentOS, explaining tips and considerations for power users.
John and I presented the status of Replication in OpenStack Cinder at the Tokyo Summit. This presentation is current for the Liberty release and has some previews of what will be in Mitaka.
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu... - Jérôme Petazzoni
Linux containers are different from Solaris Zones or BSD Jails: they use discrete kernel features like cgroups, namespaces, SELinux, and more. We will describe those mechanisms in depth, as well as demo how to put them together to produce a container. We will also highlight how different container runtimes compare to each other.
This talk was delivered at DockerCon Europe 2015 in Barcelona.
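One quick way to see the namespace side of this without any container runtime is to read a process's namespace handles from `/proc` on Linux. A small illustrative sketch (it simply returns an empty dict on systems without `/proc` namespace links):

```python
import os

def list_namespaces(pid="self"):
    """Return {namespace_name: handle} for a process, read from
    /proc/<pid>/ns on Linux. Two processes in the same namespace see
    the same handle; a containerized process sees different ones.
    Returns an empty dict where /proc namespaces are unavailable."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

print(list_namespaces())
```

Comparing the output for your shell and for a process inside a container makes the "containers are just namespaces plus cgroups" point concrete.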
"Lightweight Virtualization with Linux Containers and Docker". Jerome Petazzo... - Yandex
"Lightweight virtualization", also called "OS-level virtualization", is not new. On Linux it evolved from VServer to OpenVZ, and, more recently, to Linux Containers (LXC). It is not Linux-specific; on FreeBSD it's called "Jails", while on Solaris it’s "Zones". Some of those have been available for a decade and are widely used to provide VPS (Virtual Private Servers), cheaper alternatives to virtual machines or physical servers. But containers have other purposes and are increasingly popular as the core components of public and private Platform-as-a-Service (PAAS), among others.
Just like a virtual machine, a Linux Container can run (almost) anywhere. But containers have many advantages over VMs: they are lightweight and easier to manage. After operating a large-scale PAAS for a few years, dotCloud realized that with those advantages, containers could become the perfect format for software delivery, since that is how dotCloud delivers from their build system to their hosts. To make it happen everywhere, dotCloud open-sourced Docker, the next generation of the containers engine powering its PAAS. Docker has been extremely successful so far, being adopted by many projects in various fields: PAAS, of course, but also continuous integration, testing, and more.
Bare Metal to OpenStack with Razor and Chef - Matt Ray
Slides from the OpenStack Spring 2013 Summit workshop presented by Egle Sigler (@eglute) and Matt Ray (@mattray) from Rackspace and Opscode respectively. Please refer to http://anystacker.com/ for additional content.
Kernel Recipes 2013 - Kernel for your device - Anne Nicolas
Any industrial project based on Linux involves long-term management of a Linux kernel, and therefore a number of questions about the choices to be made. BSP, Linux distribution, kernel.org? Which version?
These questions will be reviewed, along with best practices to facilitate this maintenance.
This presentation gives an overview of our Ganeti deployment.
There is a related YouTube playlist (in Croatian) with presentations: https://www.youtube.com/playlist?list=PLDMnMa3XBHD_K6Rl2FBe2CC-MS6mdTOrJ
Let's trace Linux Kernel with KGDB @ COSCUP 2021 - Jian-Hong Pan
https://coscup.org/2021/en/session/39M73K
https://www.youtube.com/watch?v=L_Gyvdl_d_k
Engineers have plenty of debug tools for user-space program development, code tracing, debugging, and analysis. Apart from “printk”, do we have any other debug tools for Linux kernel development? “KGDB”, mentioned in the Linux kernel documentation, provides another possibility.
This talk will share how to experiment with KGDB in a virtual machine, and use GDB + OpenOCD + JTAG + Raspberry Pi in a real environment as the demo.
When developing user-space software, engineers have convenient debug tools for tracing, analyzing, and debugging. But for Linux kernel development, what tools are available besides printk? The Linux kernel documentation describes KGDB, which offers another possibility for kernel debugging.
This session will go from setting up the simplest virtual machine environment to a live demonstration using GDB + OpenOCD + JTAG + Raspberry Pi.
Kernel Recipes 2015 - Kernel dump analysis - Anne Nicolas
Kernel dump analysis
Cloud this, cloud that… It’s making everything easier, especially for web-hosted services. But what about the servers that are not supposed to crash? For applications that assume the OS won’t fault or go down, what can you write in your post-mortem once the server has frozen and been restarted? How do you track down the bug that led to service unavailability?
In this talk, we’ll see how to set up kdump and how to panic a server to generate a coredump. Once you have the vmcore file, we'll see how to track down the issue with the “crash” tool and find out why your OS went down. Last but not least: with “crash” you can also modify your live kernel, the same way you would with gdb.
Adrien Mahieux – System administrator obsessed with performance and uptime, tracking down microseconds from hardware to software since 2011. The application must be seen as a whole to provide the requested service efficiently. This includes searching for bottlenecks and tradeoffs, design issues, or hardware optimizations.
While probably the most prominent, Docker is not the only tool for building and managing containers. Originally meant to be a "chroot on steroids" to help debug systemd, systemd-nspawn provides a fairly uncomplicated approach to working with containers. Being part of systemd, it is available on most recent distributions out-of-the-box and requires no additional dependencies.
This deck will introduce a few concepts involved in containers and will guide you through the steps of building a container from scratch. The payload will be a simple service, which will be automatically activated by systemd when the first request arrives.
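The "activated when the first request arrives" behaviour rests on systemd's socket-activation handshake: systemd binds the listening socket itself and passes it to the service starting at file descriptor 3, advertising it via the LISTEN_FDS and LISTEN_PID environment variables. Below is a minimal sketch of the service side, a simplified stand-in for sd_listen_fds(3); real services would often use the python-systemd helpers instead, and the fallback bind is only there so the sketch runs outside systemd:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd systemd hands to an activated service

def get_listen_socket(fallback_port=0):
    """Return a listening socket: the one inherited from systemd when the
    socket-activation environment variables are present, otherwise a
    freshly bound one (port 0 picks any free port)."""
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
        # Adopt the socket systemd already opened and bound for us.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Not socket-activated: bind our own socket as a fallback.
    return socket.create_server(("127.0.0.1", fallback_port))

srv = get_listen_socket()
print("listening on", srv.getsockname())
srv.close()
```

Paired with a `.socket` unit inside the container, this is all a payload service needs for systemd to start it lazily on the first connection.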
Dockerizing the Hard Services: Neutron and Nova - clayton_oneill
A talk about the benefits and pitfalls involved in successfully running complex services like Neutron and Nova inside Docker containers.
Topics include:
* What magic incantations are needed to run these services at all?
* How to prevent HA router failover on service restarts.
* How to prevent network namespaces from breaking everything.
* Bonus: How network namespace fixes also helped fix Cinder NFS backend
Walter Heck, founder of OlinData, presented a step-by-step guide on how to set up a proper puppet repository, complete with the brand new PuppetDB, exported resources and usage of open source modules.
A talk about methods and tools to automate the deployment of Plone sites. With a few steps, an environment is prepared for a new Plone site on a test, staging, or production layer. These steps take a couple of minutes; doing this manually took around one hour.
We use Puppet to prepare our hosts/clusters to get an environment to deploy to. Fabric is used to deploy Plone on this environment and to extend the webserver configuration under the hood. These complementary techniques provide a complete solution to get a working Plone site, including rollbacks.
Presentation by: Pawel Lewicki and Kim Chee Leong
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent use of PHP frameworks, moving towards more flexible and future-proof PHP development.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by Rik Marselis and me at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
1. Ganeti Setup & Walk-thru Guide (part 2)
Lance Albertson
@ramereth
Associate Director
OSU Open Source Lab
2. Session Overview (part 2)
● Hands on Demo
● Installation and Initialization
● Cluster Management
● Adding instances (VMs)
● Controlling instances
● Auto Allocation
● Dealing with node failures
3. Ganeti Setup & Walk-thru Guide
● 3-node cluster using Vagrant &
VirtualBox
● Useful for testing
● Try out Ganeti on your laptop!
● Environment configured via puppet
● Installation requirements:
○ git, VirtualBox, & Vagrant
4. Repo & Vagrant Setup
Make sure you have hardware virtualization enabled in
your BIOS prior to running VirtualBox. You will get an error
from VirtualBox while starting the VM if you don't have it
enabled.
gem install vagrant
git clone git://github.com/ramereth/vagrant-ganeti.git
cd vagrant-ganeti
git submodule update --init
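The BIOS warning above can be checked from a Linux host before launching VirtualBox. This is a generic sketch, not part of the original setup: it counts the CPU flags that indicate hardware virtualization support.

```shell
# Count CPU flags for hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V). A count of 0 means it is
# unsupported or disabled in the BIOS.
grep -Ec 'vmx|svm' /proc/cpuinfo || true
```

If this prints 0, enable VT-x/AMD-V in the BIOS before running `vagrant up`.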
5. Starting up & accessing the nodes
The Vagrantfile is set up so that you can deploy one, two, or three nodes
depending on your use case. Node1 will have Ganeti already initialized, while
the other two will only have Ganeti installed and primed.
NOTE: Root password is 'vagrant' on all nodes.
# Starting a single node (node1)
vagrant up node1
vagrant ssh node1
# Starting node2
vagrant up node2
vagrant ssh node1
gnt-node add -s 33.33.34.12 node2
# Starting node3
vagrant up node3
vagrant ssh node1
gnt-node add -s 33.33.34.13 node3
6. What Vagrant will do for you
1. Install all dependencies required for Ganeti
2. Setup the machine to function as a Ganeti
node
3. Install Ganeti, Ganeti Htools, and Ganeti
Instance Image
4. Setup and initialize Ganeti (node1 only)
7. Installing Ganeti
We’ve already installed Ganeti for you on the VMs, but
here are the steps that we did for documentation
purposes.
tar -zxvf ganeti-2.*tar.gz
cd ganeti-2.*
./configure --localstatedir=/var --sysconfdir=/etc &&
make && make install &&
cp doc/examples/ganeti.initd /etc/init.d/ganeti
chmod +x /etc/init.d/ganeti
update-rc.d ganeti defaults 20 80
8. Initialize Ganeti
Ganeti will be already initialized on node1 for you, but here are the steps
that we did. Be aware that Ganeti is very picky about extra spaces in the “-H
kvm:” line.
gnt-cluster init \
  --vg-name=ganeti -s 33.33.34.11 \
  --master-netdev=br0 \
  -I hail \
  -H kvm:kernel_path=/boot/vmlinuz-kvmU,initrd_path=/boot/initrd-kvmU,root_path=/dev/sda2,nic_type=e1000,disk_type=scsi,vnc_bind_address=0.0.0.0,serial_console=true \
  -N link=br0 --enabled-hypervisors=kvm \
  ganeti.example.org
9. Testing the cluster: verify
root@node1:~# gnt-cluster verify
Submitted jobs 4, 5
Waiting for job 4 ...
Thu Jun 7 06:03:54 2012 * Verifying cluster config
Thu Jun 7 06:03:54 2012 * Verifying cluster certificate files
Thu Jun 7 06:03:54 2012 * Verifying hypervisor parameters
Thu Jun 7 06:03:54 2012 * Verifying all nodes belong to an existing group
Waiting for job 5 ...
Thu Jun 7 06:03:54 2012 * Verifying group 'default'
Thu Jun 7 06:03:54 2012 * Gathering data (1 nodes)
Thu Jun 7 06:03:54 2012 * Gathering disk information (1 nodes)
Thu Jun 7 06:03:54 2012 * Verifying configuration file consistency
Thu Jun 7 06:03:54 2012 * Verifying node status
Thu Jun 7 06:03:54 2012 * Verifying instance status
Thu Jun 7 06:03:54 2012 * Verifying orphan volumes
Thu Jun 7 06:03:54 2012 * Verifying N+1 Memory redundancy
Thu Jun 7 06:03:54 2012 * Other Notes
Thu Jun 7 06:03:54 2012 * Hooks Results
11. Adding an Instance
root@node1:~# gnt-os list
Name
image+cirros
image+default
root@node1:~# gnt-instance add -n node1 -o image+cirros -t plain -s 1G --no-start instance1
Thu Jun 7 06:05:58 2012 * disk 0, vg ganeti, name 780af428-3942-4fa9-8307-1323de416519.
disk0
Thu Jun 7 06:05:58 2012 * creating instance disks...
Thu Jun 7 06:05:58 2012 adding instance instance1.example.org to cluster config
Thu Jun 7 06:05:58 2012 - INFO: Waiting for instance instance1.example.org to sync
disks.
Thu Jun 7 06:05:58 2012 - INFO: Instance instance1.example.org's disks are in sync.
Thu Jun 7 06:05:58 2012 * running the instance OS create scripts...
12. Listing Instance Information
root@node1:~# gnt-instance list
Instance Hypervisor OS Primary_node Status Memory
instance1.example.org kvm image+cirros node1.example.org ADMIN_down -
13. Listing Instance Information
root@node1:~# gnt-instance info instance1
Instance name: instance1.example.org
UUID: bb87da5b-05f9-4dd6-9bc9-48592c1e091f
Serial number: 1
Creation time: 2012-06-07 06:05:58
Modification time: 2012-06-07 06:05:58
State: configured to be down, actual state is down
Nodes:
- primary: node1.example.org
- secondaries:
Operating system: image+cirros
Allocated network port: 11000
Hypervisor: kvm
- console connection: vnc to node1.example.org:11000 (display 5100)
…
Hardware:
- VCPUs: 1
- memory: 128MiB
- NICs:
- nic/0: MAC: aa:00:00:dd:ac:db, IP: None, mode: bridged, link: br0
Disk template: plain
Disks:
- disk/0: lvm, size 1.0G
access mode: rw
logical_id: ganeti/780af428-3942-4fa9-8307-1323de416519.disk0
on primary: /dev/ganeti/780af428-3942-4fa9-8307-1323de416519.disk0 (252:1)
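The console connection line above reports VNC on port 11000 and display 5100. These are consistent: by the standard VNC convention, the display number is simply the TCP port minus the base port 5900.

```shell
# VNC display number = TCP port - 5900 (standard VNC convention).
# Port 11000 from the instance info above maps to display 5100.
echo $((11000 - 5900))
```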
14. Controlling Instances
root@node1:~# gnt-instance start instance1
Waiting for job 10 for instance1.example.org ...
root@node1:~# gnt-instance console instance1
login as 'vagrant' user. default password: 'vagrant'. use 'sudo' for root.
cirros login:
# Press ctrl+] to escape the console.
root@node1:~# gnt-instance shutdown instance1
Waiting for job 11 for instance1.example.org ...
15. Changing the Disk Type
root@node1:~# gnt-instance shutdown instance1
Waiting for job 11 for instance1.example.org ...
root@node1:~# gnt-instance modify -t drbd -n node2 instance1
Thu Jun 7 06:09:07 2012 Converting template to drbd
Thu Jun 7 06:09:08 2012 Creating additional volumes...
Thu Jun 7 06:09:08 2012 Renaming original volumes...
Thu Jun 7 06:09:08 2012 Initializing DRBD devices...
Thu Jun 7 06:09:09 2012 - INFO: Waiting for instance instance1.example.org to sync
disks.
Thu Jun 7 06:09:11 2012 - INFO: - device disk/0: 5.10% done, 20s remaining (estimated)
Thu Jun 7 06:09:31 2012 - INFO: - device disk/0: 86.00% done, 3s remaining (estimated)
Thu Jun 7 06:09:34 2012 - INFO: - device disk/0: 98.10% done, 0s remaining (estimated)
Thu Jun 7 06:09:34 2012 - INFO: Instance instance1.example.org's disks are in sync.
Modified instance instance1
- disk_template -> drbd
Please don't forget that most parameters take effect only at the next start of the
instance.
16. Instance Failover
root@node1:~# gnt-instance failover -f instance1
Thu Jun 7 06:10:09 2012 - INFO: Not checking memory on the secondary node as instance
will not be started
Thu Jun 7 06:10:09 2012 Failover instance instance1.example.org
Thu Jun 7 06:10:09 2012 * not checking disk consistency as instance is not running
Thu Jun 7 06:10:09 2012 * shutting down instance on source node
Thu Jun 7 06:10:09 2012 * deactivating the instance's disks on source node
17. Instance Migration
root@node1:~# gnt-instance start instance1
Waiting for job 14 for instance1.example.org …
root@node1:~# gnt-instance migrate -f instance1
Thu Jun 7 06:10:38 2012 Migrating instance instance1.example.org
Thu Jun 7 06:10:38 2012 * checking disk consistency between source and target
Thu Jun 7 06:10:38 2012 * switching node node1.example.org to secondary mode
Thu Jun 7 06:10:38 2012 * changing into standalone mode
Thu Jun 7 06:10:38 2012 * changing disks into dual-master mode
Thu Jun 7 06:10:39 2012 * wait until resync is done
Thu Jun 7 06:10:39 2012 * preparing node1.example.org to accept the instance
Thu Jun 7 06:10:39 2012 * migrating instance to node1.example.org
Thu Jun 7 06:10:44 2012 * switching node node2.example.org to secondary mode
Thu Jun 7 06:10:44 2012 * wait until resync is done
Thu Jun 7 06:10:44 2012 * changing into standalone mode
Thu Jun 7 06:10:45 2012 * changing disks into single-master mode
Thu Jun 7 06:10:46 2012 * wait until resync is done
Thu Jun 7 06:10:46 2012 * done
21. Using Htools
root@node1:~# gnt-instance add -I hail -o image+cirros -t drbd -s 1G --no-start instance2
Thu Jun 7 06:14:05 2012 - INFO: Selected nodes for instance instance2.example.org via
iallocator hail: node2.example.org, node1.example.org
Thu Jun 7 06:14:06 2012 * creating instance disks...
Thu Jun 7 06:14:08 2012 adding instance instance2.example.org to cluster config
Thu Jun 7 06:14:08 2012 - INFO: Waiting for instance instance2.example.org to sync
disks.
Thu Jun 7 06:14:09 2012 - INFO: - device disk/0: 6.30% done, 16s remaining (estimated)
Thu Jun 7 06:14:26 2012 - INFO: - device disk/0: 73.20% done, 6s remaining (estimated)
Thu Jun 7 06:14:33 2012 - INFO: - device disk/0: 100.00% done, 0s remaining (estimated)
Thu Jun 7 06:14:33 2012 - INFO: Instance instance2.example.org's disks are in sync.
Thu Jun 7 06:14:33 2012 * running the instance OS create scripts...
root@node1:~# gnt-instance list
Instance Hypervisor OS Primary_node Status Memory
instance1.example.org kvm image+cirros node1.example.org running 128M
instance2.example.org kvm image+cirros node2.example.org ADMIN_down -
22. Using Htools: hbal
root@node1:~# hbal -L
Loaded 2 nodes, 2 instances
Group size 2 nodes, 2 instances
Selected node group: default
Initial check done: 0 bad nodes, 0 bad instances.
Initial score: 3.89180108
Trying to minimize the CV...
1. instance1 node1:node2 => node2:node1 0.04771505 a=f
Cluster score improved from 3.89180108 to 0.04771505
Solution length=1
root@node1:~# hbal -L -X
Loaded 2 nodes, 2 instances
Group size 2 nodes, 2 instances
Selected node group: default
Initial check done: 0 bad nodes, 0 bad instances.
Initial score: 3.89314516
Trying to minimize the CV...
1. instance1 node1:node2 => node2:node1 0.04905914 a=f
Cluster score improved from 3.89314516 to 0.04905914
Solution length=1
Executing jobset for instances instance1.example.org
Got job IDs 18
23. Using Htools: hspace
root@node1:~# hspace --memory 128 --disk 1024 -L
The cluster has 2 nodes and the following resources:
MEM 1488, DSK 53304, CPU 4, VCPU 256.
There are 2 initial instances on the cluster.
Normal (fixed-size) instance spec is:
MEM 128, DSK 1024, CPU 1, using disk template 'drbd'.
Normal (fixed-size) allocation results:
- 2 instances allocated
- most likely failure reason: FailMem
- initial cluster score: 0.04233871
- final cluster score: 0.04233871
- memory usage efficiency: 34.41%
- disk usage efficiency: 18.25%
- vcpu usage efficiency: 1.56%
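The efficiency figures above can be reproduced by hand. This sketch assumes memory efficiency counts all four instances (the two initial ones plus the two hspace allocated) at 128 MiB each against the 1488 MiB cluster total, and VCPU efficiency counts their 4 VCPUs against the 256 virtual CPUs available:

```shell
# memory usage efficiency: 4 instances x 128 MiB / 1488 MiB total
awk 'BEGIN { printf "%.2f%%\n", 4 * 128 * 100 / 1488 }'   # 34.41%
# vcpu usage efficiency: 4 instances x 1 VCPU / 256 VCPUs
awk 'BEGIN { printf "%.2f%%\n", 4 * 1 * 100 / 256 }'      # 1.56%
```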
24. Recovering from a Node Failure
# Setup node3
gnt-node add -s 33.33.34.13 node3
# You should see something like the following now:
root@node1:~# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
node1.example.org 26.0G 23.3G 744M 213M 585M 1 1
node2.example.org 26.0G 23.3G 744M 247M 542M 1 1
node3.example.org 26.0G 25.5G 744M 114M 650M 0 0
25. Simulating a node failure
# Log out of node1 / Simulate node2 failure
vagrant halt -f node2
# Log back into node1
gnt-cluster verify
gnt-node modify -O yes -f node2
gnt-cluster verify
gnt-node failover --ignore-consistency node2
gnt-node evacuate -I hail -s node2
gnt-cluster verify
26. Re-adding node2
# Log out of node1 / rebuild node2
vagrant destroy -f node2
vagrant up node2
# Log back into node1
# Add node2 back into cluster
gnt-node add --readd node2
gnt-cluster verify