Continuously-Integrated Puppet in a Dynamic Environment - Puppet
This talk will show how we deploy Puppet without a Puppetmaster on an autoscaling Amazon Web Services infrastructure. Key points of interest:
- Masterless Puppet
- Use of Jenkins for Puppet manifest testing and environment promotion (test -> staging -> production)
- Puppet integration with Amazon CloudFormation
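The gated promotion flow the abstract describes (a change must pass tests and move through test, staging, and production in order) can be sketched as a small Python model. The `Pipeline` class and its methods are purely illustrative, not the speaker's actual Jenkins setup:

```python
# Illustrative model of gated environment promotion for Puppet code:
# a build (identified by a git SHA) may only enter an environment
# after passing tests and after reaching the previous environment.

STAGES = ["test", "staging", "production"]

class Pipeline:
    def __init__(self):
        # Maps each environment name to the SHA currently deployed there.
        self.deployed = {stage: None for stage in STAGES}

    def promote(self, sha, target, tests_passed):
        """Promote `sha` into `target` if the gate conditions hold."""
        if not tests_passed:
            raise ValueError("cannot promote %s: tests failed" % sha)
        idx = STAGES.index(target)
        if idx > 0 and self.deployed[STAGES[idx - 1]] != sha:
            raise ValueError("%s has not reached %s yet" % (sha, STAGES[idx - 1]))
        self.deployed[target] = sha

pipe = Pipeline()
pipe.promote("abc123", "test", tests_passed=True)
pipe.promote("abc123", "staging", tests_passed=True)
pipe.promote("abc123", "production", tests_passed=True)
print(pipe.deployed["production"])  # abc123
```

In a real masterless setup, the "deploy" step for each environment would typically be a packaged module tree that nodes fetch and run with `puppet apply`.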
Sam Bashton
Director, Bashton Ltd
After working for a number of Internet Service Providers, Sam founded Bashton Ltd in 2004. Focussing exclusively on Linux and Open Source software, Sam and his team provide consultancy, support and 24/7 infrastructure management for a number of high-traffic websites. A serial early adopter, Sam has travelled the world providing training and consultancy and generally spreading the Open Source message. Sam lives in Manchester, UK.
Running at Scale: Practical Performance Tuning with Puppet - PuppetConf 2013 - Puppet
"Running at Scale: Practical Performance Tuning with Puppet" by Sam Kottler, Engineer, Red Hat.
Presentation Overview: This session covers production issues I've seen running Puppet in large environments, from managing a single master with hundreds of hosts to real-life patterns for building high-availability clusters that scale to tens of thousands of agents. It also covers how to deploy networked filesystems that perform well under high load and how to stream files to many hosts simultaneously.
Speaker Bio: Sam Kottler is a software engineer in the Virtualization R&D group at Red Hat. He has helped build infrastructure for leading startups, including Digg.com, Acquia, and Venmo, and is a contributor to Puppet, the Fedora Project, Drupal, and Rubygems.org. Sam speaks around the world on the topics of internet security, systems automation, and software architecture.
An introductory talk on Foreman, with an overview of how Foreman's plugin ecosystem can help you manage your data center. We'll talk about Discovery, Katello, Docker, and additional configuration management platforms beyond Puppet.
This session gives an overview of highly available components that can be deployed with Puppet Enterprise. It focuses on some of the current beta support in PuppetDB, as well as tips and tricks from the professional services department. The session covers field solutions (both supported and unsupported) for designing architectures that align with different levels of high availability, so that agent nodes can keep running Puppet during an outage of your primary Puppet infrastructure.
The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information allows you to see the real impact of a given failure. I would also like to present a use case for a “closed-loop” setup where new VMs are automatically added to the monitoring without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
OpenNebulaConf 2016 - LAB ONE - Vagrant running on OpenNebula? by Florian Heigl - OpenNebula Project
Do you remember Vagrant? It was that last hipster thing before Docker turned into the most recent hipster thing! It's also still really helpful for software evaluations or lab environments. Normally, it works with VirtualBox on your laptop, but this approach can be too limiting. Even running just 10 VMs becomes a stretch on a laptop. It burns through your battery, SSD lifetime, disk space and threatens how many dozen browser tabs you can open... Enter the Vagrant OpenNebula providers! You can actually control Vagrant on your workstation but have the VMs running on your cloud. There are multiple ways to do that, and also limitations. In the workshop, we'll look at what is possible and how you can best benefit from - oh right! - your cloud!
This presentation was given at the Linux Open Administration Days in Antwerpen, Belgium. It covers how puppetmanaged.org, a set of common Puppet modules, can be implemented in any existing Puppet setup.
OpenNebulaConf2018 - 5 Things We Wish We Knew Before Deploying OpenNebula in ... - OpenNebula Project
We've been running OpenNebula in production at Nordeus for more than a year now. In the process, we've learnt that it's far from easy, especially when starting from scratch. There are so many questions that need to be answered. And, if you've never run a private cloud, or used any of its technologies before, these answers are hard to find. That's why, in this talk, we're going to help you find them.
We'll share with you how we started, what worked for us, and what you need to consider before going into production. Our goal? To give you a few tips on how to deploy and manage an OpenNebula cluster with more ease!
The tutorial covers the process of installing, configuring and operating private, public and hybrid clouds using OpenNebula. Additionally, the program briefly addresses the integration of OpenNebula with other components in the data center. The target audience is devops engineers and system administrators interested in deploying a private cloud solution, or in the integration of OpenNebula with other platforms.
Making your first contribution to Foreman - Dominic Cleal
Have you fixed a bug in Foreman, but not got the patch accepted? Perhaps you know where a bug is happening, but aren't sure how to fix it. In this session, we'll help you through the process and get your first patch accepted!
How Can OpenNebula Fit Your Needs: A European Project Feedback - NETWAYS
BonFIRE is a European project which aims at providing a “multi-site cloud facility for applications, services and systems research and experimentation”. Grouping different research cloud providers behind a common set of tools, APIs and services, it enables users to run their experiments against a heterogeneous set of infrastructure, hypervisors, networks, etc.
BonFIRE, and thus the (OpenNebula) testbeds, provide a relatively small set of images used to boot VMs. However, the experimental nature of BonFIRE projects results in a high turnover of running VMs. Many VMs are used for a period of between a few hours and a few days, and an experiment startup can trigger the deployment of many VMs at the same time on a small set of OpenNebula workers, which does not correspond to the usual cloud workflow.
Default OpenNebula is not optimized for such a use case (a small number of worker nodes, high VM turnover). However, thanks to its ability to be easily modified at each level of a cloud deployment workflow, OpenNebula has been tuned to fit the BonFIRE deployment process better. This presentation will explain how to change the OpenNebula TM and VMM to improve the parallel deployment of many VMs in a short amount of time, reducing the time needed to deploy an experiment to a minimum without a lot of expensive hardware.
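The tuned TM/VMM scripts themselves aren't included here, but the effect of avoiding repeated image transfers to a worker can be shown with a toy calculation. All numbers and the `deploy_time` function below are hypothetical, chosen only to illustrate why this kind of tuning produces the large speedups the BonFIRE work reports:

```python
# Toy model: deploying N VMs from one base image on a worker node.
# Without caching, the image is transferred once per VM; with a
# worker-side cache, it is transferred once and then cloned locally.

def deploy_time(num_vms, transfer_s, clone_s, cached):
    """Total seconds to deploy `num_vms` VMs from a single base image."""
    if cached:
        return transfer_s + num_vms * clone_s   # one transfer, N cheap clones
    return num_vms * transfer_s                  # N full transfers

naive = deploy_time(20, transfer_s=120, clone_s=10, cached=False)  # 2400 s
tuned = deploy_time(20, transfer_s=120, clone_s=10, cached=True)   # 320 s
print(round(naive / tuned, 1))  # 7.5
```

With these made-up parameters the model yields a 7.5x speedup; the real gain depends on image size, network bandwidth, and how cheap local clones are.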
In this session we discussed Vagrant and how it eases the life of a developer or system administrator in creating and provisioning virtual machines for use on the network.
We also discussed Puppet, provisioning a Vagrant VM using shell scripts and Puppet in standalone mode, and how Puppet helps us avoid writing in-house shell scripts, emphasising the quote "good coders code, great coders reuse".
How can OpenNebula fit your needs - OpenNebulaConf 2013 - Maxence Dunnewind
In the scope of a European project (BonFIRE - www.bonfire-project.eu), I had to tune OpenNebula to fit our requirements, which are unusual in a private cloud environment (small hardware, a small number of base images, but lots of VMs created).
These slides explain how, thanks to the way OpenNebula lets administrators tune it, I updated the transfer manager scripts to improve our deployment speed by a factor of almost 8.
Order from chaos: automating monitoring configuration - Sensu Inc.
In a high-performance computing shop with over 3,000 nodes, Harvard FAS Research Computing can’t afford chaos around our monitoring checks! In this Sensu Summit 2019 talk, you'll hear from Harvard SRE Molly Duggan about how they’re using CI/CD pipelines and the Sensu Go API to ensure that all changes to their monitoring system are validated, reproducible, and version controlled.
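A validation step like the one this talk describes can be sketched in a few lines: reject any monitoring check definition that lacks the fields the fleet depends on. The required fields and the sample check data below are illustrative, not the actual Harvard FAS schema or the Sensu Go API:

```python
import json

# Minimal CI-style validation for monitoring check definitions:
# every check must carry the fields we rely on, with sane values.
REQUIRED_FIELDS = {"command", "interval", "subscriptions"}

def validate_check(name, check):
    """Return a list of problems found in a single check definition."""
    problems = []
    missing = REQUIRED_FIELDS - check.keys()
    if missing:
        problems.append("%s: missing %s" % (name, sorted(missing)))
    if check.get("interval", 0) <= 0:
        problems.append("%s: interval must be positive" % name)
    return problems

checks = json.loads("""
{
  "check-disk": {"command": "check-disk.rb", "interval": 60,
                 "subscriptions": ["linux"]},
  "check-cpu":  {"command": "check-cpu.rb", "interval": 0,
                 "subscriptions": ["linux"]}
}
""")

errors = []
for name, check in checks.items():
    errors.extend(validate_check(name, check))
print(errors)  # ['check-cpu: interval must be positive']
```

In a pipeline, a nonempty `errors` list would fail the build before anything is pushed to the monitoring backend.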
ContainerCon - Test Driven Infrastructure - Yury Tsarev
Great external coverage of this presentation can be found at https://www.cedric-meury.ch/2016/10/test-driven-infrastructure-with-puppet-docker-test-kitchen-and-serverspec-yury-tsarev-gooddata/
In this session, I will discuss the use of Puppeteer for implementing a simple export-to-PDF feature. We will also discuss some of the problems one can face, and how they can be resolved.
TechWiseTV Workshop: Open NX-OS and DevOps with Puppet Labs - Robb Boyd
Two incredible engineers: Shane Corban from Cisco and Carl Caum from Puppet Labs came together to be our guest experts for this workshop. See the demos in the replay at bit.ly/1lJQm3A
Linux host orchestration with Foreman, Puppet and GitLab - Ben Tullis
A brief look at the Foreman host lifecycle management system, beginning with its rapid provisioning features and moving onto its integration with the Puppet configuration management system.
GitLab is introduced to the mix, and an example is given of how it can be integrated with Foreman and Puppet to form an on-premise configuration versioning component. This configuration, which builds upon Puppet's multiple-environments feature, is currently being employed in the task of building a test-driven continuous delivery system for the OpenCorporates project.
As engineers we spend much of our time getting stuff to production and making sure our infrastructure doesn't burn down outright. Does our platform degrade gracefully, and what does a high CPU load really mean? What can we learn from level-1 outages to run our platforms more reliably?
From things like Infrastructure as Code, Service Discovery and Config Management to replicated databases, caching strategies and geo-spatial considerations for the replicas: we have tried, failed and tried again until we got to a solution that works for us.
This allows teams to quickly put infrastructure in place while separating the deployment and release phases of their work, without having to switch over big-bang style.
This talk will guide us through the moving parts of our highly reliable and available Drupal setup. The audience will see an analysis of the good, the bad and the ugly sides of our setup, along with ways to validate their own.
Puppet Camp Silicon Valley 2015: How TubeMogul reached 10,000 Puppet Deployments - Nicolas Brousse
TubeMogul grew from a few servers to over two thousand servers, handling over one trillion HTTP requests a month, each processed in less than 50 ms. To keep up with this fast growth, the SRE team had to implement an efficient continuous delivery infrastructure that allowed for over 10,000 Puppet deployments and 8,500 application deployments in 2014. In this presentation, we will cover the nuts and bolts of the TubeMogul operations engineering team and how they overcome challenges.
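As a sanity check on the scale quoted above, one trillion HTTP requests per month works out to roughly 385,000 requests per second (assuming a 30-day month):

```python
# Back-of-the-envelope request rate implied by the abstract's numbers.
requests_per_month = 1_000_000_000_000       # one trillion
seconds_per_month = 30 * 24 * 3600           # assuming a 30-day month
rps = requests_per_month / seconds_per_month
print(round(rps))  # 385802
```

That average hides peaks; ad-serving traffic is bursty, so the infrastructure has to handle well above the mean rate.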
Instant LAMP Stack with Vagrant and Puppet - Patrick Lee
Do you enjoy installing and configuring Apache, PHP, and MySQL every time you reinstall your OS or switch to a new machine? Neither do I. And we never have to do it again. Vagrant can use the VirtualBox API and configuration defined in Puppet to spin up a development VM in a couple of minutes. And it's really easy to do. I'll start with the simplest possible example and work up to a cluster of VMs. Feel free to bring your laptop and follow along.
Developing and Testing with Enhanced Oscar - Jeff Scelza
Using the Oscar plug-in to do Puppet module development locally:
- Create RSpec tests
- Write the related Puppet code
- Run RSpec to validate the catalog
- Create Hiera data
- Create ServerSpec code to validate the end state
- Run your code against a master and agent running in Vagrant
- Run ServerSpec to validate that the end state of the agent matches what the module set
Configuration Management - Finding the tool to fit your needs - SaltStack
This presentation was originally given by Joseph Hall, SaltStack senior engineer, at the combined Montreal Python and DevOps Montreal meetup on April 14, 2014. Here is the talk abstract: In ye olde days of web, a company might manage a handful of servers, each manually and frequently tuned and re-tuned to the company's needs. Those days are gone. Server farms now dominate, and it is no longer reasonable to manage individual servers by hand. Various configuration management tools have stepped in to help the modern engineer, but which to choose? It is not an easy question, and canned pitches from sales people are unlikely to take into account all of your variables. This talk will attempt to discuss The Big Four objectively, and from what angles they approach the task at hand.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
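The talk doesn't publish a DBOM format, but the idea of a deployment bill of materials can be illustrated as a record of what was deployed, where, and with which artifact digests. All field names, the `artifact_entry` helper, and the sample data below are made up for the example:

```python
import hashlib
import json

def artifact_entry(name, content):
    """Record one deployed artifact by name and content digest."""
    return {"name": name, "sha256": hashlib.sha256(content).hexdigest()}

# Hypothetical DBOM for a single deployment event.
dbom = {
    "service": "payments-api",
    "environment": "production",
    "artifacts": [
        artifact_entry("payments-api.jar", b"fake-jar-bytes"),
        artifact_entry("config.yaml", b"fake-config-bytes"),
    ],
}
print(json.dumps(dbom, indent=2))
```

Capturing digests at deploy time is what lets a later audit tie a production incident back to the exact artifacts that were running.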
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
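DIAR's actual implementation isn't given in the abstract. As a rough illustration of the idea, a byte can be treated as "uninteresting" if removing it leaves the program's observed behaviour unchanged; the `coverage` function below is a stand-in for real instrumentation, and the greedy trimming loop is a simplification:

```python
# Toy sketch of trimming uninteresting seed bytes: drop any byte whose
# removal does not change the (mock) coverage the seed achieves.

def coverage(seed):
    """Stand-in for real coverage: pretend only 'A' and 'B' bytes
    steer the program down distinct paths."""
    return frozenset(b for b in seed if b in (ord("A"), ord("B")))

def trim_seed(seed):
    """Greedily remove bytes that do not affect coverage."""
    baseline = coverage(seed)
    out = bytearray()
    for i in range(len(seed)):
        # Candidate seed: everything kept so far plus the remainder,
        # with byte i deleted.
        candidate = bytes(out) + seed[i + 1:]
        if coverage(candidate) != baseline:
            out.append(seed[i])   # byte matters: keep it
    return bytes(out)

print(trim_seed(b"xxAxyyBzz"))  # b'AB'
```

A real fuzzer would use edge-coverage bitmaps rather than a toy predicate, but the payoff is the same: mutations are no longer wasted on bytes that cannot change program behaviour.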
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Pushing the limits of ePRTC: 100ns holdover for 100 days - Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many features provide convenience and capability while sacrificing security. This best-practices guide outlines steps users can take to better protect personal devices and information.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Elizabeth Buie - Older adults: Are we really designing for our future selves?
De-centralise and conquer: Masterless Puppet in a dynamic environment
1. De-centralise and
Conquer
Masterless Puppet in a Dynamic
Environment
Sam Bashton, Bashton Ltd
2. Who am I?
● Linux guy since Slackware, floppy disks and
root + boot
● Using Puppet since 2007
● Run a company in Manchester, North West
England
3. Our Environments
● We provide outsourced ops for other
companies
● High traffic environments
● Most are now on Amazon Web Services
● #1 reason for moving to AWS? The ability to
scale on demand
5. How we use Puppet
● No Puppetmaster
● Puppet manifests and modules distributed to
all machines
6. What's wrong with standard Puppet?
● Pets vs Cattle
● Standard Puppet configuration assumes that
servers are pets, not cattle
7. What's wrong with standard Puppet?
● Standard Puppetmaster/Puppet Client
configuration makes assumptions about
environments
○ Machine creation is a manual operation
■ Sign certs
○ No in-built mechanism to automatically clean up old
machines
8. What's wrong with standard Puppet?
● Puppetmaster is a single point of failure
● When servers are pets, this isn't too much of
a problem
○ Existing servers continue to work, but receive
no updates
9. What's wrong with standard Puppet?
● When servers are auto-scaling cattle, new
instances can appear at any time
● New instances require config to become
operational
● Configuration requires Puppet
10. What's wrong with standard Puppet?
● Our environments span multiple data centres
('availability zones')
● Imagine a data centre fails
● New instances get auto-provisioned to
replace missing capacity
● But these instances need the Puppetmaster
● ..which was in the failed AZ
11. What's wrong with standard Puppet?
● Resource contention
● Even when Puppetmaster isn't in the failed
zone, multiple concurrent connections slow
things down
12. What's wrong with standard Puppet?
● None of these problems are insurmountable
● We could have configured a Puppetmaster or a
cluster of Puppetmasters for our needs
○ With autosign
○ and some sort of certificate distribution mechanism
○ uuid certificate names
○ And a mechanism for cleaning up old machines
13. Meanwhile, on the other side of the
room...
● Another team was evaluating Pulp
● Provides yum repository management
● To be used for managing security updates
and deploying application code
http://pulpproject.org/
14. Pulp
● Allows cloning of repos, copying packages
between repos
● Allows us to push packages to clients
○ Uses qpid message queue
● Has 'content distribution servers' for easy
replication + clustering
15. How we deploy code
● Everything managed via the Jenkins
continuous integration server
● Jenkins uses Pulp to install code on remote
machines
16. How we deploy code
● Jenkins fetches code from source control
(git)
● An RPM is built
● Tests are run
● The RPM is added to the relevant Pulp
repository
● RPM installed on the target machine(s)
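The build step driving the pipeline above might look roughly like the following. This is a sketch only: the repo ID, spec file name, and the pulp-admin invocations are assumptions (shown in Pulp 2 CLI style), not the actual job configuration.

```shell
#!/bin/bash -e
# Sketch of a Jenkins shell build step (all names hypothetical).

# Jenkins has already checked out the commit from git.

# Build a versioned RPM for this build.
rpmbuild -bb myapp.spec --define "app_version 1.0.${BUILD_NUMBER}"

# Run the test suite before the package goes anywhere.
make test

# Add the RPM to the relevant Pulp repository and publish it.
pulp-admin rpm repo uploads rpm --repo-id myapp-testing \
  --file "rpmbuild/RPMS/noarch/myapp-1.0.${BUILD_NUMBER}.noarch.rpm"
pulp-admin rpm repo publish run --repo-id myapp-testing
```

Installation on the target machines then happens through Pulp's push mechanism rather than from this script.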
17. How we deploy code
● Jenkins also manages deployment lifecycle
● 'Promoted Builds' plugin used to install
previously built RPMs on staging
● Promoted Builds plugin then used to install
the same RPMs on live once testing is
complete
18. Deploying configuration as code
● Idea: Why not just build an RPM of our
Puppet manifests + modules?
● Have puppet apply run as part of the
%postinst
19. Deploying configuration as code
● Allowed us to reuse our existing code
deployment infrastructure
● Manage configuration deployment from
Jenkins
20. How we deploy configuration
● Puppet manifests and modules are checked
into git
● Jenkins builds configuration into an RPM
● Jenkins promoted builds plugin applies the
updates to environments via Pulp
21. Our system architecture
● Quite AWS specific
● Concepts could be applied to other clouds
○ Once they catch up in terms of toolsets..
22. Separation of Roles
● CloudFormation - defines infrastructure
● Puppet manages configuration
● Pulp manages package versions
○ Pulp in turn managed via Jenkins for custom repos
23. Instance Provisioning
● Minimal images used
● cloud-init the only addition beyond standard
CentOS install
● cloud-init allows us to specify script to be run
at boot
24. Puppet bootstrap
● cloud-init script adds local Puppet yum repo
and installs the Puppet configuration RPM
● Installing the RPM installs Puppet and
applies the configuration
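A minimal user-data script in the spirit of the bootstrap described above might look like this. The repo URL and package name are invented; in practice the real values are injected via CloudFormation.

```shell
#!/bin/bash
# Sketch of a cloud-init user-data script (hypothetical names).

# 1. Add the local Puppet yum repository.
cat > /etc/yum.repos.d/puppet-config.repo <<'EOF'
[puppet-config]
name=Internal Puppet configuration repo
baseurl=http://pulp.internal.example.com/repos/puppet-config/
enabled=1
gpgcheck=0
EOF

# 2. Installing the configuration RPM pulls in Puppet as a
#    dependency, and its %postinst runs "puppet apply".
yum -y install company-puppet-config
```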
25. Machine metadata
● cloud-init also sets some variables in
/etc/environment
● $HOST_TYPE - the type of machine this is, eg
web, cache
26. Machine metadata
● Also set facts to be used by facter, eg RDS
database hostname
○ Values from CloudFormation
● $FACTER_DBHOST is set via cloud-init too, and
used by Puppet in files such as /root/.my.cnf
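The metadata mechanism can be illustrated with a toy example. The variable names follow the slides, but the values are invented, and /tmp stands in for /etc here; facter exposes any FACTER_-prefixed environment variable as a fact (FACTER_DBHOST becomes the fact dbhost).

```shell
#!/bin/sh
# Toy illustration: cloud-init writes the real file to /etc/environment.
cat > /tmp/environment <<'EOF'
HOST_TYPE=web
FACTER_DBHOST=mydb.example.eu-west-1.rds.amazonaws.com
EOF

# Processes that source the file see the variables.
set -a
. /tmp/environment
set +a

echo "host type: $HOST_TYPE"
echo "db host:   $FACTER_DBHOST"
```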
27. Defining machine roles
● For each machine type there is a manifest
/etc/puppet/manifests/$HOST_TYPE.pp
● This file looks something like this:
node default {
  import 'global'
  ...
}
28. Building the RPM
● Puppet manifests and modules are all
packed into an RPM
● Owner set to root, mode 600
● %postinst creates an at job set for now + 1
minute to run puppet apply
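The relevant parts of the spec file might look like the fragment below. Package and path names are hypothetical; deferring puppet apply through at(1) lets the yum/rpm transaction finish before Puppet starts changing the system.

```
# Fragment of a hypothetical puppet-config.spec.

%files
%attr(600,root,root) /etc/puppet/manifests
%attr(600,root,root) /etc/puppet/modules

%post
. /etc/environment
echo "puppet apply /etc/puppet/manifests/${HOST_TYPE}.pp" | at now + 1 minute
```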
31. Free wins
● Greater control over the timing of Puppet
runs
● Improved visibility - for ops and devs
● Configuration changes now have to be
deployed to testing/staging first
32. More free wins
● Puppet configs now have a version
● Easy to find config version on the machine
itself
● Config changelogs accessible on every
machine
○ (Git changelog added to RPM)
34. Cheap wins
● Jenkins performs syntax checks with
puppet parser validate
● Jenkins also runs puppet-lint on
manifests
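A Jenkins check step along these lines would fail the build on syntax errors and report style issues. This is a sketch: the directory layout is assumed, and dropping `|| true` makes lint failures fatal too.

```shell
#!/bin/bash -e
# Sketch of a Jenkins syntax/style check step (paths hypothetical).

# Fail the build on any Puppet syntax error.
find manifests modules -name '*.pp' -print0 |
  xargs -0 -n1 puppet parser validate

# Style checks; non-fatal here because of "|| true".
find manifests modules -name '*.pp' -print0 |
  xargs -0 -n1 puppet-lint || true
```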
35. Cheap wins
● Config change required for new code?
○ Make the Puppet RPM version a dependency
36. The downsides
● Puppet manifests and modules on all
machines
○ Potentially a security issue?
● No reporting*
37. Alternative implementations
● Don't want to use Pulp?
● Could do basically the same thing with yum
s3 plugin
https://github.com/jbraeuer/yum-s3-plugin
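The S3 alternative could look roughly like this: publish the createrepo output to a bucket and point yum at it. The bucket name is hypothetical, and any plugin-specific .repo options (for authenticated access via instance credentials) come from the yum-s3-plugin README, not from this sketch.

```shell
# Push repo metadata + RPMs to S3 (bucket name hypothetical).
aws s3 sync rpmbuild/repo/ s3://my-config-bucket/repo/

# Point yum at the bucket.
cat > /etc/yum.repos.d/puppet-config-s3.repo <<'EOF'
[puppet-config-s3]
name=Puppet configuration (S3)
baseurl=https://my-config-bucket.s3.amazonaws.com/repo/
enabled=1
gpgcheck=0
EOF
```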