This document compares configuration management tools Puppet, Ansible, and Chef. It discusses their approaches, languages, stored data formats, use of agents, and provides examples of configuration in each. Chef uses Ruby and JSON files, supports hierarchical execution and searching, and can run with or without a server. Puppet uses Ruby, YAML files and dependency-based configuration. Ansible is agentless and uses YAML files and Python plugins.
More info at http://blog.carlossanchez.eu/tag/devops
Video in Spanish: http://youtu.be/E_OE4l3t5BA
The DevOps movement aims to improve communication between developers and operations teams to solve critical issues such as fear of change and risky deployments. But in the same way that Agile development would likely fail without continuous integration tools, the DevOps principles need tools to make them real and provide the automation required to actually be implemented. Most of the so-called DevOps tools focus on the operations side, but there should be more than that: the automation must cover the full process, from Dev to QA to Ops, and be as automated and agile as possible. Tools in each part of the workflow have evolved in their own silos, with the support of their own target teams. But a true DevOps mentality requires a seamless process from the start of development to production deployment and maintenance, and for a process to be successful there must be tools that take the burden off humans.
Apache Maven has arguably been the most successful tool for development, project standardization and automation introduced in the last years. On the operations side we have open source tools like Puppet or Chef that are becoming increasingly popular to automate infrastructure maintenance and server provisioning.
In this presentation we will introduce an end-to-end development-to-production process that will take advantage of Maven and Puppet, each of them at their strong points, and open source tools to automate the handover between them, automating continuous build and deployment, continuous delivery, from source code to any number of application servers managed with Puppet, running either in physical hardware or the cloud, handling new continuous integration builds and releases automatically through several stages and environments such as development, QA, and production.
Code testing and Continuous Integration are just the first step in a source code to production process. Combined with infrastructure-as-code tools such as Puppet the whole process can be automated, and tested!
Jumpstart your education on learning Chef InSpec to turn your DevOps into DevSecOps, by automating your integration testing and compliance/security scanning.
How to Develop Puppet Modules: From Source to the Forge With Zero Clicks - Carlos Sanchez
Puppet Modules are a great way to reuse code, share your development with other people and take advantage of the hundreds of modules already available in the community. But how can you create, test and publish them as easily as possible? Now that infrastructure is defined as code, we need to use development best practices to build, test, deploy and use Puppet modules themselves. Three steps for a fully automated process:
* Continuous Integration of Puppet Modules
* Automatic release and upload to the Puppet Forge
* Deploy to Puppet master
My talk from DevOpsCon Berlin 2016.
Ansible is a radically simple and lightweight provisioning framework which makes your servers and applications easier to provision and deploy. By orchestrating your application deployments you gain benefits such as documentation as code, testability, continuous integration, version control, refactoring, automation and autonomy of your deployment routines, server and application configuration. Ansible uses a language that approaches plain English, uses SSH and has no agents to install on remote systems. It is the simplest way to automate and orchestrate application deployment, configuration management and continuous delivery.
In this tutorial you will be given an introduction to Ansible and learn how to provision Linux servers with a web-proxy, a database and some other packages. Furthermore we will automate zero downtime deployment of a Java application to a load balanced environment.
Drupal Camp Brighton 2015: Ansible Drupal Medicine Show - George Boobyer
In this session we are going to look at the latest craze amongst developers with some Sysadmin responsibilities - Ansible.
As with all trending technologies you can be led to believe that it is the new wonder drug (multi purpose in a jar - if you ain't ill it will fix your car). But in this case we will look at some of the key ways that automated provisioning, configuration and state management can actually cure some of the critical headaches you face securing and managing production infrastructure and Drupal sites - (as with all such wonder drugs seek the advice of your GP before radically changing your lifestyle). Also as a warning once you start delving deeper into the world of web security you'll need a pretty thick skin - denial was a comfortable place to be. We won’t be covering Ansible for use in local development with systems such as VLAD - that hopefully will be the subject of other presentations.
Critically we are going to look at Ansible in a Drupal context with a focus on security and hopefully encourage participation in the development of tighter integration with Drupal site deployment and management as well as security defence measures.
By the end of the session we hope to have convinced you that by adopting Ansible you will feel more secure, more efficient and more relaxed about managing your infrastructure and sites, and to have shown how the principles of collaboration common within the Drupal community can transpose with great effect to the Ansible community. Code examples will be provided to support the topics covered.
Common configuration with Data Bags - Fundamentals Webinar Series Part 4 - Chef
Part 4 of a 6 part series introducing you to the fundamentals of Chef.
This session includes an introduction to Data Bags & Data Bag Items.
After viewing this webinar you will be able to:
- Use Data Bags for data-driven recipes
- Use multiple recipes for a node's run list
Video of this webinar can be found at the following URL
https://www.youtube.com/watch?v=fS_yrFNSL9w&list=PL11cZfNdwNyPnZA9D1MbVqldGuOWqbumZ
Zero Downtime Deployment with Ansible - learn how to provision Linux servers with a web-proxy, a database and automate zero downtime deployment of a Java application to a load balanced environment.
These are the slides from a tutorial held at the Velocity Conference in Barcelona November 19th, 2014.
Git repo: https://github.com/steinim/zero-downtime-ansible
SaltConf 2014: Installing OpenStack Using SaltStack - Yazz Atlas
OpenStack is an open source implementation of cloud computing, potentially at very large scale. However, it has many moving parts and is complex to operate. SaltStack appears to provide scalable and secure orchestration for OpenStack. But like all powerful solutions to complex problems, a great deal of the useful know-how has to be discovered by actual practice and hard-won experience. This session will share the inside knowledge gained through practical experience. This is not a howto install OpenStack.
From Dev to DevOps - Apache Barcamp Spain 2011 - Carlos Sanchez
UPDATE: updated slides at http://www.slideshare.net/carlossg/from-dev-to-devops-conferencia-agile-spain-2011
Introduction to Chef: Automate Your Infrastructure by Modeling It In Code - Josh Padnick
Presentation by Josh Padnick given at Desert Code Camp on April 5, 2014. Introduces OpsCode Chef with a special emphasis on learning the key Chef concepts. Also includes tips & tricks and references to best practices.
Configuration Management with AWS OpsWorks for Chef Automate - Amazon Web Services
AWS OpsWorks for Chef Automate provides a fully managed Chef server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and their status. The Chef server gives you full stack automation by handling operational tasks such as software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes. OpsWorks for Chef Automate is completely compatible with tooling and cookbooks from the Chef community and automatically registers new nodes with your Chef server.
AMIS SIG - Introducing Apache Kafka - Scalable, reliable Event Bus & Message ... - Lucas Jellema
Introduction of Apache Kafka - the open source platform for real time message queuing and reliable, scalable, distributed event handling and high volume pub/sub implementation.
see GitHub https://github.com/MaartenSmeets/kafka-workshop for the workshop resources.
Jenkins and Chef: Infrastructure CI and Automated Deployment - Dan Stine
This presentation discusses two key components of our deployment pipeline: Continuous integration of Chef code and automated deployment of Java applications. CI jobs for Chef code run static analysis and then provision, configure and test EC2 instances. Release jobs publish new cookbook versions to the Chef server. Deployment jobs identify target EC2 and VMware nodes and orchestrate Chef client runs. The flexibility of Jenkins is essential to our overall delivery architecture.
Join us to discover how to use the PHP frameworks and tools you love in the Cloud with Heroku. We will cover best practices for deploying and scaling your PHP apps and show you how easy it can be. We will show you examples of how to deploy your code from Git and use Composer to manage dependencies during deployment. You will also discover how to maintain parity through all your environments, from development to production. If your apps are database-driven, you can also instantly create a database from the Heroku add-ons and have it automatically attached to your PHP app. Horizontal scalability has always been at the core of PHP application design, and by using Heroku for your PHP apps, you can focus on code features, not infrastructure.
5/13/13 presentation to Austin DevOps Meetup Group, describing our system for deploying 15 websites and supporting services in multiple languages to bare redhat 6 VMs. All system-wide software is installed using RPMs, and all application software is installed using GIT or Tarball.
Melbourne Infracoders: Compliance as Code with InSpec - Matt Ray
Presentation to the Melbourne Infrastructure Coders Meetup November 8, 2016. Overview of InSpec (https://inspec.io) and the idea of "Compliance as Code"
http://www.meetup.com/Infrastructure-Coders/events/233990769/
Ansible is a tool for configuration management. The big difference from Chef and Puppet is that Ansible doesn't need a master and doesn't need a special client on the servers. It works completely via SSH and the configuration is done in YAML.
These slides give a short introduction & motivation for Ansible.
Puppet Getting Started shows the different components used in Puppet environments, from Facter and Puppet itself to web interfaces like the Puppet Enterprise console and Foreman. It also covers an exemplary design for scaling the Puppet master and for the development lifecycle of modules, and gives an example module design.
5. CHEF VS PUPPET/ANSIBLE
PUPPET
Language : DSL Ruby/JSON
Approach : Execution by dependency and chained actions
Stored data : YAML
Agent : Yes
ANSIBLE
Language : Python
Approach : Hierarchical Execution
Stored data : YAML
Agent : No
CHEF
Language : Full Ruby & DSL
Approach : Hierarchical Execution
Stored data : JSON
Agent : Yes
11. CHEF VS PUPPET/ANSIBLE
Chef
Advantages: Fast & powerful; Dev oriented => flexible; Full Ruby; Search; Encrypted data
Disadvantages: Flexibility -> complexity
Puppet
Advantages: Mature; Large community; Lots of tools
Disadvantages: Slow; Complex language; Complex execution order
Ansible
Advantages: Agentless; « No code »; Multilingual plugins
Disadvantages: Immaturity (small community and few tools); Data stored in files
12. CHEF SERVER
Free/Basic Version
Free & no node limit
In your Infrastructure
› No access to "premium" features
14. CHEF AUTOMATE
Deliver a continuous deployment pipeline for infrastructure and applications.
Gain insight into operational, compliance, and workflow events.
Identify compliance issues, security risks, and outdated software with customizable reports.
15. NOT ONLY CHEF SERVER...
Chef Solo
No server
Deploy & run recipes directly on the node
› No search & dedicated attributes on a node
› Need to push cookbooks to each node
› Not compatible with version constraints
Chef Zero
Chef Server instance in memory
Chef Local mode (-z)
Embedded chef-zero (faster) – see the example below
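A minimal local-mode invocation, for illustration (the cookbook name and file paths are assumptions, not from the original deck):
$ chef-client -z -o 'recipe[nginx]'     # local mode: spins up an in-memory chef-zero and converges
$ chef-solo -c solo.rb -j node.json     # classic chef-solo with a config file and JSON attributes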
17. KNIFE – CHEF'S SWISS ARMY KNIFE
Management of the Chef environment
Search, SSH (executing commands in parallel)
Plugins (VMware, solo, spork...)
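A few typical knife invocations, for illustration (node and role names are hypothetical):
$ knife node list
$ knife search node 'role:web'
$ knife ssh 'role:web' 'sudo chef-client'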
18. CLIENTS AND NODES
A client is a machine registered with the server
A node is a client that executes one or more recipes
→ A node is a client, but a client is not necessarily a node
20. THE COOKBOOKS – WHAT WILL WE COOK TODAY?
Create a cookbook
Structure
Metadata
Recipes
Attributes
Files & Templates
Resources & Providers
LWRP & HWRP
Definitions
Library
21. THE COOKBOOKS - CREATE
Three options to create a cookbook:
Create your own
Get a cookbook from the Chef Supermarket
Get a cookbook from GitHub (or elsewhere)
knife cookbook create nginx
chef generate cookbook nginx
knife cookbook site install nginx
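The generated cookbook also contains a metadata.rb; a minimal sketch (all values are illustrative):
# nginx/metadata.rb
name             'nginx'
maintainer       'Your Name'
license          'Apache-2.0'
description      'Installs and configures nginx'
version          '0.1.0'
depends          'apt'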
26. THE COOKBOOKS - ATTRIBUTES
case node['platform']
when "debian"
case
when node['platform_version'].to_f < 6.0 # All 5.X
default['postgresql']['version'] = "8.3"
when node['platform_version'].to_f < 7.0 # All 6.X
default['postgresql']['version'] = "8.4"
else
default['postgresql']['version'] = "9.1"
end
default['postgresql']['client']['packages'] = ["postgresql-client-#{node['postgresql']['version']}","libpq-dev"]
default['postgresql']['server']['packages'] = ["postgresql-#{node['postgresql']['version']}"]
default['postgresql']['contrib']['packages'] = ["postgresql-contrib-#{node['postgresql']['version']}"]
when "fedora"
...
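For illustration, a recipe could then consume these attributes like this (a sketch, not from the original deck):
# e.g. in a client recipe
node['postgresql']['client']['packages'].each do |pkg|
  package pkg do
    action :install
  end
end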
30. THE COOKBOOKS - TEMPLATES
› memcached/templates/default/memcached.conf.erb
# Run memcached as a daemon. This command is implied, and is not needed for the
# daemon to run. See the README.Debian that comes with this package for more
# information.
-d
# Log memcached's output to /var/log/memcached
logfile /var/log/<%= @logfilename %>
# Be verbose
#-v
# Be even more verbose (print client commands as well)
# -vv
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m <%= @memory %>
# Default connection port is 11211
-p <%= @port %>
#-U <%= @udp_port %>
# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u <%= @user %>
# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
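The recipe side declares a template resource that renders this file; a sketch (the attribute names and the service notification are assumptions):
template '/etc/memcached.conf' do
  source 'memcached.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  variables(
    logfilename: 'memcached.log',
    memory: node['memcached']['memory'],
    port: node['memcached']['port'],
    udp_port: node['memcached']['udp_port'],
    user: node['memcached']['user']
  )
  notifies :restart, 'service[memcached]', :delayed
end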
31. THE COOKBOOKS - FILES
› cookbook/app/files/default/application-5.8.3.pm
› cookbook/app/files/default/application-5.20.1.pm
cookbook_file "application.pm" do
path case node['platform']
when "centos","redhat"
"/usr/lib/version/1.2.3/dir/application.pm"
when "arch"
"/usr/share/version/core_version/dir/application.pm"
else
"/etc/version/dir/application.pm"
end
source "application-#{node['languages']['perl']['version']}.pm"
owner 'root'
group 'root'
mode '0644'
end
33. THE COOKBOOKS – RESOURCES & PROVIDERS (LWRP / HWRP)
$ tree cookbooks/mdm/
cookbooks/mdm/
├── CHANGELOG.md
├── README.md
├── attributes
├── definitions
├── files
│ └── default
├── libraries
├── metadata.rb
├── providers
├── recipes
│ └── default.rb
├── resources
└── templates
└── default
Resources : Used to define a set of actions and attributes
Providers : Used to tell chef-client what to do for each defined action
user 'random' do
supports :manage_home => true
comment 'Random User'
uid 1234
gid 'users'
home '/home/random'
shell '/bin/bash'
password '$1$JJsvHslV$szsCjVEroftprNn4JHtDi'
end
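For comparison, a minimal LWRP sketch matching the layout above (resource name and attributes are hypothetical):
# resources/app.rb
actions :install
default_action :install
attribute :app_name, kind_of: String, name_attribute: true
attribute :version, kind_of: String, default: '1.0.0'

# providers/app.rb
action :install do
  package new_resource.app_name do
    version new_resource.version
  end
end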
38. THE COOKBOOKS – RESOURCES & PROVIDERS (HWRP)
Create a resource with pure Ruby
Bypass the limits of Chef DSL
http://tech.yipit.com/2013/05/09/advanced-chef-writing-heavy-weight-resource-providers-hwrp/
39. THE COOKBOOKS - DEFINITIONS
Used to define a set of actions, with or without parameters (like a function)
You can call your definition many times in one or more recipes
This is the same as a resource (LWRP) except that you may not notify (trigger) other resources
40. THE COOKBOOKS - DEFINITIONS
define :host_porter, :port => 4000, :hostname => nil do
params[:hostname] ||= params[:name]
directory "/etc/#{params[:hostname]}" do
recursive true
end
file "/etc/#{params[:hostname]}/#{params[:port]}" do
content "some content"
end
end
host_porter node['hostname'] do
port 4000
end
host_porter "www1" do
port 4001
end
42. CUSTOM RESOURCES
Available from Chef version 12.5
Definitions become unnecessary; use a custom resource instead
A custom resource is a provider redesigned to be simpler
Located only in the "resources" directory
43. CUSTOM RESOURCES
exampleco/resources/site.rb
property :homepage, String, default: '<h1>Hello world!</h1>'
load_current_value do
if ::File.exist?('/var/www/html/index.html')
homepage IO.read('/var/www/html/index.html')
end
end
action :create do
package 'httpd'
service 'httpd' do
action [:enable, :start]
end
file '/var/www/html/index.html' do
content homepage
end
end
action :delete do
package 'httpd' do
action :delete
end
end
exampleco_site 'httpd' do
homepage '<h1>Welcome to the Example Co. website!</h1>'
action :create
end
44. THE COOKBOOKS - LIBRARIES
Allow you to extend Chef's classes or create your own Ruby library
Do what Chef does not already do
Do external data processing to use in Chef
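A minimal library sketch, assuming a hypothetical helper module and cookbook (not from the original deck):
# libraries/helpers.rb
module MyCookbook
  module Helpers
    # Pick the right package name for the platform family
    def apache_package_name
      node['platform_family'] == 'rhel' ? 'httpd' : 'apache2'
    end
  end
end

# Make the helper available inside recipes
Chef::Recipe.send(:include, MyCookbook::Helpers)

# In a recipe:
# package apache_package_name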
49. POLICYFILE
Policies are a new feature of Chef that combine the very best parts of Roles, Environments and cookbook dependency resolvers (Berkshelf) into a single, easy-to-use workflow.
A policy is associated with a group of nodes, cookbooks, and settings. When these nodes run, they run the recipes specified in the Policyfile run-list.
Resolves real-world problems of team workflow
50. POLICYFILE
Define in the Policyfile:
› Run list
› Cookbook dependencies with version constraints and sources
› Attribute overrides
name "jenkins-master"
run_list "java", "jenkins::master", "recipe[policyfile_demo]"
default_source :supermarket, "https://mysupermarket.example"
cookbook "policyfile_demo", path: "cookbooks/policyfile_demo"
cookbook "jenkins", "~> 2.1"
cookbook "mysql", github: "chef-cookbooks/mysql", branch: "master"
default['java']['version'] = '8'
51. POLICYFILE
Create your Policyfile inside your cookbook
$ cd cookbooks/app
$ chef generate policyfile
$ ls -l
-rw-r--r-- 1 mlopez wheel 596 12 oct 21:57 Policyfile.rb
Create your Policyfile in the policies directory
$ ls -l
total 24
-rw-r--r-- 1 mlopez wheel 70 12 oct 22:16 LICENSE
-rw-r--r-- 1 mlopez wheel 1546 12 oct 22:16 README.md
-rw-r--r-- 1 mlopez wheel 1067 12 oct 22:16 chefignore
drwxr-xr-x 4 mlopez wheel 136 12 oct 22:16 cookbooks
drwxr-xr-x 4 mlopez wheel 136 12 oct 22:16 data_bags
drwxr-xr-x 4 mlopez wheel 136 12 oct 23:13 policies
$ chef generate policyfile policies/web
52. Create the lockfile
$ chef install [path/to/policyfile.rb]
Update the lockfile after modification
$ chef update [path/to/policyfile.rb]
Upload the cookbooks and Policyfile
$ chef push POLICY_GROUP PATH/TO/POLICYFILE.rb
→ If POLICY_GROUP doesn't exist it will be created
Assign the policy name and policy group in the node's client.rb or in the node configuration (see the sketch below)
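A sketch of the corresponding client.rb settings (values are illustrative):
# /etc/chef/client.rb
policy_name  'jenkins-master'
policy_group 'qa'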
53. DATA BAGS
Store & share data
Read the desired data from a recipe or from your workstation (knife)
Write data collected during a run
$ knife data bag create DATA_BAG_NAME [ITEM]
knife data bag create users toto
knife data bag from file users data_bags/users/toto.json
54. DATA BAGS
› knife data bag create users myUser
{
"id": "myUser",
"ssh_keys": [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDakMx4mjWYMko2r085yq/vq0Ey2DiVWXeJ
],
"groups": [
"mdm_qa"
],
"uid": 4001,
"shell": "/bin/bash",
"comment": "My User",
"action": "remove"
}
› knife data bag show users myUser -Fj > data_bags/users/myUser.json
55. DATA BAGS
admins = []
search(:admins, "*:*").each do |admin|
login = admin['id']
admins << login
home = "/home/#{login}"
user login do
uid admin['uid']
gid admin['gid']
shell admin['shell']
home home
comment admin['comment']
supports :manage_home => true
end
end
56. DATA BAGS
Write in a data bag
sam = {
"id" => "sam",
"Full Name" => "Sammy",
"shell" => "/bin/zsh"
}
databag_item = Chef::DataBagItem.new
databag_item.data_bag("users")
databag_item.raw_data = sam
databag_item.save
sam = data_bag_item("users", "sam")
sam["Full Name"] = "Samantha"
sam.save
57. DATA BAGS
What should I do if I have sensitive data?
→ Encrypt !!
$ knife data bag create passwords postgresql --secret-file <path>/encrypted_data_bag_secret
$ knife data bag show passwords postgresql -Fj
{
"id": "postgresql",
"password": {
"encrypted_data": "nu0GFIaJuzefK1iCgmYxWbRO64tvEezZJA/7iOUT87NLg=n",
"iv": "LWK$u1omaWHHNfzfDcYN45g==n",
"version": 1,
"cipher": "aes-256-cbc"
},
"databases": {
"encrypted_data": "lG4EULs9UQKKwjfzef8/WrccoGilQO2m7O6JNnIeMu199jGIT2l+/MvR+bX6dnk2U/_dwn
"iv": "UmtXyR9m0ornADWbiayPyw==n",
"version": 1,
"cipher": "aes-256-cbc"
}
}
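For illustration, a recipe can decrypt the item with the shared secret (the secret file path is an assumption):
secret = Chef::EncryptedDataBagItem.load_secret('/etc/chef/encrypted_data_bag_secret')
creds  = Chef::EncryptedDataBagItem.load('passwords', 'postgresql', secret)
creds['password']   # decrypted value, only available during the run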
58. CHEF VAULT
Without Chef Vault, nodes share a common secret key file
→ Not good for security
With Chef Vault, nodes and workstations use their own keypair to decrypt the data bag. Chef administrators define which nodes or admins can access the encrypted data.
59. CHEF VAULT
gem install chef-vault
Create a Vault
Allow mlopez, vaubert and nodes with the web role assigned to decrypt the content:
$ knife vault create credentials database -A mlopez,vaubert -M client -S 'roles:web'
  -J '{"db_password":"some_password"}'
or
$ knife encrypt create credentials database --json '{"db_password":"some_password"}'
  --search 'role:web' --admins mlopez,vaubert --mode client
60. CHEF VAULT
From an unauthorized admin's workstation:
$ knife vault show credentials database
db_password:
cipher: aes-256-cbc
encrypted_data: dsiBtADAV8Sbis89yKuYBvbdNXPpu8bQfJrS20op7zoysfR8roFlzpVHyoaG24yb3
iv: +0siNLzFHHqEkP07k6JhYw==
version: 1
id: database
Authorized users see the decrypted content:
$ knife vault show credentials database
or
$ knife decrypt credentials database --mode client
db_password: some_password
id: database
61. CHEF VAULT
Content of database_keys
$ knife data bag show credentials database_keys
admins:
mlopez
vaubert
clients:
web-01
web-02
id: database_test_keys
mlopez: SOME KEY
vaubert: SOME KEY
web-01: SOME KEY
web-02: SOME KEY
62. CHEF VAULT
Add a new admin workstation
$ knife vault update credentials database -A mhue
Rotate all keys
$ knife vault rotate all keys
63. CHEF VAULT
Use Vault in recipe
chef_gem 'chef-vault' do
compile_time true if respond_to?(:compile_time)
end
require 'chef-vault'
# Or just include the chef-vault cookbook
include_recipe 'chef-vault'
case ChefVault::Item.data_bag_item_type('credentials', 'database')
when :normal
...
when :encrypted
...
when :vault
...
end
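Once the item type is known, the vault item is typically loaded like this (a sketch using the bag and key from the earlier slides):
item = ChefVault::Item.load('credentials', 'database')
db_password = item['db_password']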
64. OHAI
Ohai is a tool used to detect attributes on a node, inventory the system (platform, CPU, memory...), and then automatically provide these attributes to the chef-client at the start of every chef-client run.
See the automatic attributes seen previously. You can list node attributes by running the "ohai" command.
$ ohai
{
"hostname": "node-01",
"machinename": "node-01",
"fqdn": "node-01.my.local",
"domain": "my.local",
"network": {
"interfaces": {
"lo": {
"encapsulation": "Loopback",
"addresses": {
"127.0.0.1": {
"family": "inet",
"prefixlen": "8",
"netmask": "255.0.0.0",
...
65. OHAI - CUSTOM PLUGIN
You can create your own Ohai plugin to collect data before the run and set it into Ohai as an attribute
Use the Ohai cookbook and put your plugin into the 'files' directory
› cookbooks/ohai/files/default/plugins/haproxy.rb
# Encoding: utf-8
# Get current version of Haproxy
Ohai.plugin(:Haproxy) do
provides 'haproxy'
collect_data(:linux) do
haproxy Mash.new
[["dpkg-query -W haproxy | awk '{print $2}' | sed 's/\\(^[1-9].[1-9]\\).*/\\1/'",
:installed_version]].each do |cmd, property|
so = shell_out(cmd)
haproxy[property] = so.stdout.delete("\n")
end
end
end
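The collected value can then be read like any other automatic attribute (illustrative):
log "haproxy installed version: #{node['haproxy']['installed_version']}"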
66. TEAM WORKING
Git -> create branches
Test locally before uploading your cookbook
Vagrant / VirtualBox, Kitchen CI, etc.
Chef solo/zero/local
Bump the version of your cookbook and upload it into an environment
Freeze the uploaded version and apply version constraints in environments
Knife Spork – a plugin that lets you bump, upload and promote a cookbook more easily
Notification plugins, auto git add…
$ knife cookbook upload nginx [--freeze] [--force] [-E <environment>]
$ knife spork omni nginx -l minor -e qa_group
67. TEST YOUR WORK
Syntax
Logical tests with Foodcritic
Obsolescence of resources used, invalid search queries, syntax best practices…
Unit tests with ChefSpec
$ knife cookbook test nginx
package 'foo'
require 'chefspec'
describe 'example::default' do
let(:chef_run) { ChefSpec::SoloRunner.converge(described_recipe) }
it 'installs foo' do
expect(chef_run).to install_package('foo')
end
end
68. TEST YOUR WORK
Integration tests with Kitchen CI (typical commands below)
Check that a run converges without errors
Include unit tests (RSpec, Bats…)
Run many test suites at once (client / server) on many platforms
Check which nodes use your cookbook
Simulate the execution of a run
knife preflight web::ws
knife search node -i "recipes:web::ws"
chef-client --why-run
knife ssh 'name:srv-01.pp' 'sudo chef-client -W'
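Typical Test Kitchen commands, for illustration:
$ kitchen converge   # build the instances and run Chef
$ kitchen verify     # run the test suites
$ kitchen test       # destroy, converge, verify, destroy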
69. DEBUG – WHY DOESN'T IT WORK?
chef-client -l debug
Generate logs
Chef::Log.debug("Doesn't work")
puts myVariable
Raise exceptions
Chef::Application.fatal!('Deployment failure...')
raise
70. ADVANCED
Override a run-list (-o "recipe[...]")
Override a community cookbook
knife ssh 'name:srv-01.dev' 'sudo chef-client -o "recipe[firefox]"'
include_recipe 'nginx'
resources("template[/etc/nginx/nginx.conf]").cookbook 'myNginx'