Presented at SF SlackDevs Meetup on April 11, 2016
[Recording of the presentation]
http://www.ustream.tv/recorded/85566064 (from 0:26:00)
[GitHub page of slacktee]
https://github.com/course-hero/slacktee
This document discusses using Capistrano, an open source tool for automating software deployments. It describes some of the issues with manual deployment processes and why automation is needed. Capistrano allows developers to deploy applications by writing scripts that execute commands remotely via SSH. It handles tasks like updating code, databases, and symlinking shared files. Capistrano provides a consistent, secure way for developers to deploy applications while still giving system administrators control over server environments.
In this talk we covered how to manage a React application in production, focusing on Webpack, caching, client-side logs, error handling, and NPM dependencies.
This document discusses Capistrano, a remote server automation and deployment tool. Some key points:
- Capistrano allows reliable deployment of web applications to multiple machines simultaneously, with features like rollback, adding tasks, and automating common tasks.
- It works by creating a new folder for each deployment on servers and symlinking the current version. Shared files are not overwritten on redeploys.
- Configuration involves setting stages, roles, branches, and other parameters in Capistrano files. Tasks can be added for custom actions.
- Deploying runs tasks sequentially like updating servers, publishing, finishing. Rollback has similar reversing tasks. Plugins add features like maintenance modes.
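The release-folder-plus-symlink layout described above can be sketched in Python. This is a concept demo only: Capistrano itself is a Ruby tool, and the directory names and timestamp format here are illustrative, not Capistrano's actual conventions.

```python
import os

def deploy(app_root, release_id):
    """Create a new release directory and atomically repoint 'current' at it."""
    releases = os.path.join(app_root, "releases")
    os.makedirs(releases, exist_ok=True)
    release = os.path.join(releases, release_id)
    os.makedirs(release)
    # ... check out code / build into `release` here ...
    current = os.path.join(app_root, "current")
    tmp_link = current + ".tmp"
    os.symlink(release, tmp_link)
    os.replace(tmp_link, current)  # atomic swap: readers never see a half-deploy
    return release

def rollback(app_root):
    """Repoint 'current' at the previous release (old releases are kept on disk)."""
    releases = os.path.join(app_root, "releases")
    ordered = sorted(os.listdir(releases))
    assert len(ordered) >= 2, "nothing to roll back to"
    previous = os.path.join(releases, ordered[-2])
    current = os.path.join(app_root, "current")
    tmp_link = current + ".tmp"
    os.symlink(previous, tmp_link)
    os.replace(tmp_link, current)
    return previous
```

Because every deploy is a fresh directory and only the symlink moves, rollback is just pointing the symlink back, which is why the "shared files are not overwritten" property falls out naturally.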
SaltConf 2014 - Using SaltStack in High Availability Environments - Benjamin Cane
This document discusses best practices for using SaltStack in high availability environments. It recommends automating processes like system builds, configurations, application installations and updates to replace manual human processes that often cause downtime. Specific techniques covered include using pillars to define server configurations, templates to deploy consistent configuration files, scripts to install third-party applications, and automatically running states on a schedule while staggering restarts across servers. It cautions that automatic state runs may not always be appropriate and recommends using test runs to validate changes.
Need a strategy for deploying your infrastructure? Here is a method borrowed from the development world, aimed at giving you a stable platform.
Describe what you need in your future recipe ("I want an HTTP server") and test it with a BDD-style tool (Behavior-Driven Development).
Next, describe (in a lower-level language) the prerequisites for that feature (for example, check that NGINX is installed).
You have now switched to TDD mode (Test-Driven Development). When your recipe is ready, add it to your source control system, and your Continuous Integration system will test the recipe on every update.
Just like a software development workflow. As I said: "Infrastructure as code".
This document provides instructions for deploying a Rails application using Capistrano. It includes steps to set up Capistrano, configure the deploy.rb file, generate SSH keys, add the deploy key to GitHub, run Capistrano tasks to deploy the application, and make subsequent deploys when code changes. The application is deployed to a server at 192.168.255.54 running Mongrel and uses Git for version control.
This document provides instructions for setting up Google App Engine on a local development environment for PHP developers. It discusses installing Vagrant, VirtualBox, SSH client, and configuring a virtual machine with LAMP server and Google App Engine SDK. It also covers running the development server, creating an app.yaml file, and implementing features like user service, email, memcache, task queues, and cron jobs. The document aims to help PHP developers get started with developing apps on Google App Engine.
This document discusses automating system administration tasks using Chef. It provides an overview of the key components of Chef, including the Chef server, roles, environments, nodes, the Knife tool, cookbooks, recipes, attributes, and templates. Recipes are lists of instructions to install and configure packages and systems, while cookbooks are collections of recipes. Pre-made cookbooks can be found on the Chef Supermarket website and customized as needed. The document also covers tools used with Chef such as ChefDK, Knife, Test Kitchen, and Berkshelf, gives an example of defining recipes to deploy different parts of an application (database, content, and web servers), and closes with the future of infrastructure as code using Chef provisioning.
Performance Tuning Your Puppet Infrastructure - PuppetConf 2014 - Puppet
The document discusses ways to monitor and tune Puppet infrastructure using the same techniques used for applications. It describes instrumenting the Puppet master and database with New Relic to monitor performance. It also discusses collecting logs and reports from Puppet agents and masters and sending them to Elasticsearch for analysis in Kibana.
SaltConf15 Presentation on Salt Stack High Availability
Link to the GitHub repository containing the code: https://github.com/wcannon/saltconf2015
Solution 3 has been implemented and is available in the GitHub repository.
Puppet Camp London Fall 2015 - Service Discovery and Puppet - Marc Cluet
This document discusses service discovery and how it can be implemented using Consul. It begins with an introduction to the presenter and overview of service discovery challenges. The main points are:
- Consul is a service discovery tool that allows services to register themselves and discover other services via API or DNS queries. It supports health checking and secure key-value storage.
- Consul uses agents running on each node that register services and perform health checks. Services can be discovered via the REST API or DNS queries. It provides a strongly consistent key-value store.
- Puppet can integrate with Consul for service discovery via Puppet modules, a Hiera backend, or direct API access. This allows dynamically generating configurations from service information in Consul.
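The health-aware discovery described above boils down to: query the service catalog, keep only instances whose checks pass. The Python sketch below works on a sample payload modeled on Consul's `/v1/health/service/<name>` endpoint; the field layout is illustrative, so consult the Consul HTTP API documentation for the exact schema before relying on it.

```python
import json

# Sample payload shaped like a Consul /v1/health/service/<name> response
# (illustrative; real responses carry many more fields).
SAMPLE = json.dumps([
    {"Node": {"Address": "10.0.0.11"},
     "Service": {"Port": 8080},
     "Checks": [{"Status": "passing"}]},
    {"Node": {"Address": "10.0.0.12"},
     "Service": {"Port": 8080},
     "Checks": [{"Status": "critical"}]},
])

def healthy_endpoints(payload):
    """Return 'host:port' strings for instances whose checks all pass."""
    endpoints = []
    for entry in json.loads(payload):
        if all(check["Status"] == "passing" for check in entry["Checks"]):
            endpoints.append("%s:%d" % (entry["Node"]["Address"],
                                        entry["Service"]["Port"]))
    return endpoints
```

A configuration-management integration would then render this endpoint list into, say, a load balancer template instead of hard-coding backends.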
This document discusses Puppet workflows, including:
1. Basic and end-to-end Puppet workflows involving code repositories, Puppet Masters, agents, and VMs.
2. Options for node classification, certificate exchange, and provisioning VMs in end-to-end workflows.
3. Example workflows involving testing, rapid scaling, and planning considerations like users, timescales and legacy systems.
Learn how to use Capistrano to automate the deployment of your Ruby on Rails applications. Apply best practices and add-ons for customizing Capistrano.
For many years Capistrano has been the de facto deployment tool, but many organisations have yet to realise the benefits of automating their deployment process. Automated deployments are fast, less error-prone, and easier to roll back, and you can hand out the keys to other team members so anyone can deploy.
During this talk we'll look at how to "capify" a simple PHP project and deploy it in a few minutes. And, as Capistrano is a "remote server automation and deployment tool", we'll also look at some of the other things Capistrano can do for you, such as restarting Apache or grepping server logs (and more). We'll also take a look at the various plug-ins available and see how easy it can be to write your own.
Still deploying with ssh / git pull / apache restart? Then it's time to make a change: automate all the things and live in a world of "repeatable success".
The Puppet Master on the JVM - PuppetConf 2014 - Puppet
Puppet Server is a new component of Puppet Enterprise that improves performance, scalability, and availability. It uses a Service-Oriented Architecture and the Trapperkeeper framework, which allows for better extensibility. Puppet Server provides significantly faster catalog compilation times, agent run times, and request response times compared to the previous Apache/Passenger architecture. It can also handle more agents per master as it continues to be optimized.
Async programming: From 0 to task.IsComplete (es) - Darío Kondratiuk
This document discusses asynchronous (async) programming in C#. It begins by explaining what async programming is and how it allows applications to be more responsive and scalable by freeing threads to handle other requests. It then provides various examples of how to write async methods using the async and await keywords. It discusses important best practices like avoiding async void methods and using ConfigureAwait(false). Finally, it mentions some useful tools for async programming like Task.WhenAll and parallel programming APIs.
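The ideas in that talk translate outside C#. As a hedged illustration, here is a Python asyncio sketch of the same pattern: awaiting non-blocking work, and an analogue of C#'s Task.WhenAll (`asyncio.gather`) to run several awaitables concurrently. The function names and delays are invented for the example.

```python
import asyncio

async def fetch(name, delay):
    """Simulate a non-blocking I/O call; the event loop is free while we await."""
    await asyncio.sleep(delay)
    return name

async def main():
    # Analogue of C#'s Task.WhenAll: start both, wait for both, keep order.
    results = await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))
    return results

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Note that Python has its own version of the "async void" pitfall: a coroutine that is never awaited silently does nothing, which is why linters warn about un-awaited coroutines.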
Zero Downtime Deployment with Ansible - learn how to provision Linux servers with a web-proxy, a database and automate zero downtime deployment of a Java application to a load balanced environment.
These are the slides from a tutorial held at the Velocity Conference in Barcelona November 19th, 2014.
Git repo: https://github.com/steinim/zero-downtime-ansible
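The zero-downtime pattern that tutorial automates is a rolling update: drain one node from the load balancer, upgrade it, health-check it, and only then return it to the pool before touching the next node. The Python sketch below simulates that control flow; the function names and hooks are illustrative, not taken from the actual playbooks.

```python
def rolling_deploy(servers, in_rotation, upgrade, health_ok):
    """Upgrade servers one at a time so the pool never loses more than one node."""
    for server in servers:
        in_rotation.remove(server)       # drain from the load balancer
        upgrade(server)                  # deploy the new application version
        if not health_ok(server):        # verify before re-adding
            raise RuntimeError("health check failed on %s" % server)
        in_rotation.add(server)          # back into the pool
```

In Ansible this shape is typically expressed with `serial: 1` plus pre- and post-tasks that toggle the node in the proxy, but the invariant is the same: at most one node is out of rotation at any moment.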
Andrew Betts, Web Developer at The Financial Times, at Fastly Altitude 2016
Running custom code at the Edge using a standard language is one of the biggest advantages of working with Fastly’s CDN. Andrew gives you a tour of all the problems the Financial Times and Nikkei solve in VCL and how their solutions work.
This document discusses techniques for building scalable websites with Perl, including:
1) Caching at various levels (page, partial page, and database caching) to improve performance and reduce load on application servers.
2) Using job queuing and worker processes to distribute processing-intensive tasks asynchronously instead of blocking web requests.
3) Leveraging caching and queueing libraries like Cache::FastMmap, Memcached, and Spread::Queue to implement caching and job queueing in Perl applications.
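The page and fragment caching idea in point 1 is language-agnostic. The Perl talk uses Cache::FastMmap and Memcached; as a hedged concept demo only, here is a tiny TTL cache in Python with an injectable clock, plus the serve-from-cache-or-render pattern:

```python
import time

class TTLCache:
    """Minimal expiring key-value cache, in the spirit of page/fragment caching."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock      # injectable for testing
        self.store = {}

    def get(self, key):
        hit = self.store.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if self.clock() - stored_at > self.ttl:
            del self.store[key]          # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, self.clock())

def render_page(cache, key, expensive_render):
    """Serve from cache when possible; otherwise render once and store."""
    page = cache.get(key)
    if page is None:
        page = expensive_render()
        cache.set(key, page)
    return page
```

Memcached adds the pieces this sketch omits, shared storage across app servers and bounded memory with eviction, but the hit/miss/expire flow is the same.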
SaltConf14 - Ben Cane - Using SaltStack in High Availability Environments - SaltStack
An overview on the benefits and best practices of using SaltStack for consistency and automation in highly available enterprise environments such as financial services.
Learn from Fastly veteran Cassandra Dixon on some of the most common customer issues we see — such as why things aren’t caching, misconfigured origins, issues with intermediary proxies, and VCL snafus — and the best ways to resolve them. We’ll also discuss our unique approach to debugging — using seemingly mundane tools to diagnose issues in creative ways — and how you can apply these methods to your own organization to get the most out of Fastly’s offerings.
This document discusses using Puppet roles and profiles to organize Puppet code into logical layers (roles, profiles, resources) and tie together different modules. Roles apply directly to nodes and may only include profiles. Profiles contain resources and parameters and are applied via roles. Resources are reusable components declared with defines. Data is configured separately in Hiera.
The shift to cloud computing means that organizations are developing scale-out infrastructure that can respond to the pace of business change faster than ever before. Opscode Chef® is an open-source systems integration framework built specifically for automating the cloud, making it easy to deploy and scale servers and applications throughout your infrastructure. Join us for this session, an introduction to Chef including:
An Overview of Chef
The Chef Architecture
Cookbook Components
System Integration
Live demo launching a Java stack on Amazon EC2, Rackspace, Ubuntu, and CentOS
[Presented as part of the Open Source Build a Cloud program on 2/29/2012 - http://cloudstack.org/about-cloudstack/cloudstack-events.html?categoryid=6]
A recounting of the whole journey: it starts with the handover of the keys to Pandora's box and a wander through the deep dark forest of uncertainty and instability left by hastily deployed systems. Then comes the decluttering, until order reigns over chaos, the poor on-call engineer can finally sleep at night, and the pager goes silent for a while. By the end, we reach the level of confidence needed to experiment, change things, and upgrade infrastructure without worry.
MongoDB World 2019: Becoming an Ops Manager Backup Superhero! - MongoDB
Oh no! My backups aren't progressing! If something happens in production now, and I don't have current backups, I'll be out of a job for sure!
If these words resonate with you, don’t worry; you’re not the only one! Backup issues are one of the most common topics we deal with in Technical Services. In this talk, we will go through the backup flow, talk about where things might go wrong, and the symptoms you will see in the logs and the UI. We will also talk about other commands you can run to confirm the diagnosis, and how support can assist if you’re still stuck. Finally, we will talk about the new backup architecture in 4.2 and how it simplifies some of these concerns. This session is suitable for those with all levels of Ops Manager experience, but attendees should have a basic understanding of MongoDB’s replication process before attending this session.
After this talk, you will have leveled up your backup superpowers, and can swoop in to save your job (and the day)!
How we deployed the Piwik web analytics system to handle a huge amount of unpredicted traffic, adding some cloud and modern scalability techniques. Files: https://github.com/lorieri/piwik-presentation
This document discusses profiling PHP applications to improve performance. It recommends profiling during development to identify inefficiencies. The document introduces Xdebug for profiling PHP code and Webgrind, a PHP frontend for visualizing Xdebug profiles. It provides an example of profiling a sample PHP application, identifying issues, making code changes, and verifying performance improvements through re-profiling.
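The profile-fix-reprofile loop described above is the same in any language. Xdebug and Webgrind are PHP-specific, so as an illustrative stand-in, here is the Python standard library equivalent using cProfile and pstats (the `slow_sum` hot spot is invented for the example):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive hot spot to profile."""
    total = 0
    for i in range(n):
        total += i
    return total

def profile_report(func, *args):
    """Run func under the profiler and return the top of the stats as text."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
    return out.getvalue()
```

The workflow mirrors the document's advice: capture a profile during development, read the report to find where time actually goes, change the code, then re-profile to verify the improvement rather than assuming it.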
This document provides an overview of active security measures for Joomla sites. It discusses establishing strong foundations by ensuring servers have updated software and proper permissions and ownership of files. It also recommends installing security rules and scripts. For site setup, it advises keeping Joomla and extensions updated and carefully considering installed extensions. It also stresses the importance of password length and complexity. Additional steps include using .htaccess rules to lock down sites, installing armor, performing backups, monitoring file changes and logs, and having a plan for if a site becomes hacked. It concludes by providing resources and discount information.
This document discusses various tips and tricks for optimizing Symfony projects. It covers caching with Doctrine, using the Sentry error monitoring service, queueing emails with Swiftmailer, implementing custom voters for access control, and using process managers like PHP-PM and PHPFastCGI to improve performance by keeping the framework bootstrapped across requests. The document provides code examples and benchmarks to demonstrate how these techniques can enhance a Symfony application.
"13 ways to run web applications on the Internet" Andrii ShumadaFwdays
1. There are 13 ways to launch an app to the internet including using a local machine with port forwarding, a local machine in an office with ngrok or localtunnel, a dedicated server with SFTP or SSH, cloud storage services, git-based static hosting, serverless technologies like AWS Lambda, and container/cluster-based options like Docker swarm, AWS EBS, and Kubernetes.
2. Each option has varying degrees of ease of setup, ease of deployment, scalability, and suitability for frontend versus backend apps. Local development options are easiest to setup but not production ready, while container/cluster options are more complex but very scalable and production ready.
3. The document provides a
The document provides information about a Drupal training session on fixing a broken Drupal site. It includes an agenda for the lab session which involves fixing issues related to site building, security, performance, and content architecture through exercises. Participants will be split into teams and each given a broken Drupal site to work on fixing. Automated tools and techniques for profiling site performance will be demonstrated.
Practical Operation Automation with StackStormShu Sugimoto
Automation is getting more and more important these days, but it is not always easy to achieve, because it takes tremendous effort to make existing procedures machine-friendly. That often means you need to change almost everything!
StackStorm (aka st2, https://stackstorm.com/) is an open source IFTTT-ish middleware that ships with a powerful workflow engine and a unique feature called "inquiries".
I'll focus on the workflow engine functionality of st2 and show how it can ease the automation of day-to-day tasks. The example in this presentation is an actual workflow we use at JPNAP, a real-world IXP operation.
The document discusses TheSchwartz, an open-source queueing system for reliably processing asynchronous jobs. It describes problems with initial approaches like single daemon queues and multiple daemon solutions. TheSchwartz uses a database to store jobs and worker processes to reliably process them. Workers can handle one or more job types and jobs are pulled from the database to be processed. TheSchwartz has been used to process over 100 jobs per second for LiveJournal.
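TheSchwartz's core idea, jobs stored in a database and claimed by worker processes, can be sketched with sqlite3. This is a concept demo only: the table layout and function names below are invented and are not TheSchwartz's actual schema or API.

```python
import sqlite3

def make_queue():
    """Create an in-memory job table (illustrative schema)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE job (id INTEGER PRIMARY KEY,"
               " funcname TEXT, arg TEXT, grabbed INTEGER DEFAULT 0)")
    return db

def insert_job(db, funcname, arg):
    """Enqueue work by inserting a row; any worker for `funcname` may take it."""
    db.execute("INSERT INTO job (funcname, arg) VALUES (?, ?)", (funcname, arg))
    db.commit()

def grab_job(db, funcname):
    """Claim one pending job; the grabbed flag keeps workers from sharing it."""
    cur = db.execute("SELECT id, arg FROM job"
                     " WHERE funcname = ? AND grabbed = 0"
                     " ORDER BY id LIMIT 1", (funcname,))
    row = cur.fetchone()
    if row is None:
        return None
    job_id, arg = row
    db.execute("UPDATE job SET grabbed = 1 WHERE id = ?", (job_id,))
    db.commit()
    return (job_id, arg)
```

A production queue additionally needs what TheSchwartz provides and this sketch omits: atomic claiming across concurrent workers, retry with backoff on failure, and deleting jobs only after successful completion.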
The 5 most common reasons for a slow WordPress site and how to fix them – ext... - Otto Kekäläinen
Presentation given in WP Meetup in October 2019.
Includes fresh new tips from summer/fall 2019!
A Must read for all WordPress site owners and developers.
Roy Foubister (Hosting high traffic sites on a tight budget) - WordCamp Cape Town
The document discusses optimizing a server to handle high traffic loads on a tight budget. It describes how the default LAMP stack configuration is not adequate and leads to crashes under load. It then details several optimizations tried: increasing Apache and MySQL configuration limits, using Apache worker mode, and adding OPcache plus object caching with W3 Total Cache, which improved performance by 500%. It also recommends splitting static and dynamic content using Nginx to further reduce load on Apache. With these optimizations, a single server could reliably handle the load.
Massively Scaled High Performance Web Services with PHP - Demin Yin
Over the years, people have questioned if PHP is a good choice for building web services. In this talk, I will share how we use PHP on the backend for Glu Mobile’s flagship mobile game Design Home, enabling it to regularly rank amongst the top free mobile games in the Apple App Store and the Google Play Store. We will deep dive into the thought processes, development, testing, and deployment strategy, showcasing what we have achieved with PHP.
The document provides recommendations for optimizing performance of high traffic web applications, including tuning Apache settings like MaxClients, enabling caching and compression, optimizing MySQL settings like query caching and indexing, improving PHP configurations for errors, sessions and uploads, and using tools to monitor and test performance. It also outlines best practices for page loading like reducing HTTP requests and moving scripts to the bottom.
Puppet Camp NYC 2014: Build a Modern Infrastructure in 45 min! - Puppet
The document describes how to build a modern infrastructure using Puppet modules. It discusses setting up MCollective for orchestration, Sensu for monitoring, Logstash for logging, and Jenkins for continuous integration. A Puppet module called moderninfra is demonstrated that defines the architecture and installs/configures all of the required components including RabbitMQ, Elasticsearch, and Kibana. The full infrastructure can then be built out across multiple nodes by writing Hiera data and node definitions.
Nagios Conference 2011 - Nate Broderick - Nagios XI Large Implementation Tips...Nagios
Nate Broderick's presentation on Nagios XI large implementation tips and tricks. The presentation was given during the Nagios World Conference North America held Sept 27-29th, 2011 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
Rishabh Dixit is a DevOps Engineer with over 2 years of experience in build engineering, build management, software configuration management, and process automation. He has skills in technologies like Git, Jira, Jenkins, Apache, Tomcat, PostgreSQL, RabbitMQ, Elasticsearch, and Linux. Some of his responsibilities include setting up continuous integration pipelines, managing Jenkins nodes, PostgreSQL administration, Tomcat configuration, load balancing with Nginx and HAProxy, application monitoring with New Relic and PagerDuty, and release support. He is looking to leverage his skills and experience in a challenging DevOps role.
As organizations assess the security of their information systems, the need for automation has become more and more apparent. Not only are organizations attempting to automate their assessments, the need is becoming more pressing to perform assessments centrally against large numbers of enterprise systems. Penetration testers can use this automation to make their post-exploitation efforts more thorough, repeatable, and efficient. Defenders need to understand the techniques attackers are using once an initial compromise has occurred so they can build defenses to stop the attacks. Microsoft's PowerShell scripting language has become the defacto standard for many organizations looking to perform this level of distributed automation. In this presentation James Tarala, of Enclave Security, will describe to students the enterprise capabilities PowerShell offers and show practical examples of how PowerShell can be used to perform large scale penetration tests of Microsoft Windows systems.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
3. Custom Slack Integration ‘slacktee’
Bash script which works like the 'tee' command.
You can integrate any scripts/commands with Slack without any development.
[Diagram: ‘tee’ command vs. ‘slacktee’ command]
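The analogy can be sketched in the shell. Only the `tee` half is executable everywhere; the `slacktee.sh` line assumes an installed, configured slacktee and is shown commented out:

```shell
# 'tee' reads stdin and writes it both to stdout and to a file:
echo "backup finished" | tee backup.log

# 'slacktee' has the same pipe-friendly shape, but instead of a file
# it posts the input to a Slack channel (requires a configured slacktee.sh):
#   echo "backup finished" | slacktee.sh
```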
4. Why we created ‘slacktee’
Two reasons:
1. Wanted to see the result of backend tasks on Slack, because we love Slack
2. Most of the tasks are small or one-time, so we didn’t want to spend time implementing a Slack integration for each task
6. Use Case 1 : Notify errors
Sometimes our database replication is backed up due to long-running queries
from our analytics team.
To detect it, we wrote a tiny script to monitor the replication status.
> php replication_checker.php prod_dbbi
prod_dbbi is 4999 seconds behind Master
>
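The actual replication_checker.php isn’t shown in the slides; a hypothetical shell sketch of its threshold logic might look like this (the threshold, host name, and the way the lag value is obtained are all assumptions — in practice the lag would come from `SHOW SLAVE STATUS`):

```shell
#!/bin/bash
# Hypothetical sketch of a replication check (NOT the real
# replication_checker.php). Prints a warning when slave lag
# exceeds a threshold; prints nothing otherwise.
THRESHOLD=300  # seconds of acceptable lag (illustrative)

check_lag() {
  local host=$1
  local lag=$2  # in practice, parsed from Seconds_Behind_Master
  if [ "$lag" -gt "$THRESHOLD" ]; then
    echo "$host is $lag seconds behind Master"
  fi
}

check_lag prod_dbbi 4999
```

Because the sketch only prints on a problem, piping it to slacktee would keep the channel quiet when replication is healthy.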
7. Use Case 1 : Notify errors (Continued)
Before ‘slacktee’, we sent notifications through email
Problem:
Difficult to notice
php replication_checker.php prod_dbbi | mail -s 'dbbi replication' ckato@coursehero.com
8. Use Case 1 : Notify errors (Continued)
With ‘slacktee’, we can send a notification on Slack
php replication_checker.php prod_dbbi |
slacktee.sh -a "danger" -c "devops" -u "dbbi replication" -i "siren"
Problem solved!
- Easy to notice
- Custom notification setting allows us to send the notification to mobile
Attachment with ‘danger’ color
Send to #devops channel
Use this username for posting
Use :siren: emoji for icon
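To run a check like this unattended, the same pipeline can go on a schedule. This crontab fragment is purely illustrative (the path and interval are assumptions):

```shell
# Illustrative crontab entry: run the replication check every 5 minutes
# and post any output to #devops via slacktee (path is an assumption):
# */5 * * * * php /opt/scripts/replication_checker.php prod_dbbi | slacktee.sh -a "danger" -c "devops" -u "dbbi replication" -i "siren"
```

If the checker prints only when something is wrong, Slack stays quiet the rest of the time.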
9. Use Case 2 : Check the progress of a long-running script
One day, we executed a script which fixed missing data in the database. Since
the script processed a lot of documents, it took almost a day to finish.
We executed the script in the background using the 'screen' command, but we
needed to check its progress periodically and monitor for errors.
> php 2016_02_10_update_cfw_doc_pages.php
Script starts at 2016-03-02 12:50:44
There are 53014 missing records that need to be updated
0 records updated
1000 records updated
2000 records updated
3000 records updated
4000 records updated
5000 records updated
Takes 20 - 30 mins
10. Use Case 2 : Check the progress of a long-running script (Continued)
Before ‘slacktee’, we had to log in to the server and reattach the screen session each time
Problem :
- VPNing into the server and attaching the screen session to check progress is tedious
- Impossible to notice errors immediately
11. Use Case 2 : Check the progress of a long-running script (Continued)
With ‘slacktee’, we can see the progress on Slack
Problem solved!
- No server login required
- Real time monitoring
php 2016_02_10_update_cfw_doc_pages.php 2>&1 |
slacktee.sh -u "2016_02_10_update_cfw_doc_pages.php" -n
Use this username for posting
No-buffering mode
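The effect of the `2>&1` redirection can be sketched with a stand-in for the long-running script. The `long_job` function here is illustrative, and plain `cat` stands in for `slacktee.sh` so the sketch runs anywhere:

```shell
# Illustrative stand-in for the long-running script: progress goes to
# stdout, errors to stderr. '2>&1' merges both streams into the pipe,
# so errors reach the consumer (in production, slacktee) as well.
long_job() {
  echo "1000 records updated"
  echo "2000 records updated"
  echo "simulated failure" >&2
}

long_job 2>&1 | cat   # in production: long_job 2>&1 | slacktee.sh -n
```

Without `2>&1`, the "simulated failure" line would bypass the pipe and never reach Slack.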
12. Use Case 3 : Download the result from the server
To investigate permission issues, we executed a 'find' command on our NFS
server and listed the files which have the wrong owner.
Since many millions of files are stored in our NFS, the output of the command
was huge, and we needed to download it to our local PC to check it.
> find . -type f ! -user apache | tee ~/result.txt
./00005063699b18d149beced28b35a7ad70bde9a9.txt
./00003e3f0345bd63cbcdb502b08cb77722112dbd.txt
./0000a7d5cb15b4188232633e798f638df2c33e07-1.txt
./000088e3ca537bef76df9c8d3aea9cb045e65f46.txt
./0000cc6fd2248580cde4a3f4e1ed306812e17b4c-2.txt
./000016f7c58a8802cbbb07a9033b0da41660d4b4.txt
./000088e3ca537bef76df9c8d3aea9cb045e65f46-0.txt
./00006329498e0bb6b4d49019b84d6fd3cf5d9cdb-0.txt
./0000e1cbc26144fcf8d44ba6eb4881affd447a2f-2.txt
About 100 MB
13. Use Case 3 : Download the result from the server (Continued)
Before ‘slacktee’, we had to use ‘scp’ to download the file from the server
Problem :
- Using ‘scp’ is not a pleasant experience (maybe that’s just me)
- Sharing results with colleagues is not easy
14. Use Case 3 : Download the result from the server (Continued)
With ‘slacktee’, we can download it from Slack
find . -type f ! -user apache | slacktee.sh -u 'Wrong owner check' -f
Use this username for posting
File upload mode
Problem solved!
- No ‘scp’ required
- Easy to share
* Since Incoming Webhooks don’t support file upload, ‘slacktee’ uses the user’s token for uploading.
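In practice this means slacktee needs both a webhook URL and a user token on hand for file uploads. Its configuration lives in a file such as `~/.slacktee`; this fragment is only a sketch, and the exact field names may differ by slacktee version, so check the slacktee README on GitHub:

```shell
# Sketch of a ~/.slacktee configuration (field names may vary by version):
webhook_url="https://hooks.slack.com/services/XXX/YYY/ZZZ"  # used for normal posts
upload_token="xoxp-your-user-token"                         # used only by -f (file upload)
channel="devops"
username="slacktee"
icon="slack"
```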
15. Conclusion
Slack integration and ‘slacktee’ have really helped us.
1. More information is available on Slack
2. Non-technical people can get insight from the results too
If you have a script, pipe it to Slack now!
1. Google ‘slacktee’
Easiest way to find ‘slacktee’
2. Click 1st link in the result
(15 - 30 sec)
Before talking about what it means, let me introduce myself and my company ‘Course Hero’ briefly.
I’m Chuck Kato, Engineering Manager at Course Hero.
Course Hero is a mid-size EdTech startup. We provide crowdsourced study materials and an online tutoring platform.
Our engineering team started using Slack in August 2014, and it quickly became our core communication tool.
(30 sec - 1 min)
Today, I’m introducing our open source custom slack integration ‘slacktee’.
‘slacktee’ is a bash script which works like the 'tee' command.
As you may know, the 'tee' command reads standard input and writes it to standard output and one or more files.
Instead of writing to files, 'slacktee' posts the input to a Slack channel.
So, basically, you can pipe any scripts/commands to Slack without any additional development!
[Show very quick demo]
> echo "Hello everyone!" | slacktee.sh
> ls -l | slacktee.sh
Easy, isn’t it?
(30 sec)
At Course Hero, we execute a bunch of scripts and commands every day to accomplish backend tasks.
Since Slack is our core communication tool, we wanted to see the result of tasks on Slack.
However, even though implementing a custom Slack integration is easy, most of our backend tasks were small or one-time, so we couldn’t justify the development cost.
For these two reasons, we created 'slacktee'.
If you have the same dilemma, ‘slacktee’ should work for you too.
(5 - 10 sec)
Actually, 'slacktee' changed the way we monitor backend tasks.
I'd like to show you 3 real life use cases today.
(30 sec)
The first use case is error notification.
We maintain multiple replication slave databases, but sometimes replication gets backed up due to long-running queries from our analytics team.
To detect this, we wrote a tiny script to monitor the replication status.
Here is an example of the output.
(30 sec)
To send a backup notification to the engineering team, we used to use email.
However, as you can see, it’s difficult to notice, and we often overlooked it.
Also, I personally don’t check my inbox frequently, so I couldn’t notice it in a timely manner.
But, ‘slacktee’ solved this problem.
(30 sec)
With ‘slacktee’, we can send the notification on Slack. Here is the command and example of the notification.
To make the notification prominent, we added a few options, but basically we just piped to ‘slacktee’ instead of the ‘mail’ command.
As you can see, it stands out and it’s difficult to overlook.
Also, the notification is sent to mobile too, if we set up custom notification.
(30 sec)
Let’s move on to the 2nd use case.
One day, we executed a script which fixed missing data in the database.
But, the script was slow and took almost a day to finish.
So, we executed the script in the background using the ‘screen’ command, but we needed to check its progress periodically and monitor for errors.
(30 sec)
Before ‘slacktee’, we had to log in to the server and attach the screen session each time we checked the progress.
To access the server, we had to establish a VPN session with two-factor authentication.
This monitoring process was really tedious and not mobile friendly.
Also, it was impossible to notice errors immediately, because there was no way to see them until we accessed the server.
Actually, ‘slacktee’ solved these problems.
(30 sec)
With the ‘no buffering’ option, we can send the input to Slack line by line.
So we can see the progress on Slack in real time without logging into the server.
Since we redirect standard error to standard output here, we can also notice errors immediately.
This was an amazing improvement for us.
(30 sec)
This is the last use case I’m explaining today.
To fix permission issues, we recently executed a ‘find’ command on our NFS server and listed the files which have the wrong owner.
Since many millions of files are stored in our NFS, the output of the command was huge, and we needed to download it to our local PC to check it.
(30 sec)
Before ‘slacktee’, we needed to ‘scp’ the file from the server.
But frankly speaking, using ‘scp’ is not a pleasant experience.
We can’t use path auto-completion, so we have to remember the target file’s path exactly.
Also, if the file was created by a different user, we have to change its permissions on the server side first.
‘scp’ is a necessary tool, but I’m not a big fan of it.
And after we downloaded the file, we needed to share it with other members.
So we had to email it or upload it somewhere else, such as Slack. Yeah, Slack.
(30 sec)
Actually, these problems can be solved with ‘slacktee’.
By using slacktee’s file upload mode, we can upload the result to Slack directly.
Now we don’t have to use ‘scp’ to download the result file from the server; we just click the download link on Slack.
Also, if you’d like to share it with your colleagues, simply share the link or the post itself!
By the way, slacktee uses an Incoming Webhook to post messages to Slack, but Incoming Webhooks don’t support file upload.
So, in file upload mode, slacktee uses the user’s token for uploading.
(30 sec)
Slack integration and ‘slacktee’ have really helped us.
After creating ‘slacktee’, more information is available on Slack, and it makes our daily jobs much easier.
Also, non-technical people can get insight from the results.
Sometimes they point out issues; sometimes the information inspires new ideas.
This was a positive surprise for us engineers.
I believe these things will happen in your team too.
So, if you have a script, pipe it to Slack now.