Need a web server? So did I. But setting everything up by hand is tedious. In this talk, you'll see how to build a load-balanced web server using Ansible, Terraform, and DigitalOcean, a cloud provider aimed at developers. First, you'll see how to build out the servers and load balancer, and then you'll see how to use Ansible playbooks to install the software and upload the web site. When we're done, you'll have scripts you can run to set up your own environment.
This beginner Terraform workshop will teach you how to safely create and provision Infrastructure as Code (IaC) using HashiCorp Terraform in an AWS environment. In this class you will learn how to set up and install Terraform, and you will get a walkthrough of Terraform fundamentals. You will be led through the process of deploying a single server, deploying a cluster, and setting up a load balancer. You will also learn how to author Terraform modules, work with Route53, and manage DNS.
Requirements: you will need an AWS account already set up, Terraform v0.9.3 installed, and Git installed to download the workshop material.
You can find more information on how to install Terraform here: https://www.terraform.io/intro/getting-started/install.html. You can sign up for an AWS account here: https://aws.amazon.com/account/
https://github.com/jasonvance/terraform-introduction
How to test infrastructure code: automated testing for Terraform, Kubernetes,... - Yevgeniy Brikman
This talk is a step-by-step, live-coding class on how to write automated tests for infrastructure code, including the code you write for use with tools such as Terraform, Kubernetes, Docker, and Packer. Topics covered include unit tests, integration tests, end-to-end tests, test parallelism, retries, error handling, static analysis, and more.
Using HashiCorp’s Terraform to build your infrastructure on AWS - Pop-up Loft... - Amazon Web Services
Using Terraform to automate your infrastructure on AWS: what Terraform is, how it differs from Ansible, and how to control cloud deployments with it.
An introduction to Terraform, a tool that helps you deploy and change your infrastructure as code. Given at Rencontres Mondiales du Logiciel Libre (RMLL) 2017
WinOps Conference London 2017 session
Public cloud IaaS vs. traditional on-prem, and how HashiCorp Terraform is a great tool to configure Azure. Recorded here: https://www.youtube.com/watch?v=LDZXRBBuXCU
Zero Code Multi-Cloud Automation with Ansible and Terraform - Avi Networks
Does your automation require more or less work? Avi's take is less. That’s why Avi offers zero-code multi-cloud automation for Day 0 and Day 1+. DevOps and IT teams can achieve self-service application and infrastructure resources provisioning (Day 0) without writing custom scripts per app or per cloud. We will walk through how to leverage Ansible and Terraform to automate tasks throughout the lifecycle of an application (Day 1+) using YAML-based declarative configurations.
Learn how to:
- Achieve efficient, repeatable, and automated app provisioning without writing code
- Use Ansible roles and modules or Terraform providers to easily automate common tasks
- Deploy across multi-cloud environments with consistent experience without customizations
- Gain visibility into network topology and app performance
- Apply closed-loop analytics to drive automation
Watch the full webinar: https://info.avinetworks.com/webinars-ansible-and-terraform-recipes
● Fundamentals
● Key Components
● Best practices
● Spring Boot REST API Deployment
● CI with Ansible
● Ansible for AWS
● Provisioning a Docker Host
● Docker & Ansible
https://github.com/maaydin/ansible-tutorial
Kubernetes Architecture - beyond a black box - Part 1 - Hao H. Zhang
This is part 1 of my Kubernetes architecture deep-dive slide series.
I have been working with Kubernetes for more than a year, from v1.3.6 to v1.6.7, and I am a CNCF-certified Kubernetes administrator. Before I move on to something else, I would like to summarize and share my knowledge and takeaways about Kubernetes from a software engineer's perspective.
This set of slides is a humble dig into one level below your running application in production, revealing how different components of Kubernetes work together to orchestrate containers and present your applications to the rest of the world.
The slides contain 80+ external links to Kubernetes documentation, blog posts, GitHub issues, discussions, design proposals, pull requests, papers, and source code files I went through while working with Kubernetes - which I think are valuable for understanding how Kubernetes works, its design philosophies, and why these designs came about.
Lessons learned from writing over 300,000 lines of infrastructure code - Yevgeniy Brikman
This talk is a concise masterclass on how to write infrastructure code. I share key lessons from the “Infrastructure Cookbook” we developed at Gruntwork while creating and maintaining a library of over 300,000 lines of infrastructure code that’s used in production by hundreds of companies. Come and hear our war stories, laugh about all the mistakes we’ve made along the way, and learn what Terraform, Packer, Docker, and Go look like in the wild.
Microservices, Kubernetes and Istio - A Great Fit! - Animesh Singh
Microservices and containers are now influencing application design and deployment patterns. Sixty percent of all new applications will use cloud-enabled continuous delivery microservice architectures and containers. Service discovery, registration, and routing are fundamental tenets of microservices. Kubernetes provides a platform for running microservices. Kubernetes can be used to automate the deployment of Microservices and leverage features such as Kube-DNS, Config Maps, and Ingress service for managing those microservices. This configuration works fine for deployments up to a certain size. However, with complex deployments consisting of a large fleet of microservices, additional features are required to augment Kubernetes.
Best Practices of Infrastructure as Code with Terraform - DevOps.com
When your organization is moving to the cloud, the infrastructure layer transitions from running dedicated servers at limited scale to a dynamic environment, where you can easily adjust to growing demand by spinning up thousands of servers and scaling them down when not in use.
The future of DevOps is infrastructure as code. Infrastructure as code supports the growth of infrastructure and provisioning requests. It treats infrastructure as software: code that can be re-used, tested, automated and version controlled. HashiCorp Terraform adopts infrastructure as code throughout its tool to prevent configuration drift, manage immutable infrastructure and much more!
Join this webinar to learn why Infrastructure as Code is the answer to managing large scale, distributed systems and service-oriented architectures. We will cover key use cases, a demo of how to use Infrastructure as Code to provision your infrastructure and more:
Agenda:
Intro to Infrastructure as Code: Challenges & Use cases
Writing Infrastructure as Code with Terraform
Collaborating with Teams on Infrastructure
Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.
A comprehensive walkthrough of how to manage infrastructure-as-code using Terraform. This presentation includes an introduction to Terraform, a discussion of how to manage Terraform state, how to use Terraform modules, an overview of best practices (e.g. isolation, versioning, loops, if-statements), and a list of gotchas to look out for.
For a written and more in-depth version of this presentation, check out the "Comprehensive Guide to Terraform" blog post series: https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
While many organizations have started to automate their software development processes, many still engineer their infrastructure largely by hand. Treating your infrastructure just like any other piece of code creates a “programmable infrastructure” that allows you to take full advantage of the scalability and reliability of the AWS cloud. This session will walk through practical examples of how AWS customers have merged infrastructure configuration with application code to create application-specific infrastructure and a truly unified development lifecycle. You will learn how AWS customers have leveraged tools like CloudFormation, orchestration engines, and source control systems to enable their applications to take full advantage of the scalability and reliability of the AWS cloud, create self-reliant applications, and easily recover when things go seriously wrong with their infrastructure.
This presentation is an introduction to the IT automation environment from a sysadmin's point of view.
The purpose of these tools is to help with troubleshooting and managing a heterogeneous IT environment to ensure availability and reliability.
Deploying your Rails application to a clean Ubuntu 10 - Maurício Linhares
Learn how to configure a new Ubuntu 10.04 machine to run your Rails application with Nginx and Unicorn in a simple way, including security setup and Monit monitoring.
5/13/13 presentation to the Austin DevOps Meetup Group, describing our system for deploying 15 websites and supporting services in multiple languages to bare Red Hat 6 VMs. All system-wide software is installed using RPMs, and all application software is installed using Git or tarballs.
How I Learned to Stop Worrying and Love the Cloud - Wesley Beary, Engine Yard - SV Ruby on Rails Meetup
Wesley Beary: Cloud computing scared the crap out of me - the quirks and nightmares of provisioning computing and storage on AWS, Terremark, Rackspace, etc. - until I took the bull by the horns. Let me now show you how I tamed that bull.
Learn how to easily get started cloud computing with fog. It gives you the reins within any Ruby application or script. If you can control your infrastructure choices, you can make better choices in development and get what you need in production.
You'll get an overview of fog and concrete examples to give you a head start on your own provisioning workflow.
fog or: How I Learned to Stop Worrying and Love the Cloud - Wesley Beary
Learn how to easily get started on cloud computing with fog. If you can control your infrastructure choices, you’ll make better choices in development and get what you need in production. You'll get an overview of fog and concrete examples to give you a head start on your provisioning workflow.
Creating and Deploying Static Sites with Hugo - Brian Hogan
Most web sites don’t have data that changes, so why power them with a database and take the performance hit? In this talk we’ll explore static site generation using Hugo, an open-source static site generator. You’ll learn how to make a master layout for all pages, and how to use Markdown to create your content pages quickly.
Then we’ll explore how to deploy the site we made to production. We’ll automate the entire process. When you’re done, you’ll be able to build and deploy static web sites quickly with minimal tooling.
Create Development and Production Environments with Vagrant - Brian Hogan
Need a Linux box to test a Wordpress site or a Windows VM to test a web site on IE 10? Creating a virtual machine to test or deploy your software doesn’t have to be a manual process. Bring one up in seconds with Vagrant, software for creating and managing virtual machines. With Vagrant, you can bring up a new virtual machine with the software you need, share directories, copy files, and configure networking using a friendly DSL. You can even use shell scripts or more powerful provisioning tools to set up your software and install your apps. Whether you need a Windows machine for testing an app, or a full-blown production environment for your apps, Vagrant has you covered.
In this talk you’ll learn to script the creation of multiple local virtual machines. Then you’ll use the same strategy to provision production servers in the cloud.
I work with Vagrant, Terraform, Docker, and other provisioning systems daily and am excited to show others how to bring this into their own workflows.
Docker is an amazing tool, but unless you work with it every day, you're probably left with a ton of questions. What's a container? What's an image? What's the difference between Docker, Machine, Compose, and Swarm? Why the heck should I care? Well, Docker makes it easier than ever to deploy and scale your applications and services. In addition, it lets you simulate your production environment on your local machine without heavy virtual machines. In this talk, we'll explore the basics of Docker, create a custom image for a web application, create a group of containers, and look at how you can put your apps into production on various cloud providers. At the end of the talk, you'll have the knowledge you need to put this to use with your own applications.
Come explore Elm, a functional programming language for making web things. Elm aims to solve some of the same problems that Ember, React, and Angular 2 solve, but in a radically different way. Strong and static typing ensures that data you pass around in your apps really is what you think it is. A simple and tried-and-true architecture makes it easy to understand, and great tooling makes it fun to use.
If you've ever looked into how to create Gems, you've probably seen a bunch of ways to do that. Project generators like Hoe, Jeweler, and the like offer some nice ways to get started, but they may often be overkill for many projects. If you're just starting out, why not learn to do it from scratch?
In this talk, we'll create our own gem from scratch, using only things that are provided by Ruby, its standard library, and RubyGems to craft a simple gem.
You'll learn how to set up a project, how to write and run tests, how to use Rake to quickly build the gem, and even how to create a gem that installs an executable command-line program.
Intro talks never let you learn about the things that make a language truly cool. In this talk, we'll discover how advanced features of Ruby help us write cleaner, more modular code.
Web Development With Ruby - From Simple To Complex - Brian Hogan
Beyond the massive hype of Ruby on Rails, there's an amazing world of frameworks, DSLs, and libraries that make the Ruby language a compelling choice when working on the web. In this talk, you'll get a chance to see how to use Ruby to quickly build a static web site, create complex stylesheets with ease, build a simple web service, create a simple WebSocket server, and test your existing applications. Finally, you'll see a few of the ways Rails really can make developing complex applications easier, from advanced database querying to rendering views in multiple formats.
Stop Reinventing The Wheel - The Ruby Standard Library - Brian Hogan
My talk from Ruby Hoedown MMX. We talked about the Ruby standard library and how sometimes we reinvent things when we have perfectly good tools waiting for us to use them.
3. The Plan
— Introduce Immutable Infrastructure
— Create a Server with Terraform
— Provision the Server with Ansible
— Add Another Server and a Load Balancer
— Review
4. Disclosure and Disclaimer
I am using DigitalOcean in this talk. I work for them.
They're cool.
Want $10 of credit? https://m.do.co/c/1239feef68ae
Also we're hiring. http://do.co/careers
5. If you want to argue or make statements, I'll happily engage
with you after the talk in exchange for a beer
Rules
— This is based on my personal experience.
— If I go too fast, or I made a mistake, speak up.
— Ask questions any time.
— If you want to argue, buy me a beer later.
7. Changing existing servers in production results in servers
that aren't quite the same
This includes security updates! These changes result in
problems that are hard to diagnose and reproduce.
Snowflake servers and Configuration Drift
"Each server becomes unique"
— Software updates
— Security patches
— Newer versions installed on some servers
8. Infrastructure as Code
Rotate machines in and out of service.
— Create processes to create new servers quickly
— Use code to destroy them and replace them when
they are out of date.
10. A base setup with some things preconfigured. Your cloud provider has them or you can
make your own. The more complex your image is, the more testing you'll need to do and
the more time it'll take to bring up a new box.
Base Images
— Ready-to-go base OS with user accounts and
services
— Barebones.
— Keep it low-maintenance.
11. Terraform
— Tool to Create and Destroy infrastructure
components.
— Uses "providers" to talk to cloud services
— Define resources with code
— Provider to use, image, size, etc
16. Demo: Create Server with Terraform
— Set up Terraform
— Configure and Install the DigitalOcean provider
— Create a host
17. Set up Terraform
$ mkdir cloud_tutorial
$ cd cloud_tutorial
$ touch provider.tf
18. We have two pieces of data we need to inject. Our DO API key and our fingerprint.
Set environment variables so you keep sensitive info out of your code and scripts.
Environment Variables
API key
$ echo 'export DO_API_KEY=your_digitalocean_api_token' >> ~/.bashrc
Fingerprint
$ echo 'export SSH_FINGERPRINT=your_ssh_key_fingerprint' >> ~/.bashrc
Make sure they saved!
$ . ~/.bashrc
$ echo $DO_API_KEY
$ echo $SSH_FINGERPRINT
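Slide 17 creates provider.tf, but its contents never appear in this deck. A minimal sketch of what it likely holds - assuming the 2017-era DigitalOcean provider and the variable names used with -var on later slides:
# provider.tf - a sketch; the variable names match the -var flags used later
variable "do_api_key" {}
variable "ssh_fingerprint" {}

provider "digitalocean" {
  token = "${var.do_api_key}"
}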
20. Install provider
$ terraform init
Initializing provider plugins...
- Checking for available provider plugins
on https://releases.hashicorp.com...
- Downloading plugin for provider
"digitalocean" (0.1.3)...
21. Define a server
touch web-1.tf
resource "digitalocean_droplet" "web-1" {
image = "ubuntu-16-04-x64"
name = "web-1"
region = "nyc3"
monitoring = true
size = "1gb"
ssh_keys = [
"${var.ssh_fingerprint}"
]
}
output "web-1-address" {
value = "${digitalocean_droplet.web-1.ipv4_address}"
}
22. DigitalOcean's API lets you find the images and sizes available.
Get Images and Sizes from DigitalOcean API
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DO_API_KEY" \
  "https://api.digitalocean.com/v2/images"
Sizes
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DO_API_KEY" \
  "https://api.digitalocean.com/v2/sizes"
23. See what will happen
$ terraform plan \
  -var "do_api_key=${DO_API_KEY}" \
  -var "ssh_fingerprint=${SSH_FINGERPRINT}"
26. Ansible lets you define how things should be set up on your servers. It's designed to be idempotent, so you can run the
same script over and over. Ansible will only change what needs changing. If you have more than one machine, you can
run the commands on many machines at once. And you can define roles or use existing roles to add additional
functionality.
Provision Server with Ansible
— Idempotent machine setup
— Define how things should be, not necessarily what to
do
— Supports parallel execution
— Only needs SSH and Python on target machine
— Supports code reuse through roles
27. Provision with Ansible
— Create Ansible configuration
— Create a configuration file
— Create an inventory file listing your servers
— Define a "playbook" of tasks
— Run the playbook.
28. The Inventory
— Lists all the hosts Ansible should work with
— Lets you put them into groups
— Lets you specify per-host or per-group options
(keys, users, etc)
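Before running any playbook, it's worth confirming Ansible can reach the hosts at all. A one-off sanity check, assuming the inventory file built on slide 32:
# ad-hoc ping: verifies SSH connectivity and a usable Python on each host
$ ansible all -i inventory.txt -m ping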
30. Demo: Creating a Web Server with Ansible
— Create a deploy user
— Install Nginx
— Upload a Server Block (virtual host)
— Create web directory
— Enable server block
— Upload web page
31. Ansible connects to your servers using SSH and uses host key checking. When you first log in to a
remote machine with SSH, the SSH client app will ask if you want to add the server to your "known
hosts." If you have to rebuild your server, or add a new server, you'll get this prompt when Ansible
tries to connect. It's a nice security feature, but you should turn it off. Add this section to the new file:
By default, Ansible makes a new SSH connection for each command it runs. This is slow, and as your playbooks get larger, it will take more time. You can tell Ansible to share SSH connections using pipelining. However, this requires your servers to disable requiretty for sudo users.
Create ansible.cfg
touch ansible.cfg
[defaults]
host_key_checking = False
[ssh_connection]
pipelining = True
32. Ansible uses an inventory file to list out the servers. We're going to start with one. First we define a host called web-1 and assign it the IP address of our machine. We need to tell Ansible which private key file to use to connect to the server over SSH, and since we'll use the same one for all our servers, we'll create a group called servers. We put the web-1 host in the servers group, and then we create variables for the group. We're using Ubuntu 16, which only ships with Python 3. Ansible uses Python 2 by default, so we're telling Ansible to use Python 3 for all members of the servers group.
Creating an Inventory
touch inventory.txt
web-1 ansible_host=xx.xx.xx.xx
[servers]
web-1
[servers:vars]
ansible_private_key_file='/Users/your_username/.ssh/id_rsa'
ansible_python_interpreter=/usr/bin/python3
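The playbook the next slides build up doesn't get its own listing in this excerpt. A skeleton sketch, assuming the first play connects as root (the default user on a fresh Droplet):
# playbook.yml - this play header is an assumption
- hosts: all
  remote_user: root
  tasks:
    - name: Verify connectivity   # placeholder; slide 36's user task goes here
      ping: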
34. Adding a User
— Use the user module to add the user
— Can only use hashed passwords in playbooks
— Get a hashed password
35. Getting the password with Python
$ pip install passlib
$ python -c "from passlib.hash import sha512_crypt; import getpass; print sha512_crypt.using(rounds=5000).hash(getpass.getpass())"
(command taken shamelessly from the Ansible docs)
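That one-liner uses the Python 2 print statement. On Python 3, the same command needs print as a function:
$ python3 -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"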
36. This sets the username to deploy, sets the password, and adds the user to the sudo group. It also sets up the
shell. The append option says to add the new group, rather than replacing any existing groups. Finally, we're
telling Ansible not to ever change the password on subsequent runs. We want the state to be the same every
time. If we need to change the password, we'll provision a new server from scratch and decommission this one.
Task to Create User
The password is d3ploy
tasks:
- name: Add deploy user and add to sudoers
user:
name: deploy
password: $6$zsQNYitEkWYJzVYj$/6sa8XlOAbfWAtn2S7ww1ok.w1ipqQ1dfHY1Mlo6f9p/xFsp1sp0N9grxLyN6qMcnlvyx266vbPczJd0EacOC1
groups: sudo
append: true
shell: /bin/bash
update_password: on_create
37. Run the playbook
ansible-playbook -i inventory.txt playbook.yml
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [web-1]
TASK [Add deploy user and add to sudoers] **************************************
changed: [web-1]
PLAY RECAP *********************************************************************
web-1 : ok=2 changed=1 unreachable=0 failed=0
38. On DigitalOcean, once you upload a public key to your account, password logins are
disabled for all your users. The root user already gets your public key added, but subsequent
users need your public key too. Ansible has a module for uploading your public key to a user.
Add public key auth for user
- name: add public key for deploy user
authorized_key:
user: deploy
state: present
key: "{{ lookup('file', '/Users/your_username/.ssh/id_rsa.pub') }}"
39. Since the user is already there, Ansible won't try creating it again. But it will add the key:
Apply the change to the server
$ ansible-playbook -i inventory.txt playbook.yml
TASK [Add deploy user and add to sudoers] **************************************
ok: [web-1]
TASK [add public key for deploy user] ******************************************
changed: [web-1]
PLAY RECAP *********************************************************************
web-1 : ok=3 changed=1 unreachable=0 failed=0
40. Adding the Webserver Tasks
— Install package
— Update config file
— Create web directory
— Upload home page
41. We're creating another section in our file that sets a new remote user. Then we define
a new set of tasks, and define a task that uses the apt module. We then add
become: true to tell Ansible it should execute the command with sudo access.
Update Cache
- hosts: all
remote_user: deploy
gather_facts: false
tasks:
- name: Update apt cache
apt: update_cache=yes
become: true
42. In order to use sudo, you have to provide a password. Ansible is non-interactive, so if you try to run the playbook, it'll stall and then fail, saying no password was provided. You provide the password for sudo access by adding the --ask-become-pass flag.
Run Ansible and apply changes
ansible-playbook -i inventory.txt playbook.yml \
  --ask-become-pass
SUDO password:
PLAY [all] *********************************************************************
...
TASK [Update apt cache] ********************************************************
changed: [web-1]
...
43. Let's install the Nginx web server on our box and set up a new
default web site. Once again, use the apt module for this.
Installing Software
- name: Install nginx
apt:
name: nginx
state: installed
become: true
44. Now we'll create the new website directory by using the file module to create /var/www/example.com and make sure it's owned by the deploy user and group. This way we can manage the content in that directory as the deploy user rather than as the root user.
Create the Web Directory
- name: Create the web directory
file:
path: /var/www/example.com
state: directory
owner: deploy
group: deploy
become: true
45. We need to remove the default site.
Nginx on Ubuntu stores server block configuration files in the /etc/nginx/sites-available directory. When a site is enabled, a symbolic link to its file is created in /etc/nginx/sites-enabled. To disable a site, you remove the symlink from /etc/nginx/sites-enabled. This makes it easy to enable and disable configurations as needed.
Disabling the default web site
— Web site definitions are in /etc/nginx/sites-available
— Live sites are in /etc/nginx/sites-enabled
— Live sites are symlinks from sites-available to sites-enabled
— Remove the symlink to disable a site.
46. We're checking to see if there's no file in the destination. If
it's absent, we're good. If it's not, Ansible will remove it.
Task to remove the default site
- name: Disable `default` site
file:
src: /etc/nginx/sites-available/default
dest: /etc/nginx/sites-enabled/default
state: absent
notify: reload nginx
become: true
47. The notify directive lets us tell Ansible to fire a handler. A handler is a task that responds to events from other tasks. In this case, we're saying "we've dropped the default Nginx web site configuration, so reload Nginx's configuration to make the changes stick."
To make this work, we have to define the handler itself.
Handlers
notify: reload nginx
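The handler definition isn't reproduced on the slide. A sketch of what it looks like, using the service module - the handler's name must match the notify string exactly:
# handlers live alongside tasks in the play (later, in the role's handlers/main.yml)
handlers:
  - name: reload nginx
    service:
      name: nginx
      state: reloaded
    become: true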
50. Templates
— Local files we can upload to the server
— Can use variables to change their contents
— Uses the Jinja language
51. Creating the Server Block with a Template
touch site.conf
server {
    listen 80;
    listen [::]:80;
    root /var/www/example.com/;
    index index.html;
    server_name example.com;
    location / {
        try_files $uri $uri/ =404;
    }
}
52. This task uses the template module, which uploads the template to the location
on the server. Templates can have additional processing instructions which we'll
look at later. Right now we'll just upload the file as-is.
Upload the file to the server
- name: Upload the virtual host
template:
src: site.conf
dest: /etc/nginx/sites-available/example.com
become: true
53. Enable the new host
- name: Enable the new virtual host
file:
src: /etc/nginx/sites-available/example.com
dest: /etc/nginx/sites-enabled/example.com
state: link
become: true
notify: reload nginx
54. Make a home page
touch index.html
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="utf-8">
<title>Welcome</title>
</head>
<body>
<h1>Welcome to my web site</h1>
</body>
</html>
55. This time we don't use become: true because we want the file owned by the deploy
user, and we've already made sure the /var/www/example.com directory is owned by
the deploy user.
Upload the file
- name: Upload the home page
template:
src: index.html
dest: /var/www/example.com
58. The tasks folder contains the task definitions. The handlers folder contains the
definitions for our handlers, and the templates folder holds our template files.
Create this structure:
Anatomy of a Role
▾ role_name/
▾ handlers/
main.yml
▾ tasks/
main.yml
▾ templates/
some_template.j2
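Inside a role, tasks/main.yml holds the bare task list - no play header and no tasks: key. A sketch of how the Nginx install task from slide 43 looks once it moves in:
# roles/website/tasks/main.yml - just the list of tasks
- name: Install nginx
  apt:
    name: nginx
    state: installed
  become: true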
59. Create a role for our server
— Create website role
— Move tasks, handlers, and templates out of our
playbook
— Add the role to the playbook
60. Create the Role structure
$ mkdir -p roles/website/{handlers,tasks,templates}
$ touch roles/website/{handlers,tasks}/main.yml
$ mv {index.html,site.conf} roles/website/templates
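With the files moved into the role, the playbook shrinks to a role reference. A sketch, assuming the same play header as before:
# playbook.yml after the refactor
- hosts: all
  remote_user: deploy
  roles:
    - website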
67. We'll just clone the web-1 definition using sed and replace all occurrences of web-1 with web-2.
Create a web-2.tf file
sed -e 's/web-1/web-2/g' web-1.tf > web-2.tf
71. Add a Load Balancer
— Floating IP
— Two HAProxy or Nginx instances
— Each instance monitoring the other
— Each instance pointing to web-1 and web-2
OR
— DigitalOcean Load Balancer
72. We define the forwarding rule and a health check, and then
we specify the IDs of the Droplets we want to configure.
Add a DO Load Balancer with Terraform
touch loadbalancer.tf
resource "digitalocean_loadbalancer" "web-lb" {
name = "web-lb"
region = "nyc3"
forwarding_rule {
entry_port = 80
entry_protocol = "http"
target_port = 80
target_protocol = "http"
}
healthcheck {
port = 22
protocol = "tcp"
}
droplet_ids = ["${digitalocean_droplet.web-1.id}","${digitalocean_droplet.web-2.id}" ]
}
73. Show Load Balancer IP
loadbalancer.tf
output "web-lb-address" {
value = "${digitalocean_loadbalancer.web-lb.ip}"
}
74. Apply!
$ terraform apply \
  -var "do_api_key=${DO_API_KEY}" \
  -var "ssh_fingerprint=${SSH_FINGERPRINT}"
...
Outputs:
web-1-address = xx.xx.xx.xx
web-2-address = xx.xx.xx.yy
web-lb-address = xx.xx.xx.zz
And you have your infrastructure.
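A quick smoke test once the outputs appear - a sketch, assuming the home page from slide 54 is live behind the load balancer (terraform output prints the raw IP):
$ curl -s "http://$(terraform output web-lb-address)"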
78. Going Forward
— Add more .tf files for your infra
— Add them to your loadbalancer.tf file
— Add new IPs to Inventory
— Provision them with Ansible
— Remove old hosts from loadbalancer when you make config
changes or need security patches
— Investigate Ansible variables to handle domains, user
accounts, passwords, etc.
— Add new IPs to the inventory automatically using Terraform's provisioners (see the sketch below)
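A sketch of that last bullet - assuming a hypothetical web-3.tf with a local-exec provisioner that appends the new Droplet's IP to the inventory as it's created:
# web-3.tf - hypothetical; local-exec runs on the machine running Terraform
resource "digitalocean_droplet" "web-3" {
  image      = "ubuntu-16-04-x64"
  name       = "web-3"
  region     = "nyc3"
  monitoring = true
  size       = "1gb"
  ssh_keys   = ["${var.ssh_fingerprint}"]

  provisioner "local-exec" {
    command = "echo 'web-3 ansible_host=${self.ipv4_address}' >> inventory.txt"
  }
}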
79. Things I learned
— Using other people's Ansible roles is awful
— Build everything from scratch and read the docs
— Ansible module docs are great... if you know what
module you need.
— StackOverflow is full of deprecated syntax. Use the
Ansible Docs!
— Don't be clever. Be explicit. DRY rule isn't always
preferred. Or good.