Node.js is a server-side JavaScript platform for building scalable network applications. It uses non-blocking I/O and event-driven architecture, which makes it very efficient for data-intensive real-time applications that run across distributed devices. Some key features of Node.js include CommonJS modules, child processes, HTTP servers, TCP servers, DNS lookups, file watching and a package management system. Popular applications built with Node.js include web frameworks, real-time applications, crawlers and streaming.
Node.js is a platform for building scalable network applications. It uses Google's V8 JavaScript engine and a non-blocking I/O model. Some key points:
- Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, especially for real-time applications (see the sketch after these points).
- It has a large ecosystem of open source modules. Popular frameworks include Express and Fab.
- While Node.js is very fast for I/O operations, memory usage can grow quickly and scaling to multiple cores requires multiple processes.
- The author argues Node.js is suitable for single-page apps, real-time applications, and crawlers, but less so for CPU-bound workloads.
- Node.js is a platform for building scalable network applications. It uses non-blocking I/O and event-driven architecture to handle many connections concurrently using a single-threaded event loop.
- Node.js uses Google's V8 JavaScript engine and provides a module system, I/O bindings, and common protocols to build network programs easily. Popular uses include real-time web applications, file uploading, and streaming.
- While Node.js is ready for many production uses, things like lost stack traces and limited ability to utilize multiple cores present challenges for some workloads. However, an active community provides support through mailing lists, IRC, and over 1,000 modules in its package manager.
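A minimal sketch of the event loop model described in these summaries, using Node's built-in http module (the port and message are arbitrary):

```typescript
// Minimal sketch of the event-driven model: a single-threaded HTTP server where
// each request is handled by a callback on the event loop, so no thread blocks
// while waiting for I/O.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello from the event loop\n");
});

server.listen(8080, () => {
  console.log("listening on :8080");
});
```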
Managing modern infrastructure presents many different challenges. While the main operational aspects of infrastructure, such as durability, availability, scalability, and security, are very important, there is also one aspect that should enable and support all the others: automation. Automation is a very abstract word, so the talk will briefly explain what benefits the IaC approach brings to the table and why configuration management (often driven by tools like Ansible, Puppet, Salt, Chef, etc.) is just one of many layers in an automated production infrastructure. Then we will walk through the main design goals of an open source IaC tool (Terraform) that enables users to write, plan, and apply changes to production infrastructure in Google Cloud, and explain how to do it.
https://devfest.gdg.org.ua/schedule/day1?sessionId=143
Demo: https://github.com/radeksimko/devfest-ua-2017-talk-demo
Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It uses non-blocking I/O and event-driven architecture, making it suitable for real-time applications with many concurrent connections. Key features include a module system, asynchronous I/O bindings, common network protocols like HTTP and TCP/IP, and a streaming API. Addons allow extending Node.js with native code modules written in C/C++ for additional functionality.
Node.js is a JavaScript runtime built on Chrome's V8 engine. It allows JavaScript to be run on the server-side. Node.js avoids blocking I/O operations by using non-blocking techniques and event loops. It provides APIs for common tasks like HTTP servers, filesystem access, and more. While still in development, Node.js has found success in building real-time applications and APIs due to its asynchronous and non-blocking architecture.
Declare your infrastructure: InfraKit, LinuxKit and Moby (Moby Project)
InfraKit is a toolkit for infrastructure orchestration. With an emphasis on immutable infrastructure, it breaks down infrastructure automation and management processes into small, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications. InfraKit therefore provides infrastructure support for higher-level container orchestration systems and can make your infrastructure self-managing and self-healing.
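To make the reconciliation idea concrete, here is an illustrative sketch (not InfraKit's actual API) of a controller loop that continuously converges observed infrastructure toward a declared instance count:

```typescript
// Illustrative sketch (not InfraKit's actual API): a controller loop that
// compares observed infrastructure against a declared instance count and
// converges, which is what makes the infrastructure self-healing.
interface Infra {
  describeInstances(): Promise<string[]>; // IDs of currently running instances
  provision(): Promise<void>;
  destroy(id: string): Promise<void>;
}

async function reconcile(infra: Infra, desiredCount: number): Promise<void> {
  const instances = await infra.describeInstances();
  if (instances.length < desiredCount) {
    // Too few: heal by provisioning replacements.
    for (let i = instances.length; i < desiredCount; i++) {
      await infra.provision();
    }
  } else if (instances.length > desiredCount) {
    // Too many: destroy the surplus.
    for (const id of instances.slice(desiredCount)) {
      await infra.destroy(id);
    }
  }
}

// Run forever so the actual state keeps tracking the declared state.
async function controlLoop(infra: Infra, desired: number, intervalMs = 10_000) {
  for (;;) {
    await reconcile(infra, desired);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```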
This document provides various SQL queries and UNIX commands that can be used to monitor and troubleshoot Oracle databases and the underlying UNIX environment. Some examples include:
1) A SQL query to find the SQL statement being executed by a specific OS process ID (sketched in the example after this list).
2) UNIX commands like ps and prstat to identify the top CPU consuming processes.
3) SQL queries to get session details like username and status for a specific OS process ID.
4) UNIX commands to locate Oracle files, find disk usage, schedule cron jobs, and more.
5) A link provided to a website with additional UNIX commands for database administrators.
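As a hedged sketch of the first example: the classic approach joins v$process and v$session on the process address, then pulls the current statement from v$sql. The node-oracledb wrapper and connection details below are illustrative assumptions, not taken from the document.

```typescript
// Hedged sketch of example 1: join v$process and v$session on the process
// address, then read the executing statement from v$sql.
import oracledb from "oracledb";

async function sqlForOsPid(spid: string): Promise<void> {
  const conn = await oracledb.getConnection({
    user: "system",                    // placeholder credentials
    password: process.env.ORACLE_PWD,
    connectString: "localhost/XEPDB1", // placeholder service
  });
  const result = await conn.execute(
    `SELECT s.sid, s.username, s.status, q.sql_text
       FROM v$process p
       JOIN v$session s ON s.paddr = p.addr
       JOIN v$sql q     ON q.sql_id = s.sql_id
      WHERE p.spid = :spid`,
    { spid },
    { outFormat: oracledb.OUT_FORMAT_OBJECT }
  );
  console.log(result.rows); // session details plus the executing SQL text
  await conn.close();
}
```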
Node.js is an asynchronous event-driven JavaScript runtime that aims to build scalable network applications. It uses an event loop model that keeps the process running and prevents blocking behavior, allowing non-blocking I/O operations. This makes Node well-suited for real-time applications that require two-way connections like chat, streaming, and web sockets. The document outlines Node's core components and capabilities like modules, child processes, HTTP and TCP servers, and its future potential like web workers and streams.
The document discusses Dirty, a simple in-memory NoSQL database written for Node.js. Dirty stores data as JSON documents in an append-only log on disk and supports common CRUD operations through a simple JavaScript API. Benchmarks show Dirty can process millions of operations per second but hits scaling limitations with over a million records as it keeps all data in memory. The document explores possibilities for building databases that combine memory and disk storage and support features like replication to scale beyond these limits.
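A toy sketch of the design the summary describes, assuming a Map for in-memory state plus an append-only log of JSON lines on disk; the class and file layout are invented for illustration, not Dirty's actual API.

```typescript
// Toy sketch of the Dirty design: memory-speed reads from a Map, durability via
// an append-only log that is replayed on startup.
import { appendFileSync, existsSync, readFileSync } from "node:fs";

class TinyDirty {
  private data = new Map<string, unknown>();

  constructor(private path: string) {
    if (existsSync(path)) {
      // Replay the log on startup; later entries win, like a redo log.
      for (const line of readFileSync(path, "utf8").split("\n")) {
        if (!line) continue;
        const { key, val } = JSON.parse(line);
        val === undefined ? this.data.delete(key) : this.data.set(key, val);
      }
    }
  }

  set(key: string, val: unknown): void {
    this.data.set(key, val);
    appendFileSync(this.path, JSON.stringify({ key, val }) + "\n");
  }

  get(key: string): unknown {
    return this.data.get(key); // pure memory lookup, hence the benchmark numbers
  }

  remove(key: string): void {
    this.data.delete(key);
    appendFileSync(this.path, JSON.stringify({ key }) + "\n"); // no val = tombstone
  }
}
```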
This document provides an introduction to Node.js including its history, uses, advantages, and community. It describes how Node.js uses non-blocking I/O and JavaScript to enable highly scalable applications. Examples show how Node.js can run HTTP servers and handle streaming data faster than traditional blocking architectures. The document recommends Node.js for real-time web applications and advises against using it for hard real-time systems or CPU-intensive tasks. It encourages participation in the growing Node.js community on mailing lists and IRC.
Hadley Wickham is known for authoring 63 R packages, collectively known as the "HadleyVerse". These packages cover a wide range of topics including data import, manipulation, visualization, and developer tools. Separately, Brian Ripley and Dirk Eddelbuettel are also known for authoring R packages, with 26 and 41 packages respectively under their names ("RipleyVerse" and "DirkVerse"). While presented as a lighthearted comparison, Hadley Wickham has authored the largest number of influential R packages that are widely used.
Terraform is an Infrastructure as Code tool for declaratively building and maintaining complex infrastructures on one or more cloud providers/services. But Terraform also supports over 80 non-infrastructure providers! In this demo-driven talk, we will dive into the internals of Terraform and see how it works. We will show, through examples, how Terraform can be used for non-infrastructure use cases. We'll also take a look at how you can extend Terraform to manage anything with an API.
Alluxio is a data orchestration platform that manages data in HDFS and provides fast data access for compute frameworks. It uses a federated namespace to access data across multiple HDFS clusters and supports many compute frameworks including Spark, TensorFlow, and Flink. Alluxio masters manage metadata and workers cache data in memory to speed up computation.
The Practice of Alluxio in Near Real-Time Data Platform at VIPShop [Chinese] (Alluxio, Inc.)
Alluxio is a distributed file system that provides fast data access to HDFS files. It uses SSDs to cache frequently accessed data in memory and on disks. The document discusses Alluxio's architecture with over 20 nodes, its metrics for monitoring master and worker RPCs, and recommendations for optimizing Alluxio performance by addressing CPU, IO, and worker issues.
The document summarizes the good, bad, and ugly aspects of using Solr on Docker. The good is the orchestration and the ability to dynamically allocate resources, which can deliver on the promise of development, testing, and production environments being the same. The bad is that treating instances as cattle rather than pets requires good sizing, configuration, and scaling practices. The ugly is that the ecosystem is still young, leading to exciting bugs, even though Docker is still the future.
This document provides a summary of a presentation about modern container orchestration with Kubernetes and CoreOS. It discusses what CoreOS is, how to easily set up CoreOS and Kubernetes, machine configuration, distributed configuration with etcd, scheduling and running workloads with Kubernetes, and service discovery using Kubernetes labels. It also briefly mentions CoreOS careers and continuous delivery of the OS.
Automated Hadoop Cluster Construction on EC2 (Mark Kerzner)
This document discusses options for running Hadoop clusters on Amazon EC2, including using tools like Whirr to automate cluster setup, limitations of Whirr, using Amazon EMR, manually setting up clusters, and advanced options like monitoring cluster health. It also provides context on Hadoop, clouds, and related technologies like HBase, Cassandra, and different Hadoop distributions from Cloudera, MapR, and others.
This document provides an overview and introduction to Node.js. It explains that Node.js is a platform for building scalable network applications in JavaScript, using non-blocking I/O and an event-driven architecture. It was created by Ryan Dahl in 2009 and uses Google's V8 JavaScript engine. Node.js allows building web servers, networking tools, and real-time applications easily and efficiently by handling concurrent connections without threads. Some popular frameworks and modules built on Node.js are also mentioned, such as Express.js and Socket.IO, alongside the over 1,600 modules in the npm registry.
The document provides guidance on tuning Apache Spark jobs. It discusses tuning memory and garbage collection, optimizing shuffle operations, increasing parallelism through partitioning, monitoring jobs, and testing Spark applications.
This document introduces mysqlnd_uh, a PHP extension that allows extending the mysqlnd PHP extension. It provides the following key points:
- mysqlnd_uh allows hooking into mysqlnd's plugin architecture to modify its behavior through connection and result proxies. This can be used to add custom logging, input validation, or other preprocessing.
- Examples are given showing how to set a custom timezone for all connections through a connection proxy, and how to replace query results with hardcoded data through a result proxy.
- The document outlines mysqlnd's plugin architecture and which core files can be extended, such as mysqlnd.c and mysqlnd_result.c. It also discusses security considerations for proxies.
This document discusses running your own public OpenStack cloud, nicknamed a "Sausage Cloud". It describes setting up the infrastructure, including hardware like servers and networking equipment. It then discusses installing and configuring OpenStack components like Nova, Neutron, and Horizon using tools like Kolla and OpenStack-Ansible. Statistics are shown on the cloud's resources, including the available flavors. Potential use cases are mentioned, such as developing cloud software, running large VMs, or simply enjoying it as a fun project. Running your own cloud is described as not as difficult as some may think, thanks to tools that simplify deployment and management.
Warp 10 is a platform for ingesting, storing, and processing time series data. It uses WarpScript, a stack-based scripting language, to manipulate time series data. Developers can ingest sensor data via HTTP/WebSocket, run WarpScript programs to transform the data, and visualize results using widgets like Quantum. Warp 10 can integrate with other tools via APIs and libraries, and can be used for applications like IoT analytics, monitoring, and data processing.
HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.
https://thinkcloudly.com/
durable_rules is a framework for real-time event processing and inference using a Rete algorithm. It was created based on the author's personal research and can analyze streaming data to infer new information using rules. The document discusses the history of rule-based systems and expert systems, describes how the Rete algorithm works to efficiently match patterns in data, benchmarks durable_rules' performance against alternative approaches, and provides an example of using it to solve a seating arrangement problem based on guest attributes. Future plans include improving performance, supporting richer queries, and potentially developing it as a web service.
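As an illustrative sketch of forward chaining (not the durable_rules API itself), the loop below repeatedly fires rules over a fact set until no new facts are inferred; a Rete network reaches the same fixed point incrementally instead of rescanning all facts on every pass.

```typescript
// Illustrative forward-chaining sketch: fire rules until a fixed point.
type Fact = Record<string, string>;

interface Rule {
  // Returns any new facts implied by the current fact set.
  when: (facts: Fact[]) => Fact[];
}

function infer(facts: Fact[], rules: Rule[]): Fact[] {
  const seen = new Set(facts.map((f) => JSON.stringify(f)));
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      for (const fact of rule.when(facts)) {
        const key = JSON.stringify(fact);
        if (!seen.has(key)) {
          seen.add(key); // a new inference may enable further rules next pass
          facts.push(fact);
          changed = true;
        }
      }
    }
  }
  return facts;
}
```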
Dmitry Spodarets presents on tools and environments for training machine learning models. He discusses results from a survey of data science tools, available computing resources like clouds and containers, and the FlyElephant platform which automates data science workflows and provides ready infrastructure, collaboration tools, and an expert community. FlyElephant offers public and private cloud resources along with HPC clusters and tools to support tasks in Python, R, Java, and other languages.
The document describes code that allocates memory for variables used to store information about a data conversion routine. It allocates 1024 bytes each for the conversion and back-conversion routines, 256 bytes for a type name, and 4 bytes to store the byte size. It then stores the type name "Asphalt8.IFGIOVANNI", sets the byte size to 4, and sets a flag for using floats to 0. The conversion routine XORs and rotates the passed-in value, while the back-conversion routine reverses these operations to convert the value back.
The document provides an overview of the Interactive Financial eXchange (IFX) standard. It describes IFX as an industry standard for financial data exchange that defines a common business language. The standard is based on service-oriented architecture principles and uses XML. It includes reusable objects and messages that can be used to perform actions on objects. The IFX framework supports routing messages between service providers to enable interoperability across the financial industry.
Terraform is a tool used by Atlassian for building, changing, and versioning infrastructure safely and efficiently. It manages both popular cloud services and in-house solutions through its infrastructure-as-code approach. Atlassian uses Terraform for its build pipelines via a Python wrapper and fork of Terraform, taking advantage of its modular and extendable design as well as its large, active community for support.
This is the story of a company that had tens of customers and was facing severe scaling issues. They approached us. They had a good product and projected a few hundred customers within 6 months; VCs were coming to them. Infrastructure scaling was the only unknown, with funding earmarked for software-defined data centers. We introduced Terraform for infrastructure creation, Chef for OS hardening, and then Packer to support AWS as well as vSphere. A few weeks later, when faster response from the data center was needed, we moved to Serf to immediately trigger chef-clients, and then to Consul for service monitoring.
This talk describes that journey.
Finally, we did exactly the same thing at a Fortune 500 customer to replace 15-year-old scripts. We will also cover sleek ways of dealing with provisioning in different Availability Zones across various AWS regions with Terraform.
Slides from CloudOps software developer Patrick Dubé's talk at Confoo in Montreal about using HashiCorp's Terraform automation tool to treat your infrastructure as code on cloud.ca.
Infrastructure as Code: Introduction to Terraform (Alexander Popov)
Terraform is infrastructure as code software that allows users to define and provision infrastructure resources. It is similar to tools like Chef, Puppet, Ansible, Vagrant, CloudFormation, and Heat, but aims to be easier to get started with and more declarative. With Terraform, infrastructure is defined using the HashiCorp Configuration Language and provisioned using execution plans generated from those definitions. Key features include modules, provisioners, state management, and parallel resource provisioning.
This document discusses Terraform, an open source tool for building, changing, and versioning infrastructure safely and efficiently. It provides declarative configuration files to manage networks, virtual machines, containers, and other infrastructure resources. The document introduces Terraform and how it works, provides examples of Terraform code and its output, and offers best practices for using Terraform, including separating infrastructure code from application code, using modules, and managing state. Terraform allows infrastructure to be treated as code, provides a faster development cycle than tools like CloudFormation, and helps promote a DevOps culture.
This document provides an overview and introduction to Terraform, including:
- Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently across multiple cloud providers and custom solutions.
- It discusses how Terraform compares to other tools like CloudFormation, Puppet, Chef, etc. and highlights some key Terraform facts like its versioning, community, and issue tracking on GitHub.
- The document provides instructions on getting started with Terraform by installing it and describes some common Terraform commands like apply, plan, and refresh.
- Finally, it briefly outlines some key Terraform features and example use cases like cloud app setup and multi-cloud deployment.
A presentation from Hashiconf 2016.
Terraform is a wonderful tool for describing infrastructure as code. It’s fast, flexible, automatically resolves dependencies, and is rapidly improving.
But in some ways, Terraform is flexible like AWS is flexible. You can do pretty much anything, but it’s also easy to shoot yourself in the foot if you aren’t careful.
In the past year, we've started managing thousands of resources with Terraform, allowing a lot more of the dev team to change the underlying infrastructure. During that time, we've learned a lot about how to set up our Terraform modules so that they are easy to manage and reuse.
This talk will cover how we manage tfstate, separate environments, and specific module definitions, and how we use Terraform to boot new services in production. I'll also discuss the challenges we're currently facing, and how we plan to attack them going forward.
This document provides an overview of Terraform, an open-source tool for building, changing, and versioning infrastructure safely and efficiently. It discusses Terraform's core concepts including providers, resources, data sources, and modules. An example demonstrates creating AWS SQS and S3 resources and a Heroku app using Terraform configuration files. The document also covers Terraform's workflow, features like remote state and provisioning, and compares it to similar configuration management tools.
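As a rough illustration of how providers, resources, and outputs fit together, here is a hedged sketch using CDK for Terraform (CDKTF), which expresses the same kind of AWS SQS and S3 configuration in TypeScript; the import paths and property names are assumptions that vary with the @cdktf/provider-aws version.

```typescript
// Hedged CDKTF sketch: one AWS provider, an S3 bucket, an SQS queue, and an
// output. Names and regions are placeholders.
import { Construct } from "constructs";
import { App, TerraformStack, TerraformOutput } from "cdktf";
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";
import { SqsQueue } from "@cdktf/provider-aws/lib/sqs-queue";

class DemoStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new AwsProvider(this, "aws", { region: "us-east-1" });

    // Resources are declared here; `cdktf deploy` synthesizes Terraform
    // configuration and runs the usual plan/apply cycle against it.
    new S3Bucket(this, "artifacts", { bucketPrefix: "demo-artifacts-" });
    const queue = new SqsQueue(this, "jobs", { name: "demo-jobs" });

    new TerraformOutput(this, "queue_url", { value: queue.url });
  }
}

const app = new App();
new DemoStack(app, "demo");
app.synth();
```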
This document discusses the infrastructure provisioning tool Terraform. It can be used to provision resources like EC2 instances, storage, and DNS entries across multiple cloud providers. Terraform uses configuration files to define what infrastructure should be created and maintains state files to track changes. It generates execution plans to determine what changes need to be made and allows applying those changes to create, update or destroy infrastructure.
2016 - IGNITE - Terraform to go from Zero to Prod in less than 1 month and TH... (devopsdaysaustin)
Ignite Presentation by Satish Kumar
This talk will
- describe how Kasasa used Terraform to build our Enterprise Data Warehouse infrastructure in AWS, and
- share what we learned as more teams adopted Terraform.
The document discusses refactoring Terraform configuration files to improve their design. It provides an example of refactoring a "supermarket-terraform" configuration that originally defined AWS resources across multiple files. The refactoring consolidates the configuration into a single file and adds testing using Test Kitchen. It emphasizes starting small by adding tests incrementally and not making changes without tests to avoid introducing errors.
Rediscovering Developer Opportunities in the Philippines by Fred Tshidimba (DEVCON)
Developers' careers are changing and they must adapt to new trends. Freelance developers previously had steady work but then freemium solutions and automation reduced some jobs. Developers now need new skills like mobility and flexibility to work in different markets opening in Southeast Asia. They must constantly learn new skills through training to meet employers' changing needs and expectations in this new normal for Developer 3.0.
This document summarizes a talk on using Jsonnet, Terraform, and Packer together for infrastructure as code and application configuration management. Jsonnet is introduced as a configuration language that is designed like a programming language, allowing powerful abstractions while maintaining hermetic configurations. The methodology demonstrated generates infrastructure and application components from a single Jsonnet configuration, outputting files for Packer to build machine images and Terraform configuration to deploy the infrastructure. This allows building and updating a cloud application from a single make command for synchronized infrastructure and application configuration.
The document discusses using Terraform to implement infrastructure as code. It describes how Terraform allows building multiple environments like development, test, staging and production in an automated and repeatable way. It also provides code examples to demonstrate how to build a VPC, security group and EC2 instance using Terraform modules to reuse infrastructure components and simplify configuration.
- Terraform allows infrastructure teams to more efficiently and agilely provision resources at scale across multiple production datacenters and regions.
- Key benefits include auto-scaling, self-service provisioning of services like Elasticsearch and Cassandra, and reducing new datacenter provisioning from over 12 months to just 2 months.
- Debugging and managing complex Terraform configurations, especially across modules, can currently be challenging due to limitations in Terraform's data handling and interpolation features.
Packer and Terraform are fundamental components of Infrastructure as Code. I recently gave a talk at a DevOps meetup, which gave me the opportunity to discuss the basics of these two tools and how DevOps teams should be using them.
CoreOS is a minimal OS designed to host containers. It uses automatic updates and cluster management via tools like Fleet and etcd. CoreOS clusters are configured in etcd, a highly available key-value store. Services are defined and launched across the cluster using Fleet and systemd unit files. Cloud config handles early initialization and configuration of instances.
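As a hedged sketch of the etcd side, the snippet below uses the community etcd3 npm client to write, read, and watch a piece of cluster configuration; the client API details and key names are assumptions for illustration.

```typescript
// Hedged sketch with the etcd3 npm client (API may differ by version):
// write, read, and watch a piece of cluster configuration in etcd.
import { Etcd3 } from "etcd3";

async function main(): Promise<void> {
  const client = new Etcd3({ hosts: "127.0.0.1:2379" });

  // Publish a desired-state value for the cluster.
  await client.put("/services/web/instances").value("3");

  // Any node can read the shared configuration back.
  const desired = await client.get("/services/web/instances").string();
  console.log(`desired web instances: ${desired}`);

  // Watch for changes: the primitive that schedulers like Fleet build on.
  const watcher = await client.watch().key("/services/web/instances").create();
  watcher.on("put", (kv) => console.log("config changed:", kv.value.toString()));
}

main().catch(console.error);
```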
Develop and deploy using Hybrid Cloud Strategies confoo2012 (Combell NV)
The document discusses developing and deploying applications using hybrid cloud strategies. It provides examples of using different cloud platforms like Amazon Web Services, Windows Azure, and Orchestra for various infrastructure components including computing, storage, databases, and content delivery. It also discusses strategies for scaling applications on the cloud like using multiple servers, databases, caching, load balancing, and adapting application code.
The document discusses developing and deploying applications using hybrid cloud strategies. It provides an overview of different cloud platforms and services that can be used as part of a hybrid cloud approach, including Amazon Web Services, Windows Azure, and Orchestra. It then discusses various architecture patterns for deploying applications in a hybrid way, such as using a single server setup, separating the database onto its own server, using multiple database servers with replication, deploying multiple web servers behind a load balancer, offloading static files, and implementing auto-scaling and caching.
This document discusses enabling multi-region Cassandra clusters that span heterogeneous data centers using Network Address Translation (NAT) and DNS-based Service Discovery (DNS-SD). It describes how NAT allows sharing a limited number of public IP addresses between private nodes by mapping private ports to public ports. DNS-SD is proposed to advertise the port mappings so nodes can discover each other, with SRV and TXT records storing port and cluster details. Minor modifications to Cassandra and drivers are suggested to lookup ports via DNS-SD during connection establishment.
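A small sketch of the discovery step using Node's built-in dns module: SRV records carry the advertised public port for each node and TXT records carry extra metadata. The service name shown is a placeholder.

```typescript
// Sketch of the DNS-SD lookup step: resolve SRV and TXT records to discover
// each node's advertised public host/port mapping.
import { promises as dns } from "node:dns";

async function discoverNodes(
  service = "_cassandra._tcp.cluster.example.com"
): Promise<void> {
  const srv = await dns.resolveSrv(service); // [{ name, port, priority, weight }]
  const txt = await dns.resolveTxt(service); // e.g. cluster name, port mappings
  for (const record of srv) {
    console.log(`node ${record.name} reachable on public port ${record.port}`);
  }
  console.log("metadata:", txt.flat());
}

discoverNodes().catch(console.error);
```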
Multi-provider provisioning with Terraform - Plain Concepts DevOps day (Plain Concepts)
Infrastructure as code (IaC) is one of the practices of DevOps culture that is gaining the most traction in software development, and Terraform is one of the most recommended tools for it.
It is usually associated with creating infrastructure on the big cloud services (AWS, Azure, Google Cloud, ...), but it is also applicable to other areas of IT, such as creating users in third-party or in-house services (GitHub, databases, ...), configuring domains (Dyn, GoDaddy, ...), or configuring alerts (Grafana, OpsGenie).
This session will explain how it works at a basic level, and we will watch live deployments to several of these platforms.
This document introduces CoreOS, an open source operating system focused on automation, security, and scalability. It provides automatic updates, uses Docker containers, and includes tools like Etcd for service discovery and configuration. CoreOS is based on Gentoo Linux and uses systemd. It focuses on immutable infrastructure with atomic updates and rollbacks. The document describes CoreOS tools like Etcd, Locksmith, Cloud Config, Flannel and Fleet for cluster management.
You know, for search. Querying 24 Billion Documents in 900ms (Jodok Batlogg)
Who doesn't love building highly available, scalable systems holding multiple terabytes of data? Recently we had the pleasure of cracking some tough nuts to solve the problems, and we'd love to share our findings from designing, building up, and operating a 120-node, 6TB Elasticsearch (and Hadoop) cluster with the community.
Speaker: Jacob Aae Mikkelsen
Once you have successfully developed your application in Grails, Ratpack or your other favorite framework, you would like to see it deployed as quickly and painlessly as possible, right?
This talk will cover some of the supporting cast members of a successful modern infrastructure, ones that developers can understand and use efficiently, and with good DevOps practices.
Key elements are
Docker
Infrastructure as Code
Container Orchestration
The demo gods will hopefully be on our side, as this talk includes quite a few live demos!
An introduction to automation in the cloud: why it's needed, the tools and ways of working, the processes, and best practices, with some examples and takeaways.
The document describes OpenStack Trove, an OpenStack service that provides database as a service functionality. It discusses how Trove allows developers to provision and manage relational and non-relational databases in OpenStack clouds through self-service APIs. The document also provides an overview of how Trove works, how it is used in production environments today, and how users can get started with provisioning and managing databases using the Trove APIs and CLI tools.
Listen up, developers. You are not special. Your infrastructure is not a beautiful and unique snowflake. You have the same tech debt as everyone else. This is a talk about a better way to build and manage infrastructure: Terraform Modules. It goes over how to build infrastructure as code, package that code into reusable modules, design clean and flexible APIs for those modules, write automated tests for the modules, and combine multiple modules into an end-to-end tech stack in minutes.
You can find the video here: https://www.youtube.com/watch?v=LVgP63BkhKQ
Giant Swarm is a company based in Cologne, Germany that builds services on top of CoreOS. CoreOS is a minimal operating system optimized for running containers, with automatic updates and cluster management capabilities. It uses tools like etcd for service discovery and configuration management, Fleet for orchestrating containers across clusters, and Locksmith for coordinated container reboots during OS updates.
This document provides an overview of Puppet, an open source configuration management tool. It discusses key Puppet concepts like infrastructure as code, reproducible setups, and aligned environments. It also describes Puppet's architecture including the Puppet master, agent nodes, catalogs, resources, and the lifecycle of a Puppet run. The Puppet language is declarative and node-based. Resources are defined and organized into classes. Relationships between resources can be specified.
Terraform for Azure: the good, the bad and the ugly (Giulio Vian)
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. The presenter discusses the good, bad, and ugly aspects of using Terraform with Azure. The good includes its simple configuration language and ability to integrate with Azure and automate deployments. The bad includes limitations in its language and some errors being difficult to debug. The ugly involves challenges around managing state files and keeping infrastructure definitions well organized. Overall, Terraform provides benefits but also requires understanding its quirks and handling state carefully.
Introduction to Docker & CoreOS - Symfony User Group Cologne
This document provides an introduction and overview of Docker and CoreOS. It describes Docker as a tool for isolating processes in lightweight Linux containers, and discusses CoreOS, a minimal Linux distribution focused on running modern infrastructure stacks. CoreOS utilizes Docker containers and tools like Etcd for service discovery, Locksmith for updates, Cloud Config for initialization, Flannel for networking, and Fleet for cluster management.
CoreOS, or How I Learned to Stop Worrying and Love Systemd (Richard Lister)
Ric Lister presents patterns for running Docker in production on CoreOS, including a simple homogeneous operations cluster where sidekick units announce services in etcd and a reverse proxy discovers them, an etcd and workers pattern for low-traffic sites behind a load balancer, and an immutable servers pattern without etcd for high-traffic microservices with strict change control. He also discusses logging to ship container output off hosts, various monitoring options, alternative operating systems like RancherOS and Atomic, and scheduler options like Kubernetes, Mesos, and Deis.
Declarative & workflow based infrastructure with Terraform (Radek Simko)
Terraform allows users to define infrastructure as code to provision resources across multiple cloud platforms. It aims to describe infrastructure in a configuration file, provision resources efficiently by leveraging APIs, and manage the full lifecycle from creation to deletion. Key features include supporting composability across different infrastructure tiers, using a graph-based approach to parallelize operations for efficiency, and managing state to track resource unique IDs and allow recreating resources. Providers enable connectivity to different cloud APIs while resources define the specific infrastructure components and their properties.
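To illustrate the graph-based approach, here is a hedged sketch (not Terraform's implementation) that walks a resource dependency graph and applies all ready resources in parallel:

```typescript
// Illustrative sketch: apply every resource whose dependencies are complete,
// batching independent resources in parallel.
type Graph = Map<string, string[]>; // resource -> resources it depends on

async function applyAll(
  graph: Graph,
  apply: (resource: string) => Promise<void>
): Promise<void> {
  const done = new Set<string>();
  const pending = new Set(graph.keys());
  while (pending.size > 0) {
    const ready = [...pending].filter((r) =>
      (graph.get(r) ?? []).every((dep) => done.has(dep))
    );
    if (ready.length === 0) throw new Error("dependency cycle detected");
    await Promise.all(ready.map(apply)); // independent resources run in parallel
    for (const r of ready) {
      pending.delete(r);
      done.add(r);
    }
  }
}

// Example: the subnet waits for the VPC; the two instances then run in parallel.
const demo: Graph = new Map([
  ["vpc", []],
  ["subnet", ["vpc"]],
  ["web-instance", ["subnet"]],
  ["db-instance", ["subnet"]],
]);
applyAll(demo, async (r) => console.log("applied", r)).catch(console.error);
```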
Session talk presented at Innosoft, 11 November 2022, University of Sevilla.
It presents the concept of Infrastructure as Code and its practical approach using HashiCorp Terraform as a tool to provision in the cloud. Examples with AWS are provided in a GitHub repository.
Cassandra Summit 2014: Down with Tweaking! Removing Tunable Complexity for Ca... (DataStax Academy)
Presenters: Don Marti, Glauber Costa, and Dor Laor of Cloudius Systems
The need for performance tuning of the JVM and OS is making administrators the bottleneck for Cassandra deployments, especially in virtual environments. Over the past two years, the OSv project has profiled tuning-sensitive applications with a special focus on Cassandra. Today, many of the important bottlenecks for NoSQL applications are tunable on a conventional OS, but do not require tuning in the OSv environment. OSv gives Cassandra a simpler environment, set up to run one application in a single address space. This talk will cover how to use OSv to improve performance in key areas such as JVM memory allocation and network throughput, without loading up your to-do list with difficult tuning tasks.
OpenID AuthZEN Interop Read Out - Authorization (David Brossard)
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers (akankshawande)
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights (a consumer sketch follows this list).
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
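As a hedged sketch of the Kafka step, the kafkajs consumer below subscribes to an anomaly topic and logs each event; the broker address, topic, and group ID are placeholders, not taken from the tutorial.

```typescript
// Hedged sketch with the kafkajs client: consume anomaly events from a topic.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "anomaly-monitor", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "anomaly-detectors" });

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: "sensor-anomalies", fromBeginning: true });
  await consumer.run({
    // Each message would carry one anomaly event emitted at the edge.
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}]`, message.value?.toString());
    },
  });
}

run().catch(console.error);
```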
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx (SitimaJohn)
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the integration process, practical use cases, and the benefits of AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT stylesheets and schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating, explaining, or refactoring code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf (Chart Kalyan)
A Mix Chart displays historical number data in graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
2. ABOUT ME
▸ Software engineer, Dev-Ops by chance
▸ Currently at reBuy.de, helping with migration to AWS
▸ Previously - 4 years at Amazon (AWS)
3. THE PREMISE
WHAT IS ETCD?
▸ Distributed key-value store
▸ Based on Raft consensus algorithm
▸ Similar to Consul and ZooKeeper
▸ Used for storing state of distributed applications
(Kubernetes, Fleet, CoreUpdate)
▸ Should be treated like a database (quick example below)
▸ Comes bundled with CoreOS
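A quick illustration of the key-value API, using the etcd2-era CLI; the key path here is made up for the example:
$ etcdctl set /demo/greeting "hello"
hello
$ etcdctl get /demo/greeting
hello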
4. THE ENVIRONMENT
TYPICAL ETCD DEPLOYMENT
▸ Odd number of instances
▸ Evenly distributed across AZs (subnet sketch below)
▸ Low-latency connectivity between nodes
▸ Persistent storage (EBS)
▸ A way to determine the list of nodes
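As a concrete illustration of the even-AZ spread, here is a minimal Terraform sketch in the deck's pre-0.12 syntax. The VPC resource (aws_vpc.main), var.vpc_cidr, and the /24 carving are assumptions for illustration; only the aws_subnet.az_subnet name matches what the launch slide references later.

# Hypothetical sketch: one subnet per node, cycling through the region's AZs
data "aws_availability_zones" "available" {}

resource "aws_subnet" "az_subnet" {
  count  = "${var.node_count}"
  vpc_id = "${aws_vpc.main.id}" # assumed VPC resource name

  # Carve a /24 per subnet out of an assumed VPC CIDR such as 10.0.0.0/16
  cidr_block = "${cidrsubnet(var.vpc_cidr, 8, count.index)}"

  # element() wraps around, so three nodes land in three different AZs
  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
}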
5. THE PROCESS
BOOTSTRAPPING ETCD
▸ Nodes need prior knowledge about all other nodes
▸ The bootstrap phase is a one-off scenario
▸ Has support for discovering nodes (DNS SRV records)
▸ Clients can discover the cluster the same way (_etcd-client._tcp records)
6. THE PROCESS
…ON AWS
▸ Prepare CoreOS configuration (cloud-config)
▸ Launch node instances
▸ Create discovery DNS records
▸ Profit!
7. TERRAFORM + CoreOS
FINDING THE NODES
▸ Through DNS SRV records
▸ Route53 private DNS inside VPC (zone sketch below)
▸ Nodes get a stable hostname
(not ip-172-31-2-219.eu-west-1.compute.internal)
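The record resources on the next slide reference a Route53 zone that never appears in this transcript. A plausible sketch, assuming a private zone named cluster.etcd attached to the VPC (aws_vpc.main is again an assumed name):

resource "aws_route53_zone" "etcd_zone" {
  name   = "cluster.etcd"       # matches the dig output further down
  vpc_id = "${aws_vpc.main.id}" # private zone, resolvable only inside the VPC
}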
8. TERRAFORM + CoreOS
STABLE HOST NAMES
resource "aws_route53_record" "etcd_srv_discover" {
name = "_etcd-server._tcp"
type = "SRV"
records = ["${formatlist("0 0 2380 %s", aws_route53_record.etc_a_nodes.*.fqdn)}"]
ttl = “300"
zone_id = "${aws_route53_zone.etcd_zone.id}"
}
resource "aws_route53_record" "etc_a_nodes" {
count = "${var.node_count}"
type = "A" name = "node-${count.index}"
records = ["${aws_instance.etcd_node.*.private_ip[count.index]}"]
ttl = 300
zone_id = "${aws_route53_zone.etcd_zone.id}"
}
$ dig _etcd-server._tcp.cluster.etcd SRV
_etcd-server._tcp.cluster.etcd. 183 IN SRV 0 0 2380 node-0.cluster.etcd.
_etcd-server._tcp.cluster.etcd. 183 IN SRV 0 0 2380 node-1.cluster.etcd.
_etcd-server._tcp.cluster.etcd. 183 IN SRV 0 0 2380 node-2.cluster.etcd.
9. TERRAFORM + CoreOS
CONFIGURING CoreOS
▸ Uses own version of cloud-init (subset of cloud-config)
▸ Config as EC2 user-data
▸ Template data-source for user-data (sketch below)
▸ Has to include hostname and DNS domain for discovery
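A minimal sketch of that template data source, wiring the hostname and DNS domain into CoreOS cloud-config so etcd can find its peers via SRV records. The inline template body, URLs, and unit list are assumptions, not the original slide's code:

data "template_file" "userdata" {
  count = "${var.node_count}"

  # Inline for the sketch; in practice this would likely live in a separate file.
  # $${...} defers interpolation to the template itself.
  template = <<EOF
#cloud-config
hostname: $${node_hostname}
coreos:
  etcd2:
    # etcd resolves _etcd-server._tcp.<domain> SRV records to find its peers
    discovery-srv: $${dns_domain}
    initial-advertise-peer-urls: http://$${node_hostname}.$${dns_domain}:2380
    advertise-client-urls: http://$${node_hostname}.$${dns_domain}:2379
    listen-peer-urls: http://0.0.0.0:2380
    listen-client-urls: http://0.0.0.0:2379
  units:
    - name: etcd2.service
      command: start
EOF

  vars {
    node_hostname = "node-${count.index}"
    dns_domain    = "cluster.etcd"
  }
}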
11. TERRAFORM + CoreOS
LAUNCH NODES
resource "aws_instance" "etcd_node" {
count = "${var.node_count}"
ami = "${data.aws_ami.coreos_ami.id}"
instance_type = "t2.medium"
subnet_id = "${aws_subnet.az_subnet.*.id[count.index]}"
key_name = "${aws_key_pair.ssh-key.id}"
user_data = "${data.template_file.userdata.*.rendered[count.index]}"
}
$ terraform apply
core@node-1 ~ $ etcdctl cluster-health
member 5bea3befcd2b527d is healthy: got healthy result from http://node-2.cluster.etcd:2379
member bfc4d7d3459cc4cb is healthy: got healthy result from http://node-1.cluster.etcd:2379
member d1b3f464b49063ac is healthy: got healthy result from http://node-0.cluster.etcd:2379
cluster is healthy
13. TERRAFORM + CoreOS
THAT'S IT!
Take-aways:
▸ etcd operations are deliberately “manual”
▸ etcd requires a source-of-truth for member list (Terraform)
▸ auto-scaling possible, but discouraged
▸ Route53 useful for service discovery