
Workshop: From Zero to Cluster in the Cloud with Terraform

Managing cloud resources can be challenging when their number grows. How about managing deployments that span several clouds?

Join us and learn about Terraform, an infrastructure-as-code tool that can manage hybrid configurations spanning different IaaS, PaaS, or SaaS providers.

During this practical workshop, we will create a full-blown infrastructure from a set of simple configuration files, using Terraform's declarative syntax.
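To make the declarative style concrete, here is a minimal, hypothetical sketch of what such a configuration file looks like (the region and AMI id are placeholders, not values from the workshop):

```hcl
# A hypothetical minimal configuration: you declare the desired state,
# and Terraform computes the create/update/destroy steps to reach it.
provider "aws" {
  region = "eu-west-1"          # placeholder region
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"   # placeholder AMI id
  instance_type = "t2.micro"
}
```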



  3. DevFest
  4. DevTernity
  5. Let's start!
  6. Sharing is caring! Our shared clipboard: http://bit.ly/GDG_RIGA_TF
  7. What is Terraform? "Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions."
  8. Terraform:
       • Multi-cloud provisioning tool (AWS, Google Cloud, Azure)
       • Management API aggregator (Kubernetes, Chef, DNSimple)
       • Declarative language (HCL, JSON)
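Since HCL and JSON are interchangeable configuration formats, a small illustration (the JSON rendering in the comment is an equivalent form, not taken from the slides):

```hcl
# HCL form of a provider block; an equivalent JSON form would be:
#   { "provider": { "aws": { "version": "1.0" } } }
provider "aws" {
  version = "1.0"
}
```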
  9. Our goal
  10. Go! Go! Go!
  11. Install Terraform
  12. Terraform project:
        • *.tf - infrastructure definitions
        • *.tfvars - external variables
        • *.tfstate[.backup] - project state
        • .terraform/ - downloaded provider plugins
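As a sketch of how *.tfvars files fit in, a hypothetical terraform.tfvars could supply values for variables declared in the *.tf files (the variable names here are illustrative):

```hcl
# terraform.tfvars (hypothetical): assigns values to declared variables;
# a file with this exact name is loaded automatically by Terraform.
project_name = "my-cluster"
node_count   = "3"
```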
  13. Configure provider. Create your first Terraform file, 10_providers.tf:

        provider "aws" {
          version = "1.0"
        }

        provider "template" {
          version = "1.0"
        }
  14. Init provider(s). Download provider plugins by executing:

        terraform init
  15. Planning/applying changes. Remember, this command does not create anything:

        terraform plan

      But this does:

        terraform apply
  16. Query data. Create the 15_data.tf file:

        data "aws_ami" "workshop_ubuntu_trusty" {
          most_recent = true

          filter {
            name   = "name"
            values = [ "devops-ubuntu-14-04-x64*" ]
          }

          owners = [ "self" ]
        }
  17. Query data:

        data "aws_subnet" "workshop_subnet_primary" {
          cidr_block = "10.1.1.0/24"
        }

        data "aws_subnet" "workshop_subnet_secondary" {
          cidr_block = "10.1.2.0/24"
        }
  18. Query data:

        data "aws_security_group" "workshop_security_group" {
          name = "workshop_security"
        }
  19. terraform apply
  20. Outputs. Create the 99_outputs.tf file:

        output "ami_id" {
          value = "${data.aws_ami.workshop_ubuntu_trusty.image_id}"
        }
  21. Outputs:

        output "subnet_id" {
          value = "${data.aws_subnet.workshop_subnet_primary.id}"
        }

  22. Outputs:

        output "security_id" {
          value = "${data.aws_security_group.workshop_security_group.id}"
        }
  23. terraform refresh
  24. terraform output
  25. terraform output ami_id
  26. Create SSH key:

        ssh-keygen -t rsa -f student.key
  27. Variables. Create the 00_variables.tf file:

        variable "project_name" {
          default = "my-cluster"
        }
  28. Create key pair. Create the 11_key_pair.tf file:

        resource "aws_key_pair" "student_key" {
          key_name   = "${var.project_name}-student-key"
          public_key = "${file("student.pub")}"
        }
  29. Alternative: use random:

        provider "random" {
          version = "1.0"
        }

  30. terraform init
  31. Alternative: use random:

        resource "random_id" "project_id" {
          byte_length = 8
        }

        resource "random_pet" "project_name" {
          length = 2
        }
  32. Alternative: use random:

        ${random_id.project_id.b64_std}
        ${random_id.project_id.hex}
        ${random_id.project_id.dec}
        ${random_pet.project_name.id}
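As a sketch (not part of the workshop files), one of these generated values could replace the static project name, for example when naming the key pair:

```hcl
# Hypothetical variation of the key pair resource: derive the key name
# from the generated pet name instead of the project_name variable.
resource "aws_key_pair" "student_key" {
  key_name   = "${random_pet.project_name.id}-student-key"
  public_key = "${file("student.pub")}"
}
```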
  33. Create server. Create the 25_servers.tf file:

        resource "aws_instance" "cluster_node" {
          ami           = "${data.aws_ami.workshop_ubuntu_trusty.id}"
          instance_type = "t2.small"

          tags {
            Name = "${var.project_name}-node"
          }
          ...

  34. Create server:

          ...
          key_name               = "${aws_key_pair.student_key.key_name}"
          subnet_id              = "${data.aws_subnet.workshop_subnet_primary.id}"
          vpc_security_group_ids = [ "${data.aws_security_group.workshop_security_group.id}" ]
        }
  35. Provision server. Add the following snippet within the aws_instance resource:

        connection {
          user        = "ubuntu"
          private_key = "${file("student.key")}"
        }
  36. Provision server:

        provisioner "remote-exec" {
          inline = [
            "wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch
            "sudo apt-get update",
            "sudo apt-get install apt-transport-https",
            "echo 'deb https://artifacts.elastic.co/packages/5.x/apt stabl
            ...

  37. Provision server:

            ...
            "sudo apt-get update && sudo apt-get install elasticsearch",
            "sudo /usr/share/elasticsearch/bin/elasticsearch-plugin instal
            "sudo service elasticsearch start"
          ]
        }
  38. terraform apply?
  39. Didn't work?
  40. Warning!
  41. terraform taint
  42. terraform apply
  43. Add configuration:

        provider "template" {
          version = "1.0"
        }

  44. terraform init
  45. Add configuration:

        data "template_file" "es_config" {
          template = "${file("elasticsearch.yml.tpl")}"

          vars {
            cluster_name = "${var.project_name}"
          }
        }

  46. Add configuration:

        data "template_file" "jvm_opts" {
          template = "${file("jvm.options.tpl")}"
        }

  47. Add configuration:

        provisioner "file" {
          content     = "${data.template_file.es_config.rendered}"
          destination = "/tmp/elasticsearch.yml"
        }

        provisioner "file" {
          content     = "${data.template_file.jvm_opts.rendered}"
          destination = "/tmp/jvm.options"
        }
  48. Provision server:

        provisioner "remote-exec" {
          inline = [
            ...,
            "sudo cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsear
            "sudo cp /tmp/jvm.options /etc/elasticsearch/jvm.options",
            "sudo service elasticsearch start"
          ]
        }
  49. terraform taint
  50. terraform apply
  51. Use count:

        variable "node_count" {
          default = "2"
        }
  52. Use count:

        resource "aws_instance" "cluster_node" {
          count = "${var.node_count}"

          tags {
            Name = "${var.project_name}-node-${format("%02d", count.index + 1)}"
          }
          ...
  53. terraform apply
  54. terraform show
  55. terraform console
  56. Add outputs:

        output "cluster_ips" {
          value = "${aws_instance.cluster_node.*.public_ip}"
        }

  57. terraform refresh
  58. Add load balancer:

        resource "aws_alb" "cluster_lb" {
          name     = "${var.project_name}-lb"
          internal = false

          security_groups = [
            "${data.aws_security_group.workshop_security_group.id}"
          ]

          subnets = [
            "${data.aws_subnet.workshop_subnet_primary.id}",
            "${data.aws_subnet.workshop_subnet_secondary.id}"
          ]
        }
  59. Add load balancer:

        resource "aws_alb_target_group" "cluster_target_group" {
          name     = "elasticsearch"
          port     = 9200
          protocol = "HTTP"
          vpc_id   = "${data.aws_subnet.workshop_subnet_primary.vpc_id}"
        }

  60. Add load balancer:

        resource "aws_alb_target_group_attachment" "cluster_target" {
          count            = "${var.node_count}"
          target_group_arn = "${aws_alb_target_group.cluster_target_group.arn}"
          target_id        = "${aws_instance.cluster_node.*.id[count.index]}"
          port             = 9200
        }

  61. Add load balancer:

        resource "aws_alb_listener" "cluster_front_end" {
          load_balancer_arn = "${aws_alb.cluster_lb.arn}"
          port              = "9200"
          protocol          = "HTTP"

          default_action {
            target_group_arn = "${aws_alb_target_group.cluster_target_group.arn}"
            type             = "forward"
          }
        }
  62. Configure Kibana:

        resource "aws_instance" "kibana" {
          ami           = "${data.aws_ami.workshop_ubuntu_trusty.id}"
          instance_type = "t2.small"
          ...
  63. Copy cluster configuration:

        resource "null_resource" "copy_cluster_config" {
          count = "${var.node_count}"

          provisioner "local-exec" {
            command = "scp ..."
          }

          depends_on = [ "aws_instance.cluster_node" ]
        }
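A possible refinement of this null_resource (an assumption, not shown on the slides) is a triggers map, so the provisioner re-runs whenever the set of cluster node ids changes:

```hcl
# Hypothetical extension: "triggers" forces the null_resource to be
# recreated (and its provisioner re-run) when the node ids change.
resource "null_resource" "copy_cluster_config" {
  count = "${var.node_count}"

  triggers {
    node_ids = "${join(",", aws_instance.cluster_node.*.id)}"
  }

  provisioner "local-exec" {
    command = "scp ..."
  }
}
```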
  64. Final task!
  65. terraform destroy
  66. Solution: http://bit.ly/GDG_RIGA_TF_SOLUTION
  67. Infrastructure as Code
  68. Kief Morris
  69. DevTernity
  70. That's all!
  71. Thank you!
