Case Study: Creating a self-healing MongoDB
Replica Set in GCP using Terraform
June 2019
Stephen Beasey
Enterprise Architecture
Hello!
Stephen Beasey
Google Cloud Certified
Professional Cloud Architect
Humana
Enterprise Architecture team
MongoDB | How the infrastructure heals
Make some data
for (var i = 1; i <= 25; i++) {db.testData.insert( { x : i } ) }
Sort it
db.testData.find().sort({_id:1})
Check nodes
rs.printSlaveReplicationInfo()
Kill an instance
gcloud compute instances delete <name>
Today:
✔ GCP Demo part I
Cloud basics
The approach
GCP Demo part II
Terraform Tips
The startup script
MongoDB and Terraform | What are we building?
Let’s make sure we’re all on the same page first.
This script will build an unmanaged MongoDB Replica Set or a
single MongoDB node.
The Replica Set is a great backing DB for Mongo Ops Manager.
For more advanced management of MongoDB nodes, it is
recommended that you create an Ops Manager instance, use Ops
Manager to create an agent, and then create nodes with that agent
installed.
That said, it’s very easy to repurpose this script so that it creates
nodes to be managed by Ops Manager.
The Cloud | Prioritizing
PETS VS CATTLE
PETS
▪ Keep them
▪ If they get ill, nurse them back to health
▪ They are unique
CATTLE
▪ Rotate them
▪ If they get ill, get another one
▪ They are almost identical
When designing infrastructure, assume that failures will happen and plan accordingly!
The Cloud | Immutable Infrastructure
Immutable Infrastructure means creating resources that you are not going to change.
Immutable Infrastructure means you can count on getting the same resource every time.
Immutable Infrastructure means we may change the definition of a resource, but we won't change any individual instance of a resource.
MongoDB | How we want to run MongoDB in the cloud
A region is a specific geographical location where you can run your resources.
When a node fails, the infrastructure should Identify the failed node, Resynch its data, and Replace the instance.
MongoDB | What this template builds
MongoDB | How the infrastructure heals
Managed Instance Group | Balanced Deployment
Why can’t we just use one Managed Instance Group with N x instances? Balance.
MongoDB | How the infrastructure heals
Check new instance
rs.slaveOk()
db.testData.find().sort({_id:1})
rs.printSlaveReplicationInfo()
MongoDB | Terraform
Google Cloud Platform | Resources

Provide:
▪ Project
▪ Network
▪ Release Service Account
▪ Bucket
▪ Number of Nodes (N)
▪ List of Zones
▪ Compute Instance Specifics
▪ Compute Disk Specifics
▪ DNS name

Create:
▪ DNS Zone and N x DNS “A” records
▪ N x Compute Disks x 3
▪ N x Compute Instance Templates
▪ N x Managed Instance Groups
Terraform | 1. Use a Modular approach
Creating Terraform modules allows us to separate code into another template
and refer to that template using a shortcut. This is especially useful for code
that is repeated. Modules are also great for separating code that users can
change from modules that can be locked down to particular properties in a
corporate environment.
module "reservedip" {
source = <path>
rip-name = "${var.rip-name}"
rip-count = "${var.rip-count}"
}
resource "google_compute_address" "static" {
count = "${var.rip-count}"
name = "${var.rip-name}-${count.index}"
address_type = "INTERNAL"
}
output "reservedips" {
value = ["${google_compute_address.static.*.address}"]
}
Terraform | Modules

Module                   Resources
Cloud DNS                DNS Zone and ‘A’ records
Floating Storage CIT     Compute Instance Templates with assigned Compute Disks
Managed Instance Group   Managed Instance Group
Label                    Mapped labels that can be assigned to resources
Terraform | 1. Use a Modular approach
Using the modular approach, we can pass the output of one module as the input of another.
module "template" {
source = "./ComputeInstanceTemplate”
…
template-count = "${var.usr-node-count}"
template-name = "${var.usr-template-name}"
In the module above, we’re creating Compute Instance Templates. We named the module “template”.
In the module below, we are creating Managed Instance Groups that will use those templates. We reference
the output of the “template” module to get the list of templates.
module "mig" {
source = "./ManagedInstanceGroup”
…
mig-count = "${var.usr-node-count}"
group-manager-name = "${var.usr-group-manager-name}"
base-instance-name = "${var.usr-base-instance-name}"
compute-instance-tpl = [ "${module.template.cit-url}" ]
Terraform | 2. Using List Variables
When Terraform spins up nodes, we
want the nodes evenly distributed
between zones. The best way to
achieve this is by creating a list variable
in Terraform. Terraform is smart
enough to cycle through the list even if
the list only has 3 elements but the user
has selected 7 nodes.
variable "usr-zones" { type = "list" }
# managed instance group
usr-group-manager-name = "mongo-node"
usr-base-instance-name = "mongo-node"
usr-zones = ["us-east1-b","us-east1-c","us-east1-d"]
resource "google_compute_instance_group_manager" "appserver" {
count = "${var.mig-count}"
name = "${var.group-manager-name}-${count.index}"
base_instance_name = "${var.base-instance-name}"
instance_template = "${element("${var.compute-instance-tpl}", count.index)}"
zone = "${element(var.zones, count.index)}"
target_size = "${var.target-size}"
}
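Terraform’s element() wraps around the list with modulo indexing. A minimal shell sketch of the same round-robin (using the example zones from usr-zones above — not Terraform itself, just the indexing logic):

```shell
# Round-robin zone assignment: node i gets zones[i % len(zones)],
# so 7 nodes land evenly across a 3-zone list.
zones=("us-east1-b" "us-east1-c" "us-east1-d")

zone_for_index() {
  local index=$1
  echo "${zones[$((index % ${#zones[@]}))]}"
}

for i in 0 1 2 3 4 5 6; do
  echo "node-$i -> $(zone_for_index $i)"
done
```

Node 3 wraps back to us-east1-b, node 4 to us-east1-c, and so on.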
Terraform | 3. Passing a Startup Script
We can create a startup script as a shell script saved in a separate file. The
file needs very little modification from standard bash syntax for Terraform to
recognize it.
Within our main.tf, we can pass variable values from Terraform to bash.
# find startup script template. pass variables if needed.
data "template_file" "startup-script" {
  template = "${file("startup-script.sh")}"
  vars {
    project     = "${var.usr-project-id}"
    reservedips = "${join(",", "${module.ipaddr.reservedips}")}"
    target-size = "${var.usr-rip-count}"
  }
}
Next, we simply pass the contents of the startup script as a
variable to the module that creates the Compute Instance
Template.
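As a rough analogue (not Terraform itself), rendering is just placeholder substitution. This hypothetical render_template function does the same thing with sed for two of the variables above:

```shell
# Substitute ${project} and ${target-size} placeholders in a template
# read from stdin, the way template_file substitutes its vars map.
# (Assumes values contain no sed metacharacters -- a sketch, not a library.)
render_template() {
  local project=$1 target_size=$2
  sed -e "s/\${project}/$project/g" -e "s/\${target-size}/$target_size/g"
}

echo 'logger "created in ${project}, size ${target-size}"' | render_template demo-proj 3
# prints: logger "created in demo-proj, size 3"
```

This is also why literal bash expansions in the real template must be escaped as $$ (e.g. the $${mongostring::-3} seen later) so Terraform leaves them alone.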
Terraform | 3. Passing a Startup Script
# compute instance template
template-count       = "${var.usr-rip-count}"
template-name        = "${var.usr-template-name}"
template-description = "${var.usr-template-description}"
instance-description = "${var.usr-instance-description}"
machine-type         = "${var.usr-machine-type}"
template-ip          = [ "${module.ipaddr.reservedips}" ]
startup-script       = "${data.template_file.startup-script.rendered}"
keys                 = "${join(",",keys(module.gcp_label.tags))}"
values               = "${join(",",values(module.gcp_label.tags))}"
MongoDB | The Startup Script
Startup Script | 1. Identify
Linux updates
When GCP is creating an instance, some information about the instance is available
by querying the metadata:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/
For instance, you can find the IP address of the instance by running:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip
Or, you can find Project details. You can find the Project ID by running:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/project-id
Startup Script | 1. Identify
You can also create custom metadata for the instance in Terraform.
In the template we are using now, I am adding a template-id to instances.
metadata = {
template-id = "${count.index}"
}
Then in the startup script, I check the template-id to decide which instance
is node 0, so that the replica set script is only run on that node.
id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id)
…
if [ ${target-size} -ge 3 ] && [ $id -eq 0 ]; then sleep 30;
/etc/mrepl.sh >> /tmp/bootstrap.log 2>&1; fi
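The node-0 gate above can be sketched as a plain function, with the Terraform-substituted ${target-size} and the metadata template-id passed in as ordinary parameters so it can run anywhere:

```shell
# Decide whether this instance should run the replica set script:
# only a replica set (3+ nodes) needs initiation, and only node 0 runs it.
should_init_replica_set() {
  local target_size=$1 template_id=$2
  if [ "$target_size" -ge 3 ] && [ "$template_id" -eq 0 ]; then
    echo "run-mrepl"
  else
    echo "skip"
  fi
}

should_init_replica_set 3 0   # prints: run-mrepl
should_init_replica_set 3 1   # prints: skip
should_init_replica_set 1 0   # prints: skip
```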
Linux updates
Get the IP address of the instance:
instip=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Identify the instance:
gcloud dns record-sets transaction add -z=${project}-zone \
  --name=${node-name}$id.${project}.local \
  --type=A --ttl=300 $instip
Get the ID of the instance:
id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id)
Startup Script | 2. Resynch
The startup script has to be able to handle two scenarios for our separate
compute disks:
1. First time running. The disk is blank and needs to be formatted.
2. Instance replaced. The disk is formatted and has data we need to keep.
Startup Script | 2. Resynch
mkdir /data
if mount /data ; then
  echo "disk already formatted..................." >> /tmp/bootstrap.log
  echo "disk mounted..................." >> /tmp/bootstrap.log
else
  echo "formatting disk..................." >> /tmp/bootstrap.log
  mkfs.xfs /dev/sdb
  mount /data
  echo "disk mounted..................." >> /tmp/bootstrap.log
fi
Try to mount the drive
It will only work if the drive is already formatted
If it doesn’t work then we know the drive needs
to be formatted and then mounted
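A minimal simulation of that decision, with mount and mkfs.xfs stubbed out by a marker file so the control flow can run without root or a real disk:

```shell
# Stand-in for the disk's superblock: mount "succeeds" only once the
# disk has been "formatted". Everything here is a stub for illustration.
DISK_STATE=$(mktemp -d)/formatted
try_mount()   { [ -f "$DISK_STATE" ]; }   # mount /data stand-in
format_disk() { touch "$DISK_STATE"; }    # mkfs.xfs /dev/sdb stand-in

bring_up_data_disk() {
  if try_mount; then
    echo "disk already formatted; mounted"
  else
    echo "formatting disk"
    format_disk
    try_mount && echo "disk mounted"
  fi
}

bring_up_data_disk   # first boot: formats, then mounts
bring_up_data_disk   # replacement boot: mounts, data preserved
```

The second call is the self-healing case: a replacement instance attaches the surviving disk and keeps its data instead of wiping it.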
MongoDB | 3. Replace
What does it take to replace a node?
A startup script.
• Copy install files
• Install MongoDB
• Configure additional drives
• Find out about the instance from metadata
• Update DNS Alias
• Configure MongoDB parameters
• Create MongoDB Replica Set script
• Start MongoDB
• Run the Replica Set script
Terraform | Startup Script
#!/bin/bash
logger "created in ${project}"
logger "install Stackdriver agents......................."
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
chmod 500 install-logging-agent.sh
./install-logging-agent.sh
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
chmod 500 install-monitoring-agent.sh
./install-monitoring-agent.sh
yum install -y bind-utils
echo "copy and install mongodb from rpm file......................"
gsutil -m cp gs://${source-path}/mongodb-org* /root 2>&1
gsutil -m cp gs://${source-path}/mongodb.conf /root 2>&1
sleep 5
rpm -i --nosignature /root/*.rpm 2>&1
echo "Configure non-boot drives......................"
echo '/dev/sdb /data xfs defaults,auto,noatime,noexec 0 0
/dev/sdc /log xfs defaults,auto,noatime,noexec 0 0
/dev/sdd /data/journal xfs defaults,auto,noatime,noexec 0 0' >> /etc/fstab
mkdir /data
if mount /data; then
echo "disk already formatted..................."
echo "data disk mounted..................."
else
echo "formatting disk..................."
mkfs.xfs /dev/sdb
mount /data
echo "data disk mounted..................."
fi
mkdir /log
if mount /log; then
echo "disk already formatted..................."
echo "log disk mounted..................."
else
echo "formatting disk..................."
mkfs.xfs /dev/sdc
mount /log
echo "log disk mounted..................."
fi
if mount /data/journal; then
echo "disk already formatted..................."
echo "journal disk mounted..................."
else
echo "formatting disk..................."
mkdir /data/journal
mkfs.xfs /dev/sdd
mount /data/journal
echo "journal disk mounted..................."
fi
chown -R mongod:mongod /data /data/journal /log
echo "Configure DNS alias.........................."
id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id)
echo "id=$id"
instip=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
echo "instip=$instip"
oldip=$(dig +short ${node-name}$id.${project}.local)
echo "oldip=$oldip"
gcloud dns record-sets transaction start -z=${project}-zone
gcloud dns record-sets transaction remove -z=${project}-zone --name=${node-name}$id.${project}.local --type=A --ttl=300 $oldip
gcloud dns record-sets transaction add -z=${project}-zone --name=${node-name}$id.${project}.local --type=A --ttl=300 $instip
gcloud dns record-sets transaction execute -z=${project}-zone
echo "Configure mongoDB Parameters...................."
sed -i 's@/var/lib/mongo@/data@g' /etc/mongod.conf
sed -i 's@/var/log/mongodb@/log@g' /etc/mongod.conf
sed -i "s@bindIp: 127.0.0.1@bindIp: 127.0.0.1,$instip@g" /etc/mongod.conf
if [ ${node-count} -ge 3 ]; then sed -i 's@#replication:@replication:\n  replSetName: "rs0"@g' /etc/mongod.conf; fi
echo "Update file limits............"
echo "* soft nofile 64000
* hard nofile 64000
* soft nproc 64000
* hard nproc 64000" > /etc/security/limits.d/90-mongodb.conf
echo "Optimize read ahead settings...................."
blockdev --setra 0 /dev/sdb
echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{bdi/read_ahead_kb}="0"' >> /etc/udev/rules.d/85-ebs.rules
if [ ${node-count} -ge 3 ]; then echo "create mongoDB replica set script...................";
echo "
cfg="{
_id: 'rs0',
members: [
replace
]
}"
mongo ${node-name}0.${project}.local:27017 --eval "JSON.stringify(db.adminCommand({'replSetInitiate' : $cfg}))"
" > /etc/mrepl.sh;
mongostring=""
index=0; for i in {1..${node-count}}; do mongostring=$mongostring" {_id: "$index", host: '${node-name}"$index".${project}.local:27017'},\n" >> /etc/hosts; index=$(( $index + 1 )); done;
mongostring=$${mongostring::-3}
sed -i "s@replace@$mongostring@g" /etc/mrepl.sh;
chmod 500 /etc/mrepl.sh; fi
echo "update selinux for new mongo paths..................."
semanage fcontext -a -t mongod_var_lib_t '/data.*'
chcon -Rv -u system_u -t mongod_var_lib_t '/data'
restorecon -R -v '/data'
semanage fcontext -a -t mongod_log_t '/log.*'
chcon -Rv -u system_u -t mongod_log_t '/log'
restorecon -R -v '/log'
semanage fcontext -a -t mongod_var_lib_t '/data/journal.*'
chcon -h -u system_u -t mongod_var_lib_t '/data/journal'
restorecon -R -v '/data/journal'
echo "start mongoDB..................."
service mongod start 2>&1
if [ ${node-count} -ge 3 ]; then /etc/mrepl.sh 2>&1; fi
echo "end of startup script..................."
Startup Script | Replace
Stackdriver → Copy and install RPM → Format drives → Configure Mongo → Start → Replica Set
What we did Today:
GCP Demo
Cloud basics
The approach and Why
Terraform Tips
The startup script:
Identify, Resynch and Replace
Questions???

More Related Content

What's hot

Geospatial Advancements in Elasticsearch
Geospatial Advancements in ElasticsearchGeospatial Advancements in Elasticsearch
Geospatial Advancements in ElasticsearchElasticsearch
 
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경NAVER Engineering
 
Redis and its Scaling and Obersvability
Redis and its Scaling and ObersvabilityRedis and its Scaling and Obersvability
Redis and its Scaling and ObersvabilityAbhishekDubey902839
 
ClickHouse Deep Dive, by Aleksei Milovidov
ClickHouse Deep Dive, by Aleksei MilovidovClickHouse Deep Dive, by Aleksei Milovidov
ClickHouse Deep Dive, by Aleksei MilovidovAltinity Ltd
 
Golang - Overview of Go (golang) Language
Golang - Overview of Go (golang) LanguageGolang - Overview of Go (golang) Language
Golang - Overview of Go (golang) LanguageAniruddha Chakrabarti
 
Real-time analytics with Druid at Appsflyer
Real-time analytics with Druid at AppsflyerReal-time analytics with Druid at Appsflyer
Real-time analytics with Druid at AppsflyerMichael Spector
 
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBaseHBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBaseHBaseCon
 
Doctrine ORM Internals. UnitOfWork
Doctrine ORM Internals. UnitOfWorkDoctrine ORM Internals. UnitOfWork
Doctrine ORM Internals. UnitOfWorkIllia Antypenko
 
Flink on Kubernetes operator
Flink on Kubernetes operatorFlink on Kubernetes operator
Flink on Kubernetes operatorEui Heo
 
Management Zabbix with Terraform
Management Zabbix with TerraformManagement Zabbix with Terraform
Management Zabbix with TerraformAécio Pires
 
Integrating microservices with apache camel on kubernetes
Integrating microservices with apache camel on kubernetesIntegrating microservices with apache camel on kubernetes
Integrating microservices with apache camel on kubernetesClaus Ibsen
 
Celery - A Distributed Task Queue
Celery - A Distributed Task QueueCelery - A Distributed Task Queue
Celery - A Distributed Task QueueDuy Do
 
게임서버 구축 방법비교 : GBaaS vs. Self-hosting
게임서버 구축 방법비교 : GBaaS vs. Self-hosting게임서버 구축 방법비교 : GBaaS vs. Self-hosting
게임서버 구축 방법비교 : GBaaS vs. Self-hostingiFunFactory Inc.
 
AOS Lab 2: Hello, xv6!
AOS Lab 2: Hello, xv6!AOS Lab 2: Hello, xv6!
AOS Lab 2: Hello, xv6!Zubair Nabi
 

What's hot (20)

Geospatial Advancements in Elasticsearch
Geospatial Advancements in ElasticsearchGeospatial Advancements in Elasticsearch
Geospatial Advancements in Elasticsearch
 
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경
[네이버오픈소스세미나] Contribution, 전쟁의 서막 : Apache OpenWhisk 성능 개선 - 김동경
 
MongoDB Sharding Fundamentals
MongoDB Sharding Fundamentals MongoDB Sharding Fundamentals
MongoDB Sharding Fundamentals
 
Redis and its Scaling and Obersvability
Redis and its Scaling and ObersvabilityRedis and its Scaling and Obersvability
Redis and its Scaling and Obersvability
 
Mongo DB 102
Mongo DB 102Mongo DB 102
Mongo DB 102
 
ClickHouse Deep Dive, by Aleksei Milovidov
ClickHouse Deep Dive, by Aleksei MilovidovClickHouse Deep Dive, by Aleksei Milovidov
ClickHouse Deep Dive, by Aleksei Milovidov
 
Golang - Overview of Go (golang) Language
Golang - Overview of Go (golang) LanguageGolang - Overview of Go (golang) Language
Golang - Overview of Go (golang) Language
 
Real-time analytics with Druid at Appsflyer
Real-time analytics with Druid at AppsflyerReal-time analytics with Druid at Appsflyer
Real-time analytics with Druid at Appsflyer
 
Node.js
Node.jsNode.js
Node.js
 
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBaseHBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase
 
Doctrine ORM Internals. UnitOfWork
Doctrine ORM Internals. UnitOfWorkDoctrine ORM Internals. UnitOfWork
Doctrine ORM Internals. UnitOfWork
 
MongoDB
MongoDBMongoDB
MongoDB
 
Flink on Kubernetes operator
Flink on Kubernetes operatorFlink on Kubernetes operator
Flink on Kubernetes operator
 
ElasticSearch
ElasticSearchElasticSearch
ElasticSearch
 
Management Zabbix with Terraform
Management Zabbix with TerraformManagement Zabbix with Terraform
Management Zabbix with Terraform
 
Express js
Express jsExpress js
Express js
 
Integrating microservices with apache camel on kubernetes
Integrating microservices with apache camel on kubernetesIntegrating microservices with apache camel on kubernetes
Integrating microservices with apache camel on kubernetes
 
Celery - A Distributed Task Queue
Celery - A Distributed Task QueueCelery - A Distributed Task Queue
Celery - A Distributed Task Queue
 
게임서버 구축 방법비교 : GBaaS vs. Self-hosting
게임서버 구축 방법비교 : GBaaS vs. Self-hosting게임서버 구축 방법비교 : GBaaS vs. Self-hosting
게임서버 구축 방법비교 : GBaaS vs. Self-hosting
 
AOS Lab 2: Hello, xv6!
AOS Lab 2: Hello, xv6!AOS Lab 2: Hello, xv6!
AOS Lab 2: Hello, xv6!
 

Similar to MongoDB World 2019: Creating a Self-healing MongoDB Replica Set on GCP Compute Engine Resources using Terraform

Dive into DevOps | March, Building with Terraform, Volodymyr Tsap
Dive into DevOps | March, Building with Terraform, Volodymyr TsapDive into DevOps | March, Building with Terraform, Volodymyr Tsap
Dive into DevOps | March, Building with Terraform, Volodymyr TsapProvectus
 
Burn down the silos! Helping dev and ops gel on high availability websites
Burn down the silos! Helping dev and ops gel on high availability websitesBurn down the silos! Helping dev and ops gel on high availability websites
Burn down the silos! Helping dev and ops gel on high availability websitesLindsay Holmwood
 
Hybrid Cloud PHPUK2012
Hybrid Cloud PHPUK2012Hybrid Cloud PHPUK2012
Hybrid Cloud PHPUK2012Combell NV
 
Parse cloud code
Parse cloud codeParse cloud code
Parse cloud code維佋 唐
 
Hadoop Integration in Cassandra
Hadoop Integration in CassandraHadoop Integration in Cassandra
Hadoop Integration in CassandraJairam Chandar
 
Automation with Ansible and Containers
Automation with Ansible and ContainersAutomation with Ansible and Containers
Automation with Ansible and ContainersRodolfo Carvalho
 
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...NETWAYS
 
Cloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the CloudCloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the Cloudpetriojala123
 
mongodb-introduction
mongodb-introductionmongodb-introduction
mongodb-introductionTse-Ching Ho
 
Azure machine learning service
Azure machine learning serviceAzure machine learning service
Azure machine learning serviceRuth Yakubu
 
Power shell examples_v4
Power shell examples_v4Power shell examples_v4
Power shell examples_v4JoeDinaso
 
Groovy On Trading Desk (2010)
Groovy On Trading Desk (2010)Groovy On Trading Desk (2010)
Groovy On Trading Desk (2010)Jonathan Felch
 
Reusable, composable, battle-tested Terraform modules
Reusable, composable, battle-tested Terraform modulesReusable, composable, battle-tested Terraform modules
Reusable, composable, battle-tested Terraform modulesYevgeniy Brikman
 
Immutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS LambdaImmutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS LambdaAOE
 
How to develop Big Data Pipelines for Hadoop, by Costin Leau
How to develop Big Data Pipelines for Hadoop, by Costin LeauHow to develop Big Data Pipelines for Hadoop, by Costin Leau
How to develop Big Data Pipelines for Hadoop, by Costin LeauCodemotion
 

Similar to MongoDB World 2019: Creating a Self-healing MongoDB Replica Set on GCP Compute Engine Resources using Terraform (20)

Dive into DevOps | March, Building with Terraform, Volodymyr Tsap
Dive into DevOps | March, Building with Terraform, Volodymyr TsapDive into DevOps | March, Building with Terraform, Volodymyr Tsap
Dive into DevOps | March, Building with Terraform, Volodymyr Tsap
 
Amazon elastic map reduce
Amazon elastic map reduceAmazon elastic map reduce
Amazon elastic map reduce
 
Burn down the silos! Helping dev and ops gel on high availability websites
Burn down the silos! Helping dev and ops gel on high availability websitesBurn down the silos! Helping dev and ops gel on high availability websites
Burn down the silos! Helping dev and ops gel on high availability websites
 
Hybrid Cloud PHPUK2012
Hybrid Cloud PHPUK2012Hybrid Cloud PHPUK2012
Hybrid Cloud PHPUK2012
 
Parse cloud code
Parse cloud codeParse cloud code
Parse cloud code
 
London HUG 12/4
London HUG 12/4London HUG 12/4
London HUG 12/4
 
Hadoop Integration in Cassandra
Hadoop Integration in CassandraHadoop Integration in Cassandra
Hadoop Integration in Cassandra
 
Automation with Ansible and Containers
Automation with Ansible and ContainersAutomation with Ansible and Containers
Automation with Ansible and Containers
 
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...
OSDC 2015: Mitchell Hashimoto | Automating the Modern Datacenter, Development...
 
Kraken at DevCon TLV
Kraken at DevCon TLVKraken at DevCon TLV
Kraken at DevCon TLV
 
TIAD : Automating the modern datacenter
TIAD : Automating the modern datacenterTIAD : Automating the modern datacenter
TIAD : Automating the modern datacenter
 
Cloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the CloudCloud Meetup - Automation in the Cloud
Cloud Meetup - Automation in the Cloud
 
mongodb-introduction
mongodb-introductionmongodb-introduction
mongodb-introduction
 
Azure machine learning service
Azure machine learning serviceAzure machine learning service
Azure machine learning service
 
Deploying Machine Learning Models to Production
Deploying Machine Learning Models to ProductionDeploying Machine Learning Models to Production
Deploying Machine Learning Models to Production
 
Power shell examples_v4
Power shell examples_v4Power shell examples_v4
Power shell examples_v4
 
Groovy On Trading Desk (2010)
Groovy On Trading Desk (2010)Groovy On Trading Desk (2010)
Groovy On Trading Desk (2010)
 
Reusable, composable, battle-tested Terraform modules
Reusable, composable, battle-tested Terraform modulesReusable, composable, battle-tested Terraform modules
Reusable, composable, battle-tested Terraform modules
 
Immutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS LambdaImmutable Deployments with AWS CloudFormation and AWS Lambda
Immutable Deployments with AWS CloudFormation and AWS Lambda
 
How to develop Big Data Pipelines for Hadoop, by Costin Leau
How to develop Big Data Pipelines for Hadoop, by Costin LeauHow to develop Big Data Pipelines for Hadoop, by Costin Leau
How to develop Big Data Pipelines for Hadoop, by Costin Leau
 

MongoDB World 2019: Creating a Self-healing MongoDB Replica Set on GCP Compute Engine Resources using Terraform

  • 1. Case Study: Creating a self-healing MongoDB Replica Set in GCP using Terraform June 2019 Stephen Beasey Enterprise Architecture
  • 2. Hello! Stephen Beasey Google Cloud Certified Professional Cloud Architect Humana Enterprise Architecture team
  • 3. MongoDB | How the infrastructure heals Make Some data for (var i = 1; i <= 25; i++) {db.testData.insert( { x : i } ) } Sort it db.testData.find().sort({_id:1}) Check nodes rs.printSlaveReplicationInfo() Kill an instance gcloud compute instances delete <name>
  • 13. Today: ✔ GCP Demo part I Cloud basics The approach GCP Demo part II Terraform Tips The startup script
  • 14. MongoDB and Terraform | What are we building? Let’s make sure we’re all on the same page first. This script will build an unmanaged MongoDB Replica Set or a single MongoDB node. The Replica Set is a great backing DB for Mongo Ops Manager. For more advanced management of MongoDB nodes, it is recommended that you create an Ops Manager instance, use Ops Manager to create an agent, then create nodes with that agent installed. That said, it’s very easy to repurpose this script so that it creates nodes to be managed by Ops Manager.
  • 15. The Cloud | Prioritizing PETS VS CATTLE PETS ▪ Keep them ▪ If they get ill, nurse them back to health ▪ They are unique CATTLE ▪ Rotate them ▪ If they get ill, get another one ▪ They are almost identical When designing infrastructure, assume that failures will happen and plan accordingly!
  • 16. The Cloud | Immutable Infrastructure Immutable Infrastructure means creating resources that you are not going to change. Immutable Infrastructure means you can count on getting the same resource every time. Immutable Infrastructure means we may change the definition of a resource, but we won’t change any individual instance of a resource.
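To make the idea concrete, here is a minimal Terraform sketch of the pattern; the resource and names here are illustrative assumptions, not the deck's actual template. Instead of editing a live instance, we change the template definition and let Terraform roll out replacements.

```hcl
# Hypothetical sketch: an immutable instance template.
# We never log in and mutate a running node; we change this
# definition and replace instances built from it.
resource "google_compute_instance_template" "mongo" {
  name_prefix  = "mongo-node-"
  machine_type = "n1-standard-4"

  disk {
    source_image = "centos-cloud/centos-7"
  }

  network_interface {
    network = "default"
  }

  # Instance templates cannot be edited in place in GCP: any
  # change forces a new template. create_before_destroy keeps
  # the old template alive until the replacement exists.
  lifecycle {
    create_before_destroy = true
  }
}
```

Rolling out a change then means pointing the Managed Instance Group at the new template, never changing any individual instance.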
  • 17. MongoDB | How we want to run MongoDB in the cloud A region is a specific geographical location where you can run your resources.
  • 18. MongoDB | How we want to run MongoDB in the cloud Replace Identify Resynch
  • 19. MongoDB | How we want to run MongoDB in the cloud
  • 20. MongoDB | How we want to run MongoDB in the cloud Replace Resynch
  • 21. MongoDB | How we want to run MongoDB in the cloud
  • 22. MongoDB | How we want to run MongoDB in the cloud Replace
  • 23. MongoDB | What this template builds
  • 24. MongoDB | How the infrastructure heals
  • 25. Managed Instance Group| Balanced Deployment Why can’t we just use one Managed Instance Group with N x instances?
  • 26. Managed Instance Group| Balanced Deployment Balance. Why can’t we just use one Managed Instance Group with N x instances?
  • 27. Managed Instance Group| Balanced Deployment
  • 28. Managed Instance Group| Balanced Deployment
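The layout the deck lands on can be sketched in a few lines of Terraform: one single-instance Managed Instance Group per node, each pinned to a zone, instead of one MIG with N instances. The variable names and template path below are illustrative assumptions, not the deck's exact code.

```hcl
variable "zones" {
  type    = "list"
  default = ["us-east1-b", "us-east1-c", "us-east1-d"]
}

# Hypothetical template self-link; the real deck builds one
# template per node and passes the list in.
variable "instance-template" {
  default = "projects/my-project/global/instanceTemplates/mongo-node"
}

# One MIG per node with target_size = 1. Pinning each group to
# a zone guarantees balance; a single MIG of N instances only
# spreads them best-effort, and gives no stable per-node identity.
resource "google_compute_instance_group_manager" "node" {
  count              = 3
  name               = "mongo-node-${count.index}"
  base_instance_name = "mongo-node"
  instance_template  = "${var.instance-template}"
  zone               = "${element(var.zones, count.index)}"
  target_size        = 1
}
```

Because element() wraps around the list, this stays balanced even when the node count exceeds the number of zones.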
  • 29. MongoDB | How the infrastructure heals Check new instance rs.slaveOk() db.testData.find().sort({_id:1}) rs.printSlaveReplicationInfo()
  • 38. Google Cloud Platform | Resources. You provide: Project, Network, Release, Service Account, Bucket, Number of Nodes (N), List of Zones, Compute Instance Specifics, Compute Disk Specifics, DNS name. The template creates: N x Compute Disks x 3, N x Compute Instance Templates, N x Managed Instance Groups, DNS Zone and N x Cloud DNS “A” records.
  • 39. Terraform | 1. Use a Modular approach Creating Terraform modules allows us to separate code into another template and refer to that template using a shortcut. This is especially useful for code that is repeated. Modules are also great for separating code that users can change from modules that can be locked down to particular properties in a corporate environment. module "reservedip" { source = <path> rip-name = "${var.rip-name}" rip-count = "${var.rip-count}" } resource "google_compute_address" "static" { count = "${var.rip-count}" name = "${var.rip-name}-${count.index}" address_type = "INTERNAL" } output "reservedips" { value = ["${google_compute_address.static.*.address}"] }
  • 40. Terraform | Modules. Modules: Cloud DNS, Floating Storage, CIT, Managed Instance Group, Label. Resources: DNS Zone and ‘A’ records; Compute Instance Templates with assigned Compute Disks; Managed Instance Group; Mapped labels that can be assigned to resources.
  • 41. Terraform | 1. Use a Modular approach Using the modular approach, we can pass the output of one module as the input of another. module "template" { source = "./ComputeInstanceTemplate" … template-count = "${var.usr-node-count}" template-name = "${var.usr-template-name}" In the module above, we’re creating Compute Instance Templates. We named the module “template”. In the module below, we are creating Managed Instance Groups that will use those templates. We reference the output of the “template” module to get the list of templates. module "mig" { source = "./ManagedInstanceGroup" … mig-count = "${var.usr-node-count}" group-manager-name = "${var.usr-group-manager-name}" base-instance-name = "${var.usr-base-instance-name}" compute-instance-tpl = [ "${module.template.cit-url}" ]
  • 42. Terraform | 2. Using List Variables When Terraform spins up nodes, we want the nodes evenly distributed between zones. The best way to achieve this is by creating a list variable in Terraform. Terraform is smart enough to cycle through the list even if the list only has 3 elements but the user has selected 7 nodes. variable "usr-zones" { type = "list" } # managed instance group usr-group-manager-name = "mongo-node" usr-base-instance-name = "mongo-node" usr-zones = ["us-east1-b","us-east1-c","us-east1-d"] resource "google_compute_instance_group_manager" "appserver" { count = "${var.mig-count}" name = "${var.group-manager-name}-${count.index}" base_instance_name = "${var.base-instance-name}" instance_template = "${element("${var.compute-instance-tpl}", count.index)}" zone = "${element(var.zones, count.index)}" target_size = "${var.target-size}" }
  • 43. Terraform | 3. Passing a Startup Script We can create a startup script in the form of a shell script saved as a separate file. The file needs very little modification from standard bash syntax for Terraform to recognize it. Within our main.tf, we can pass variable values from Terraform to bash. # find startup script template. pass variables if needed. data "template_file" "startup-script" { template = "${file("startup-script.sh")}" vars { project = "${var.usr-project-id}" reservedips = "${join(",", "${module.ipaddr.reservedips}")}" target-size = "${var.usr-rip-count}" } }
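One detail worth spelling out, since the deck relies on it but never states it: inside the rendered file, ${...} belongs to Terraform's template engine, so any shell variable in startup-script.sh must be escaped as $${...}. A minimal sketch (the HOSTNAME example is illustrative, not from the deck):

```hcl
data "template_file" "startup-script" {
  template = "${file("startup-script.sh")}"

  vars {
    project = "${var.usr-project-id}"
  }
}

# Inside startup-script.sh:
#   echo "created in ${project}"    # substituted by Terraform when rendered
#   echo "host is $${HOSTNAME}"     # "$$" renders as a literal "$" for bash
```

The deck's own startup script uses exactly this escape when it trims the replica set string with $${mongostring::-3}.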
  • 44. Next, we simply pass the contents of the startup script as a variable to the module that creates the Compute Instance Template. Terraform | 3. Passing a Startup Script # compute instance template template-count = "${var.usr-rip-count}" template-name = "${var.usr-template-name}" template-description= "${var.usr-template-description}" instance-description = "${var.usr-instance-description}" machine-type = "${var.usr-machine-type}" template-ip = [ "${module.ipaddr.reservedips}" ] startup-script = "${data.template_file.startup-script.rendered}" keys = "${join(",",keys(module.gcp_label.tags))}" values = "${join(",",values(module.gcp_label.tags))}"
  • 45. MongoDB | The Startup Script
  • 46. Startup Script | 1. Identify When GCP is creating an instance, some information about the instance is available by querying the metadata curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/ For instance, you can find the IP address of the instance by running curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip Or, you can find Project details. You can find the Project ID by running curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/project-id
  • 47. Startup Script | 1. Identify You can also create custom metadata for the instance in Terraform. In the template we are using now, I am adding a template-id to instances. metadata = { template-id = "${count.index}" } Then in the startup script, I check the template-id to decide which instance is node 0, so that the replica set script is only run on that node. id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id) … if [ ${target-size} -ge 3 ] && [ $id -eq 0 ]; then sleep 30; /etc/mrepl.sh >> /tmp/bootstrap.log 2>&1; fi
  • 48. Startup Script | 1. Identify Get the IP address of the instance: instip=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) Identify the instance: gcloud dns record-sets transaction add -z=${project}-zone --name=${node-name}$id.${project}.local --type=A --ttl=300 $instip Get the ID of the instance: id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id)
  • 49. Startup Script | 2. Resynch The startup script has to be able to handle two scenarios for our separate compute disks 1. First time running. The disk is blank and needs to be formatted. 2. Instance replaced. The disk is formatted and has data we need to keep.
  • 50. Startup Script | 2. Resynch mkdir /data if mount /data ; then echo "disk already formatted..................." >> /tmp/bootstrap.log echo "disk mounted..................." >> /tmp/bootstrap.log else echo "formatting disk..................." >> /tmp/bootstrap.log mkfs.xfs /dev/sdb mount /data echo "disk mounted..................." >> /tmp/bootstrap.log fi Try to mount the drive It will only work if the drive is already formatted If it doesn’t work then we know the drive needs to be formatted and then mounted
  • 51. MongoDB | 3. Replace What does it take to replace a node? A startup script. • Copy install files • Install MongoDB • Configure additional drives • Find out about the instance from metadata • Update DNS Alias • Configure MongoDB parameters • Create MongoDB Replica Set script • Start MongoDB • Run the Replica Set script
  • 52. Terraform | Startup Script #!/bin/bash logger "created in ${project}" logger "install Stackdriver agents......................." curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh chmod 500 install-logging-agent.sh ./install-logging-agent.sh curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh chmod 500 install-monitoring-agent.sh ./install-monitoring-agent.sh yum install -y bind-utils echo "copy and install mongodb from rpm file......................" gsutil -m cp gs://${source-path}/mongodb-org* /root 2>&1 gsutil -m cp gs://${source-path}/mongodb.conf /root 2>&1 sleep 5 rpm -i --nosignature /root/*.rpm 2>&1 echo "Configure non-boot drives......................" echo '/dev/sdb /data xfs defaults,auto,noatime,noexec 0 0 /dev/sdc /log xfs defaults,auto,noatime,noexec 0 0 /dev/sdd /data/journal xfs defaults,auto,noatime,noexec 0 0' >> /etc/fstab mkdir /data if mount /data; then echo "disk already formatted..................." echo "data disk mounted..................." else echo "formatting disk..................." mkfs.xfs /dev/sdb mount /data echo "data disk mounted..................." fi mkdir /log if mount /log; then echo "disk already formatted..................." echo "log disk mounted..................." else echo "formatting disk..................." mkfs.xfs /dev/sdc mount /log echo "log disk mounted..................." fi if mount /data/journal; then echo "disk already formatted..................." echo "journal disk mounted..................." else echo "formatting disk..................." mkdir /data/journal mkfs.xfs /dev/sdd mount /data/journal echo "journal disk mounted..................." fi chown -R mongod:mongod /data /data/journal /log echo "Configure DNS alias.........................."
id=$(curl-H "Metadata-Flavor: Google" hPp://metadata.google.internal/computeMetadata/v1/instance/aPributes/template-id) echo "id=$id" ins^p=$(curl-H "Metadata-Flavor: Google" hPp://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) echo "ins^p=$ins^p" oldip=$(dig +short ${node-name}$id.${project}.local) echo "oldip=$oldip" gcloud dns record-sets transac^on start-z=${project}-zone gcloud dns record-sets transac^on remove-z=${project}-zone--name=${node- name}$id.${project}.local--type=A--Pl=300 $oldip gcloud dns record-sets transac^on add-z=${project}-zone--name=${node- name}$id.${project}.local--type=A--Pl=300 $ins^p gcloud dns record-sets transac^on execute-z=${project}-zone echo "Configure mongoDB Parameters...................." sed-i 's@/var/lib/mongo@/data@g' /etc/mongod.conf sed-i 's@/var/log/mongodb@/log@g' /etc/mongod.conf sed-i "s@bindIp: 127.0.0.1@bindIp: 127.0.0.1,$ins^p@g" /etc/mongod.conf if [ ${node-count}-ge 3 ]; then sed-i 's@#replica^on:@replica^on:'"n"' replSetName: "rs0"@g' /etc/mongod.conf; fi echo "Update file limits............" echo "* soft nofile 64000 * hard nofile 64000 * soft nproc 64000 * hard nproc 64000" > /etc/security/limits.d/90-mongodb.conf echo "Optimiize read ahead settings...................." 
blockdev --setra 0 /dev/sdb echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{bdi/read_ahead_kb}="0"' >> /etc/udev/rules.d/85-ebs.rules if [ ${node-count} -ge 3 ]; then echo "create mongoDB replica set script..................."; echo " cfg="{ _id: 'rs0', members: [ replace ] }" mongo ${node-name}0.${project}.local:27017 --eval "JSON.stringify(db.adminCommand({'replSetInitiate' : $cfg}))" " > /etc/mrepl.sh; mongostring="" index=0; for i in {1..${node-count}}; do mongostring=$mongostring" {_id: "$index", host: '${node-name}"$index".${project}.local:27017'},n" >> /etc/hosts; index=$(( $index + 1 )); done; mongostring=$${mongostring::-3} sed -i "s@replace@$mongostring@g" /etc/mrepl.sh; chmod 500 /etc/mrepl.sh; fi echo "update selinux for new mongo paths..................." semanage fcontext -a -t mongod_var_lib_t '/data.*' chcon -Rv -u system_u -t mongod_var_lib_t '/data' restorecon -R -v '/data' semanage fcontext -a -t mongod_log_t '/log.*' chcon -Rv -u system_u -t mongod_log_t '/log' restorecon -R -v '/log' semanage fcontext -a -t mongod_var_lib_t '/data/journal.*' chcon -h -u system_u -t mongod_var_lib_t '/data/journal' restorecon -R -v '/data/journal' echo "start mongoDB..................." service mongod start 2>&1 if [ ${node-count} -ge 3 ]; then /etc/mrepl.sh 2>&1; fi echo "end of startup script..................."
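The member-list loop above is easier to follow with the Terraform template variables substituted by hand. Below is a minimal sketch assuming three nodes named mongo0..mongo2 in an illustrative project called demo (values not from the deck); it builds the same `_id`/host list the script splices into /etc/mrepl.sh, using a plain comma-space separator in place of the sed-style `,\n`.

```shell
#!/bin/bash
# Build the replica-set members list the way the startup script does, with
# node_count, node_name, and project standing in for the Terraform vars.
node_count=3
node_name="mongo"
project="demo"

mongostring=""
index=0
for i in $(seq 1 "$node_count"); do
  mongostring="$mongostring{_id: $index, host: '$node_name$index.$project.local:27017'}, "
  index=$(( index + 1 ))
done
# Trim the trailing separator, as the script does with ${mongostring::-3}.
mongostring="${mongostring%, }"
echo "$mongostring"
# → {_id: 0, host: 'mongo0.demo.local:27017'}, {_id: 1, host: 'mongo1.demo.local:27017'}, {_id: 2, host: 'mongo2.demo.local:27017'}
```

Substituted into the cfg template, this list becomes the `members` array passed to replSetInitiate, so every rebuilt node gets the same stable hostnames regardless of which instance IPs currently back them.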
  • 53. Startup Script | Replace
#!/bin/bash
logger "created in ${project}"
logger "install Stackdriver agents......................."
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
chmod 500 install-logging-agent.sh
./install-logging-agent.sh
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
chmod 500 install-monitoring-agent.sh
./install-monitoring-agent.sh
yum install -y bind-utils
echo "copy and install mongodb from rpm file......................"
gsutil -m cp gs://${source-path}/mongodb-org* /root 2>&1
gsutil -m cp gs://${source-path}/mongodb.conf /root 2>&1
sleep 5
rpm -i --nosignature /root/*.rpm 2>&1
echo "Configure non-boot drives......................"
echo '/dev/sdb /data xfs defaults,auto,noatime,noexec 0 0
/dev/sdc /log xfs defaults,auto,noatime,noexec 0 0
/dev/sdd /data/journal xfs defaults,auto,noatime,noexec 0 0' >> /etc/fstab
mkdir /data
if mount /data; then
  echo "disk already formatted..................."
  echo "data disk mounted..................."
else
  echo "formatting disk..................."
  mkfs.xfs /dev/sdb
  mount /data
  echo "data disk mounted..................."
fi
mkdir /log
if mount /log; then
  echo "disk already formatted..................."
  echo "log disk mounted..................."
else
  echo "formatting disk..................."
  mkfs.xfs /dev/sdc
  mount /log
  echo "log disk mounted..................."
fi
if mount /data/journal; then
  echo "disk already formatted..................."
  echo "journal disk mounted..................."
else
  echo "formatting disk..................."
  mkdir /data/journal
  mkfs.xfs /dev/sdd
  mount /data/journal
  echo "journal disk mounted..................."
fi
chown -R mongod:mongod /data /data/journal /log
echo "Configure DNS alias.........................."
id=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/template-id)
echo "id=$id"
instip=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
echo "instip=$instip"
oldip=$(dig +short ${node-name}$id.${project}.local)
echo "oldip=$oldip"
gcloud dns record-sets transaction start -z=${project}-zone
gcloud dns record-sets transaction remove -z=${project}-zone --name=${node-name}$id.${project}.local --type=A --ttl=300 $oldip
gcloud dns record-sets transaction add -z=${project}-zone --name=${node-name}$id.${project}.local --type=A --ttl=300 $instip
gcloud dns record-sets transaction execute -z=${project}-zone
echo "Configure mongoDB Parameters...................."
sed -i 's@/var/lib/mongo@/data@g' /etc/mongod.conf
sed -i 's@/var/log/mongodb@/log@g' /etc/mongod.conf
sed -i "s@bindIp: 127.0.0.1@bindIp: 127.0.0.1,$instip@g" /etc/mongod.conf
if [ ${node-count} -ge 3 ]; then sed -i 's@#replication:@replication:'"\n"'  replSetName: "rs0"@g' /etc/mongod.conf; fi
echo "Update file limits............"
echo "* soft nofile 64000
* hard nofile 64000
* soft nproc 64000
* hard nproc 64000" > /etc/security/limits.d/90-mongodb.conf
echo "Optimize read ahead settings...................."
blockdev --setra 0 /dev/sdb
echo 'ACTION=="add|change", KERNEL=="sdb", ATTR{bdi/read_ahead_kb}="0"' >> /etc/udev/rules.d/85-ebs.rules
if [ ${node-count} -ge 3 ]; then
  echo "create mongoDB replica set script..................."
  echo "cfg=\"{ _id: 'rs0', members: [ replace ] }\"
mongo ${node-name}0.${project}.local:27017 --eval \"JSON.stringify(db.adminCommand({'replSetInitiate' : \$cfg}))\"" > /etc/mrepl.sh
  mongostring=""
  index=0
  for i in {1..${node-count}}; do
    mongostring=$mongostring" {_id: $index, host: '${node-name}$index.${project}.local:27017'},\n"
    index=$(( index + 1 ))
  done
  mongostring=$${mongostring::-3}
  sed -i "s@replace@$mongostring@g" /etc/mrepl.sh
  chmod 500 /etc/mrepl.sh
fi
echo "update selinux for new mongo paths..................."
semanage fcontext -a -t mongod_var_lib_t '/data.*'
chcon -Rv -u system_u -t mongod_var_lib_t '/data'
restorecon -R -v '/data'
semanage fcontext -a -t mongod_log_t '/log.*'
chcon -Rv -u system_u -t mongod_log_t '/log'
restorecon -R -v '/log'
semanage fcontext -a -t mongod_var_lib_t '/data/journal.*'
chcon -h -u system_u -t mongod_var_lib_t '/data/journal'
restorecon -R -v '/data/journal'
echo "start mongoDB..................."
service mongod start 2>&1
if [ ${node-count} -ge 3 ]; then /etc/mrepl.sh 2>&1; fi
echo "end of startup script..................."
Stackdriver | Copy and install RPM | Format drives | Configure Mongo | Start Replica Set
  • 54. What we did Today:
GCP Demo
Cloud basics
The approach and why
Terraform Tips
The startup script: Identify, Resync, and Replace