GKE allows users to deploy and manage containerized applications on Google Cloud. It provides scalability, availability, and integration with services like Calico for network policies. The document demonstrates deploying a Node.js app to GKE, including building a Docker image, creating deployment and service files, and implementing CI/CD using Cloud Build to automate building and deploying whenever the source repository is updated.
Google Kubernetes Engine with Google CI/CD Implementation
1. Google Kubernetes Engine (GKE)
Topics we will be covering in the presentation are:
• What is GKE
• Why GKE is different from other clouds
• How to implement an application in GKE (A Demo)
• CI/CD implementation on GKE
2. 1. What is GKE?
Google Kubernetes Engine (GKE) is a management and orchestration system for Docker
container and container clusters that run within Google's public cloud services.
Google Kubernetes Engine is based on Kubernetes, Google's open source container
management system.
We use Kubernetes commands and resources to deploy and manage applications, perform
administration tasks, set policies, and monitor the health of deployed workloads.
Some main features of GKE:
• Create or resize Docker container clusters
• Create container pods, replication controllers, jobs, services or load balancers
• Update and upgrade container clusters
• Automatic scaling of your cluster's node instance count
3. 2. Why GKE is different from other clouds
Differences on the basis of the parameters below:
Scalability
• GKE: provides the ability to add different "node pools" (or "node groups"), which allows different machine types to join the worker pool; adding a node pool is a single-step process.
• AKS: you can only scale up to similar nodes.
• EKS: you need to additionally connect the new node group to the cluster manually.

Availability
• GKE: provides high availability of its clusters in two modes: multi-zonal and regional. In multi-zonal mode there is only one master node, but there can be worker nodes in different zones. In regional mode, the master nodes are also spread across all the zones to provide even better HA.
• AKS: does not have high availability for its master nodes, as of date. The worker nodes are part of Availability Zones, so they provide HA.
• EKS: also provides HA master and worker nodes spread across multiple availability zones, very similar to GKE's regional mode.

Add-ons
• GKE: provides support for Calico as its network plugin, which enables Network Policies to be defined for inter-pod communication.
• AKS: supports Network Policies through the kube-router project, which has to be installed manually.
• EKS: also provides Calico integration, though it has to be set up manually. Network policies are crucial for securing the platform, especially in a multi-tenant environment.

Pricing (short-term, 100 hrs/month)
• GKE: $40
• AKS: $60
• EKS: $50 + $20 (master)
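As an illustration of the Network Policies mentioned above, a minimal Calico-compatible policy might look like the sketch below; the namespace, labels, and port are hypothetical, not taken from the demo:

```yaml
# Allow ingress to pods labelled app: nodejsapp only from pods
# labelled role: frontend, on TCP port 8080. All other ingress
# to those pods is denied once this policy is in place.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nodejsapp
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nodejsapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```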
5. Application Deployment in GKE
In our demo we are going to deploy a Node.js application
on our GKE cluster. The flow will be as follows:
• We will take the application source from GitHub.
• We will create a Dockerfile, which will copy all the important libraries and other packages needed to run the application; with that we will build a Docker image, which will later be used at deployment time.
• We will create a deployment.yaml file for the deployment.
• We will create a service.yaml file to expose our application to the outside world.
6. Fetching the code from GitHub
Go to the GKE cluster that you have created and run:
#git clone https://github.com/Piyushkamboj/NodeJS-GKE.git
The front end of the application will look like:
7. DockerFile
We will build the image using the docker build command:
#docker build -t (image_name:tag) (path of the Dockerfile directory)
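The Dockerfile itself appears on the slide as an image; a minimal sketch for a Node.js app could look like the following. The entry-point file name and port are assumptions, not taken from the demo repository:

```dockerfile
# Minimal Node.js image: install dependencies, copy sources, start the app.
FROM node:14-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install
COPY . .
# Assumed application port.
EXPOSE 8080
# Assumed entry-point file.
CMD ["node", "server.js"]
```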
After successfully creating the Docker image, we will push it to Google Container
Registry using the commands below:
#docker tag (source_image) gcr.io/testproject-piyush/nodejsapp:v1
#docker push gcr.io/testproject-piyush/nodejsapp:v1
Note: instead of gcr.io we can also use us.gcr.io, eu.gcr.io, or asia.gcr.io, depending on the location where our
registry exists.
9. Creating deployment and exposing the application
We will use the deployment.yaml and service.yaml files we created earlier.
Note: run the commands below in the location where the above two files are present.
#kubectl apply -f deployment.yaml
#kubectl apply -f service.yaml
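The two files themselves are shown as screenshots on the slides; a sketch along these lines would work, where the replica count, service port, and container port are assumptions and the image name comes from the earlier push commands:

```yaml
# deployment.yaml: run two replicas of the pushed image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejsapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejsapp
  template:
    metadata:
      labels:
        app: nodejsapp
    spec:
      containers:
      - name: nodejsapp
        image: gcr.io/testproject-piyush/nodejsapp:v1
        ports:
        - containerPort: 8080
---
# service.yaml: expose the deployment to the outside world via a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: nodejsapp-service
spec:
  type: LoadBalancer
  selector:
    app: nodejsapp
  ports:
  - port: 80
    targetPort: 8080
```

After applying, the external IP assigned to the LoadBalancer service can be found with `kubectl get service nodejsapp-service`.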
11. CI/CD implementation on GKE
We will learn how to create a CI/CD pipeline and thus deploy the build (Docker
image) on the Google Kubernetes cluster.
The steps in the whole process are:
• Creating a Dockerfile which includes all the libraries and dependencies to run the application in dockerised form. The Dockerfile must be in the root directory of the application.
• Creating the deployment.yaml and service.yaml files. They include the steps for deploying and exposing the created image on the Kubernetes cluster.
• Creating a cloudbuild.yaml file; the cloudbuild.yaml file will drive the complete build-and-deploy task.
• Setting up IAM for Cloud Build so that it has permissions to deploy the images to GKE.
• Setting up the build trigger.
12. Creation of Dockerfile and Deployment.yaml File
As we have created the Dockerfile and deployment files earlier, we will just push both
files to the source code repository (GitHub) and sync the Google Cloud
Source Repository with GitHub.
13. Create Cloudbuild.yaml file
With the help of a build trigger, the cloudbuild.yaml file will automate
everything we were doing manually, from building the Docker image
to deploying it to the GKE cluster.
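The cloudbuild.yaml contents appear on the following slides as screenshots. A minimal sketch along these lines would cover the same steps; the image name, deployment name, zone, and cluster name here are assumptions, not taken from the demo:

```yaml
# cloudbuild.yaml: build the image, push it to GCR,
# then roll it out to the GKE cluster.
steps:
# Build the Docker image, tagged with the commit SHA.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/nodejsapp:$SHORT_SHA', '.']
# Push the image to Google Container Registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/nodejsapp:$SHORT_SHA']
# Update the running deployment to the new image.
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/nodejsapp',
         'nodejsapp=gcr.io/$PROJECT_ID/nodejsapp:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'   # assumed zone
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'  # assumed cluster name
images:
- 'gcr.io/$PROJECT_ID/nodejsapp:$SHORT_SHA'
```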
16. Set up IAM
We will set up IAM for Cloud Build so that it has permissions to deploy the images to
GKE.
To do this, navigate to IAM & Admin → IAM in the side menu.
Here, you will see an entry in the format:
"<project-number>@cloudbuild.gserviceaccount.com"
with the role Cloud Build Service Account. Edit the role and add the Kubernetes Engine
Admin role to it as well.
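The same role can also be granted from the command line instead of the console; a sketch, where PROJECT_ID and PROJECT_NUMBER are placeholders for your own project:

```
# Grant the Cloud Build service account the Kubernetes Engine Admin role
# (roles/container.admin) so it can deploy to the cluster.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/container.admin"
```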
17. Setup Build Trigger
In the Cloud Build section there is an option to set up a build trigger; after clicking it, we
have to fill in some details as below.
18. Run the created Trigger
After running it, check the build history in the Cloud Build section.