Live issues resolution on a Kubernetes Cluster
--------------------------------------------------------------------------------
***A two-step load-balancer setup: an external load balancer in front of an
internal HAProxy cluster (see I2 below).
***EFK means Elasticsearch, Fluentd, and Kibana.
-------------------------------------------------
Note:These notes assume Kubernetes version 1.8.
***Node:Nodes contain the pods.
Def:A node may be a VM or physical machine, depending on the cluster. Each node
contains the services necessary to run pods and is managed by the master
components. The services on a node include the container runtime, kubelet, and
kube-proxy.
***Pods:Pods are scheduled onto nodes.
Def:A pod (as in a pod of whales or pea pod) is a group of one or more
containers (such as Docker containers), with shared storage/network, and a
specification for how to run the containers. A pod's contents are always
co-located and co-scheduled, and run in a shared context.
***Uses of pods:
--------------
Pods can be used to host vertically integrated application stacks (e.g. LAMP),
but their primary motivation is to support co-located, co-managed helper
programs, such as (see the sketch after this list):
-content management systems, file and data loaders, local cache managers, etc.
-log and checkpoint backup, compression, rotation, snapshotting, etc.
-data change watchers, log tailers, logging and monitoring adapters, event
publishers, etc.
-proxies, bridges, and adapters
-controllers, managers, configurators, and updaters
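Example:a minimal pod sketch with a co-located helper container (a log tailer
next to a web server). The image names and the shared path are illustrative
assumptions, not from the source:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-tailer
spec:
  volumes:
  - name: logs
    emptyDir: {}        # shared scratch volume for both containers
  containers:
  - name: web
    image: nginx:1.13   # assumed image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-tailer    # helper program sharing storage with the main app
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx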
--------------------------------------------------------------------------------
I0:How do data stores and Kubernetes interact to make data recovery/persistence
possible?
--------------------------------------------------------------------------------
Ans(Solution):A volume can be backed by a variety of implementations, including
files on the host machine, AWS Elastic Block Store (EBS), and NFS. So when we
restart the Kubernetes cluster, the data storage for the DB (MySQL/MongoDB)
will always persist.
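Example:a minimal sketch of persisting MySQL data through a
PersistentVolumeClaim (assuming the cluster has a default storage class backed
by EBS or similar; names and sizes are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data   # survives pod restarts and rescheduling
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"       # illustrative only; use a Secret in practice
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql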
I1.How to access the deployed application from the Internet?
--------------------------------------------------------------------------------
Ans(Solution):Expose the application through a Service (NodePort or
LoadBalancer), or put an external load balancer in front of the cluster; see
the Service sketch that follows, and I2 below for the load-balancing setup.
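Example:a minimal Service sketch exposing a deployment to the Internet; the
app label and ports are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # use NodePort instead if no cloud load balancer exists
  selector:
    app: web           # assumed pod label
  ports:
  - port: 80
    targetPort: 8080   # assumed container port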
I2.Load balancing with Kubernetes: how to do load balancing in Kubernetes?
--------------------------------------
Ans(Solution):We have to configure a load balancer such as HAProxy or NGINX in
front of the Kubernetes cluster.
-We started running our Kubernetes clusters inside a VPN (Virtual Private
Network) on AWS (dynamic AWS IP addresses resolved at the DNS level) and used
an AWS Elastic Load Balancer to route external web traffic to an internal
HAProxy cluster.
-HAProxy (which can handle multiple vhosts) is configured with a "back end"
for each Kubernetes service; each back end proxies traffic to the individual
pods.
HTTPS -> AWS ELB -> Virtual Private Network -> HAProxy -> Kubernetes Service
(load balancer node) -> 2 pods at dynamic IP addresses
Figure: Diagram of our two-step process for load balancing
------------------------------------------------------------
Note:In any case, we needed a mechanism to dynamically reconfigure the load
balancer (HAProxy, in our case) when new Kubernetes services are created.
The Kubernetes community is currently working on a feature called ingress. It
will make it possible to configure an external load balancer directly from
Kubernetes.
*I2.1:Configuring load balancing:load-balancer configurations can be stored in
etcd. We can use a tool called confd to watch configuration changes in etcd and
generate a new HAProxy configuration file based on a template.
When a new service is added to Kubernetes, we add a new configuration to etcd,
which results in a new configuration file for HAProxy (see the sketch below).
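Example:a hedged sketch of driving HAProxy from etcd with confd; the etcd key
layout (/services/<name> -> pod address) and the file paths are assumptions:
# /etc/confd/conf.d/haproxy.toml
[template]
src        = "haproxy.cfg.tmpl"
dest       = "/etc/haproxy/haproxy.cfg"
keys       = ["/services"]
reload_cmd = "systemctl reload haproxy"
# /etc/confd/templates/haproxy.cfg.tmpl -- one back end per service key
{{range gets "/services/*"}}
backend {{base .Key}}
    server {{base .Key}} {{.Value}} check
{{end}}
# Run confd in watch mode against etcd:
$confd -watch -backend etcd -node http://127.0.0.1:2379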
I3:How to make the deployments automatic? (Using a webhook)
-----------------------
Ans(Solution):With the Deployer in place, we are able to hook up deployments
to a build pipeline in Jenkins.
After a successful build, our build server pushes a new Docker image to a
registry such as Docker Hub. Then the build server can invoke the Deployer
automatically to deploy the new version to a test environment. The same image
can be promoted to production by triggering the Deployer on the production
environment.
Build Server --builds image--> Docker Hub --webhook--> Deployer --deploys-->
Kubernetes
Figure: Our automated container deployment pipeline
-----------------------------------------------------------
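Example:a hedged sketch of the webhook step; the Deployer endpoint, payload
fields, and image name are hypothetical, since the source does not specify its
API:
$curl -X POST https://deployer.internal.example/deployments \
    -H "Content-Type: application/json" \
    -d '{"image": "myorg/myapp:1.2.3", "environment": "test"}'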
I4:How should I handle my data stores (MongoDB, MySQL) with Kubernetes? Or:
how to persist data in case of a restart or crash?
Ans(Solution):Using volumes (AWS Elastic Block Store (EBS) and NFS) to work
with persistent data, we can persist data even in the worst case of a crash or
restart; see the PersistentVolumeClaim sketch under I0.
-------
Note:We always want the data to be persistent. Out of the box, containers lose
their data when they restart. This is fine for stateless components, but not
for a persistent data store.
I5:How to solve replication issues/workloads, or tune the application?
-------------------------------
Ans(Solution):Each node in the data store's cluster must be backed by a
different volume (e.g. AWS EBS: Elastic Block Store).
Note:Scaling (adding/removing nodes) is an expensive operation for most data
stores as well.
I6.How to make data stores (which require precise configuration) get their
clustering up and running?
--------------------------------------------------------------------------------
Ans(Solution):Set up kubectl, and then add monitoring and logging using tools
like Prometheus and the EFK stack.
I7.How To Set Up Prometheus Monitoring On a Kubernetes Cluster?
---------------------------------------------------------
Ans(Solution):
1.Clone the example repo:
$git clone https://github.com/bibinwilson/kubernetes-prometheus
2.Connect To The Cluster:Connect to your Kubernetes cluster and set up the
proxy for accessing the Kubernetes dashboard.
3.Create A Namespace:
$kubectl create namespace monitoring
A. Create a file named clusterRole.yaml and copy the ClusterRole config from
the repo into it.
B. Create the role using the following command:
$kubectl create -f clusterRole.yaml
4.Create A Prometheus Deployment:
4.1: Create a file named prometheus-deployment.yaml and copy the deployment
manifest from the repo into it.
4.2: Create the deployment in the monitoring namespace using the above file:
$kubectl create -f prometheus-deployment.yaml --namespace=monitoring
4.3: Verify the deployment:
$kubectl get deployments --namespace=monitoring
4.4: Connecting To Prometheus
-Using kubectl port forwarding:
$kubectl get pods --namespace=monitoring
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-monitoring-3331088907-hm5n1   1/1     Running   0
-Execute the following command with your pod name to access Prometheus on
localhost port 8080 (Prometheus itself listens on 9090 inside the pod):
$kubectl port-forward prometheus-monitoring-3331088907-hm5n1 8080:9090 --namespace=monitoring
-Now, if you access http://localhost:8080 in your browser, you will get the
Prometheus home page.
-Alternatively, expose the Prometheus deployment as a Service with NodePort or
a LoadBalancer, as sketched below.
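Example:a minimal NodePort Service sketch for the Prometheus deployment; the
selector label is an assumption based on typical manifests:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  type: NodePort
  selector:
    app: prometheus-server   # assumed pod label from the deployment
  ports:
  - port: 8080
    targetPort: 9090         # Prometheus default port
    nodePort: 30000          # illustrative value in the 30000-32767 range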
I8:Scaling an application
----------------------------
Earlier we created a Deployment, and then exposed it publicly via a Service.
The Deployment created only one Pod for running our application. When traffic
increases, we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment.
Scaling out a Deployment will ensure new Pods are created and scheduled to
Nodes with available resources. Scaling will increase the number of Pods to
the new desired state. Kubernetes also supports autoscaling of Pods.
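Example:both scaling approaches from the paragraph above; the deployment name
is an assumption:
# Manual scaling: set the replica count directly.
$kubectl scale deployment/my-app --replicas=4
# Autoscaling: a HorizontalPodAutoscaler adjusts replicas based on CPU load.
$kubectl autoscale deployment/my-app --min=2 --max=10 --cpu-percent=80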
I9:Slow scheduling in Kubernetes version 1.8 (how to fix slow scheduling in
Kubernetes)
--------------------------------------------------------------------------------
Ans(Solution):
-1.We have to install six Kubernetes virtual machines on each physical server.
Each virtual machine is an isolated Kubernetes node with kubelet, kube-proxy,
the Docker daemon, and other software installed.
-2.We have to cut the number of pods that can be scheduled on the same node
down to 60, as sketched below. So we would have 360 pods per physical server
without any scheduling problems. (Pods are part of the node.)
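Example:a hedged sketch of capping pods per node. The kubelet --max-pods flag
is the classic route; the config-file variant and its path are assumptions
about your setup:
$kubelet --max-pods=60 ...
# Or via the kubelet configuration file (passed with --config):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 60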
I10.CPU resource limit overhead (how to overcome CPU overhead in Kubernetes)
--------------------------------------------------------------------------------
Ans(Solution):A CPU limit ensures better performance if we set it at 1.5x the
app's normal load or higher.
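Example:a container resources fragment following the 1.5x rule; assuming an
app whose normal load is about one core (numbers are illustrative):
resources:
  requests:
    cpu: "1"        # typical steady-state usage (assumed)
    memory: "512Mi"
  limits:
    cpu: "1500m"    # 1.5x the normal load, per the rule of thumb above
    memory: "1Gi"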
I11.Zero-downtime in Kubernetes via deploying with Ingress (how to overcome
Kubernetes downtime)
------------------------------------------
Ans(Solution):In the dev and production clusters, we have to use the native
nginx-based ingress controller to overcome Kubernetes downtime.
1.Dedicated logic can be implemented in the service to asynchronously switch
to a sleep mode for several seconds and only then stop accepting new
connections.
2.It can be done with a specialized Kubernetes pod preStop hook. The process
will receive the SIGTERM signal only when the preStop hook has completed.
-----------------
We can adjust the time based on your specific case, as in the sketch below.
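Example:a minimal preStop sketch; the image name and sleep duration are
illustrative:
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: myorg/myapp:1.0   # assumed image
    lifecycle:
      preStop:
        exec:
          # Sleep so the pod is removed from load balancing before SIGTERM.
          command: ["sh", "-c", "sleep 10"]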
I12.Network issues (like high latency and packet drops): how to overcome
network issues
---------------------------------------------------
Ans(Solution):
1.Assign interrupts to CPU cores. It helps to allocate several physical cores
to network operations without competing with other processes. Also increase
the RX ring buffer.
2.If you use KVM virtualization for Kubernetes nodes, check your network
adapter. We use virtio; it performs well.
3.Increase the size of the network buffers in the Linux kernel and the number
of queues in the NIC, and set up interrupt coalescing (see the sketch below).
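Example:a hedged sketch of the host-level commands involved; the device name
(eth0) and all values are illustrative and need per-machine tuning:
# Increase the RX ring buffer on the NIC.
$ethtool -G eth0 rx 4096
# Enlarge kernel network buffers.
$sysctl -w net.core.rmem_max=16777216
$sysctl -w net.core.wmem_max=16777216
# Add NIC queues and enable adaptive interrupt coalescing.
$ethtool -L eth0 combined 8
$ethtool -C eth0 adaptive-rx on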
I13.Local development performance: how to increase development performance
-----------------------------
Case:We have several hundred developers on our team; the majority of them run
macOS and some work on Linux (Arch, Ubuntu, etc.). We still have a large
monolithic app interacting with microservices.
Ans(Solution):
-For local development, we use the same tools as for production: Kubernetes
for container orchestration and Helm for environment management and deploys.
That's why we chose minikube, an easy one-click installation tool for local
single-node Kubernetes clusters with cross-platform support.
---------
-The website was still taking about 5 seconds to load; the root cause of the
problem was the stat syscalls, whose performance was very poor over the
default shared folder. NFS solves this issue: mount the host directory onto
the VM using NFS when starting minikube, and FS call performance becomes
really good.
-We have to switch from VirtualBox + vboxsf to a different hypervisor and
mount system, as sketched below. It will free up your resources and speed up
the local environment.
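Example:a hedged sketch of the NFS setup on macOS; the project path, the
hyperkit driver choice, and the host IP are assumptions:
# 1. Export the project directory from the host: add a line like this to
#    /etc/exports, then run: sudo nfsd restart
#    /Users/dev/project -alldirs -mapall=501:20 -network 192.168.64.0 -mask 255.255.255.0
# 2. Start minikube on a non-VirtualBox hypervisor.
$minikube start --vm-driver=hyperkit
# 3. Mount the export inside the minikube VM.
$minikube ssh
$sudo mkdir -p /project
$sudo mount -t nfs -o nfsvers=3,nolock 192.168.64.1:/Users/dev/project /project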
----------------------------
Note:Despite all its issues, Kubernetes provides fast and reliable deploys in
any environment and is highly flexible for building a microservice
architecture.
I14.Deployment downtime: how to overcome deployment downtime
-----------------------------------
Ans(Solution):We can overcome deployment downtime using Helm.
-Helm is a package manager for Kubernetes. We use Helm for service deployment.
It has many functions, such as environment management, waiting for resources
during deployment, computing the deployment result, and many others.
-The easiest way to upgrade Tiller is to deploy it to a different Kubernetes
namespace and add a reference to the older Tiller configmaps. Then we can
easily migrate to the new version of Tiller by configuring the
tiller-namespace option in the Helm client. So in the worst case, if something
goes wrong, we can simply change the tiller-namespace in the client without a
Tiller rollback (see the sketch below).
Note:Architecturally, Helm is built of two binaries: the Helm client and the
Tiller server. All release data is stored in Kubernetes configmaps.
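Example:a hedged sketch of the Tiller-namespace migration using Helm 2's
--tiller-namespace flag; the namespace names are assumptions:
# Install the new Tiller version into a fresh namespace; the old one stays.
$kubectl create namespace tiller-v2
$helm init --tiller-namespace tiller-v2 --upgrade
# Point the client at the new Tiller.
$helm list --tiller-namespace tiller-v2
# Worst case: switch back by targeting the old namespace again.
$helm list --tiller-namespace kube-system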
