Presenter: Veer Muchandi
Title: Principal Architect - Container Solutions
Social Handle: @VeerMuchandi
Blogs: https://blog.openshift.com/author/veermuchandi/
Agenda
Why Containers for Microservices?
Value of Container Platform
How do containers run on a K8S/OpenShift cluster?
Structure of a “Containerized Microservice” on OpenShift
An example
Other OOTB features from a Container Platform
Microservices
“…building applications as suites of services. As well as the fact that services are independently deployable and scalable, each service also provides a firm module boundary, even allowing for different services to be written in different programming languages. They can also be managed by different teams.”
– Martin Fowler
Why Containers for Microservices?
A Typical Monolith
Multiple business functions all bundled up into a single large monolith.
Deployed as a single large deployment unit on the host:
- hard to change
- hard to manage
- causes slow release cycles
Acknowledgements: Borrowed a few conventions from http://martinfowler.com/articles/microservices.html
So we want to break it up..
We want to refactor each business function as an independent microservice.
So how does a microservice run on a host?
Microservices are typically very
small.
Even the smallest VM size in your enterprise may be too big.
With a single microservice per
host, we will end up wasting a
lot of resources.
So should we run a bunch of
them on the box?
Well.. now all our eggs are in the
same basket!!
Hmm.. let’s see.
How about mixing different microservices on the same host?
Can I do this?
Wait.. Microservices are Polyglot.
Each language has its own libraries and dependencies.
Phew.. how do we deal with all this on one host?
Welcome to the World of Containers!!
Multiple containers run on a host.
Containers share the kernel on the host.
Docker containers have layers.
Infrastructure-as-code by default!!
It is not just your application but all its dependencies included.
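As a rough sketch (the image and service names here are made up for illustration), packaging and running one such microservice looks like this:

  # Build an image that layers the app and all its dependencies on a base image
  docker build -t payment-svc:1.0 .
  # Run it; the container carries everything it needs, regardless of what is on the host
  docker run -d --name payment -p 8080:8080 payment-svc:1.0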
Let’s understand containers
Container-based CD
Containers are Portable
So do you want to burst your microservices to other datacenters or clouds to meet your demands?
Containers run the same way across datacenters.. portability comes with the container image format.
Polyglot
Again, multiple containers run on a host.
Since dependencies are bundled in, each container can use its own technologies without affecting other containers on the host.
Containers are naturally Polyglot.
Speed of Deployment
VMs take minutes to come
up
Containers spin up fast .. in seconds..!!!
Containers Scale up fast and Scale down fast
Microservices need to scale up quickly. Containers provide that out of the box (OOTB).
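A minimal sketch of how quickly instances come and go (the payment-svc image is hypothetical):

  # Scale up: start extra instances of the same image in seconds
  docker run -d --name payment-2 payment-svc:1.0
  docker run -d --name payment-3 payment-svc:1.0
  # Scale down just as fast
  docker stop payment-3 && docker rm payment-3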
Application Upgrades, Security fixes, Middleware, BaseOS Upgrades
Upgrades are quick
They won’t affect other
containers as the changes
are local to a container
Easy to roll back.. Just bring up the previous container version!!
Meets this need...
“Microservices can be
changed quickly without
affecting others!!!”
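A hedged sketch of an upgrade and rollback at the container level (container names and image tags are hypothetical):

  # Upgrade: replace the running container with the new image version
  docker stop payment && docker rm payment
  docker run -d --name payment payment-svc:1.1
  # Rollback: just bring up the previous container version again
  docker stop payment && docker rm payment
  docker run -d --name payment payment-svc:1.0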
Container Registry
As a bonus, you get a repository to store your ready-to-run microservices. Just push to or pull from the repo.
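For example (the registry host and image names are hypothetical):

  # Tag and push the ready-to-run image to the registry
  docker tag payment-svc:1.0 registry.example.com/shop/payment-svc:1.0
  docker push registry.example.com/shop/payment-svc:1.0
  # Any other host can pull and run the exact same image
  docker pull registry.example.com/shop/payment-svc:1.0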
Value of a Container Platform
Operations Dilemma
Containers are easy to create and share.. Right?
Let’s give one to Operations to run it at scale.
Now the operator has to deal with scaling, logs, application health, metrics, high availability, load balancing, utilization and persistence.
Welcome to the Enterprise Ready OpenShift/K8S
Pods
OpenShift/K8S runs containers in Pods. A Pod is a wrapper around one or more containers.
Each pod gets an IP address (e.g. 10.0.0.1, 10.1.0.2), and the container adopts the Pod’s IP.
Some pods may have more than one container.. that’s a special case though!!
Containers in Pods
All the containers in a pod die along with the pod.
Usually these containers are dependent on each other, like a master and slave or a side-car pattern, and they have a very tight, married relationship.
Pod Scaling
When you scale up your application, you are scaling up pods.
Each Pod has its own IP (e.g. 10.1.0.1, 10.0.0.4, 10.0.0.1).
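A minimal sketch with the oc CLI (the payment deployment config is hypothetical):

  # Scale the payment microservice to 3 pods
  oc scale dc/payment --replicas=3
  # Each pod shows up with its own IP and the node it landed on
  oc get pods -o wide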
Nodes
Nodes are the application hosts that make up an OpenShift/K8S cluster. They run Docker and OpenShift.
The Master controls where pods are deployed on the nodes and ensures cluster health.
High Availability
When you scale up, pods are distributed across nodes following scheduler policies defined by the administrator.
So even if a node fails, the application is still available.
Health Management
Not just that: if a pod dies for some reason, another pod will come up in its place.
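A quick way to see this, assuming a hypothetical payment pod:

  # Delete one pod on purpose...
  oc delete pod payment-1-abcde
  # ...and watch a replacement pod come up in its place
  oc get pods -w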
Flexibility of architecture with OpenShift/K8S Services
Pods can be front-ended by a Service.
A Service is a proxy.. every node knows about it, and the Service gets its own IP.
The Service knows which pods to front-end based on their labels.
Built-in Service Discovery
Clients can talk to the Service, and the Service redirects the requests to the pods.
The Service also gets a DNS name, so clients can discover it.. built-in service discovery!!
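A minimal sketch (the payment deployment config and the shop project are hypothetical):

  # Put a service in front of the payment pods
  oc expose dc/payment --port=8080
  # The service gets a cluster IP and a DNS name of the form
  # <service>.<namespace>.svc.cluster.local, e.g. payment.shop.svc.cluster.local
  oc get svc payment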
Accessing your Application
When you want to expose a service externally, e.g. for access via a browser using a URL, you create a “Route”.
The Route gets added to an HAProxy load balancer. You can also configure your F5 as the load balancer.
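For example (the service name and hostname are hypothetical):

  # Expose the payment service externally via a route
  oc expose svc/payment --hostname=payment.apps.example.com
  oc get route payment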
Structuring Containerized Microservices on OpenShift
Refer: http://martinfowler.com/articles/microservices.html
Refactoring Monolith to Microservices
The database can be part of a microservice, or it can be a separate microservice. Separation allows you to scale the microservices independently.
Microservice Structure on OpenShift/K8S
Microservice1 is made up of two K8S services/tiers. Each tier scales independently, although both are part of the same microservice. Microservice1 is exposed via a Route, hence it can be used by external clients such as a browser.
Microservice2 is an internal service, usable only by other microservices running on the cluster, as it does not have a Route.
A Microservices example
Code: https://github.com/debianmaster/microservices-on-openshift
Video blogs:
https://blog.openshift.com/building-and-running-micro-services-on-openshift-part-i/
https://blog.openshift.com/building-and-running-micro-services-on-openshift-part-ii/
https://blog.openshift.com/building-and-running-microservices-on-openshift-part-iii/
https://blog.openshift.com/building-and-running-microservices-on-openshift-part-iv/
OpenShift Templates
OpenShift Templates enable easy deployment of a suite of microservices.
You can also define the number of pods to run for each microservice and their resource requirements.
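A hedged sketch of instantiating such a template (the template file and parameter names are made up):

  # Create the whole suite of microservices from a template, overriding replica counts
  oc new-app -f microservices-template.yaml -p FRONTEND_REPLICAS=2 -p BACKEND_REPLICAS=3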
More OOTB features for Microservices
Utilization and Density
You can run many microservices on a cluster.. you don’t need to get more infra until you really exhaust the resources.
Idling
If certain pods are not used, you can idle them and free up resources.
They will be reinstated automatically when the service is accessed.
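For example, assuming a hypothetical payment service:

  # Idle the service; its pods are scaled down and resources are freed
  oc idle payment
  # The next request to the service brings the pods back up automatically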
Metrics, Logging, Tracing and APM
You get metrics, APM and distributed tracing with Hawkular (https://github.com/hawkular/hawkular-apm).
EFK stack (Elasticsearch, Fluentd, Kibana) for log consolidation.
Zero Downtime Deployments
Blue/Green deployments, Canary deployments, A/B testing.. out of the box.
Language Agnostic, No Code Intrusion
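A minimal sketch of a canary-style traffic split (the route and service names are hypothetical):

  # Send 90% of traffic to the blue version and 10% to the green version
  oc set route-backends payment payment-blue=90 payment-green=10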
API Gateway image upcoming on OpenShift
Red Hat SSO, based on Keycloak
Fuse and AMQ supported
Middleware supported
Many supported OSS technologies
Jenkins CI/CD pipelines all run on OpenShift as containers
With all such features built into a Container Platform, it becomes a
true Microservices Platform.
Questions?
Thank you
Backup Slides
OpenShift is Enterprise Ready K8S
http://www.levvel.io/blog-post/differences-between-kubernetes-and-openshift

Deploying Microservices as Containers