Cloud-Native Concepts
May 30th, 2018
440x faster
Image Credit: Rémi Jacquaint, Unsplash
2017 State of DevOps report
3
Businesses that deliver faster, win!
Traditional:
Dev -> QA -> Ops
Infrastructure as Pets
Configuration Management
Monoliths in VMs
Disparate & Complex Management

Cloud-Native:
DevOps
Infrastructure as Cattle
Immutable images
Microservices in Containers
Adaptive & Integrated Management
4
• Cloud-native Technologies
• Containers
• Microservices
• Kubernetes for Ops
• Kubernetes for Devs
Agenda
5
• Founder and CEO at Nirmata
• Developing large-scale distributed
systems since the early 90’s
(Go, Java, JS, C++)
• Centralized management for
complex systems
About me
CKA-1700-0169-0100
Cloud Native Technologies
7
Infrastructure impacts Applications
Infrastructure: Mainframes → Client-Server → Cloud
Applications: Batch / Procedural → Object-Oriented → Microservices
8
What is cloud native?
Cloud native applications are designed to run
efficiently on cloud infrastructure services.
9
• Operability
expose control over application lifecycle management
• Observability
provide signals for state, health, and performance
• Elasticity
grow and shrink to fit available resources and to meet demand
• Resilience
Fast automatic recovery from failures
• Agility
Fast deployment, iteration, and reconfiguration
Cloud Native - Key characteristics
Source: Cloud Native Computing Foundation; cncf.io
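Several of these characteristics map onto concrete Kubernetes primitives. As an illustrative sketch (the Pod name, image, and endpoint paths are assumptions, not from the slides), liveness and readiness probes provide the health signals described under Observability:

```yaml
# Hypothetical Pod snippet: probes expose health signals to the platform.
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: nginx:1.15
    livenessProbe:        # platform restarts the container if this fails
      httpGet:
        path: /healthz
        port: 80
    readinessProbe:       # kept out of Service endpoints until this passes
      httpGet:
        path: /ready
        port: 80
```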
10
1. Container packaged. Running applications and processes in software
containers as an isolated unit of application deployment, and as a mechanism to achieve
high levels of resource isolation. Improves overall developer experience, fosters code
and component reuse, and simplifies operations for cloud native applications.
2. Dynamically managed. Actively scheduled and actively managed by a
central orchestrating process. Radically improves machine efficiency and resource
utilization while reducing the cost associated with maintenance and operations.
3. Microservices oriented. Loosely coupled, with dependencies explicitly
described (e.g. through service endpoints). Significantly increases the overall agility
and maintainability of applications. The foundation will shape the evolution of the
technology to advance the state of the art for application management, and to make the
technology ubiquitous and easily available through reliable interfaces.
Cloud Native – how to get there
Source: Cloud Native Computing Foundation; cncf.io
Containers
12
1. An image
• Typically built by a Dev or a CI/CD pipeline
• Self-contained
• Has a unique ID (digest)
• Stored in a central registry
2. A runtime
• Image
• Run command
• Env variables
• Network and storage
Containers are 2 things
13
Open Container Initiative (OCI)
• OCI Runtime Spec
how to run containers
• OCI Image Format Spec
how to build container images
14
Docker Lifecycle
For a comprehensive CLI reference: http://docs.docker.com/reference/commandline/cli/
15
1. Immutable and portable
application images
2. Separation of concerns
between Ops and Dev
3. Microservices-style
applications
Why Containers?
Image Credit: https://www.suse.com/solutions/devops/
16
• A container is executed from
an image
• Images have layers
• Each layer has a unique ID
• Layers are cached and reused
across container images
• Images are built using a
specification (Dockerfile)
Docker Images
https://www.infoworld.com/article/3077875/linux/containers-101-docker-fundamentals.html
17
• Specifies what goes in a container image and
how it executes
Dockerfile
FROM alpine:latest
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN apk add --update curl && rm -rf /var/cache/apk/*
COPY bin /opt/bin
RUN chmod a+x /opt/bin/*
ENV PATH=$PATH:/opt/bin/
ENTRYPOINT [ "/opt/bin/run.sh" ]
18
Dockerfile - Tomcat Example
FROM tomcat:8.0-jre8-alpine
LABEL author="jim@nirmata.com"
ADD build/libs/hello-1.0.0-SNAPSHOT.war /usr/local/tomcat/webapps/hello.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
19
• Namespaces
• Isolated view of the system provided by the kernel
• Different types: NET, PID, MNT, UTS, IPC, USER
• Control groups
• Manages resource usage for a set of processes
• CPU, memory, disk, network I/O
• Union file systems
Containers build on (Linux) OS primitives
20
Docker Components
https://docs.docker.com/engine/docker-overview/#docker-architecture
21
Container runtimes
https://docs.docker.com/engine/docker-overview/#docker-engine
• Images
• Networks
• Volumes
22
Docker Commands
• docker ps
• docker images
• docker rm
• docker rmi
• docker inspect
• docker exec
• docker system prune
Microservices
24
25
An early motivation for Microservices was scalability
* The Art of Scalability; AKF Scale Cube
http://akfpartners.com/growth-blog/scale-cube
26
X-Axis Scaling
scale by replicating the entire application
[Diagram: Client → Load Balancer → three replicated Application instances]
27
Y-Axis Scaling
scale by splitting the application
[Diagram: Client → Load Balancer routing /catalog, /customers, and /orders to the Catalog, Customers, and Orders services]
28
Z-Axis Scaling
[Diagram: Client → Load Balancer sharding /catalog [A–I], /catalog [J–R], and /catalog [S–Z] across Catalog Service 1, 2, and 3]
scale by splitting the data
29
Microservices combine X- and Y-axis scaling
scale by splitting the application and replicating services
[Diagram: Client → Load Balancer routing /catalog, /customers, and /orders to replicated Catalog, Customers, and Orders services]
30
Microservices combine X- and Y-axis scaling
scale by splitting the application and replicating services
Best scalability · Best efficiency
[Diagram: Client → Load Balancer routing /catalog, /customers, and /orders to replicated Catalog, Customers, and Orders services]
31
1. Componentization via services
2. Organized around business capabilities
3. Products not projects
4. Smart endpoints and dumb pipes
5. Decentralized Governance
6. Decentralized Data Management
7. Infrastructure Automation
8. Design for failure
9. Evolutionary Design
Microservices: Characteristics
Martin Fowler, GOTO Berlin 2014
32
Microservices: 5 Constraints
1. Elastic
scales up or down independently of other services
2. Resilient
services provide fault isolation boundaries
3. Composable
uniform APIs for each service
4. Minimal
highly cohesive set of entities
5. Complete
loosely coupled with other services
33
Microservices ++
• Improved fault isolation
• Improved scalability
• Small code set is easier to learn and manage
• Small autonomous teams with clear ownership
• Can choose, evolve, and experiment with the best technologies for each service
• Enables continuous delivery

Microservices --
• Distributed systems are hard!
• Increased operational complexity
• Increased latencies
Kubernetes
35
Kubernetes is an open-source system for
automating deployment, scaling, and
management of containerized applications.
36
• Originally developed by Google, based on 15 years of
experience running production containerized
workloads.
Kubernetes
• Now governed by the CNCF
• Community-driven
• Robust, scalable, and
extensible
37
38
Kubernetes Architecture
• Nodes
• Master
• Worker
• Components
• Add-ons
• Cloud Provider
• Networking
• Storage
39
Pods
• Pods are the basic unit of
deployment and management
• A Pod can contain multiple
containers
• All containers in a Pod have
the same network and storage
https://kubernetes.io/docs/concepts/workloads/pods/pod/
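A minimal two-container Pod sketch (names, images, and paths are illustrative): both containers share the Pod's network namespace and can mount the same volume, which is what makes the sidecar pattern work:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}          # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.15
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper     # sidecar; reaches "web" via localhost
    image: busybox:1.28
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```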
Kubernetes for Ops
41
Master nodes run Kubernetes components
• kube-apiserver
front-end for the Kubernetes control-plane
• etcd
datastore for the cluster
• kube-controller-manager
controllers for routine cluster tasks
• cloud-controller-manager
controllers specific to cloud providers
• kube-scheduler
assigns Pods to nodes
42
Worker Nodes
• Kubelet
manages pods, executes liveness probes, reports pod and node status.
• kube-proxy
network proxy; performs connection forwarding
• docker / rkt / containerd
the container engine
• Add-ons
Dashboard, DNS, etc.
• Your application pods
43
K8s networking follows 3 principles:
1. All containers can communicate with all other
containers without NAT
2. All nodes can communicate with all
containers (and vice-versa) without NAT
3. The IP that a container sees itself as is the
same IP that others see it as
Networking
44
• Each pod gets its own IP address
• CNI (Container Network Interface) is the plugin
model used by the Kubelet to invoke the networking
implementation
• CNI plugins: Calico, Contiv, Flannel, GCE, …
Kubernetes Networking
45
• Pods can contain one or more Volumes
Volume types: emptyDir, hostPath, persistentVolumeClaim, secret,
awsElasticBlockStore, azureDisk, …
• A PersistentVolumeClaim (PVC) requests a
PersistentVolume (PV) that can be dynamically provisioned.
• Admins can configure StorageClasses for PVCs,
like “bronze”, “silver”, or “gold”.
• A storage class is configured with a Provisioner,
like AzureDisk.
Storage
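The pieces above fit together roughly as follows (the class name and size are made up; the provisioner is the standard AzureDisk one):

```yaml
# A hypothetical "silver" class backed by the AzureDisk provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
provisioner: kubernetes.io/azure-disk
---
# A PVC requesting 10Gi from that class; a PV is provisioned on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: silver
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```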
Kubernetes for Devs
47
Dude, where’s my app?
48
Pods
• Contains
• One or more Containers
• One or more PVCs
• Other constructs
• nodeSelector
• affinity
• serviceAccountName
• secrets
• initContainers
[Diagram: Pod containing a Container, Secrets, and a Persistent Volume Claim]
49
• Pods can be managed individually, but don’t do this!
• Pod lifecycles are best managed using one of:
• Deployments
• StatefulSets
• DaemonSets
• Less often used:
• ReplicaSets (Deployments manage ReplicaSets)
• Jobs (short-lived run-to-completion tasks)
Managing Pods
50
• Deployments automatically create
(and delete) ReplicaSets
• Rollout: a new ReplicaSet is created and scaled up. The
existing ReplicaSet is scaled down.
• Rollback: only impacts the Pod template. Can roll back
to a specific revision ID.
• Rolling upgrade strategy tunables:
• maxUnavailable
• maxSurge
Deployments
[Diagram: Deployment → Replica Set → Pod Template]
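A hedged Deployment sketch showing the rolling-upgrade tunables (name, image, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during a rollout
      maxSurge: 1         # at most one extra Pod above the replica count
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.15
```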
51
• Pods with stable identities
• names, network, storage
• Ordered creation, updates, scaling, and deletion
• Pods are created, and named, in order from {0…N-1}
• Use for clustered apps that use client-side identities
• ZooKeeperAddresses: “zoo-1:2181, zoo-2:2181, zoo-3:2181”
Stateful Sets
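A hedged StatefulSet sketch for a clustered app like ZooKeeper (image tag and labels are assumptions). With serviceName pointing at a headless Service, the Pods get stable names zoo-0, zoo-1, zoo-2, created in order:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zoo
spec:
  serviceName: zoo        # headless Service providing stable DNS names
  replicas: 3             # creates zoo-0, zoo-1, zoo-2, in order
  selector:
    matchLabels:
      app: zoo
  template:
    metadata:
      labels:
        app: zoo
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4
```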
52
• Pods that need to run on all Nodes
i.e. “one per node”
• Useful for monitoring & security agents, log
daemons, etc.
• A node selector can be used to target a subset of
nodes
Daemon Sets
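A hedged DaemonSet sketch for a log agent (names, image, and the node label are illustrative); drop the nodeSelector to run one Pod on every node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:       # optional: target only a labeled subset of nodes
        role: logging
      containers:
      - name: agent
        image: fluent/fluentd:v1.2
```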
53
• Service
• provides load-balancing. Addressed via IP (cluster IP) or a
DNS name.
• Network Policy
• manages routing rules across pods (east-west traffic)
• Ingress
• manages external routes to services (north-south traffic). An
Ingress Controller does the load-balancing; Ingress Resources
specify the rules.
Networking your app
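A hedged sketch tying Service and Ingress together (names and host are illustrative; the Ingress API group shown is the one current as of 2018):

```yaml
# A Service load-balancing across Pods labeled app=hello.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
---
# An Ingress Resource routing external traffic for one host to the Service;
# an Ingress Controller in the cluster enforces the rule.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: 80
```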
54
Configuring storage for your app
Source: Steve Watt, Red Hat
55
[Diagram: a Deployment or Stateful Set managing Pods, connected to a Service, Ingress, Network Policy, PVC, Secret, and Config Map]
Managing K8s Manifests
Dev and Ops maintain environment-specific configurations
Summary
57
http://www.cncf.io
58
Enterprises are adopting containers
for agility, portability, and increased
efficiency
Kubernetes is the best solution to
orchestrate containers at scale
Application Management tools like
Nirmata can manage Kubernetes
clusters and workloads on any cloud
Cloud-Native enables Enterprise Agility
Thank you!
https://try.nirmata.io/
Azure Meetup: Cloud Native Concepts, May 28th 2018