Kubernetes can orchestrate and manage container workloads through components like Pods, Deployments, DaemonSets, and StatefulSets. It schedules containers across a cluster based on resource needs and availability. Services enable discovery and network access to Pods, while ConfigMaps and Secrets allow injecting configuration and credentials into applications.
Overview of Kubernetes and its use as a DevOps cluster management framework.
Problems with deployment via kube-up.sh, and improving Kubernetes on AWS via a custom CloudFormation template.
In this meetup, Liran Cohen, Cloud Platform & DevOps Team Leader, will talk about some of Kubernetes' key concepts. We will learn about the architecture of the system, the different resources available in it, the problems it is trying to solve, and the model it uses to manage containerized application deployments.
2. Kubernetes
• Created by Google Borg/Omega team
• Founded and operated by CNCF (Linux Foundation)
• Container orchestration, scheduling and management
• One of the most popular open source projects in the world
10. Take Aways
• Independent control loops
• loosely coupled
• high performance
• easy to customize and extend
• “Watch” object change
• Decide next step based on state change
• level driven (state), not edge driven (event)
12. Co-scheduling
• Two containers:
• App: generate log files
• LogCollector: read and redirect logs to storage
• Request MEM:
• App: 1G
• LogCollector: 0.5G
• Available MEM:
• Node_A: 1.25G
• Node_B: 2G
• What happens if App is scheduled to Node_A first?
13. Pod
• Deeply coupled containers
• Atomic scheduling/placement unit
• Shared namespace
• network, IPC etc
• Shared volume
• Process group in container cloud
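A minimal sketch of the App + LogCollector pair from the co-scheduling slide as a single Pod (image names and mount paths are illustrative, not from the deck):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-collector
spec:
  containers:
  - name: app                      # generates log files
    image: my-app:latest           # hypothetical image
    resources:
      requests:
        memory: "1Gi"
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-collector            # reads and ships the logs
    image: my-log-collector:latest # hypothetical image
    resources:
      requests:
        memory: "512Mi"
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}                   # volume shared by both containers
```

Because the Pod is the atomic scheduling unit, the scheduler sees a single 1.5G request and can only place it on Node_B, avoiding the stranded-resource problem from slide 12.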
14. Why co-scheduling?
• It’s about using container in right way:
• Lesson learnt from Borg: “workloads tend to have tight relationship”
15. Ensure Container Order
• Decouple web server and application
• war file container
• tomcat container
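One common way to enforce this ordering is an init container that copies the war file into a volume shared with the tomcat container; the sketch below uses hypothetical image names (the official Kubernetes docs contain a similar javaweb example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: javaweb
spec:
  initContainers:
  - name: war                       # runs to completion before tomcat starts
    image: sample-war:latest        # hypothetical image containing sample.war
    command: ["cp", "/sample.war", "/app/sample.war"]
    volumeMounts:
    - name: app-volume
      mountPath: /app
  containers:
  - name: tomcat
    image: tomcat:8                 # serves the war copied by the init container
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-volume
      mountPath: /usr/local/tomcat/webapps
  volumes:
  - name: app-volume
    emptyDir: {}
```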
16. Multiple Apps in One Container?
• Wrong!
Master Pod
kube-apiserver
kube-scheduler
controller-manager
• Logs are invisible
• No way to tell whether each process is running
• Day-to-day operations become difficult
• Troubleshooting is painful: you can't tell which process died, so you keep logging into the container
17. Copy Files from One to Another?
• Wrong!
Master Pod
kube-apiserver
kube-scheduler
controller-manager
/etc/kubernetes/ssl
18. Connect to Peer Container through IP?
• Wrong!
Master Pod
kube-apiserver
kube-scheduler
controller-manager
network namespace
19. So this is Pod
• Design pattern in container world
• decoupling
• reuse & refactoring
• Describe more real-world workloads by container
• e.g. ML
• Parameter server and trainer in same Pod
22. Resource Model
• Compressible resources
• Hold no state
• Can be taken away very quickly
• “Merely” cause slowness when revoked
• e.g. CPU
• Non-compressible resources
• Hold state
• Are slower to be taken away
• Can fail to be revoked
• e.g. Memory, disk space
Kubernetes (and Docker) can only handle CPU & memory; they don't (yet) handle things like memory bandwidth, disk time, cache, or network bandwidth.
23. Resource Model
• Request: amount of a resource allowed to be used, with a strong guarantee of availability
• CPU (seconds/second), RAM (bytes)
• Scheduler will not over-commit requests
• Limit: max amount of a resource that can be used, regardless of guarantees
• scheduler ignores limits
• Mapping to Docker
• --cpu-shares=requests.cpu
• --cpu-quota=limits.cpu
• --cpu-period=100ms
• --memory=limits.memory
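As a sketch, the request/limit model looks like this in a Pod spec (all values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: main
    image: nginx                # illustrative image
    resources:
      requests:
        cpu: "250m"             # guaranteed share; maps to --cpu-shares
        memory: "64Mi"          # counted by the scheduler, never over-committed
      limits:
        cpu: "500m"             # hard cap; maps to --cpu-quota per --cpu-period
        memory: "128Mi"         # maps to --memory
```

With --cpu-period=100ms, the 500m CPU limit translates to a --cpu-quota of 50ms per period.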
24. QoS Tiers and Eviction
• Guaranteed
• limits is set for all resources, all containers
• limits == requests (if set)
• Not killed unless they exceed their limits
• or if the system is under memory pressure and there are no lower-priority containers that can be killed
• Burstable
• requests is set for one or more resources, one or more containers
• limits (if set) != requests
• killed when they exceed their requests, if the system is under memory pressure and no Best-Effort pods are left to kill
• Best-Effort
• requests and limits are not set for all of the resources, all containers
• First to get killed if the system runs out of memory
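A minimal illustration of the tiers (values are arbitrary):

```yaml
# Guaranteed: requests == limits for every resource in every container
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed
spec:
  containers:
  - name: main
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"          # equal to the request
        memory: "128Mi"      # equal to the request
# Keeping requests but dropping (or raising) limits would make the Pod
# Burstable; setting neither requests nor limits would make it Best-Effort.
```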
25. Scheduler
• Predicates
• NoDiskConflict
• NoVolumeZoneConflict
• PodFitsResources
• PodFitsHostPorts
• MatchNodeSelector
• MaxEBSVolumeCount
• MaxGCEPDVolumeCount
• CheckNodeMemoryPressure
• eviction, QoS tiers
• CheckNodeDiskPressure
• Priorities
• LeastRequestedPriority
• BalancedResourceAllocation
• SelectorSpreadPriority
• CalculateAntiAffinityPriority
• ImageLocalityPriority
• NodeAffinityPriority
• Design tips:
• watch and sync podQueue
• schedule based on cached info
• optimistically bind
• predicates are parallelized across nodes
• priorities are parallelized across functions, in a Map-Reduce way
28. Deployment
• Replicas with control
• Bring up a Replica Set and Pods.
• Check the status of a Deployment.
• Update that Deployment (e.g. new image, labels).
• Rollback to an earlier Deployment revision.
• Pause and resume a Deployment.
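The nginx Deployment used in the following slides could be written like this (apps/v1 syntax shown here; the extensions/v1beta1 API was current when this talk was given):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                # DESIRED count checked on the next slide
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9   # image version from the official example
        ports:
        - containerPort: 80
```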
29. Create
• ReplicaSet
• Next generation of ReplicationController
• --record: record the command in the annotation of ‘nginx-deployment’
30. Check
• DESIRED: .spec.replicas
• CURRENT: .status.replicas
• UP-TO-DATE: contains the latest pod template
• AVAILABLE: pod status is ready (running)
31. Update
• kubectl set image
• will change the container image
• kubectl edit
• open an editor and modify your deployment yaml
• RollingUpdateStrategy
• 1 max unavailable
• 1 max surge
• can also be a percentage
• Does not kill old Pods until a sufficient number of new Pods have come up
• Does not create new Pods until a sufficient number of old Pods have been killed.
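The strategy above maps to this fragment of the Deployment spec (a sketch):

```yaml
# Inside the Deployment's spec:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the update
      maxSurge: 1         # at most 1 Pod above the desired count; "25%" also works
```

An update can then be triggered with, e.g., `kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1`.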
32. Update Process
• The update process is coordinated by the Deployment Controller
• Create: the Replica Set (nginx-deployment-2035384211) is created and scaled up to 3 replicas directly.
• Update:
• created a new Replica Set (nginx-deployment-1564180365) and scaled it up to 1
• scaled down the old Replica Set to 2
• continued scaling the new Replica Set up and the old one down, with the same rolling update strategy
• Finally: 3 available replicas in the new Replica Set, and the old Replica Set is scaled down to 0.
38. Horizontal Pod Autoscaling
• Tips
• Scale out/in
• TriggeredScaleUp (GCE, AWS, will add more)
• Support for custom metrics
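A CPU-based autoscaler for the earlier nginx Deployment might look like this (autoscaling/v1 sketch; the name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80%
```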
39. Custom Metrics
• Endpoint (location to collect metrics from)
• Name of metric
• Type (Counter, Gauge, ...)
• Data Type (int, float)
• Units (kbps, seconds, count)
• Polling Frequency
• Regexps (regular expressions to specify which metrics to collect and how to parse them)
• The metric definition is added to the pod as a ConfigMap volume
(Diagram: Prometheus collecting custom metrics from Nginx)
45. Downward API
• Get these inside your pod as ENV or volume
• The pod’s name
• The pod’s namespace
• The pod’s IP
• A container’s cpu limit
• A container’s cpu request
• A container’s memory limit
• A container’s memory request
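A sketch of exposing these fields as environment variables (the image, command, and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "env"]   # print the injected variables and exit
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: main
          resource: limits.cpu
```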
47. Service
• The unified portal of replica Pods
• Portal IP:Port
• External load balancer
• GCE
• AWS
• HAproxy
• Nginx
• OpenStack LB
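A minimal Service matching the my-service name used in the iptables output on the next slide (the selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app            # hypothetical label carried by the replica Pods
  ports:
  - protocol: TCP
    port: 8001             # the cluster-IP port shown in the iptables rules
    targetPort: 80         # the port the backend Pods listen on
```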
48. Service Implementation
Tip: the IPVS solution works in NAT mode, which is essentially the same as this iptables approach
$ iptables-save | grep my-service
-A KUBE-SERVICES -d 10.0.0.116/32 -p tcp -m comment --comment "default/my-service: cluster IP" -m tcp --dport 8001 -j KUBE-SVC-KEAUNL7HVWWSEZA6
-A KUBE-SVC-KEAUNL7HVWWSEZA6 -m comment --comment "default/my-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-6XXFWO3KTRMPKCHZ
-A KUBE-SVC-KEAUNL7HVWWSEZA6 -m comment --comment "default/my-service:" -j KUBE-SEP-57KPRZ3JQVENLNBRZ
-A KUBE-SEP-6XXFWO3KTRMPKCHZ -p tcp -m comment --comment "default/my-service:" -m tcp -j DNAT --to-destination 172.17.0.2:80
-A KUBE-SEP-57KPRZ3JQVENLNBRZ -p tcp -m comment --comment "default/my-service:" -m tcp -j DNAT --to-destination 172.17.0.3:80
49. Publishing Services
• Use Service.Type=NodePort
• <node_ip>:<node_port>
• External IP
• IPs route to one or more cluster nodes (e.g. floating IP)
• Use external LoadBalancer
• Require support from IaaS (GCE, AWS, OpenStack)
• Deploy a service-loadbalancer (e.g. HAproxy)
• Official guide: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
50. Ingress
• The next-generation external Service load balancer
• Deployed as a Pod on a dedicated Node (with external network)
• Implementation
• Nginx, HAproxy, GCE L7
• External access for services
• SSL support for services
• …
• Example: requests to http://foo.bar.com and http://foo.bar.com/foo reach <IP_of_Ingress_node> and are routed to a backend service (s1)
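A sketch of the foo.bar.com example as an Ingress resource (networking.k8s.io/v1 syntax, which postdates this talk; the backend port of s1 is assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-and-path-routing
spec:
  rules:
  - host: foo.bar.com          # host-based routing
    http:
      paths:
      - path: /foo             # path-based routing under the host
        pathType: Prefix
        backend:
          service:
            name: s1           # service name from the slide
            port:
              number: 80       # assumed backend port
```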
53. StatefulSet: “clustered applications”
• Ordinal index
• startup/teardown ordering
• Stable hostname
• Stable storage
• linked to the ordinal & hostname
• Databases like MySQL or PostgreSQL
• single instance attached to a persistent volume at any time
• Clustered software like Zookeeper, Etcd, or Elasticsearch, Cassandra
• stable membership.
Update StatefulSet:
Scale: create/delete one by one
Scale in: will not delete old persistent volume
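A condensed version of the canonical web StatefulSet from the Kubernetes docs, illustrating ordinals, stable hostnames, and per-replica storage:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"          # headless Service providing stable hostnames
  replicas: 3                   # created one by one: web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # one PVC per ordinal, kept on scale-in
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```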
56. One Pod One IP
• Network sharing is important for affiliated containers
• Not all containers need an independent network
• The network implementation for a pod is exactly the same as for a single container
(Diagram: Pod = infra container + Container A + Container B)
--net=container:pause
/proc/{pid}/ns/net -> net:[4026532483]
57. Kubernetes uses CNI
• CNI plugin
• e.g. Calico, Flannel etc
• The kubelet cni flags:
• --network-plugin=cni
• --network-plugin-dir=/etc/cni/net.d
• CNI is very simple
1. Kubelet creates a network namespace for the Pod
2. Kubelet invokes the CNI plugin to configure the NS (interface name, IP, MAC, gateway, bridge name …)
3. The infra container in the Pod joins this network namespace
58. Tips
• host < calico(bgp) < calico(ipip) = flannel(vxlan) = docker(vxlan) < flannel(udp) < weave(udp)
• Test graph comes from: http://cmgs.me/life/docker-network-cloud
Network model comparison:
• Calico: pure Layer-3 solution
• Flannel: VxLAN or UDP channel
• Weave: VxLAN or UDP channel
• Docker Overlay Network: VxLAN
63. Persistent Volumes
• -v host_path:container_path
1. Attach networked storage to the host path (mounted at host_path)
2. Mount the host path as a container volume (bind mount container_path to host_path)
3. Independent volume control loop
64. Officially Supported PVs
• GCEPersistentDisk
• AWSElasticBlockStore
• AzureFile
• FC (Fibre Channel)
• NFS
• iSCSI
• RBD (Ceph Block Device)
• CephFS
• Cinder (OpenStack block storage)
• Glusterfs
• VsphereVolume
• HostPath (single node testing only)
• more than 20 supported volume types
• Write your own volume plugin: FlexVolume
1. Implement 10 methods
2. Put binary/shell in plugin directory
• example: LVM as k8s volume
65. Production ENV Volume Model
(Diagram: Pods mount volumes at mountPath via PersistentVolumeClaims, which bind to Persistent Volumes backed by host paths or networked storage)
• Key point: separation of concerns
66. PV & PVC
• System Admin:
• $ kubectl create -f nfs-pv.yaml
• create a volume with access mode, capacity, recycling mode
• Dev:
• $ kubectl create -f pv-claim.yaml
• request a volume with access mode, resource, selector
• $ kubectl create -f pod.yaml
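The nfs-pv.yaml / pv-claim.yaml pair referenced above might look like this (the server address, sizes, and names are illustrative):

```yaml
# nfs-pv.yaml — created by the system admin
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle   # recycling mode
  nfs:
    server: 10.0.0.10          # hypothetical NFS server
    path: /exports
---
# pv-claim.yaml — created by the developer; binds to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

The claim can then be mounted in pod.yaml by name, without the developer knowing anything about the underlying NFS server.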
67. More …
• GC
• Health check
• Container lifecycle hook
• Jobs (batch)
• Pod affinity and binding
• Dynamic provisioning
• Rescheduling
• CronJob
• Logging and monitoring
• Network policy
• Federation
• Container capabilities
• Resource quotas
• Security context
• Security policies
• GPU scheduling
68. Summary
• Q: Where do all these control plane ideas come from?
• A: Kubernetes = “Borg” + “Container”
• Kubernetes is a set of methodologies for using containers, based on 10+ years of experience inside Google
• “Don't cross the river by feeling for the stones”: there is no need to relearn these lessons by trial and error
• Kubernetes is a container-centric DevOps/workload orchestration system
• Not a “CI/CD”- or “micro-service”-focused container cloud
69. Growing Adopters
• Public Cloud
• AWS
• Microsoft Azure (acquired Deis)
• Google Cloud
• Tencent Cloud
• Baidu AI
• Alibaba Cloud
Enterprise Users