
Effective Kubernetes - Is Kubernetes the new Linux? Is it the new Application Server?


I will tell you two stories about two different implementations of Kubernetes: one from a fashion mobile e-commerce company, one from a fintech. Kubernetes is not a silver bullet. But damn close ;).

Published in: Technology


  1. EFFECTIVE PLATFORM BUILDING WITH KUBERNETES. IS K8S THE NEW LINUX? Wojciech Barczynski - SMACC.io | Hypatos.ai, September 2018
  2. WOJCIECH BARCZYŃSKI Lead Software Engineer & System Engineer. Interests: working software. Hobby: teaching software engineering
  3. BACKGROUND ML FinTech ➡ micro-services and k8s. Before: one of the top 10 Indonesian mobile e-commerce companies (Rocket Internet). Spent 3.5y with OpenStack, 1000+ nodes, 21 data centers. I do not like INFRA :D
  4. STORY Lyke [12.2016 - 07.2017], SMACC [10.2017 - present]
  5. KUBERNETES
  6. WHY? Administration is hard and costly: virtual machines, Ansible, Salt, etc. Too many moving parts. Never-ending standardization
  7. MICROSERVICES AAA!
  8. WHY? Cloud is not so cheap - $$$
  9. IMAGINE You do not need to think about IaaS, no logging in on a VM, less gold-plating your CI/CD ... the DC as a black box
  10. KUBERNETES Container management. Service and application mindset. Simple semantics*. Independent from the IaaS provider
  11. KUBERNETES Batteries for your 12-factor apps: service discovery, metadata support. Utilize resources to nearly 100%
  12. KUBERNETES [diagram: four-node cluster, App behind an Ingress Controller, images pulled from a repository] make docker_push; kubectl create -f app-srv-dpl.yaml
  13. SCALE UP! SCALE DOWN! [diagram: App scaled to 3 replicas] kubectl scale --replicas=3 -f app-srv-dpl.yaml
  14. SCALE UP! SCALE DOWN! [diagram: App scaled back to 1 replica] kubectl scale --replicas=1 -f app-srv-dpl.yaml
  15. ROLLING UPDATES! [diagram: new App pods rolled out next to the old ones] kubectl set image deployment/app app=app:v2.0.0
  16. ROLLING UPDATES! [diagram continued]
  17. ROLLING UPDATES! [diagram continued]
  18. RESILIENCE! [diagram: failed App pods rescheduled onto healthy nodes]
  19. RESILIENCE! [diagram continued]
  20. RESILIENCE! [diagram continued]
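The scale and rolling-update commands on the slides above all operate on a Deployment. A minimal sketch of what the referenced app-srv-dpl.yaml could contain (the names, image tag, and rollout settings are assumptions, not the talk's actual file):

```yaml
# Hypothetical app-srv-dpl.yaml: a Deployment the kubectl commands above could target.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1                  # kubectl scale --replicas=3 changes this value
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate        # old pods are replaced gradually, as on slides 15-17
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app              # kubectl set image deployment/app app=app:v2.0.0
        image: app:v1.0.0
        ports:
        - containerPort: 8080
```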
  21. HOW TO GET USER REQUESTS? [diagram: Internet traffic enters through an Ingress Controller (API listener) into the private network and is routed to API, WEB, ADMIN, DATA, and BACKOFFICE services by the orchestrator (Docker, Swarm, Mesos...); API.DOMAIN.COM, DOMAIN.COM/WEB, BACKOFFICE.DOMAIN.COM]
  22. INGRESS Pattern ➡ Target App Service: api.smacc.io/v1/users ➡ users-v1; api.smacc.io/v2/users ➡ users-v2; smacc.io ➡ web
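The routing table on this slide maps host/path patterns to Services; in Kubernetes this is declared as an Ingress resource. A sketch for the first two rows, using the current networking.k8s.io/v1 API (backend ports are assumptions):

```yaml
# Hypothetical Ingress matching the slide's routing table.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
  - host: api.smacc.io
    http:
      paths:
      - path: /v1/users
        pathType: Prefix
        backend:
          service:
            name: users-v1     # pattern api.smacc.io/v1/users ➡ users-v1
            port:
              number: 80
      - path: /v2/users
        pathType: Prefix
        backend:
          service:
            name: users-v2     # pattern api.smacc.io/v2/users ➡ users-v2
            port:
              number: 80
```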
  23. LOAD BALANCING [diagram: an external Load Balancer spreads requests across Kubernetes workers on NodePort 30000, from where they reach the user pods]
  24. SERVICE DISCOVERY Names in DNS: curl http://users/list. Labels: name=value. Annotations: prometheus.io/scrape: "true"
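A Service carrying the labels and Prometheus annotation from the slide might be declared like this (port numbers are assumptions); the Service name becomes the DNS name used in `curl http://users/list`:

```yaml
# Hypothetical Service: DNS name "users", labeled and annotated for Prometheus scraping.
apiVersion: v1
kind: Service
metadata:
  name: users                  # resolvable in-cluster as http://users/
  labels:
    name: users
  annotations:
    prometheus.io/scrape: "true"
spec:
  selector:
    app: users
  ports:
  - port: 80
    targetPort: 8080
```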
  25. SERVICE DISCOVERY Loosely coupled components, auto-wiring with logging and monitoring
  26. DROP-IN traefik / Ingress / Envoy, prometheus, audit checks ...
  27. THE BEST PART All lives in git, all in YAML: integration with monitoring and alerting, integration with the ingress-controller ... Devs can forget about infrastructure... almost. The DevOps culture dream!
  28. LYKE Now JollyChic Indonesia. E-commerce, mobile-only, 50k+ users, 2M downloads, top 10 fashion app in the Google Play Store http://www.news.getlyke.com/single-post/2016/12/02/Introducing-the-New-Beautiful-LYKE
  29. GOOD PARTS Fast growth, A/B testing, data-driven. Product manager, UI designer, mobile dev, and tester in one body
  30. CHALLENGES 50+ VMs in Amazon, 1 VM - 1 app, idle machines. Puppet, a hilarious (manual) deployment process. Fear. Forgotten components. Occasional performance issues
  31. APPROACH 1. Simplify the infrastructure 2. Change the development practices 3. Change the work organization (see: Conway's law)
  32. SIMPLIFY 1. Kubernetes with Google Kubernetes Engine 2. Terraform for everything new
  33. SIMPLIFY 1. Prometheus, AlertManager, and Grafana 2. Elasticsearch-Fluentd-Kibana 3. Google Identity-Aware Proxy to protect all dev dashboards 4. 3rd-party SaaS: StatusCake and OpsGenie
  34. CONTINUOUS DEPLOYMENT Branch-based: master, staging, production; independent per repo
  35. TRAVISCI 1. Run tests 2. Build the Docker image 3. Push it to Google Container Registry 4. Deploy only the new Docker image to k8s 5. No config applied
  36. GIT REPO tools/ (kube-service.yaml, kube-deployment.yaml), Dockerfile, VERSION, Makefile
  37. Makefile Copied and pasted from project to project: SERVICE_NAME=v-connector, GCP_DOCKER_REGISTRY=eu.gcr.io; targets: test (test_short, test_integration), run_local, docker_build, docker_push, kube_create_config, kube_apply, kube_deploy
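A minimal sketch of such a copy-and-paste Makefile, assuming the variables and target names listed on the slide (the registry project, tag scheme, and target bodies are guesses, not the talk's actual file):

```makefile
# Hypothetical per-repo Makefile in the spirit of the slide.
# Note: make requires hard tabs at the start of recipe lines.
SERVICE_NAME=v-connector
GCP_DOCKER_REGISTRY=eu.gcr.io
GCP_PROJECT?=my-gcp-project        # assumed, not from the talk
VERSION=$(shell cat VERSION)
IMAGE=$(GCP_DOCKER_REGISTRY)/$(GCP_PROJECT)/$(SERVICE_NAME):$(VERSION)

test: test_short test_integration

run_local:
	docker-compose up --build

docker_build:
	docker build -t $(IMAGE) .

docker_push: docker_build
	docker push $(IMAGE)

kube_create_config:
	# render templates if environments differ (tool choice is an assumption)
	envsubst < tools/kube-deployment.yaml > build/kube-deployment.yaml

kube_apply:
	kubectl apply -f build/

kube_deploy: docker_push kube_apply
```

Because CI runs the same targets as a developer (slide 42), the Makefile stays the single entry point for both.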
  38. 1. CLEAN UP A single entry script per repo - the Makefile [1]. Resurrect the README. [1] With a zsh or bash auto-completion plug-in in your terminal.
  39. 2. GET BACK ALL THE KNOWLEDGE Puppet, Chef, ... ➡ Dockerfile. Check the instances ➡ Dockerfile, README.rst. Nagios, ... ➡ README.rst, checks/
  40. 3. INTRODUCE RUN_LOCAL make run_local, with a nice section on how to run it in README.rst. Use docker-compose. The most crucial point.
  41. 4. GET TO KUBERNETES make kube_create_config; make kube_apply. Generate the YAML files if your environments differ
  42. 5. CONTINUOUS DEPLOYMENT Travis uses the same Makefile as a developer
  43. 6. KEEP IT RUNNING Bridge the new with the old: use external services in Kubernetes. Optional: expose k8s in the legacy stack [1]. [1] Feeding k8s events to HashiCorp Consul
  44. Bridge the new with the old [diagram: Kubernetes on Google reaches RabbitMQ on an AWS EC2 instance through a Service <External>; a Prometheus exporter scrapes http://rabbitmq:15672 and exposes port 9090] Monitor the legacy with the new stack
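One common way to model the slide's Service <External> is a selector-less Service plus a manually managed Endpoints object (a Service of type ExternalName is the alternative when the legacy host has a DNS name). The IP below is an assumption for illustration:

```yaml
# Hypothetical bridge: a k8s Service "rabbitmq" pointing at a legacy EC2 host,
# so in-cluster clients can use http://rabbitmq:15672 as on the diagram.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - name: management
    port: 15672
---
apiVersion: v1
kind: Endpoints
metadata:
  name: rabbitmq               # must match the Service name to be picked up
subsets:
- addresses:
  - ip: 10.0.12.34             # assumed private IP of the EC2 instance
  ports:
  - name: management
    port: 15672
```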
  45. Architecture During Migration [diagram: staging K8S (6 nodes) and production K8S (10 nodes) on Google behind Google LBs; legacy live (30+) and staging (10+) machines on AWS behind ELBs, connected over VPN; CloudFront (live/staging) in front of api.x / staging-api.x and lyke.x / staging-lyke.x; Prometheus, EFK (2), a 4-node ElasticSearch cluster, Lambdas]
  46. 7. INTRODUCE SMOKE-TEST TARGET_URL=127.0.0.1 make smoke_test; TARGET_URL=api.example.com/users make smoke_test
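A smoke_test target of roughly this shape keeps the same check runnable locally and in CI (the endpoint path and curl flags are assumptions):

```makefile
# Hypothetical smoke_test target: fail loudly if the service does not answer.
# Note: make requires hard tabs at the start of recipe lines.
TARGET_URL?=127.0.0.1

smoke_test:
	curl --fail --silent --show-error --max-time 10 "http://$(TARGET_URL)/" > /dev/null
	@echo "smoke test passed for $(TARGET_URL)"
```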
  47. 8. MOVE TO MICRO-SERVICES To offload the biggest components: keep the lights on; delegate new functionality to micro-services
  48. 9. SERVICE SELF-CONSCIOUSNESS Add to the old services: 1. metrics/ 2. health/ 3. info/
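Once the old services expose health/, Kubernetes can use it directly for liveness and readiness probes. A hedged container-spec fragment (paths, port, and timings are assumptions):

```yaml
# Hypothetical container spec fragment wiring the health/ endpoint into k8s probes.
containers:
- name: legacy-service
  image: legacy-service:1.0    # assumed image name
  ports:
  - containerPort: 8080
  livenessProbe:               # restart the container if health/ stops answering
    httpGet:
      path: /health/
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
  readinessProbe:              # remove from Service endpoints until health/ is up
    httpGet:
      path: /health/
      port: 8080
    periodSeconds: 5
```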
  49. 10. GET PERFORMANCE TESTING Introduce wrk for evaluating performance; load test the real system
  50. WHAT WORKED 1. Copy-and-pasting the Makefile and k8s YAMLs between repos 2. Separate deployments - a good transition strategy
  51. WHAT DID NOT WORK 1. Too many PoCs; should have cut them to 2 weeks max 2. Should have worked in smaller chunks 3. Alert rules too hard to write 4. Push-back against the k8s YAMLs [*] [*] With coaching, I thought it was OK
  52. DO DIFFERENTLY 1. Move dev and staging data immediately 2. Let devs know it is a transition stage 3. Teach about resources earlier 4. EFK could have waited 5. A world-stop, paid-XXX% weekend for the migration
  53. STORY Legacy on AWS, experiments with AWS ECS :/ Self-hosted k8s on ProfitBricks. Got into the Microsoft ScaleUp program, welcome Azure. Luckily - AKS
  54. AZURE KUBERNETES SERVICE Independent from the IaaS. Our OnPrem = our OnCloud. Consolidation of our micro-services. Plug and play, e.g., monitoring
  55. SIMPLICITY az aks CLI for setting up k8s - README.rst. Terraform for everything else. 1Password and gopass.pw. TF also sets up our AWS
  56. DIFFERENCE ☠ Two teams, in Berlin and Warsaw. Me in Warsaw
  57. NEW EXPERIENCE Devs really do not like TravisCI ... or k8s YAMLs. The transition from ProfitBricks to AKS was painful
  58. SOLUTION Make everything lighter: copy & paste without modifications, hide the k8s, remove the magic, deploy on tag. Similar to the Kelsey Hightower approach
  59. Repo .travis.yml: language: go; go: '1.10'; services: docker; install: curl -sL https://${GITHUB_TOKEN}@raw.githubusercontent.com..., if [ -f "tools/travis/install.sh" ]; then bash tools/travis/install.sh; fi; script: dep ensure, make lint, make test, if [ -z "${TRAVIS_TAG}" ]; then make snapshot; fi; deploy: provider: script
  60. Makefile tools/ (Makefile, kube-service.yaml, kube-deployment.yaml, Dockerfile), Makefile
  61. CONTINUOUS DEPLOYMENT GitHub ➡ TravisCI ➡ hub.docker.com ➡ AKS
  62. PROCESS 1. git tag and push
  63. PROCESS 1. Generate the deploy, ingress, and svc Kubernetes files 2. Commit to smacc-platform.git on the staging branch 3. Deploy to the staging environment
  64. PROCESS 1. Create a PR in smacc-platform.git against the production branch 2. On merge, deploy to production
  65. smacc-platform 3 independent branches: dev, staging, and master. A target for other scripts
  66. KUBERNETES Pure, generated Kubernetes config. 2x Kubernetes operators
  67. WHAT WORKED Hiding k8s. Going for Ubuntu-based Docker images
  68. WOULD DO DIFFERENTLY Be more sensitive to feedback
  69. NEXT Acceptance tests on every deployment. Scale our ML trainings on top of k8s. A deployment tool based on missy. Keeping an eye on Istio
  70. K8S - LINUX Kubernetes is not a silver bullet, but damn close. A common runtime for onPrem and onCloud. The biggest asset - the API. With service discovery - an integration platform. With kubevirt - it might replace your OpenStack
  71. THANK YOU. QUESTIONS? ps. We are hiring.
  72. BACKUP SLIDES
  73. HIRING Senior polyglot software engineers, experienced system engineers, front-end engineers, 1 data-driven product manager. Apply: hello-warsaw@smacc.io. Questions? wojciech.barczynski@smacc.io. We will teach you Go if needed. No k8s or ML experience required, we will take care of that.
  74. 0.1 ➡ 1.0
  75. CHANGE THE WORK ORGANIZATION From Scrum to Kanban. For the next talk
  76. KUBERNETES CONCEPTS
  77. [diagram: the Master schedules Deployments; Pods with Docker containers are scheduled onto Nodes]
  78. [diagram: a Service gives the Pods of a Deployment a fixed virtual address and a fixed DNS entry]
  79. PODS See each other on localhost. Live and die together. Can expose multiple ports. [diagram: a Pod with nginx and WebFiles containers; ENV: HOSTNAME=0.0.0.0, LISTENPORT=8080]
  80. SIDE-CARS [diagram: a Pod with memcached (11211) and a prometheus exporter (9150); a Pod with an app (8080) and swagger-ui (80)]
  81. BASIC CONCEPTS Name ➡ Purpose: Service = interface, the entry point (Service name); Deployment = factory, how many pods and which pods; Pod = implementation, 1+ Docker containers running
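The three concepts in the table compose as follows; a minimal, assumed example in which a Deployment creates the Pods and a Service is their stable entry point (names and image are illustrative only):

```yaml
# Hypothetical minimal pair: Deployment = factory, Service = interface, Pod = implementation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                  # how many pods
  selector:
    matchLabels:
      app: hello               # which pods
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello            # the Pod: 1+ containers running
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello                  # entry point: in-cluster DNS name "hello"
spec:
  selector:
    app: hello
  ports:
  - port: 80
```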
  82. ROLLING RELEASE WITH DEPLOYMENTS [diagram: a Service selects Pods via labels; Deployments create the Pods; two Deployments behind one Service are also possible]
