IBM Cloud Native NetflixOSS Services on Docker

    1. Cloud Native NetflixOSS Services on Docker
       Andrew Spyker (@aspyker), Sudhir Tonse (@stonse)
    2. Agenda
       • Introduction – NetflixOSS, Cloud Native with Operational Excellence, and IBM Cloud Services Fabric
       • Docker Local Port
       • Docker Cloud Port
    3. About Andrew
       • IBM – Cloud Performance Architecture and Strategy
       • How did I get into cloud?
         – Performance led to cloud scale, which led to cloud platforms
         – Created Mobile/Cloud Acme Air
         – Cloud platforms led to NetflixOSS, which led to winning the Netflix Cloud Prize for best sample application
         – Also ported to IBM Cloud – SoftLayer
         – Two years focused on IBM Cloud Services Fabric and Operations
       • RTP dad who enjoys technology as well as running, wine, and poker
       @aspyker ispyker.blogspot.com
    4. About Sudhir
       • Manages the Cloud Platform Infrastructure team at Netflix
       • Many of these components have been open sourced under the NetflixOSS umbrella
       • Sudhir is a weekend golfer and tries to make the most of the wonderful California weather and public courses
    5. NetflixOSS on GitHub
       • NetflixOSS is what it takes to run a cloud service and business with operational excellence
       • netflix.github.io
         – 40+ OSS projects
         – Expanding every day
       • Focusing more on interactive mid-tier server technology today
    6. NetflixOSS Categorized (diagram: components grouped by function, on AWS or other IaaS)
    7. NetflixOSS – Application Container/Services
       Function → NetflixOSS Library:
       • REST Framework/Bootstrapping/DI – Karyon/Governator
       • Resiliency/Fallback – Hystrix
       • RPC (Routing/LB) – Ribbon/Eureka
       • Distributed Co-ordination (Zookeeper) – Curator
       • Distributed Caching – EVCache
       • NoSQL (Cassandra) Persistence – Astyanax
       • Monitoring – Turbine
       • Metrics – Servo
       • Logging – Blitz4J
       • Functional Reactive Programming – RxJava
       • Properties/Configuration – Archaius (see the property sketch after this table)
       (Diagram: app instance with Karyon, Hystrix, IPC/smart LB, and Servo, handling service requests, data access/caching, and config/insights; Eureka server(s), Hystrix dashboard, metrics dashboard, and Cassandra alongside)
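       Archaius is the piece in this table that lets the same build behave correctly across environments. As a minimal sketch, assuming a hypothetical property name (not from the deck), a dynamic property read through Archaius can be changed at runtime without redeploying:

           import com.netflix.config.DynamicIntProperty;
           import com.netflix.config.DynamicPropertyFactory;

           public class BookingConfig {
               // Hypothetical property name; the value can be changed at runtime via Archaius
               // configuration sources without redeploying the service.
               private static final DynamicIntProperty MAX_RETRIES =
                   DynamicPropertyFactory.getInstance().getIntProperty("acmeair.booking.maxRetries", 3);

               public static int maxRetries() {
                   // Re-read on every call so configuration pushes take effect immediately.
                   return MAX_RETRIES.get();
               }
           }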
    8. Elastic, Web and Hyper Scale – Doing This, Not Doing That (Source: Programmableweb.com, 2012)
    9. Elastic, Web and Hyper Scale
       (Diagram: front end API for browser and mobile, authentication service, booking service, temporal caching, durable storage, load balancers)
       Strategy → Benefit:
       • Make deployments automated – Without automation it is impossible
       • Expose a well designed API to users – Offloads presentation complexity to clients
       • Remove state from mid-tier services – Allows easy elastic scale-out
       • Push temporal state to the client and caching tier – Leverages clients, avoids data-tier overload
       • Use partitioned data storage – Data design and storage scale with HA
    10. HA and Automatic Recovery – Feeling This, Not Feeling That
    11. Micro Service Implementation
        (Diagram: the web app front end (REST services) executes the "Auth Service" call through a Ribbon REST client with Eureka; Karyon, Hystrix with a fallback implementation, and Eureka server(s) keep the path highly available)
        Implementation Detail → Benefits:
        • Decompose into micro services – Key user path always available; failure does not propagate across service boundaries
        • Karyon with automatic Eureka registration – New instances are quickly found; failing individual instances disappear
        • Ribbon client with Eureka awareness – Load balances and retries across instances with "smarts"; handles temporal instance failure
        • Hystrix as dependency circuit breaker – Allows for fast failure; provides graceful cross-service degradation/recovery (see the sketch below)
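        A minimal sketch, not the deck's actual Acme Air code, of the Hystrix circuit breaker plus fallback pattern the table describes; the command name, fallback value, and stubbed remote call are hypothetical:

            import com.netflix.hystrix.HystrixCommand;
            import com.netflix.hystrix.HystrixCommandGroupKey;

            // Wraps the remote auth-service call in a circuit breaker with a graceful fallback.
            public class AuthServiceCommand extends HystrixCommand<String> {
                private final String userId;

                public AuthServiceCommand(String userId) {
                    super(HystrixCommandGroupKey.Factory.asKey("auth-service"));
                    this.userId = userId;
                }

                @Override
                protected String run() throws Exception {
                    // In the real application this would be a Ribbon/Eureka-aware REST call;
                    // it is stubbed here so the sketch stays self-contained.
                    return callAuthServiceOverRibbon(userId);
                }

                @Override
                protected String getFallback() {
                    // Fast-failure path: degrade gracefully instead of propagating the outage.
                    return "anonymous";
                }

                private String callAuthServiceOverRibbon(String userId) {
                    return "token-for-" + userId;  // placeholder for the remote call
                }
            }

            // Usage: new AuthServiceCommand("user01").execute();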
    12. IaaS High Availability
        (Diagram: Dallas region with DAL01, DAL05, and DAL06 datacenters; global load balancers, local LBs, Eureka, web app, auth service, booking service, and cluster auto recovery and scaling services)
        Rule → Why?
        • Always > 2 of everything – 1 is a SPOF; 2 doesn't web-scale and makes DR recovery slow
        • Including IaaS and cloud services – You're only as strong as your weakest dependency
        • Use auto scaler/recovery monitoring – Clusters guarantee availability and service latency
        • Use application-level health checks – An instance being on the network != healthy (illustrated below)
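        Karyon-based services typically expose their own health check; as a library-free illustration of the "on the network != healthy" rule, a hypothetical health endpoint (port and path are assumptions, not from the deck) might report 200 only when the app can actually do work:

            import com.sun.net.httpserver.HttpServer;
            import java.io.OutputStream;
            import java.net.InetSocketAddress;

            public class HealthCheck {
                public static void main(String[] args) throws Exception {
                    HttpServer server = HttpServer.create(new InetSocketAddress(8077), 0);
                    server.createContext("/healthcheck", exchange -> {
                        // Report healthy only when downstream dependencies are reachable,
                        // not merely because the process is up and answering on the network.
                        boolean healthy = dependenciesReachable();
                        byte[] body = (healthy ? "OK" : "UNHEALTHY").getBytes();
                        exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
                        try (OutputStream os = exchange.getResponseBody()) {
                            os.write(body);
                        }
                    });
                    server.start();
                }

                private static boolean dependenciesReachable() {
                    return true;  // placeholder: ping Cassandra, Eureka, downstream services, etc.
                }
            }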
    13. Chaos Testing – The Only Proof Is Testing!
        (Diagram: the same Dallas region topology with Chaos Gorilla taking out a datacenter)
        Videos: bit.ly/noss-sl-blog, http://bit.ly/sl-gorilla
    14. Continuous Delivery – Reading This, Not This
    15. Continuous Delivery
        (Diagram: cluster v1 → canary v2 → cluster v2; the continuous build server bakes to SoftLayer image templates (or AMIs))
        Step → Technology:
        • Developers test locally – Unit test frameworks
        • Continuous build – Continuous build server based on Gradle builds
        • Build "bakes" a full instance image – Imaginator (Aminator inspired) creates SoftLayer images
        • Developers work across dev and test – Archaius allows for environment-based context
        • Developers do canary tests and red/black deployments in prod – The Asgard console provides a common devops approach to app clusters, security patterns, and visibility
    16. Operational Visibility – If you can't see it, you can't improve it
    17. Operational Visibility
        Visibility Point → Technology:
        • Basic IaaS instance monitoring – Not enough (not scalable, not app-specific)
        • User-like external monitoring – SaaS offerings or OSS like Uptime
        • Service-to-service interconnects – Hystrix streams, Turbine aggregation, Hystrix dashboard
        • Application-centric metrics – Servo gauges, counters, and timers sent to a metrics store (see the sketch below)
        • Remote logging – Logstash/Kibana
        • Threshold monitoring and alerts – Services like PagerDuty for incident management
        (Diagram: web app and auth service emitting Servo metrics and Hystrix/Turbine streams to Uptime, metric/event repositories, Logstash/Elasticsearch/Kibana, and incident management)
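        A minimal sketch of the "application-centric metrics" row using Servo; the metric name is hypothetical, and publishing to the metrics store is left to whatever poller the environment configures:

            import com.netflix.servo.DefaultMonitorRegistry;
            import com.netflix.servo.monitor.BasicCounter;
            import com.netflix.servo.monitor.MonitorConfig;

            public class BookingMetrics {
                // Counter registered with the Servo registry so a metrics poller can
                // publish it to the metrics store.
                private static final BasicCounter BOOKINGS =
                    new BasicCounter(MonitorConfig.builder("acmeair.bookings").build());

                static {
                    DefaultMonitorRegistry.getInstance().register(BOOKINGS);
                }

                public static void recordBooking() {
                    BOOKINGS.increment();
                }
            }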
    18. Current IBM Cloud Services Fabric (currently VM based)
        (Diagram: region us-south-1 with datacenters DAL01, DAL05, and DAL06; fabric services and apps are clustered across the 3 DCs; your front-end and mid-tier services are built from your code plus tested base images with agents)
        1. Eureka
        2. Global and local load balancers
        3. Region and datacenters
        4. Cluster auto recovery and scaling services
        5. Asgard service (devops)
        6. Imaginator service (code and image build)
        7. Uptime service
        8. Logstash/Kibana
    19. Agenda
        • Introduction
        • Docker Local Port
          – Lessons Learned
          – Open Source
        • Docker Cloud Port
    20. Demo Start (start demo loading here)
    21. Docker "Local" Setup
        (Diagram: region docker-local with datacenters docker-local-1a/1b/1c; users reach a Zuul load balancer in front of the Acme Air web app and auth service; service discovery via Eureka plus Skydock/SkyDNS; cluster auto recovery & scaling via Microscaler; devops admins use the Asgard console; a Cassandra node provides storage. Blue and green boxes are container instances.)
    22. Why Docker for our work?
        • Because we could, actually…
          – To show the Netflix cloud platform is portable to non-VM clouds
          – To help with NetflixOSS understanding inside of IBM
        • Local testing
          – A "cloud in a box" is more production-like
          – Developers are able to do larger-scale testing
          – Continuous build/test tool systems are able to run at "scale"
        • Public cloud support
          – Understand how a container IaaS layer could be implemented
        • So far a proof of concept; you can help continue it
          – More on that later (hint: open source!)
    23. Two Service Location Technologies?
        (Diagram: the web app front end calls the auth service micro service via a Ribbon REST client with Eureka, registered through Karyon; on the Docker host, Skydock watches the Docker daemon event API and registers containers in SkyDNS, while the auth service also registers with the Eureka server(s))
    24. Service Location Lessons Learned
        • Both did their job well
          – SkyDNS/Skydock for basic container DNS; must be careful of DNS-caching clients (see the sketch below)
          – Eureka for application-level routing
        • Interesting to see the contrasts
          – Intrusiveness (Eureka requires on-instance/in-app changes)
          – Data available (DNS isn't application aware)
          – Application awareness (a running container != healthy code)
        • Points to the value of "above IaaS" service location registration
          – Transparent IaaS implementations struggle to be as application aware
        • More information on my blog: http://bit.ly/aws-sd-intr
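        The DNS-caching caveat matters because the JVM caches successful lookups, so a client can keep resolving a container name to a stale address after the container moves. A minimal sketch, assuming a hypothetical SkyDNS name (the TTL security property is a standard JVM knob):

            import java.net.InetAddress;
            import java.security.Security;

            public class SkyDnsLookup {
                public static void main(String[] args) throws Exception {
                    // Lower the JVM's DNS cache TTL so changes SkyDNS publishes are
                    // picked up quickly instead of being served from a stale cache.
                    Security.setProperty("networkaddress.cache.ttl", "5");

                    // Hypothetical SkyDNS name; the real name depends on the Skydock setup.
                    InetAddress addr = InetAddress.getByName("authservice.dev.docker");
                    System.out.println("auth-service resolves to " + addr.getHostAddress());
                }
            }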
    25. Instance Auto Recovery / Scaling
        • Auto scaling covers three important aspects (toy sketch below)
          – Devops cluster rolling versions
          – Auto recovery of instances after failure
          – Auto scaling due to load
        • Various NetflixOSS auto scalers
          – For NetflixOSS proper – Amazon Auto Scaler
          – For the SoftLayer port – RightScale Server Arrays
          – For the Docker local port – we implemented "Microscaler"
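        A toy control-loop sketch of those three aspects; this is not Microscaler's actual code, and every threshold, count, and helper here is hypothetical:

            import java.util.concurrent.TimeUnit;

            // Toy illustration of the jobs an auto scaler does: rolling cluster versions
            // (by adjusting the desired count), recovering failed instances, and scaling with load.
            public class ToyAutoScaler {
                private int desiredInstances = 2;        // devops can raise/lower this for rolling versions
                private static final int MAX_INSTANCES = 4;

                public void controlLoop() throws InterruptedException {
                    while (true) {
                        int healthy = countHealthyInstances();
                        if (healthy < desiredInstances) {
                            launchInstances(desiredInstances - healthy);   // auto recovery
                        } else if (loadPerInstanceTooHigh() && desiredInstances < MAX_INSTANCES) {
                            desiredInstances++;                            // auto scaling due to load
                        }
                        TimeUnit.SECONDS.sleep(30);
                    }
                }

                private int countHealthyInstances() { return 2; }          // placeholder: query health checks
                private boolean loadPerInstanceTooHigh() { return false; } // placeholder: query metrics
                private void launchInstances(int n) { /* placeholder: call the Docker remote API */ }
            }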
    26. Microscaler Agent Architecture
        • OSS at http://github.com/EmergingTechnologyInstitute/microscaler
        • The Microscaler service and agent are containers
        • Microscaler has a CLI remote client and a REST interface
        • Note:
          – No IBM support; an OSS proof of concept of the auto scaler needed for local usage
          – Works well for small-scale Docker local testing
        (Diagram: a Docker host running WebApp i001/i002 and AuthService i001/i002 containers, a Microscaler agent driving the Docker remote API, and the Microscaler service driven over REST or the CLI)
    27. Microscaler CLI/REST Usage
        • Login (CLI):
          ms login --target <API URL> --user user01 --key key
        • Login (REST):
          curl -X POST -H "Content-Type: application/json" -d '{"user":"user01","key":"key01"}' http://localhost:56785/asgcc/login
          {"status":"OK","token":"a28e7079-db0b-4235-8b9b-01c229e02e9a"}
        • Launch config (CLI):
          ms add-lconf --lconf-name lconf1 --lconf-image-id cirros --lconf-instances-type m1.small --lconf-key key1
        • Launch config (REST):
          curl -X POST -H "Content-Type: application/json" -H "authorization: a28…e9a" -d '{"name":"mylconf","image_id":"img1","instances_type":"m1.small","key":"keypair"}' http://localhost:56785/asgcc/lconfs
          {"status":"OK"}
        • ASG (CLI):
          ms add-ms --ms-name asg1 --ms-availability-zones docker01,docker02 --asg-launch-configuration lconf1 --asg-min-instances 1 --asg-max-instances 3 --asg-scale-out-cooldown 300 --asg-scale-in-cooldown 60 --asg-no-load-balancer --asg-domain docker.local.io
          ms start-ms --ms-name asg1
        • ASG (REST):
          curl -X POST -H "Content-Type: application/json" -H "authorization: a28…e9a" -d '{"name":"asg1","availability_zones":["az1"],"launch_configuration":"lconf1","min_instances":1,"max_instances":3}' http://localhost:56785/asgcc/asgs
          {"status":"OK"}
          curl -X PUT -H "Content-Type: application/json" -H "authorization: a28…e9a" http://localhost:56785/asgcc/asgs/myasg/start
          {"status":"OK"}
    28. Working with the Docker Remote API
        • Microscaler and Asgard need to work against the "IaaS" API
          – The Docker remote API to the rescue
          – Start and stop containers, query images and containers (see the sketch below)
        • Exposed http://172.17.42.1:4243 to both
          – Could (should) have used the socket
          – Be careful about security once you do this
        • Found that this needs to be easily configurable
          – boot2docker and docker.io default to different addresses
        • Found that the current API isn't totally documented
          – Advanced options aren't documented or shown in examples
          – Open source to the rescue (we read the service code)
          – Need to work on submitting pull requests for documentation
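        A minimal sketch of querying containers over the TCP-exposed remote API (GET /containers/json is a standard Docker remote API endpoint; the address is the one on the slide and would need to be configurable in practice):

            import java.io.BufferedReader;
            import java.io.InputStreamReader;
            import java.net.HttpURLConnection;
            import java.net.URL;

            public class DockerRemoteApiClient {
                public static void main(String[] args) throws Exception {
                    // Address from the slide; boot2docker and docker.io may expose the API
                    // elsewhere, so this should be configurable rather than hard-coded.
                    URL url = new URL("http://172.17.42.1:4243/containers/json");
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setRequestMethod("GET");

                    // The response is a JSON array describing the running containers.
                    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            System.out.println(line);
                        }
                    }
                }
            }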
    29. Region and Availability Zones
        • Coded Microscaler to assign availability zones
          – Via user_data in an environment variable
          – Need metadata about deployment in Docker eventually?
        • Tested Chaos Gorilla
          – Stop all containers in a single availability zone
        • Tested Split Brain Monkey
          – Jepsen inspired; used iptables to isolate the Docker network
        • Eureka awareness of availability zones isn't there yet
          – Should be an easy change based on the similar SoftLayer port
    30. Image Management
        • Docker and baked images are kindred spirits
        • Using locally built images – easy for a simple demo
        • Haven't yet pushed the images to Docker Hub
        • Considering an Imaginator (Aminator) extension
          – To allow Docker images to be built the same way we bake VM images
          – Considering http://www.packer.io/
          – Or maybe the other way around? Dockerfiles for VM images?
    31. Using Docker as an IaaS?
        • We do all the bad things
          – Our containers run multiple processes
          – Our containers use unmanaged TCP ports
          – Our containers run and allow ssh access
        • Good
          – Get all the benefits of Docker containers and images
          – Only small changes to the CSF/NetflixOSS cloud platform
        • Bad
          – Might not take full advantage of Docker (portability, container process optimizations, composability)
        • Considering more Docker-centric approaches over time
    32. Where can I play with this? All open source today!
        # on boot2docker or docker.io under VirtualBox Ubuntu
        git clone http://github.com/EmergingTechnologyInstitute/acmeair-netflixoss-dockerlocal
        cd bin
        # please read http://bit.ly/aa-noss-dl-license
        ./acceptlicenses.sh
        # get coffee (or your favorite caffeinated drink); ~30 min depending on download speed
        ./buildsimages.sh
        # this is FAST! – but wait about eight minutes for cross-topology registration
        ./startminimum.sh
        # route your network from the guest to the docker network (http://bit.ly/docker-tcpdirect)
        ./showipaddrs.sh
        # look at the environment (Zuul front end, Asgard console, Eureka console, etc.)
        Browse to http://172.17.0.X
    33. Docker "Local" Setup (demo)
        (Same diagram as slide 21 – show demo here)
    34. Agenda
        • Introduction
        • Docker Local Port
        • Docker Cloud Port
          – Lessons Learned
    35. Docker Cloud on IBM SoftLayer
        (Diagram: DAL05 and DAL06 datacenters on the SoftLayer private network; multiple Docker hosts running WebApp i001–i004 and AuthService i001–i004 containers plus Registry, Zuul, Eureka, Cassandra, Microscaler, Microscaler agents, and Skydock/SkyDNS; Asgard drives the per-host Docker remote APIs through an API proxy)
    36. Networking
        • Docker starts the docker0 bridge to interconnect instances on a single host
        • We assigned the bridge's subnet to be a portable subnet within a VLAN in our SoftLayer account
          – We routed all traffic to the actual private interface
        • This allows the network to work seamlessly
          – Between datacenters
          – Across hardware firewall appliances
          – To external load balancers
          – To all other instances (VMs, bare metal) in SoftLayer
        • This allowed for easy networking between multiple Docker hosts
    37. Docker API and Multi-host
        • Once you have multiple Docker hosts, you have multiple Docker remote APIs
        • Wrote an "API proxy" to deal with this (the toy sketch below shows the underlying fan-out problem)
          – Not the best solution in the world, but it worked
        • Considering how this works with the existing IaaS API
          – A single SoftLayer API handles bare metal and virtual machines
          – How to keep the API Docker compatible?
        • Maybe other, more Docker-centric approaches are coming?
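        A toy illustration (not the deck's API proxy) of why a proxy is needed: each host answers only for its own containers, so a single logical query has to fan out to every host's remote API. Host addresses here are hypothetical:

            import java.io.BufferedReader;
            import java.io.InputStreamReader;
            import java.net.HttpURLConnection;
            import java.net.URL;
            import java.util.List;

            public class MultiHostContainerList {
                public static void main(String[] args) throws Exception {
                    // Hypothetical Docker host addresses, each exposing its own remote API.
                    List<String> dockerHosts = List.of("http://10.0.0.11:4243", "http://10.0.0.12:4243");
                    for (String host : dockerHosts) {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(host + "/containers/json").openConnection();
                        conn.setRequestMethod("GET");
                        try (BufferedReader in =
                                 new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                            String line;
                            while ((line = in.readLine()) != null) {
                                System.out.println(host + " -> " + line);
                            }
                        }
                    }
                }
            }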
    38. Image Management
        • Currently using the standard Docker private registry
        • Considering how this could be integrated with the SoftLayer image management system
          – Use the optimized cross-datacenter distribution network
          – Expose Docker layered versions through the console
        • Again, it is important not to lose Docker's value in image transparency and portability
    39. Docker Cloud on IBM SoftLayer (recap)
        (Same diagram as slide 35 – demos 1-1 today or tomorrow at Jerry's session)
    40. Questions?
