
Kubernetes as Orchestrator for A10 Lightning Controller


A10 Lightning Application Delivery System (ADS) supports hybrid environments by providing secure application services and advanced analytics across the entire deployment, from traditional on-premise data centers to public and/or private clouds, or any combination thereof. A10 Lightning employs a controller-based architecture that can be self-managed on-premise or in a private cloud, or utilized as a SaaS offering managed by A10, enabling management of heterogeneous workloads across physical hardware-based environments as well as public, private, and hybrid clouds.

This presentation describes our journey from a VM-based Controller to a Kubernetes-based Controller.



  1. Using Kubernetes as Orchestrator for A10 Lightning Controller
     Akshay Mathur, Manu Dilip Shah
     Confidential | © A10 Networks, Inc.
  2. A10 Lightning Application Delivery Service
     [Architecture diagram: Clients → LADC Cluster → Application Services on the DATA path; API Client and Admin Portal → REST API → A10 Lightning Controller with Analytics on the CONTROL path]
     Lightning Controller
     • A micro-services based application
     • Configuration, Visibility, Analytics
     • Multi-tenant portal
     • Programmability with REST APIs
     Lightning ADC
     • Scale-out
     • Traffic Management
     • App Security
  3. Controller Architecture [diagram]
  4. Why we thought of Kubernetes
     • On failure, K8s brings up the pod automatically
     • Rolling upgrades of code can be done easily
     • A scaling policy can be set up to scale each micro service as needed
     • Pod health can be monitored easily and acted upon
     (A manifest sketch of these behaviors follows.)
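
A minimal sketch of how these points map to Kubernetes objects, assuming a hypothetical micro-service called api-server; the image, probe endpoint, and thresholds are placeholder values, and the CPU-based scaling policy assumes a metrics source such as Heapster (listed later in the deck):

```yaml
# Hypothetical Deployment: a liveness probe lets K8s restart unhealthy
# pods, and the RollingUpdate strategy replaces pods one at a time
# during code upgrades.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # keep at least one pod serving during an upgrade
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: registry.example.com/api-server:1.0   # placeholder image
          livenessProbe:       # failed probes trigger an automatic restart
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080
            periodSeconds: 10
---
# Scaling policy: keep this micro-service between 1 and 5 pods,
# targeting ~70% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: api-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
```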
  5. What we achieved at a high level: from AWS VMs to K8s containers in multiple environments
     Before:
     • Controller was only available as SaaS
     • Launch and scaling were manual
     • Installation depended on the underlying infrastructure platform
     After:
     • Controller is available for on-premise deployment
     • It can be scaled from one VM to multiple VMs depending on the use case
     • Launch and scaling are automated
     • Installation is independent of the underlying infrastructure platform
  6. Current Environment for Controller
     • Kubernetes core components
     • kube-dns – internal DNS service
     • Flannel – overlay networking
     • Heapster – monitoring of pods
     • Kubernetes Dashboard – helps in monitoring the pods
     • jq – programmatically editing the JSON of K8s objects
  7. The Journey
     • Everything was manual to start with:
       • Selecting master and minion nodes
       • Mapping node ports to container ports
       • Configuring cross-node communication
     • Limitations realized:
       • Can't run more than one pod of the same type on a node
       • Packaging and distribution issues, e.g. build process automation
       • Data loss when a node stops
  8. The Journey (continued)
     • Second-level issues, after some simplification:
       • Cumbersome overlay network configuration
       • Passing environment info to pods – startup-script env variables are not scalable (see the ConfigMap sketch below)
       • Installation still required too many steps
     • Thoughts for the future – solved now:
       • Adding a node to the K8s cluster when more capacity is needed
       • Migrating the node's static IP to another node when the node is replaced
       • Adding a component in the future with minimal change to existing components
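
For the environment-info problem, one common approach (a sketch with hypothetical names; the deck does not state A10's exact mechanism) is to move the settings out of startup scripts into a ConfigMap and inject them as environment variables:

```yaml
# Hypothetical ConfigMap holding environment info for the Controller.
apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-env
data:
  CONTROLLER_MODE: "on-premise"
  LOG_LEVEL: "info"
---
# Pod that receives every key of the ConfigMap as an environment
# variable, so nothing is hard-coded in the startup script.
apiVersion: v1
kind: Pod
metadata:
  name: config-service
spec:
  containers:
    - name: config-service
      image: registry.example.com/config-service:1.0  # placeholder image
      envFrom:
        - configMapRef:
            name: controller-env
```

Note that envFrom requires Kubernetes 1.6 or later; on older clusters each variable is listed individually under env with valueFrom.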
  9. Design Choices
     • Keep all micro-services as is
     • One K8s Service per micro-service
     • One pod per K8s Deployment
     • Multiple services exposed externally
     • Continue to use a third-party registry service (the Kubernetes registry service could be used instead)
     (A sketch of this layout follows.)
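
The "one Service per micro-service, one pod per Deployment" choice might look like the following sketch for a hypothetical analytics micro-service, with the image pulled from the third-party registry:

```yaml
# One Deployment per micro-service, running a single pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics
spec:
  replicas: 1                  # one pod per Deployment
  selector:
    matchLabels:
      app: analytics
  template:
    metadata:
      labels:
        app: analytics
    spec:
      containers:
        - name: analytics
          image: registry.example.com/analytics:1.0   # third-party registry (placeholder)
          ports:
            - containerPort: 9090
---
# One Service per micro-service; kube-dns gives it a stable name.
apiVersion: v1
kind: Service
metadata:
  name: analytics
spec:
  selector:
    app: analytics
  ports:
    - port: 9090
      targetPort: 9090
```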
  10. Accessing Micro Services
      • Multiple micro services of the Controller need to be accessible from outside
      • Micro services accessing each other also can't depend on IP addresses
      • Kubernetes Services and kube-dns provide a fixed name as well as a fixed IP address for each service
      • All internal access (between components) uses the service name
      • The service IP is mapped to the node IP for all external access (sketch below)
      • A public static IP is assigned to the node for external access
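
Internally, a component can reach another simply by its service name (e.g., http://analytics:9090, or the full analytics.default.svc.cluster.local) through kube-dns. For external access, the deck does not name the exact mechanism; one way to map a Service to the node's static public IP is the externalIPs field, sketched here with placeholder values:

```yaml
# Hypothetical externally exposed Service: traffic arriving at the
# node's static public IP on port 443 is forwarded to the portal pods.
apiVersion: v1
kind: Service
metadata:
  name: portal
spec:
  selector:
    app: portal
  ports:
    - port: 443
      targetPort: 8443
  externalIPs:
    - 203.0.113.10             # placeholder: static public IP assigned to the node
```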
  11. Simplifying Networking
      • Each pod gets an IP address that is internal to the node
      • Overlay networking facilitates communication between pods across nodes
      • Flannel creates an overlay network that spans the nodes:
        • Each pod gets an IP address from the same subnet
        • This subnet is internal to the K8s cluster
        • This provides seamless communication between pods across nodes
      • A private subnet for service IPs is configured in the K8s configuration
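
Flannel's pod subnet is typically configured through its net-conf.json, carried in a ConfigMap modeled on the stock kube-flannel deployment; the CIDR below is an example value, not A10's actual subnet:

```yaml
# Flannel configuration: every node carves pod IPs out of this
# cluster-wide overlay subnet, so pods on different nodes can
# reach each other directly.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```

The private subnet for service IPs is separate from the pod subnet; it is set on the API server with the --service-cluster-ip-range flag.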
  12. Overlay Network [diagram]
  13. Persisting Data
      • Pods may come and go, or can be spawned across nodes
      • Persistence is required for maintaining state across reboots or across clusters
      • NFS, AWS EBS, GCE Persistent Disk, or Azure Disk can be used as a K8s Persistent Volume (PV)
      • In the K8s Deployment object, a PV Claim can be made by each pod, as needed
      • K8s provides the pod a PV matching the claim
      • The PV's file system is mounted into the container's file system
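
A sketch of the PV/PVC flow with an NFS backend (server, path, sizes, and names are placeholders):

```yaml
# Admin-provisioned PersistentVolume backed by NFS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datastore-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  nfs:
    server: nfs.example.com    # placeholder NFS server
    path: /exports/datastore
---
# Claim that K8s matches against available PVs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datastore-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
---
# Pod using the claim: the PV's file system is mounted into the
# container, so data survives pod restarts.
apiVersion: v1
kind: Pod
metadata:
  name: datastore
spec:
  containers:
    - name: datastore
      image: registry.example.com/datastore:1.0   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: datastore-pvc
```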
  14. Storage Objects in Kubernetes [diagram]
  15. Deploying Clustered Applications
      • In a clustered application (e.g. datastores), each pod needs to know about the other pods running the same application
      • Such applications need to be deployed using a K8s StatefulSet
      • A K8s StatefulSet provides a fixed name for each instance/pod
      • The PV Claims in each instance of a StatefulSet also have fixed names
      • Having fixed names helps a lot in the configuration and functioning of clustered applications
      • When the application requires more capacity, it is easy to add instances
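
A sketch of a clustered datastore as a StatefulSet (names, ports, and sizes are placeholders): pods get stable names (db-0, db-1, db-2), per-pod DNS entries through the headless Service, and per-pod PVCs with fixed names (data-db-0, data-db-1, ...):

```yaml
# Headless Service: gives each StatefulSet pod a stable DNS entry
# such as db-0.db, which peers can use to find each other.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: registry.example.com/db:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  # Each pod gets its own PVC with a fixed name (data-db-0, ...).
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 10Gi
```

Adding capacity is then just a matter of increasing replicas, e.g. kubectl scale statefulset db --replicas=4.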
  16. We do many exciting things. You can join the team:
      mshah@a10networks.com
      amathur@a10networks.com
      Thanks!
