
Openshift Container Platform on Azure


Combine Docker, Kubernetes, Ansible, ARM and the Azure Cloud to get a full HA container platform.

Published in: Software


  1. Goals
     Design an HA production-quality OSE architecture that leverages native Azure cloud infrastructure and services:
     ● 3 x masters, with HA load balancer
     ● 3 x infra nodes, with HA load balancer
     ● N x nodes
     ● Bastion for safety and security
     ● Shared performance storage
     ● Simple and flexible
     ● Expandable
     ● Usable in further automations
  2. Provisioning and Automation Overview
     Several choices:
     ● Ansible
     ● Ansible + Azure Resource Manager
     ● ARM + Ansible
     ● ARM
     In order to use the full function of Azure, an Azure Resource Manager template was found to be the best way to fully leverage Azure.
  3. Azure Resource Manager - Overview
     ● Resource Manager template - a JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group. It also defines the dependencies between the deployed resources.
     ● Resource group - a container that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that you group together.
  4. ARM Template
     ● ARM templates are JSON files
     ● ARM templates are nestable
     ● They can provision the majority of Azure resources
     ● The Microsoft-recommended methodology for cloud orchestration
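To make the JSON structure concrete, here is a minimal, hypothetical ARM template skeleton (not the project's actual azuredeploy.json), written from a bash heredoc since the deck's provisioning scripts are shell-based. The `numberOfNodes` parameter is an illustrative placeholder.

```shell
#!/bin/bash
# Sketch: write a minimal ARM template skeleton to azuredeploy.json.
# The quoted heredoc delimiter keeps $schema literal.
cat > azuredeploy.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "numberOfNodes": { "type": "int", "defaultValue": 3 }
  },
  "variables": {},
  "resources": [],
  "outputs": {}
}
EOF
```

A real deployment would fill `resources` with the VM, network, and storage definitions, and nested templates (bastion.json, master.json, and so on) are referenced as `Microsoft.Resources/deployments` resources.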
  5. Openshift Enterprise on Azure Template
     Moving to the openshift contrib directory soon.
  6. Running the Azure OpenShift ARM Template
     Supply the following:
     ● OpenShift user name and password (no @)
     ● SSH public and private (base64) key
     ● RHN user name and password
     ● Pool ID for the subscription to use
     ● Number of nodes you want - 3-30 currently
     ● Azure machine sizing for: master, infra, node, storage
  7. Running It
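A deployment of the template can be sketched with the current Azure CLI (`az`, which postdates this deck; at the time the older `azure` xplat CLI was used). The resource group name is a hypothetical placeholder, and the command is assembled and echoed rather than executed, so the shape is visible without an Azure subscription.

```shell
#!/bin/bash
# Sketch: assemble the ARM deployment command for the OSE template.
set -euo pipefail

RESOURCE_GROUP=myose   # hypothetical; also becomes the external hostname prefix
LOCATION=eastus        # hypothetical region

deploy_cmd() {
  # Echo (do not run) the deployment command so the pieces are visible:
  # the orchestration template plus the common parameters file.
  echo az deployment group create \
    --resource-group "$RESOURCE_GROUP" \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json
}

deploy_cmd
```

In a real run, the parameters file would carry the values from the previous slide (OpenShift credentials, SSH keys, RHN credentials, pool ID, node count, and machine sizes).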
  8. Template Components
     Links to the deployment templates:
     ● azuredeploy.json - ARM template: orchestration
     ● bastion.json - ARM template: bastion host
     ● master.json - ARM template: master(s)
     ● node.json - ARM template: node(s)
     ● infranode.json - ARM template: infra node(s)
     ● logging.json - ARM template: logging
     ● store.json - ARM template: storage node(s)
     ● azuredeploy.parameters.json - ARM template: common parameters
     ● {hosttype}.sh - bash script for VM setup
  9. Naming and Inventory - Internal
     ● Masters: master1, master2, master3
     ● Infra node: infranode (1 and 2 coming soon)
     ● Nodes: node01-node32 (99+ coming soon)
     ● Bastion: bastion
     ● Storage: store1 (more coming)
  10. Naming and Inventory - External
     ● Masters: {resourcegroupname}m1...
     ● Infra node: determined by user
     ● Nodes: no public IP
     ● Bastion: {resourcegroupname}b1
     ● Storage: no public IP
  11. Masters and Load Balancing
     Azure Traffic Manager = load balancer
     ● Load balancing: round robin
     ● Health checks
     ● Works at the DNS level
     ● Survives complete data center loss
     ● Considered more reliable than Azure Load Balancer
  12. Bastion
     Using an ARM extension-launched script:
     ● Sets up SSH keys
     ● Gets bastion subscriptions set up
     ● Builds host inventory (/etc/ansible/hosts)
     ● Sets up Ansible settings
     ● Builds Ansible script to set up subscriptions
     ● Sets up post-install script
     Build launch:
     ● Turn off SSH key checking
     ● Run Ansible subscribe playbook
     ● Run Ansible OpenShift BYO playbook
     ● Run post-install playbook
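The bastion's inventory-building step can be sketched as follows. Group names and host names follow the deck's naming slides (master1-3, infranode, node01...); the exact inventory layout of the real script is not shown in the deck, so this is an assumption, and the output goes to a local file rather than /etc/ansible/hosts so the sketch needs no root.

```shell
#!/bin/bash
# Sketch: generate an Ansible inventory for the provisioned hosts.
set -euo pipefail

build_inventory() {
  local node_count=$1
  echo "[masters]"
  for i in 1 2 3; do
    echo "master$i"
  done
  echo "[infra]"
  echo "infranode"
  echo "[nodes]"
  # Zero-padded names, matching the node01-node32 convention.
  for i in $(seq 1 "$node_count"); do
    printf 'node%02d\n' "$i"
  done
}

build_inventory 3 > hosts
```

The real script would then point ansible.cfg at this inventory and run the subscribe, OpenShift BYO, and post-install playbooks against it.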
  13. Azure / OSE Storage - Overview
     Storage is needed in a few categories:
     ● RHEL system/boot disk
     ● Docker container storage
     ● Persistent storage
     ● Registry
     Azure has:
     ● No NFS
     ● No native iSCSI
     ● No FC
     ● Azure block storage support coming upstream
  14. Azure Storage
     Lessons learned / problems:
     ● Azure Standard storage is really slow
     ● A minimal configuration of the script could take 5 hours
     ● Questionable for apps with a DB/message queue
     ● Most apps in the data center today are on SSD
     What we want:
     ● Full HA redundancy
     ● Support for database apps (MySQL/MongoDB)
     ● Easy to add more storage
     ● Supported with an existing storage plugin
  15. Azure Storage Solution
     ● Choose VM types that support Premium Storage
     ● Implement persistent volumes based on iSCSI
     ● Use RHEL iSCSI target support
     ● Created automation to automatically create LVM-backed iSCSI targets
     ● iSCSI quota enforced by the size of the volume
     ● Use LVM striped volumes
     ● Azure 3x redundancy
     ● Expand further by adding another appliance
     ● Only needs standard RHEL
  16. Store1 Server
     ● Provisioned automatically as part of azuredeploy
     ● Starts with 8 data drives in one volume group
     ● Auto-partitions and formats drives
     ● Executes ose_pvcreate 3x
     ose_pvcreate:
     ● Auto-creates iSCSI target device
     ● Auto-creates LUN
     ● Auto-shares the LUN
     ● Sets ACL
     ● Auto-creates YAML PV definition
     ● Registers PV with OSE
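The last two ose_pvcreate steps (generate the PV definition, register it) can be sketched as below. Only the YAML-generation step actually runs here; the volume group name, IQN, and portal address are hypothetical placeholders, and the LVM/targetcli commands that would precede it are shown as comments, since the real script is not reproduced in the deck.

```shell
#!/bin/bash
# Sketch: emit a Kubernetes/OpenShift PersistentVolume definition backed by
# an iSCSI LUN, as ose_pvcreate's final step would.
set -euo pipefail

make_pv_yaml() {
  local name=$1 size_gb=$2 lun=$3
  cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $name
spec:
  capacity:
    storage: ${size_gb}Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.10:3260
    iqn: iqn.2016-01.com.example:store1
    lun: $lun
    fsType: ext4
EOF
}

# In the real script, something like this would come first (placeholders):
#   lvcreate -L 10G -n pv0001 vg_iscsi                         # backing LV
#   targetcli /backstores/block create pv0001 /dev/vg_iscsi/pv0001
make_pv_yaml pv0001 10 1 > pv0001.yml
# ...then registered with OSE, e.g.: oc create -f pv0001.yml
```

The PV's declared size is what enforces the quota mentioned on the previous slide: a claim bound to this PV cannot use more than the LUN backing it.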
  17. Two (Current) Separate Objectives
     ● Create a reference architecture on best practices to install OSE on Azure
     ● Create automations that make it easy to deploy
     Current status - team for OSE / Azure:
     ● One systems design and engineering resource
     ● PM support
     ● Engineering manager support
     ● Trello board
     ● Upstream GitHub repo
  18. Current Status - OSE on Azure
     ● Container network
       ○ openshift-sdn
     ● Load balancer - HA is default
       ○ Azure Traffic Manager for masters
       ○ Azure Traffic Manager for infra
     ● OpenShift "router" deployed
     ● Local registry deployed
     ● DNS
     ● Authentication
     ● Auto-scaling
       ○ Auto-scaling currently not in scope
     ● iSCSI for persistent volumes
       ○ Docker registry storage
       ○ OpenShift application storage
  19. Current Status - OSE on Azure (cont.)
     ● Authentication
       ○ Authenticate based on htpasswd
     ● Deployment environments
       ○ OpenShift deployment via packages
     ● Target OS
       ○ RHEL 7
     ● Packages
       ○ RHEL GA repos
     ● QE
     ● Docs
       ○ Reference architecture WIP
  20. Plans
     ● Short term:
       ○ We should ship support for this at some point
       ○ Reference architecture being worked on by Glenn West
     ● Medium term:
       ○ OSE on Azure wishlist:
         ■ Ansible template to deploy the ARM template
     ● Long term:
       ○ Pluggable - click deployment of additional nodes and storage
       ○ Ansible Tower integration
  21. Participating
     Currently under active development.
     Current GitHub: soon in upstream.
     (Active development above; pushed to contrib soon for stable.)
  22. Demo(s)
     Demo 2, with active discussion and walkthrough.