
reBuilding cloud @adform


  1. REBUILDING CLOUD @ADFORM Matas Tvarijonas, Cloud Services
  2. ABOUT 10 years in IT, 4 years in enterprise architecture, 1 year ago learned how to exit vi, 2 years in open source. Passion for storage and virtualization. Adform Cloud Services team: Members: 5 Linux engineers, 1 product owner. Technology: OpenStack, Ceph, ScaleIO, Nginx, HAProxy, SaltStack.
  3. WHAT WAS BEFORE? Water-Scrum-Fall
  4. WHAT WAS BEFORE?
  5. WHAT WAS BEFORE?
  6. CLOUD 1.0 OPENSTACK + CEPH
  7. CEPH 1.0 LESSONS LEARNED: CRUSH MAP. SSD != SSD (read-intensive drives don't fit Ceph journaling). RadosGW bucket sharding. There are limits in unlimited storage :] 17 million objects per bucket isn't a good idea, 4 days to delete. Monitoring while acting as a user is best: bucket create, object create, object read, object delete, bucket delete… (see the probe sketch below). Design HW! (keep storage nodes as similar as possible).
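     A minimal sketch of the "monitoring acting as a user" idea, assuming an s3cmd profile configured with a dedicated monitoring user's keys pointed at the RadosGW endpoint (bucket and object names are illustrative):

       # full S3 lifecycle probe against RadosGW: create, write, read, delete, remove
       s3cmd mb s3://monitoring-probe                                          # bucket create
       s3cmd put /etc/hostname s3://monitoring-probe/probe-object              # object create
       s3cmd get s3://monitoring-probe/probe-object /tmp/probe-object --force  # object read
       s3cmd del s3://monitoring-probe/probe-object                            # object delete
       s3cmd rb s3://monitoring-probe                                          # bucket delete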
  8. CLOUD 1.0 LESSONS LEARNED: Storage QoS is a must for images and volumes (see the Cinder QoS sketch below). Design management, AZ, rack, network and DC layout. Plan overprovisioning (CPU, RAM, disk ratio [4,1,1]). Plan plenty of IP pools for VMs. Monitoring from the user perspective (OpenStack Rally). Separate networks (public, storage, replication ...). Centralized logging and monitoring.
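     A hedged sketch of per-tier storage QoS with Cinder QoS specs and volume types (type, spec and limit names are illustrative; front-end limits are enforced at the hypervisor):

       # create a volume type and attach front-end IOPS limits to it
       openstack volume type create ssd-tier
       openstack volume qos create --consumer front-end \
           --property read_iops_sec=2000 --property write_iops_sec=1000 ssd-qos
       openstack volume qos associate ssd-qos ssd-tier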
  9. CLOUD 1.0 LESSONS LEARNED: Test overlay network performance (GRE, VXLAN) and MTU. OpenStack RDO, Linux kernel 4+, iperf between 4c.4m.50gb VMs. "VLAN for prod". (A test sketch follows below.)
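     A rough iperf sketch of the overlay-vs-VLAN comparison, run between two 4c.4m VMs on different compute nodes (addresses are illustrative; remember that VXLAN/GRE encapsulation eats roughly 50 bytes of MTU):

       # on the first VM: start the iperf server
       iperf -s
       # on the second VM: measure throughput across the overlay, then repeat on a VLAN network
       iperf -c 10.0.0.10 -P 4 -t 60
       # verify the guest MTU matches what the overlay actually allows
       ip link show eth0 | grep mtu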
  10. CLOUD 1.0 LESSONS LEARNED
  11. CLOUD EXTENSIONS IF PRIVATE: CM-ready images (SaltStack + Puppet) are a must! Continuous image updates (new OS versions, packages, features) with Packer + OpenStack (see the build sketch below). Prepared networks, because not everyone is a network expert: GRE vs VXLAN vs VLAN, subnet, gateway, router, etc… Migration tools: change VM owner or project, migrate from one AZ to another, migrate from one OpenStack to another.
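     A minimal sketch of the continuous image build flow with Packer's openstack builder (template and image names are illustrative):

       # credentials for the "openstack" Packer builder come from the usual OpenStack env vars
       source ~/openrc
       packer validate centos7-salt.json      # template: openstack builder + shell/salt provisioners
       packer build -var "image_name=centos7-salt-$(date +%Y%m%d)" centos7-salt.json
       openstack image list | grep centos7-salt   # the new image should now be registered in Glance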
  12. WHY REBUILD CLOUD 1.0? 8 ways to ride a dead horse: Buying a stronger whip. Changing riders. Declaring, “God told us to ride this horse.” Appointing a committee to study the horse. Hiring an outside consultant to advise on how to better ride the horse. Proclaiming, “This is the way we’ve always ridden this horse.” Developing a training session to improve our riding ability. Riding the dead horse “smarter, not harder.”
  13. BUILDING BLOCKS
  14. OPENSTACK
  15. BUILDING FROM SCRATCH: OPENSTACK CLOUD ARCHITECTURE
  16. BUILDING FROM SCRATCH: OPENSTACK CLOUD MASTER REGION ARCHITECTURE. 3 x cloud controller physical servers in each DC. Service-to-DB communication via LB. OpenStack projects, tenants, roles, assignments and service accounts live in the Keystone DB; identity comes from AD. (A bootstrap sketch follows below.)
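     A hedged sketch of what "assignments in the Keystone DB, identity in AD" looks like from the CLI (project, user and role names are illustrative):

       # project, role assignment and a service account live in the Keystone DB
       openstack project create --description "Cloud Services" cloudservices
       openstack role add --project cloudservices --user jdoe member       # jdoe is resolved via the AD identity backend
       openstack user create --project service --password-prompt cinder    # service account stays local to Keystone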
  17. BUILDING FROM SCRATCH: REGION ARCHITECTURE. 3 x availability zones per datacenter. Management servers distributed across the AZs. Each AZ is power and rack isolated. API-to-API and API-to-DB communication via LB. (An AZ definition sketch follows below.)
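     In Nova an availability zone is just a host aggregate with a zone name, so the rack- and power-isolated AZs can be defined roughly like this (aggregate and host names are illustrative):

       openstack aggregate create --zone dc1-az1 dc1-az1
       openstack aggregate add host dc1-az1 compute-dc1-rack01-01
       openstack availability zone list    # sanity-check the zone layout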
  18. CLOUD STORAGE: a volume is persistent; root and ephemeral disks are not! (See the sketch below.)
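     A small sketch of the consequence: anything that must survive the instance goes on a Cinder volume (names and sizes are illustrative):

       openstack volume create --size 100 app-data     # persistent, survives the VM
       openstack server add volume web01 app-data      # attach it to a running instance
       # the root and ephemeral disks of web01 are gone the moment the server is deleted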
  19. BUILDING FROM SCRATCH: CEPH DESIGN. SSD journaling (write-intensive SSDs). CRUSH map aligned to AZs. Ceph rule-sets by disk type; pools use the rule-sets. Metadata and index pools on SSD. Bucket sharding. (A rule-set sketch follows below.)
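     A hedged sketch of "rule-sets by disk type" with the RGW index pool pinned to SSD (rule, root and pool names are illustrative; crush_ruleset is the pre-Luminous option name, and bucket index sharding is a ceph.conf setting):

       # assumes the CRUSH map already has a dedicated "ssd" root built from the SSD OSDs
       ceph osd crush rule create-simple ssd-rule ssd host
       ceph osd pool set .rgw.buckets.index crush_ruleset 1     # rule id illustrative
       # ceph.conf: spread large buckets across several index objects
       #   rgw_override_bucket_index_max_shards = 16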
  20. LINUX IO SCHEDULER FOR SSD [NOOP]. cat /sys/block/sdc/queue/scheduler shows [noop] deadline cfq.
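     A sketch of making the noop choice stick across reboots with a udev rule keyed on non-rotational disks (device and file names are illustrative):

       echo noop > /sys/block/sdc/queue/scheduler     # runtime change for one disk
       # persist the choice for all non-rotational disks with a udev rule
       echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"' \
           > /etc/udev/rules.d/60-ssd-scheduler.rules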
  21. KVM: KSM AND LACP. Memory dedup: sh ksm_stat reports "Shared memory is 16 892 MB, Saved memory is 90 426 MB". LACP: cat /etc/sysconfig/network-scripts/ifcfg-bond0 shows DEVICE="bond0" USERCTL="no" BOOTPROTO="none" ONBOOT="yes" BONDING_OPTS="downdelay=0 miimon=100 use_carrier=on mode=4 xmit_hash_policy=layer2+3 updelay=0 lacp_rate=0 ad_select=0". (A ksm_stat-style sketch follows below.)
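     The ksm_stat numbers can be reproduced from the KSM sysfs counters; a minimal sketch, assuming 4 KiB pages:

       pages_shared=$(cat /sys/kernel/mm/ksm/pages_shared)     # KSM pages kept as the single copy
       pages_sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)   # page table entries pointing at them, i.e. the savings
       echo "Shared memory is $(( pages_shared * 4 / 1024 )) MB"
       echo "Saved memory is $(( pages_sharing * 4 / 1024 )) MB"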
  22. DEMO: BOOT 20 VMs AND ATTACH 20 VOLUMES
  23. IOPS TEST SETUP. Storage backend: 4x Ceph OSD nodes, 5 SSD drives each (20 SSDs in the pool, 3x replication). 2x OpenStack KVM-based compute nodes with 5 VMs on each. Each VM has a 100 GB volume attached from the backend storage (10 volumes in total). SaltStack used to launch the fio test: salt 'csfiot00*' cmd.run "fio --name=/fio-ssd/randrw-ssd --ioengine=libaio --iodepth=16 --rw=rw --rwmixread=70 --bs=4k --direct=1 --size=8G --numjobs=4 --runtime=600 --group_reporting"
  24. RESULTS
  25. SLACK
  26. INSIGHTS: WHAT IS HIDDEN. Open source isn't free. Be ready to develop some tools. Scale-out products must be designed: provisioning, CM, automation, monitoring. Be ready to make mistakes.
  27. INSIGHTS: BENEFITS. Storage tiers. Clear ownership. Self-service. Security. Scalability. On demand. DevOps-ready API. "Can you get a VM in 15 s?" (A timing sketch follows below.)
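     The "VM in 15 s" question can be answered straight from the API; a rough sketch (flavor, image and network names are illustrative):

       time openstack server create --flavor m1.small --image centos7-salt \
           --network prod-net --wait demo-vm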
  28. QUESTIONS?
