
GMO Internet presentation: "Optimizing OpenStack models and applying them to ConoHa, Z.com and GMO AppsCloud" - OpenStack Update Seminar, December 2015


Optimizing OpenStack models and applying them to ConoHa, Z.com and GMO AppsCloud

Naoto Gohko (GMO Internet, Inc.)

Agenda:
- Using OpenStack at GMO Internet
--- OpenStack Diablo cluster: Onamae.com VPS
--- OpenStack Grizzly cluster: ConoHa
--- OpenStack Havana cluster: GMO AppsCloud
--- OpenStack Juno cluster: ConoHa (2), GMO AppsCloud (2)
------ OpenStack Authentication in Juno (V2 keystone domains)
- How we use OpenStack at GMO Internet
--- OpenStack Designate DNSaaS: ConoHa, Z.com(OEM)
--- OpenStack Cinder Block storage: ConoHa: NexentaStor(SDS), AppsCloud: NetApp
--- OpenStack Ironic: Only AppsCloud: Undercloud Ironic deploy, Multi-tenant Ironic deploy
--- OpenStack Swift: shared cluster



  1. 1. 1 OpenStack Update Seminar (2015/12/02) (at the CyberAgent seminar room) Naoto Gohko <naoto-gohko@gmo.jp> IT Architect Engineer / GMO Internet Inc., Optimizing OpenStack models and applying them: ConoHa, Z.com and GMO AppsCloud
  2. 2. 2 History of our services using OpenStack at GMO Internet Inc.: Nova-network model and Diablo: Onamae.com VPS; Quantum overlay network: ConoHa Grizzly cluster; High-performance network: GMO AppsCloud (Havana); Juno ConoHa: Regions, Domains, DNS and SDS; Juno GMO AppsCloud: Ironic and copy-offload Cinder; Swift cluster (shared by each OpenStack) # Agenda
  3. 3. 3 About GMO Internet http://gmo.jp/en
  4. 4. 4 Infrastructure Business
  5. 5. 5 Using OpenStack at GMO Internet
  6. 6. 6 Public Clouds We are offering four public cloud services.
  7. 7. 7 Running infrastructure: 1,508 physical servers, 25,294 running VMs, 137,223 VMs created
  8. 8. 8 Swift cluster GMO Internet, Inc.: VPS and Cloud services Onamae.com VPS (2012/03) : http://www.onamae-server.com/ Focus: global IPs; provided by simple "nova-network" tenten VPS (2012/12) http://www.tenten.vn/ Share of OSS by Group companies in Vietnam ConoHa VPS (2013/07) : http://www.conoha.jp/ Focus: Quantum (Neutron) overlay tenant network GMO AppsCloud (2014/04) : http://cloud.gmo.jp/ OpenStack Havana based 1st region Enterprise grade IaaS with block storage, object storage, LBaaS and baremetal compute was provided Onamae.com Cloud (2014/11) http://www.onamae-cloud.com/ Focus: Low price VM instances, baremetal compute and object storage ConoHa Cloud (2015/05/18) http://www.conoha.jp/ Focus: ML2 vxlan overlay, LBaaS, block storage, DNSaaS (Designate) and original services by keystone auth OpenStack Diablo on CentOS 6.x Nova Keystone Glance Nova network Shared codes Quantum OpenStack Grizzly on Ubuntu 12.04 Nova Keystone Glance OpenStack Havana on CentOS 6.x Keystone Glance Cinder Swift Swift Shared cluster Shared codes Keystone Glance Neutron Nova Swift Baremetal compute Nova Ceilometer Baremetal compute Neutron LBaaS ovs + gre tunnel overlay Ceilometer Designate Swift OpenStack Juno on CentOS 7.x Nova Keystone Glance Cinder Ceilometer Neutron LBaaS GMO AppsCloud (2015/09/27) : http://cloud.gmo.jp/ 2nd region by OpenStack Juno based Enterprise grade IaaS with High IOPS Ironic Compute and Neutron LBaaS Upgrade Juno GSLB Swift Keystone Glance Cinder Ceilometer Nova Neutron Ironic LBaaS
  9. 9. 9 OpenStack Diablo cluster: • Onamae.com VPS
  10. 10. 10 Onamae.com VPS (Diablo) • Service XaaS model: – VPS (KVM, libvirt) • Network: – 1Gbps • Network model: – Flat-VLAN (Nova Network), without floating IP – IPv4 only • Public API – None (only web panel) • Glance – None • Cinder – None • ObjectStorage – None OpenStack service: Onamae.com VPS (Diablo)
  11. 11. 11
  12. 12. 12 Onamae.com VPS (Diablo) • Nova Network: – very simple (LinuxBridge) – Flat networking is scalable. • Only 1 NIC per VM. • Only 1 public network IP. – Little MQ (RabbitMQ) dependency (synchronous API) • More scalable than Juno, Kilo, Liberty and Mitaka • Cloud? – Only virtualization management; there is no added value, such as free configuration of the network. OpenStack service: Onamae.com VPS (Diablo)
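For reference, a representative nova-network configuration of that era looks like the fragment below. The values are illustrative, not the production config; Diablo-era deployments used flag-file syntax, and this is shown in the later ini style for readability.

```ini
# nova.conf (nova-network era; illustrative values, not production)
[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100   # single LinuxBridge per host
flat_interface = eth1         # NIC carrying the flat VM network
public_interface = eth0       # NIC holding the public IPs
```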
  13. 13. 13 OpenStack service: Onamae.com VPS(Diablo) model compute vm compute NIC NIC Vlan network bridge NIC vlan vlan tap vNIC Vlan network
  14. 14. 14 OpenStack Grizzly cluster: • ConoHa
  15. 15. 15 ConoHa (Grizzly) • Service XaaS model: – VPS + Private networks (KVM + libvirt) • Network: – 10Gbps wired (10GBase-T) • Network model: – Flat-VLAN + Quantum ovs-GRE overlay – IPv6/IPv4 dualstack • Public API – None (only web panel) • Glance – None • Cinder – None • ObjectStorage – Swift (after Havana) OpenStack service: ConoHa (Grizzly)
  16. 16. 16 ConoHa (Grizzly) • Quantum network: – It used an early version of the Open vSwitch full-mesh GRE-VLAN overlay network with the LinuxBridge hybrid driver. But when the scale became large, communication localized to specific nodes of the GRE mesh tunnel (together with undercloud L2 network problems; a broadcast storm?) OpenStack service: ConoHa (Grizzly)
  17. 17. 17 Grizzly network: LibvirtHybridOVSBridgeDriver, from the OpenStack documentation (Nakai-san)
  18. 18. 18 OpenStack Havana cluster: • GMO AppsCloud
  19. 19. 19 GMO AppsCloud (Havana) • Service XaaS model: – KVM compute + private VLAN networks + Cinder + Swift • Network: – 10Gbps wired (10GBase SFP+) • Network model: – IPv4 Flat-VLAN + Neutron LinuxBridge (not ML2) + Brocade ADX L4-LBaaS original driver • Public API – Provided the public API • Ceilometer • Glance – Provided (GlusterFS) • Cinder – HP 3PAR (Active-Active multipath original) + NetApp • ObjectStorage – Swift cluster • Bare-Metal Compute – Modified Cobbler bare-metal deploy driver. OpenStack service: GMO AppsCloud (Havana)
  20. 20. 20 OpenStack service: GMO AppsCloud (Havana) model compute vm NIC Vlan network bridge NIC vlan tap vNIC Vlan network vNIC bridge vlan tap compute NIC bridge NIC vlan bridge vlan public network Neutron, but a simple LinuxBridge model (fewer context switches) >> a virtualized network for high-speed use cases such as game streaming. That is GMO AppsCloud.
  21. 21. 21 GMO AppsCloud(Havana) public API
  22. 22. 22 GMO AppsCloud(Havana) public API Web panel(httpd, php) API wrapper proxy (httpd, php Framework: fuel php) Havana Nova API Customer sys API Havana Neutron API Havana Glance API OpenStack API for input validation Customer DB Havana Keystone API OpenStack API Havana Cinder API Havana Ceilometer API Endpoint L7:reverse proxy Havana Swift Proxy
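The wrapper proxy on this slide sits in front of the Havana endpoints and validates requests against the customer DB before forwarding them. Below is a minimal sketch of that gatekeeping idea; the rules, names and actions are hypothetical (the real proxy is PHP with the FuelPHP framework; Python is used here for brevity).

```python
# Hypothetical illustration of an API-validation wrapper proxy:
# reject bad requests before they reach the backing OpenStack APIs.

CUSTOMER_DB = {"token-abc": {"tenant_id": "t-100", "active": True}}

def validate_request(token, tenant_id, action):
    """Return an (http_status, message) pair for an incoming API call."""
    customer = CUSTOMER_DB.get(token)
    if customer is None or not customer["active"]:
        return (401, "unknown or inactive customer")
    if customer["tenant_id"] != tenant_id:
        return (403, "tenant mismatch")
    if action not in {"server.create", "server.delete", "volume.create"}:
        return (400, "unsupported action")
    return (200, "forward to backend")

print(validate_request("token-abc", "t-100", "server.create"))  # (200, ...)
print(validate_request("token-abc", "t-999", "server.create"))  # (403, ...)
```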
  23. 23. 23 Havana: baremetal compute cobbler driver
  24. 24. 24 Havana: baremetal compute cobbler driver Baremetal net: • Bonding NIC • Tagged VLAN • allowed VLAN + DHCP native VLAN
  25. 25. 25 Havana: baremetal compute, Cisco IOS in southbound https://code.google.com/p/cisco-ios-cli-automation/
  26. 26. 26 OpenStack Juno cluster: • ConoHa (2) • GMO AppsCloud (2)
  27. 27. 27 Swift cluster GMO Internet, Inc.: VPS and Cloud services Onamae.com VPS (2012/03) : http://www.onamae-server.com/ Focus: global IPs; provided by simple "nova-network" tenten VPS (2012/12) http://www.tenten.vn/ Share of OSS by Group companies in Vietnam ConoHa VPS (2013/07) : http://www.conoha.jp/ Focus: Quantum (Neutron) overlay tenant network GMO AppsCloud (2014/04) : http://cloud.gmo.jp/ OpenStack Havana based 1st region Enterprise grade IaaS with block storage, object storage, LBaaS and baremetal compute was provided Onamae.com Cloud (2014/11) http://www.onamae-cloud.com/ Focus: Low price VM instances, baremetal compute and object storage OpenStack Diablo on CentOS 6.x Nova Keystone Glance Nova network Shared codes Quantum OpenStack Grizzly on Ubuntu 12.04 Nova Keystone Glance OpenStack Havana on CentOS 6.x Keystone Glance Cinder Swift Swift Shared cluster Shared codes Keystone Glance Neutron Nova Swift Baremetal compute Nova Ceilometer Baremetal compute Neutron LBaaS ovs + gre tunnel overlay Ceilometer Upgrade Juno
  28. 28. 28 OpenStack Juno cluster: • ConoHa (2)
  29. 29. 29 • Multi Region • SSD Only • Scalability • API • Simple and competitive pricing # Newly Released ConoHa
  30. 30. 30 In ConoHa, we added two additional features: – Multi-location regions – Domain structure, applied to the multi-location region structure – 1 domain == 1 OEM service or product service – Domain handling in the API validation wrapper proxy Multi-location regions and domain structures
  31. 31. 31 The meaning of the words • Domain • Keystone domain • With the v2 API service (our cloud) • != DNS domain • Location • Different geographic locations on the Earth • US (San Jose), JP (Tokyo), SG (Singapore) • Region • OpenStack region • Location != Region • Multiple regions can be set up in one location
  32. 32. 32 Tokyo Singapore San Jose # ConoHa has data centers in 3 locations
  33. 33. 33 CentOS 7.1 x86_64 Juno (RDO) MariaDB Connect to the Tokyo Keystone from all regions. Add each region's endpoints to the Tokyo Keystone. No need to modify OpenStack code. • OS and OpenStack versions • Multi-region setting # Specs
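The multi-region arrangement above amounts to one shared Keystone catalog that holds per-region endpoints, with clients selecting the endpoint for their region. A toy model of that lookup; the service names and URLs are illustrative, not the production catalog:

```python
# Toy model: one shared Keystone catalog, endpoints registered per region.

def build_catalog(endpoints):
    """Index endpoint records by (service, region)."""
    return {(ep["service"], ep["region"]): ep["url"] for ep in endpoints}

def endpoint_for(catalog, service, region):
    """Pick the endpoint a client in `region` should use."""
    try:
        return catalog[(service, region)]
    except KeyError:
        raise LookupError(f"no {service} endpoint registered for {region}")

endpoints = [
    {"service": "compute", "region": "Tokyo",     "url": "https://compute.tyo.example/v2"},
    {"service": "compute", "region": "Singapore", "url": "https://compute.sin.example/v2"},
    {"service": "compute", "region": "SanJose",   "url": "https://compute.sjc.example/v2"},
]

catalog = build_catalog(endpoints)
print(endpoint_for(catalog, "compute", "Singapore"))
```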
  34. 34. 34 # User registration is possible in Japan only. One customer base for user administration: the Tokyo Keystone DB is READ/WRITE and is replicated (DB replication) to Singapore and San Jose, whose Keystone DBs are READ-only. Users/tenants and tokens pass through API management + Keystone API in each location, but users are not created or deleted outside Tokyo.
  35. 35. 35 # Issues and restrictions on multi-region • User registration is possible in Japan only • VPN performance issue • Issues on replicating the token table.
  36. 36. 36 # Bloated access tokens: too many tokens are created by the components. A single "VM create" request makes Nova, Neutron, Glance and Cinder each obtain their own token from Keystone (e.g. user token 001, Neutron token 002, Glance token 003, Cinder token 004), and every further request mints another set.
  37. 37. 37 # Issues on replicating the token table: 100-year-expiry tokens. In the service configuration (schematically: [keystone_authtoken] for Nova and the equivalent auth sections for Neutron, Glance and Cinder), we set tokens with a 100-year expiration, so that any token can be reused by each component.
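The example.conf sections on the slide are schematic; on an actual Juno Keystone the relevant knob is the `expiration` option (in seconds) in the `[token]` section of keystone.conf. A 100-year lifetime is roughly 3,153,600,000 seconds:

```ini
# keystone.conf: token lifetime in seconds (default is 3600)
[token]
expiration = 3153600000   ; ~100 years, per the workaround above
```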
  38. 38. 38 OpenStack Authentication in Juno (V2 keystone domains)
  39. 39. 39 Why?
  40. 40. 40 Swift cluster GMO Internet, Inc.: VPS and Cloud services Onamae.com VPS (2012/03) : http://www.onamae-server.com/ Focus: global IPs; provided by simple "nova-network" tenten VPS (2012/12) http://www.tenten.vn/ Share of OSS by Group companies in Vietnam ConoHa VPS (2013/07) : http://www.conoha.jp/ Focus: Quantum (Neutron) overlay tenant network GMO AppsCloud (2014/04) : http://cloud.gmo.jp/ OpenStack Havana based 1st region Enterprise grade IaaS with block storage, object storage, LBaaS and baremetal compute was provided Onamae.com Cloud (2014/11) http://www.onamae-cloud.com/ Focus: Low price VM instances, baremetal compute and object storage ConoHa Cloud (2015/05/18) http://www.conoha.jp/ Focus: ML2 vxlan overlay, LBaaS, block storage, DNSaaS (Designate) and original services by keystone auth OpenStack Diablo on CentOS 6.x Nova Keystone Glance Nova network Shared codes Quantum OpenStack Grizzly on Ubuntu 12.04 Nova Keystone Glance OpenStack Havana on CentOS 6.x Keystone Glance Cinder Swift Swift Shared cluster Shared codes Keystone Glance Neutron Nova Swift Baremetal compute Nova Ceilometer Baremetal compute Neutron LBaaS ovs + gre tunnel overlay Ceilometer Designate Swift OpenStack Juno on CentOS 7.x Nova Keystone Glance Cinder Ceilometer Neutron LBaaS GMO AppsCloud (2015/09/27) : http://cloud.gmo.jp/ 2nd region by OpenStack Juno based Enterprise grade IaaS with High IOPS Ironic Compute and Neutron LBaaS Upgrade Juno GSLB Swift Keystone Glance Cinder Ceilometer Nova Neutron Ironic LBaaS
  41. 41. 41 • The cost of operating multiple versions of OpenStack has increased • It is difficult to upgrade or add new features ⇒ Managing multiple OpenStack sites is a headache. What are the problems with multi-cluster?
  42. 42. 42
  43. 43. 43 ConoHa: based on OpenStack Juno (IaaS) • Multi-region OpenStack cluster • Tokyo / Singapore / San Jose • ... and so on • Full SSD storage • Multiple Keystone service-domain support • ConoHa and a next service (now in development) ... OEM etc. • LB as a Service: LVS-DSR (original) • DNS as a Service: OpenStack Designate • OpenStack API and additional RESTful API • Multi-language web panel support • Japanese, English, Korean, Mandarin Chinese
  44. 44. 44 • Create scope in the domain – Scoped items • Flavor • Images • Volume type – Shared items • Public Networks • Hypervisor • Images (Default domain) • Using Keystone API v2.0 Motivation
  45. 45. 45 • We customized the Juno Keystone v3 domain code – Enabled Domain IDs for the Juno Keystone v2 API • SaaS implementation with python-keystoneclient – Handles the Domain ID and related data; obtains the Domain ID from the token API User: POST /v2.0/tokens Admin (service): GET /v2.0/tokens/{id} Juno Keystone v2 API: does not support domains
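For reference, the Keystone v2.0 token calls named above exchange JSON bodies shaped like the ones below. The credentials and the canned response are placeholders; only the envelope shapes follow the v2.0 Identity API.

```python
# Sketch of the Keystone v2.0 token flow (POST /v2.0/tokens).
import json

def build_token_request(username, password, tenant_name=None):
    """Build the v2.0 auth request body."""
    creds = {"passwordCredentials": {"username": username, "password": password}}
    if tenant_name:
        creds["tenantName"] = tenant_name
    return {"auth": creds}

def extract_token(response_body):
    """Pull the token id out of a v2.0 token response."""
    return response_body["access"]["token"]["id"]

req = build_token_request("gnc0000348", "secret", tenant_name="gnc0000348")
print(json.dumps(req))

# A canned, abridged v2.0-style response for illustration:
resp = {"access": {"token": {"id": "abc123", "expires": "2115-01-01T00:00:00Z"}}}
print(extract_token(resp))
```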
  46. 46. 46 Keystone: wrapper proxy at a domain-specific Keystone endpoint. Domains and user-prefix namespaces (Domain / Product / Prefix namespace): gnc / ConoHa / gnc; zjp / JP OEM-1 / zjp; zsg / SG OEM-1 / zsg; ... / OEM-n / ... Ex) user: gnc0000348; image name: gnc_centos7
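The prefix namespace in the table can be resolved mechanically from a name. A tiny sketch using the prefixes shown on the slide; the helper function is hypothetical:

```python
# Map a per-domain name prefix onto the owning product/domain.
PREFIXES = {"gnc": "ConoHa", "zjp": "JP OEM-1", "zsg": "SG OEM-1"}

def domain_for(name):
    """Resolve the owning product/domain from a prefixed user or image name."""
    for prefix, product in PREFIXES.items():
        if name.startswith(prefix):
            return product
    raise ValueError(f"unknown prefix in {name!r}")

print(domain_for("gnc0000348"))   # user name
print(domain_for("gnc_centos7"))  # image name
```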
  47. 47. 47 We released the 2nd service on the same Juno infra (2015/10/20 ~). Adding a domain (2nd): cloud.z.com
  48. 48. 49 Different API endpoints in a separate domain Multi-domains and multi-endpoints
  49. 49. 50 Endpoint configuration on keystone
  50. 50. 52 OpenStack Juno: 2 service clusters released: Mikumo ConoHa, Mikumo Anzu. Mikumo = 美雲 = beautiful cloud. New Juno region released: 10/26/2015
  51. 51. 53 • Service model: Public cloud by KVM • Network: 10Gbps wired (10GBase SFP+) • Network model: – Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only) – IPv6/IPv4 dualstack • LBaaS: LVS-DSR (original) • Public API – Provided the public API (v2 Domain) • Compute node: all SSD for booting OS – Without Cinder boot • Glance: provided • Cinder: SSD NexentaStor zfs (SDS) • Swift (shared Juno cluster) • Cobbler deploy on under-cloud – Ansible configuration • SaaS original service with keystone auth – Email, web, CPanel and WordPress OpenStack Juno: 2 service clusters, released • Service model: Public cloud by KVM • Network: 10Gbps wired (10GBase SFP+) • Network model: – L4-LB-NAT + Neutron ML2 LinuxBridge VLAN – IPv4 only • LBaaS: Brocade ADX L4-NAT-LB (original) • Public API – Provided the public API • Compute node: Flash cached or SSD • Glance: provided (NetApp offload) • Cinder: NetApp storage • Swift (shared Juno cluster) • Ironic on under-cloud – Compute server deploy with Ansible config • Ironic baremetal compute – Cisco Nexus for tagged VLAN module – ioMemory configuration
  52. 52. 54 OpenStack Designate DNSaaS: ConoHa: Z.com(OEM):
  53. 53. 55 Designate DNS: ConoHa cloud (Juno). Client API, DNS, Identity endpoint, Storage DB, OpenStack Keystone, Backend DB, RabbitMQ, Central: components of the DNS and GSLB (original) back-end services. Application of Designate DNS: • DNS as a service (tenant) • Undercloud infra-network • No Keystone auth config
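Designate's v1 API, used here for the DNSaaS, accepts JSON bodies along these lines for creating a domain and a record. The values are placeholders; note that Designate requires absolute DNS names with a trailing dot.

```python
# Sketch of Designate v1-style request payloads (values are placeholders).

def domain_payload(name, email, ttl=3600):
    """Body for creating a zone ("domain") in the v1 API."""
    if not name.endswith("."):
        raise ValueError("DNS names in Designate are absolute (trailing dot)")
    return {"name": name, "email": email, "ttl": ttl}

def record_payload(name, rtype, data):
    """Body for creating a record inside a domain."""
    return {"name": name, "type": rtype, "data": data}

print(domain_payload("example.jp.", "hostmaster@example.jp"))
print(record_payload("www.example.jp.", "A", "203.0.113.10"))
```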
  54. 54. 56 OpenStack Cinder Block storage: ConoHa: NexentaStor(SDS) AppsCloud: NetApp
  55. 55. 57 Compute and Cinder (zfs): SSD. Toshiba enterprise SSD • A good balance of cost and performance • Excellent IOPS performance, low latency. Compute local SSD: the benefits of local SSD storage on compute nodes • Faster storage than Cinder boot • Easy to take online live snapshots of a VM instance • Fast VM deployment. ConoHa: the Compute service was modified to take online live snapshots of VM instances. http://toshiba.semicon-storage.com/jp/product/storage-products/publicity/storage-20150914.html
  56. 56. 58 NexentaStor zfs cinder: ConoHa cloud(Juno) Compute
  57. 57. 59 NetApp storage: GMO AppsCloud (Juno). If Glance and Cinder use the same NetApp clustered Data ONTAP storage, copies between OpenStack services can be offloaded to the NetApp side. • Create volume from a Glance image (requires that the image does not need conversion, e.g. qcow2 to raw) • Volume QoS limit: an important function for multi-tenant storage • Upper IOPS limit per volume
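The per-volume IOPS cap mentioned above is conceptually a rate limit on I/O requests. The token-bucket sketch below only illustrates that idea; it is not NetApp's or Cinder's actual enforcement code.

```python
# Conceptual token-bucket illustration of a per-volume IOPS cap.

class IopsLimiter:
    def __init__(self, iops_limit):
        self.iops_limit = iops_limit
        self.tokens = float(iops_limit)  # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        """Refill at iops_limit tokens/second; spend one token per I/O."""
        self.tokens = min(self.iops_limit,
                          self.tokens + (now - self.last) * self.iops_limit)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

lim = IopsLimiter(iops_limit=2)  # cap at 2 IOPS
results = [lim.allow(t) for t in (0.0, 0.0, 0.0, 1.0, 1.0, 1.0)]
print(results)  # [True, True, False, True, True, False]
```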
  58. 58. 60 OpenStack Ironic: Only AppsCloud: • Undercloud Ironic deploy • Multi-tenant Ironic deploy
  59. 59. 61 Ironic with undercloud: GMO AppsCloud (Juno). For compute server deployment: Kilo Ironic and all-in-one • Compute server: 10G boot • Cloud-init: network • Compute setup: Ansible. Undercloud Ironic (Kilo) uses a network and Ironic baremetal DHCP separate from the service baremetal compute Ironic (Kilo). (OOO seed server) Trunk allowed VLAN, LACP
  60. 60. 62 Ironic (Kilo) baremetal: GMO AppsCloud (Juno). Boot baremetal instance • baremetal server (with SanDisk Fusion ioMemory) • 1G x 4 bonding + tagged allowed VLAN • Cloud-init: network + lldp • Network: Cisco Nexus allowed-VLAN security. Ironic Kilo + Juno: fine • Ironic Python driver • whole-image write • Windows: OK
  61. 61. 63 Ironic network multi-tenant separation for Mitaka • https://wiki.openstack.org/wiki/Meetings/Ironic-neutron • Bare metal physical connectivity scenarios (supported and unsupported): https://docs.google.com/document/d/1a-DX4FQZoX1SdTOd9w_Ug6kCKdY1wfrDcR3SKVhWlcQ/view?usp=sharing • The supported scenarios are illustrated there (as of Liberty) • Rackspace's OnMetal implementation is also a special case in Liberty • Neutron cannot express trunk allowed VLANs (tagged) (in Liberty) • Waiting for Mitaka: https://etherpad.openstack.org/p/summit-mitaka-ironic • See also the ThinkIT article: https://thinkit.co.jp/article/8443 Series: OpenStack Summit Tokyo report, "Ironic latest trends: long-awaited multi-tenant support in sight; storage and operational automation also progressing" (2015/11/26), Mitsuhiro Shigematsu (NTT Software Innovation Center), Yuiko Takada (NEC)
  62. 62. 64 Ironic network multi-tenant separation: model • Ironic Neutron ML2 driver integration: https://blueprints.launchpad.net/nova/+spec/ironic-networks-support • Single port • LAG port (bonding) • MLAG port (LACP) • Trunk and multiple tagged VLAN or VXLAN (how serious is this?) • Only ML2 VLAN tunneling networks are supported • LinuxBridge ML2 VLAN tunnel compute • ovs ML2 VLAN tunnel compute, ovs ML2 VXLAN tunnel • In the GMO AppsCloud model, both undercloud Ironic and multi-tenant Ironic use: • MLAG port (LACP) • Trunk and multiple tagged VLANs + allowed VLANs • The allowed-VLAN list is the key to the multi-tenant security configuration
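For context, an ML2 VLAN tenant-network setup like the one named above is typically expressed in Neutron's ml2_conf.ini along these lines (the physical network name and VLAN range are illustrative):

```ini
# ml2_conf.ini: VLAN tenant networks with the LinuxBridge mechanism
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999   ; illustrative range
```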
  63. 63. 65 Ironic network: Rackspace OnMetal = GMO AppsCloud for Mitaka • VLAN-aware VMs: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms • Tagged VLANs pass through into the VM; apparently the same approach is planned for baremetal as well • Rackspace OnMetal • Practical implementation: https://github.com/rackerlabs/ironic-neutron-plugin • Product description: https://www.rackspace.com/knowledge_center/article/create-onmetal-cloud-servers • Information from a user's perspective: https://major.io/2015/08/21/using-systemd-networkd-with-bonding-on-rackspaces-onmetal-servers/ • Rackspace arrived at the same idea: bonding + tagged VLAN; almost the same implementation as ours
  64. 64. 66 • Service model: Public cloud by KVM • Network: 10Gbps wired (10GBase SFP+) • Network model: – Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only) – IPv6/IPv4 dualstack • LBaaS: LVS-DSR (original) • Public API – Provided the public API (v2 Domain) • Compute node: all SSD for booting OS – Without Cinder boot • Glance: provided • Cinder: SSD NexentaStor zfs (SDS) • Swift (shared Juno cluster) • Cobbler deploy on under-cloud – Ansible configuration • SaaS original service with keystone auth – Email, web, CPanel and WordPress OpenStack Juno: 2 service clusters, released • Service model: Public cloud by KVM • Network: 10Gbps wired (10GBase SFP+) • Network model: – L4-LB-NAT + Neutron ML2 LinuxBridge VLAN – IPv4 only • LBaaS: Brocade ADX L4-NAT-LB (original) • Public API – Provided the public API • Compute node: Flash cached or SSD • Glance: provided (NetApp offload) • Cinder: NetApp storage • Swift (shared Juno cluster) • Ironic on under-cloud – Compute server deploy with Ansible config • Ironic baremetal compute – Cisco Nexus for tagged VLAN module – ioMemory configuration
  65. 65. 67 OpenStack Swift: shared cluster
  66. 66. 68 Swift cluster (Havana to Juno upgrade) SSD storage: container/account server at every zone
  67. 67. 69 OpenStack Swift cluster (5 zones, 3 copies): two swift-proxy + keystone nodes (Xeon E3-1230 3.3GHz, 16GB memory) behind LVS-DSR and HAProxy (SSL); each of the 5 zones runs swift object servers (Xeon E5620 2.4GHz x 2 CPU, 64GB memory, SSD x 2) and swift account/container servers (Xeon E3-1230 3.3GHz).
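Conceptually, the ring places each object's three replicas on devices in distinct zones. The sketch below is a toy illustration of that constraint, not Swift's actual ring-builder algorithm (which assigns partitions ahead of time):

```python
# Toy replica placement: 3 replicas, each in a distinct zone (of 5).
import hashlib

DEVICES = [{"id": i, "zone": i % 5} for i in range(10)]  # 10 devices, 5 zones

def place(obj_name, replicas=3):
    """Rank devices by a stable hash, then greedily pick distinct zones."""
    ranked = sorted(
        DEVICES,
        key=lambda d: hashlib.md5(f"{obj_name}/{d['id']}".encode()).hexdigest(),
    )
    chosen, zones = [], set()
    for dev in ranked:
        if dev["zone"] not in zones:
            chosen.append(dev)
            zones.add(dev["zone"])
        if len(chosen) == replicas:
            break
    return chosen

picked = place("container/object-001")
print([(d["id"], d["zone"]) for d in picked])
```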
  68. 68. 70 Swift cluster: multi-auth and multi-endpoint. One shared object cluster (account/container servers in every zone) is fronted by multiple proxy + keystone pairs: Grizzly ConoHa, Havana AppsCloud (Havana to Juno upgrade), Juno ConoHa, Juno AppsCloud, and Juno Z.com.
  69. 69. 71 Part of the ceilometer log (request count)
  70. 70. 72 • Juno release swift 2.2 el6 (self-built: we managed to build it somehow) Swift: Havana to Juno upgrade: el6 RPMS build
  71. 71. 73 Swift: upgrades beyond Juno • Development since Kilo is verified only on Python 2.7 or later: whether it runs on Python 2.6 should be confirmed with functional tests before applying • Building packages with Python 2.7 is also under consideration: we update one redundant side at a time, so this looks safe (◎: leading candidate). Significance of running on Python 3.4: asyncio threads (△: not reflected in Swift) • What about Go Swift? Hummingbird swift (golang): https://github.com/openstack/swift/tree/feature/hummingbird/go The plugins we have built so far would have to be rewritten in Go (△: this is the painful part)
  72. 72. 74 Finally: GMO AppsCloud on Juno OpenStack was released on 10/27/2015. • Deployment of SanDisk Fusion ioMemory by Kilo Ironic on Juno OpenStack is also possible. • Compute servers were deployed by Kilo Ironic with an under-cloud all-in-one OpenStack; compute server configuration was deployed by Ansible. • Cinder and Glance were provided with the NetApp copy-offload storage mechanism. • LBaaS is a Brocade ADX NAT-mode original driver. • LinuxBridge Neutron mode gives the best performance without an L3 switch. On the other hand, Juno OpenStack ConoHa was released on 05/18/2015. • Designate DNS and a GSLB service were started on ConoHa. • Cinder storage is SDS, provided by NexentaStor zfs storage as a single volume type. • LBaaS is an LVS-DSR original driver. • ovs-VXLAN overlay Neutron mode gives a higher degree of freedom. • And the Z.com OEM OpenStack domain lives together in ConoHa.
  73. 73. 75 Fin.
  74. 74. 76 Develop OpenStack-related tools. A tool that creates Docker hosts (Golang). Developed a Vagrant provider for ConoHa; fixed a problem in Docker Machine and sent a pull request. https://github.com/hironobu-s/vagrant-conoha
  75. 75. 77 A CLI tool that handles ConoHa-specific APIs (Golang). Developed a plugin that enables saving media files to Swift (Object Store). Develop OpenStack-related tools. https://github.com/hironobu-s/conoha-iso https://wordpress.org/plugins/conoha-object-sync/
