1. GMO Internet operates multiple public cloud services using OpenStack including ConoHa public cloud and GMO AppsCloud.
2. They have a limited number of staff developing and operating OpenStack services across many clusters but must run a large number of OpenStack services.
3. They have upgraded their OpenStack installations over time from Diablo to Juno, expanding services from basic compute to block storage, object storage, load balancing, and more.
OpenStack Summit Tokyo - Know-how of Challenging Deploy/Operation NTT DOCOMO - Masaaki Nakagawa
DOCOMO MAIL is a 24/7 cloud mail system accessed by over 20 million people. The system stores users' mail archives in OpenStack Swift, with petabyte-scale capacity, deployed by NTT DATA.
We have been successfully operating this service since September 2014 without any downtime. In this session, we'll present the actual issues and challenges we have faced and overcome.
Here are some specific points we'd like to highlight.
* No service degradation, no downtime.
* Massive scale and still growing.
* Hundreds of servers operated by a few people.
OpenStack Korea 2015 first-half study group (DevOps): Installing OpenStack with a script, 2015-07-28 - jieun kim
※ This presentation was written by reviewing the shell script authored primarily by codetree of the DevOps team.
[OpenStack Korea Community Study Group, DevOps]
Second study session of the first half of 2015, DevOps Class
"Installing OpenStack Kilo with shell scripts - done in 10 minutes"
This is the material for the second, wrap-up presentation of the study group held at D2.
Presentation from OpenStack Summit Tokyo
Online video link is below.
https://www.openstack.org/summit/tokyo-2015/videos/presentation/approaching-open-source-hyper-converged-openstack-using-40gbit-ethernet-network
VPC Implementation In OpenStack Heat
a) CreateVPC == Create Virtual Network
b) CreateSubnet == Create Subnet in Virtual Network(VPC)
c) CreateInternetGateway == Get external network defined in the Project
d) AttachInternetGateway == Connect external network to routers in the Virtual Network(VPC)
e) CreateRouteTable == Create a router and attach to Virtual Network(VPC)
f) AssociateRouteTable == Attach subnet to router
g) CreateEIP == Attach floating ip to instance
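The a)–g) mapping above can be sketched as a simple lookup table. This is an illustrative Python sketch: the operation names on the right are descriptive labels for the Neutron-side actions, not exact Heat or Neutron function names.

```python
# Hypothetical mapping of AWS-style VPC calls (as handled by Heat)
# to the corresponding Neutron-side operations described above.
VPC_TO_NEUTRON = {
    "CreateVPC": "create_network",
    "CreateSubnet": "create_subnet",
    "CreateInternetGateway": "get_external_network",
    "AttachInternetGateway": "set_router_external_gateway",
    "CreateRouteTable": "create_router",
    "AssociateRouteTable": "add_router_interface",
    "CreateEIP": "associate_floating_ip",
}

def translate(vpc_calls):
    """Translate a sequence of VPC-style calls into Neutron operations."""
    return [VPC_TO_NEUTRON[call] for call in vpc_calls]

print(translate(["CreateVPC", "CreateSubnet", "CreateEIP"]))
# → ['create_network', 'create_subnet', 'associate_floating_ip']
```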
- What is Nova?
- Nova architecture
- How are instances spawned in OpenStack?
- Interaction of Nova with other OpenStack projects like Neutron, Glance and Cinder.
Compute 101 - OpenStack Summit Vancouver 2015 - Stephen Gordon
OpenStack Compute (Nova) has been a core component of OpenStack since the original Austin release in 2010. In the intervening years development has proceeded at a rapid pace, adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
A look at some of the ways available to deploy Postgres in a Kubernetes cloud environment, either in small scale using simple configurations, or in larger scale using tools such as Helm charts and the Crunchy PostgreSQL Operator. A short introduction to Kubernetes will be given to explain the concepts involved, followed by examples from each deployment method and observations on the key differences.
OpenStack: DevStack installation using VirtualBox & Ubuntu (Juno with Neutron) - Ian Choi
This slide briefly describes how to install DevStack Juno with Neutron using VirtualBox and Ubuntu.
The main differences from the two videos (http://youtu.be/zoi8WpGwrXM and http://youtu.be/1GgODv34E08) are: 1) Juno, not Icehouse, and 2) two NICs (NAT & host-only) are used in the Ubuntu virtual machine.
A lot of Internet of Things devices use Linux at their core, more so with the advent of DIY and Internet of Things projects. A lot of Raspberry Pi, BeagleBone and Tessel boards are out there with default settings, all connected to the internet and ready to be taken over. With the recent Dyn DNS attack, it is of prime importance to know how we can keep these endpoint devices secure and out of the hands of botnet hoarders and attackers. In this presentation Rabimba Karanjai will show how to harden the security on these endpoint devices, taking a Raspberry Pi as an example. He will explain different techniques with code examples, along with a toolkit made specifically for this demo, which will make devices considerably harder to compromise, and, even when they are compromised, will allow you to locate and detect the breach. After all, protecting the device ultimately protects us all (and prevents another DDoS).
Openstack summit walk DNSaaS 2015-0713 Summit LT - Naoto Gohko
First, we will introduce DNSaaS with OpenStack Designate.
We will cover what was announced about Designate at OpenStack Summit 2015 (Liberty) in Vancouver.
We will also talk about how to spend a Summit focused on the specific theme of DNS.
These are the presentation materials from the Japanese OCDET bare-metal computing meeting.
In "GMO AppsCloud" from GMO Internet, Inc., we modified the Nova bare-metal compute of OpenStack Havana to drive Ansible, installed the OS with Cobbler, and commercialized an environment that boots from the disk boot loader.
Janog36 ConoHa: Making GSLB - OpenStack Designate and PowerDNS - Naoto Gohko
GSLB, global server load balancing, is a technology that dispatches DNS requests to different servers. But the server appliances with these features are complex and expensive, so we tried to build one ourselves with open source software.
Designate is one of the components in OpenStack that provides DNSaaS services. It can register DNS records via RESTful APIs and can select among backend types, for example BIND, NSD, PowerDNS, etc.
In this session, we will present GSLB with Designate and PowerDNS.
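Since Designate exposes DNS records via RESTful APIs, a GSLB control script would build recordset bodies for the v2 API (POST /v2/zones/{zone_id}/recordsets). A minimal sketch of such a body builder, with illustrative zone and record values:

```python
import json

# Hypothetical helper: build the JSON body for Designate's v2 recordset call
# (POST /v2/zones/{zone_id}/recordsets). Field names follow the Designate v2
# API; the hostnames and IP addresses below are illustrative only.
def build_recordset(name, rtype, records, ttl=3600):
    """Return a request body for creating a DNS recordset."""
    if not name.endswith("."):
        name += "."  # Designate requires fully-qualified record names
    return {"name": name, "type": rtype, "ttl": ttl, "records": records}

# Two A records: a resolver picking either one spreads load across servers,
# which is the basic mechanism GSLB builds on.
body = build_recordset("www.example.com", "A", ["192.0.2.10", "192.0.2.11"])
print(json.dumps(body))
```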
About GMO Internet, Inc.
GMO Internet Group, headquartered in Tokyo, is a leading force in the Internet industry, offering one of the most comprehensive ranges of Internet services worldwide.
We provide public clouds called "ConoHa" and "GMO AppsCloud" as part of our services. Both are based on OpenStack.
Metal-k8s presentation by Julien Girardin @ Paris Kubernetes Meetup - Laure Vergeron
Julien Girardin presents metal-k8s, an opinionated Kubernetes distribution designed for bare-metal deployments. Julien explains why we chose certain Kubespray plugins over others for Zenko's needs of scalability and petabyte-scale storage over multiple public and private clouds.
Automated Application Management with SaltStack - inovex GmbH
SaltStack is a new system management platform that provides various automations for the lifecycle of systems (hardware/VMs). This makes it possible to trigger routines based on specific events using Salt Reactor. The event-based orchestration component of SaltStack recognizes, for example, the addition of new Salt minions (agents) to the Salt host inventory/database, the start of minions after the first system boot, the execution of any (distributed) commands (local or master-triggered), and much more. You can use this framework to provision newly created hosts/VMs with packages and configuration files, or to fully automate the rollout/deployment of new software releases and pre/post actions (DB backup, schema update, removal of temporary files, etc.).
Event: inovex Meetup Köln, 08.06.2016
Speaker: Arnold Bechtoldt
More tech talks: https://www.inovex.de/de/content-pool/vortraege/
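The event-to-action flow described above might be wired up like this (an illustrative sketch of a Salt Reactor mapping; the file paths and SLS name are assumptions, not from the talk):

```yaml
# /etc/salt/master.d/reactor.conf — map the minion-start event
# to a reactor SLS file (paths are illustrative)
reactor:
  - 'salt/minion/*/start':
      - /srv/reactor/provision_new_minion.sls

# /srv/reactor/provision_new_minion.sls — apply highstate to the
# minion that just started; data['id'] is the minion's ID from the event
# provision_new_minion:
#   local.state.apply:
#     - tgt: {{ data['id'] }}
```

With this mapping in place, a freshly booted minion fires `salt/minion/<id>/start` and is immediately provisioned without operator action.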
OpenStack Neutron Havana Overview - Oct 2013 - Edgar Magana
Presentation about OpenStack Neutron Overview presented during three meet-ups in NYC, Connecticut and Philadelphia during October 2013 by Edgar Magana from PLUMgrid
BBVA Bank on OpenStack
Due to unproven scalability and security concerns, enterprises take a 'wait and see' approach to open source deployments, much less OpenStack. Yet not only are these deployments feasible, they can also yield substantial multi-tenant efficiency, agility, speed, dynamism and security advantages over legacy frameworks. While a hybrid cloud approach is quite popular for agile services delivery, for some enterprise segments a private cloud is essential in order to comply with regulations.
In this session, we will explore how Banco Bilbao Vizcaya Argentaria SA (BBVA), a Spain-based global financial group, banks on OpenStack. BBVA has designed an automated, multi-tenant service cloud that provides:
Efficient, granular security: via a global policy framework from Nuage Networks
Agility: via utilization of KVM as a virtualization hypervisor
Speed: provisioning and delivery of services in near real-time via the Red Hat OpenStack distribution
Moreover, we show the integration of Neutron with external SDN overlay solutions in order to improve the networking and security functionality.
This will be an eye-opening session – you can bank on it! (Seguro que si!)
Quantum - Virtual networks for Openstacksalv_orlando
An overview of Quantum, the soon-to-be default OpenStack network service.
These slides introduce Quantum and its design goals, and discuss the API. They also address how Quantum relates to software-defined networking (SDN).
There are some issues with OpenStack multi-region mode, for example the lack of global quota control, resource utilization views and metering data, and replication of images/keypairs/security groups/volumes and L2/L3 networking across OpenStack deployments. OpenStack cascading is the best-matched solution to these issues in a multi-site, multi-region cloud.
This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions, and adds content on the services plugin and modular plugins, as well as an outlook on some Juno features like DVR, HA and IPv6 support.
CERN OpenStack Cloud Control Plane - From VMs to K8s - Belmiro Moreira
CERN is the home of the Large Hadron Collider (LHC), a 27km circular proton accelerator that generates petabytes of physics data every year. To process all this data, CERN runs an OpenStack Cloud (>300K cores) that helps scientists all around the world to unveil the mysteries of the Universe. The Infrastructure is also used to run all the IT services of the Organization.
Delivering these services, with high performance and reliable service levels has been one of the major challenges for the CERN Cloud engineering team. We have been constantly iterating the architecture and deployment model of the Cloud control plane.
In this presentation we will describe the different control plane architecture models that we have relied on over the years. Finally, we will describe all the work done to move the OpenStack Cloud control plane from VMs into a Kubernetes cluster, and report on our experience running this architecture at scale, its advantages and challenges.
What is OpenStack and the added value of IBM solutionsSasha Lazarevic
OpenStack has become the de facto standard for private cloud implementations. This is a presentation of OpenStack basics, with a conclusion that can be valuable to professional services. I recommend clients pay attention to IBM's value-added solutions like Cloud Manager and Cloud Orchestrator.
"Lesson learns from Japan cloud trend" explains the following:
- CloudStack Mascot History
- Japan SP / Academic cloud use cases
- Japan CloudStack Community
Tech Talk by Gal Sagie: Kuryr - Connecting containers networking to OpenStack... - nvirters
These are slides from the Tech Talk at http://www.meetup.com/openvswitch/events/226518209/
Synopsis
Kuryr is a new project under Neutron's big tent that makes Neutron networking available to Docker containers by means of a Docker plugin.
In this session Gal will introduce Kuryr and show how it provides networking for containers in plain Docker environments and in mixed Docker, OpenStack environments. He will also present Kuryr's roadmap and integration with networking models in other orchestration engines like Kubernetes and Docker
About Gal Sagie
Gal Sagie is an open source software architect at Huawei European Research Centre, focusing on OpenStack networking and container networking. He works on various projects in the community, like Dragonflow, OVN, Kuryr, and multisite/hybrid clouds in OpenStack, and blogs about anything SDN/NFV/OpenStack related at http://galsagie.github.io
End-to-end IoT solutions with Java and Eclipse IoTBenjamin Cabé
The IoT market is poised for exponential growth, but there are still lots of barriers that prevent building a real, open Internet of Things. Over the last years, Eclipse has been growing an ecosystem of open source projects for IoT that are used in real-world solutions, from smart gateways bridging sensors to the cloud, to device management infrastructures and home automation systems.
Java is a key enabler for IoT, and this presentation provides you with concrete examples of how to build end-to-end solutions with the Eclipse IoT Java stack and projects like Paho, Kura, SmartHome, Californium, OM2M, Eclipse SCADA, Concierge ... This session will give you the keys to build a scalable IoT solution on top of open source technology and open standards.
ConoHa cloud is based on OpenStack Juno, but the latest OpenStack is Ocata.
I released a meta-package that makes it easy to install the OpenStack Juno client in a Python 2.7 environment on ConoHa cloud (and Mikumo ConoHa), for its 4th birthday.
2015 0228 OpenStack swift; GMO Internet Services - Naoto Gohko
GMO Internet Inc. provides products built on OpenStack Swift under the ConoHa VPS brand and GMO AppsCloud. We discussed the differences between the physical configuration of OpenStack Swift at Rackspace and at ConoHa, and optimized the configuration.
In addition, we have an implementation that serves multiple products dual-headed, by running a swift-proxy for each service.
TechOYAJI 2014 tokyo summer LT; CentOS7 and RDO Icehouse OpenStack - Naoto Gohko
CentOS 7 is the OSS rebuild of RHEL 7, but we had problems installing RDO Icehouse OpenStack with Packstack.
This behavior is due to the version notation introduced in CentOS 7, "7.0.1406". Until then, CentOS used notation such as "6.5", which could be treated as a value with a decimal point, and the same was true of RHEL.
The string "7.0.1406" introduced in CentOS 7 cannot be treated as a number.
The confusion arose upstream in the Puppet community, which found the version numbering from the CentOS 7 development community difficult to parse.
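The parsing problem is easy to reproduce: a two-part version string parses as a number, while the three-part CentOS 7 string does not.

```python
# The old two-part CentOS/RHEL version ("6.5") parses as a float,
# but the CentOS 7 three-part string ("7.0.1406") does not.
def parses_as_number(version):
    try:
        float(version)
        return True
    except ValueError:
        return False

print(parses_as_number("6.5"))       # → True
print(parses_as_number("7.0.1406"))  # → False
```

Any tooling that compared OS versions numerically (as some Puppet code did at the time) broke on the new format.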
JOSUG2014 OpenStack 4th birthday party in Japan; the way of the OpenStack API Dragon - Naoto Gohko
JOSUG2014 OpenStack 4th birthday party in Japan:
the way of the OpenStack API Dragon.
We provide an OpenStack API on "GMO AppsCloud", known to be capable of efficiently hosting social game servers.
ER (Entity Relationship) Diagram for online shopping - TAE - Himani415946
https://bit.ly/3KACoyV
The ER diagram for the project is the foundation for building the project's database. The properties, datatypes, and attributes are defined by the ER diagram.
1.Wireless Communication System_Wireless communication is a broad term that i...JeyaPerumal1
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other form of electrical conductor.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
Multi-cluster Kubernetes Networking - Patterns, Projects and Guidelines - Sanjeev Rampal
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with focus on 4 key topics.
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
Openstack days taiwan 2016 0712
1. 1
~ Architecture of our public clouds ~
OpenStack Days Taiwan
Jul 12, 2016
Naoto Gohko (@naoto_gohko)
GMO Internet, Inc.
How GMO Internet is using OpenStack
for Public Cloud
Slide URL
http://www.slideshare.net/chroum/openstack-days-taiwan-2016-0712-public-cloud-arch
ConoHa public cloud (lang zh)
https://www.conoha.jp/zh/
ConoHa public cloud (lang en)
https://www.conoha.jp/en/
5. 5
Cloud service development team: (about 30 people)
– OpenStack Neutron team: 4 people
• Neutron driver / modification / engineering
– Cloud API development team: 5 people
• Public API validation program
• OpenStack modification / scheduler programming / keystone
– Cloud Infra. development team: 11 people
• Security engineering / glance driver / cinder driver / nova additional
extensions / construction of OpenStack infra.
– Application cloud service development team: 5 people
• Billing engineering / staff tools / GMO AppsCloud web GUI
Additional engineering team: many people (30 ~)
– QA Team / Server Engineering Team / GUI development Team
– Network Engineering Team / SaaS development Team
– CRM backend and billing Team
Cloud service development team: Now(2016)
6. 6
Cloud service development team: Office(2016) #1
Neutron Team
And
Cloud API Team
Cloud Infra. Team
And
AppsCloud Team
7. 7
Cloud service development team: Office(2016) #2
Neutron Team
And
Cloud API Team
Cloud Infra. Team
And
AppsCloud Team
8. 8
Limited number of people.
But, we have to run a lot of OpenStack
service clusters.
17. 18
OpenStack Swift cluster (5 zones, 3 copies) — diagram:
• LVS-DSR + HAProxy (SSL) load-balancer pairs in front
• swift proxy + keystone nodes: Xeon E3-1230 3.3GHz, 16GB memory; Xeon E5620 2.4GHz x 2 CPU, 64GB memory
• per zone (x5):
– swift object nodes: Xeon E3-1230 3.3GHz
– swift account / container nodes: Xeon E5620 2.4GHz x 2 CPU, 64GB memory, SSD x 2
18. 19
Swift cluster: multi-auth and multi-endpoint — diagram: a single shared pool of swift object and account / container nodes is fronted by several swift proxy + keystone pairs, one per service: Grizzly ConoHa and Havana AppsCloud (upgraded from Havana to Juno), then Juno ConoHa, Juno AppsCloud and Juno Z.com.
23. 24
Grizzly
• Quantum Network:
– It used the initial version of the Open vSwitch full-mesh GRE-VLAN
overlay network with the LinuxBridge hybrid
But
when the scale becomes large,
communication localizes to a specific node
of the GRE mesh tunnel
(with undercloud network (L2) problems)
(broadcast storm?)
OpenStack service: ConoHa(Grizzly)
24. 25
• Service XaaS model:
– KVM compute + Private VLAN networks + Cinder + Swift
• Network:
– 10Gbps wired(10GBase SFP+)
• Network model:
– IPv4 Flat-VLAN + Neutron LinuxBridge(not ML2) + Cisco Nexus L2 sw/port driver
– Brocade ADX L4-LBaaS original driver
• Public API
– Provided the public API
• Ceilometer (Billing)
• Glance : Provided(GlusterFS)
• Cinder : HP 3PAR(Active-Active Multipath original) + NetApp
• ObjectStorage : Swift cluster
• Bare-Metal Compute
– Modified cobbler bare-metal deploy driver
– Cisco Nexus switch bare-metal networking driver (L2 tenant NW)
OpenStack service: GMO AppsCloud(Havana)
25. 26
OpenStack service: GMO AppsCloud model — diagram: on each compute node, VM vNICs attach through taps to per-VLAN Linux bridges on the physical NIC, which connect to the VLAN networks and the public network.
Neutron LinuxBridge model (very fast; simple is best)
This cloud is optimized for game servers.
26. 27
Cisco Nexus L2 switch/port management driver (self-made)
• L2 resources are limited / switch CPU
– MAC addresses
– VLANs per network
– VLANs per port
Allowed VLANs on a trunked port are restricted to only the
VLANs used by LinuxBridge on VM/bare-metal
compute nodes.
– Bare metal: link aggregation port
– Port discovery using LLDP
• Cisco Nexus NX-OS
– Server:
LACP: port-channel
active-active link aggregation
fully redundant
Diagram: servers (active-active link aggregation) connect to dual-homed Nexus 2k FEXes and a Nexus 5k pair (vPC); compute and bare-metal compute nodes are managed by a switch/port API server over the Cisco Nexus fabric switch-management network and the OpenStack management network.
29. 31
Public API security and load balancing:
• LVS-DSR
• L7 reverse-proxy
• API validation wrapper
30. 32
Public API — diagram: the endpoint is an L7 reverse proxy in front of an API wrapper proxy (httpd + PHP, framework: FuelPHP) that performs OpenStack API input validation against the customer DB; behind it sit the Keystone, Nova, Neutron, Glance, Cinder and Ceilometer APIs, the Swift proxy, and a customer system API. The web panel (httpd, PHP) goes through the same path.
31. 33
Public API global network — diagram: from the Internet (the cloud), an active-standby LVS-DSR pair (elvs01/elvs02: LVS + heartbeat, VM x 2) forwards to HAProxy reverse proxies (api-reverse-proxy01/02, VM x 2), then to the PHP + httpd API wrappers (ext-api-wrapper01/02: keystone, nova, cinder, neutron, glance, account), and finally to the control nodes (control-nodes01/02: keystone, nova, cinder, neutron and glance APIs) on the OpenStack management network.
public API: step 1, step 2)
step 1) LVS-DSR (L4) receives the https (tcp/443) packet,
then forwards it to the api-reverse-proxy real IPs.
step 2) HAProxy holds the valid API ACL and backend server configurations.
If HAProxy allows POST "/v2.0/tokens", the request is passed to ext-api-wrapper0[12].
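The HAProxy ACL in step 2 might look like the following haproxy.cfg fragment. This is an illustrative sketch: the backend names, certificate path and addresses are assumptions, not the production configuration.

```
frontend api_front
    bind *:443 ssl crt /etc/haproxy/api.pem
    # allow only known OpenStack API calls, e.g. the Keystone token request
    acl is_tokens path /v2.0/tokens
    acl is_post   method POST
    use_backend api_wrappers if is_tokens is_post

backend api_wrappers
    balance roundrobin
    server ext-api-wrapper01 192.0.2.11:80 check
    server ext-api-wrapper02 192.0.2.12:80 check
```

Requests that match no ACL never reach the wrappers, so only validated API paths are exposed.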
32. 34
public API: step 3, step 4) (same diagram as the previous slide)
step 3) ext-api-wrapper0[12] is a PHP program.
The request URI, headers, and the JSON input values in the body are validated
by PHP, which then calls the real OpenStack API as the next step.
step 4) The OpenStack API whose input values have been checked is then executed.
35. 37
Multi-region user administration — diagram: API management + Keystone API run in Tokyo, Singapore and San Jose, each issuing tokens to users/tenants. The Tokyo user/tenant database is READ/WRITE; Singapore and San Jose hold READ-only copies fed by DB replication, and do not create or delete users.
Our customer base / user administration:
# User registration is possible in Japan only (R/W in Tokyo)
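Assuming a MySQL-backed keystone, the read-only behavior of the Singapore and San Jose replicas in the diagram could be enforced like this (an illustrative sketch, not the actual deployment):

```ini
# my.cnf on the Singapore / San Jose replicas (assumed MySQL backend)
[mysqld]
read_only = 1          ; reject writes from non-privileged users
relay_log = relay-bin  ; apply changes streamed from the Tokyo source
# the replication source itself is configured on each replica
# with CHANGE MASTER TO / CHANGE REPLICATION SOURCE TO
```

Writes (user create/delete) then succeed only against Tokyo, matching the "READ/WRITE in Tokyo only" policy on the slide.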
36. 38
OpenStack Juno: 2 service clusters released
Mikumo ConoHa / Mikumo Anzu
Mikumo = 美雲 = beautiful cloud
New Juno region released: 10/26/2015
37. 39
OpenStack Juno: 2 service clusters released
[ConoHa / Z.com cloud]
• Service model: public cloud by KVM
• Network: 10Gbps wired (10GBase SFP+)
• Network model:
– Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only)
– IPv6/IPv4 dual stack
• LBaaS: LVS-DSR (original)
• Public API: provided (v2 domain)
• Compute node: all SSD for booting OS, without Cinder boot
• Glance: provided
• Cinder: SSD NexentaStore ZFS (SDS)
• Swift (shared Juno cluster)
• Cobbler deploy on the under-cloud, Ansible configuration
• SaaS original services with keystone auth: email, web, CPanel and WordPress
[GMO AppsCloud]
• Service model: public cloud by KVM
• Network: 10Gbps wired (10GBase SFP+)
• Network model:
– L4-LB-NAT + Neutron ML2 LinuxBridge VLAN
– IPv4 only
• LBaaS: Brocade ADX L4-NAT-LB (original)
• Public API: provided
• Compute node: flash-cached or SSD
• Glance: provided (NetApp offload)
• Cinder: NetApp storage
• Swift (shared Juno cluster)
• Ironic on the under-cloud: compute server deploy with Ansible config
• Ironic bare-metal compute: Cisco Nexus for tagged VLAN module, ioMemory configuration
40. 42
NetApp storage: GMO AppsCloud(Havana/Juno)
If you use the same clustered Data ONTAP NetApp for both
Glance and Cinder storage, copies between OpenStack
services can be offloaded to the NetApp side.
• Create volume from Glance image
(provided the image does not require conversion, e.g. qcow2 to raw)
• Volume QoS limit: an important function of multi-tenant storage
• Upper IOPS limit per volume
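A cinder.conf backend section for this kind of setup might look like the following (an illustrative sketch using the standard NetApp driver options; hostname and credentials are placeholders):

```ini
# cinder.conf — assumed NetApp clustered Data ONTAP backend over NFS
[netapp-cdot]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = netapp.example.com
netapp_login = admin
netapp_password = secret
# NFS copy offload keeps "create volume from image" on the filer,
# avoiding a round trip through the cinder-volume host
netapp_copyoffload_tool_path = /etc/cinder/na_copyoffload_64
```

Per-volume IOPS caps are then applied with Cinder QoS specs associated to the volume type backed by this section.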
42. 44
Ironic with the under-cloud: GMO AppsCloud(Juno)
For compute server deployment:
Kilo Ironic, all-in-one
• Compute server: 10G boot
• Cloud-init: network
• Compute setup: Ansible
Under-cloud Ironic (Kilo):
it uses a separate network and DHCP from the
service bare-metal compute Ironic (Kilo).
(OOO seed server)
Trunk allowed VLANs, LACP
That's all from me.
From here, Gohko-san will explain the technologies used in our public clouds.
==============
Hi everyone.
My name is Naoto Gohko.
I am working in the cloud service development team of GMO Internet.
We are from a team at GMO Internet that focuses on developing services based on OpenStack.
And we are offering multiple public cloud services:
ConoHa cloud, Z.com cloud, GMO AppsCloud, Onamae cloud and VPS.
Across all the services we've launched so far,
we have 2,000 active physical nodes and over 100,000 VMs.
Development Team !!
I would like to talk about our current development team,
and the scale of the development department.
Neutron Team is 4 people.
Cloud API Team is 5 people.
Cloud Infra Team is 11 people.
Application cloud Team is 5 people.
There is Neutron Team and Cloud API Team.
And There is Cloud Infra Team and AppsCloud Team.
This photo was taken late at night.
The person working here just joined the company after graduating from university last year.
But he is a great guy:
he reworked the Cinder driver to speed it up.
We have a limited number of people,
but we have to run a lot of OpenStack service clusters.
And here,
we will talk about our service development history by OpenStack.
Our multiple OpenStack clusters operate across multiple products in our environment,
starting with the Diablo cluster.
Then we shared the OpenStack code in which we had fixed bugs, and provided it to our group company in Vietnam.
We've built many OpenStack clusters, such as Grizzly for the first VPS ConoHa, Havana for GMO AppsCloud,
and Juno for ConoHa, Z.com cloud and GMO AppsCloud,
and they are still in operation.
The Swift cluster is shared by every cluster.
It is a dark age for cloud suppliers.
Even the cloud cat is surprised!
The cost of operating multiple versions of OpenStack has increased,
and
it has become difficult to upgrade or add new features.
Managing multiple OpenStack sites is a headache for us.
=========
We have entered the dark side of being a cloud provider.
First, let me describe the Swift cluster.
For the Swift object nodes, we use ASUS servers with twelve HDDs each.
I chose high-clock (3.3 GHz) Xeon E3 CPUs.
The network nodes, such as LVS-DSR and the L7 proxy, use Supermicro MicroBlade servers.
These are also clock-oriented Xeon E3 nodes.
The account servers and container servers use dual-socket Xeon E5 models.
The SQLite database area is on SSD.
The swift-proxy nodes have to output logs for the Ceilometer agent, which handles billing.
The log area is HDDs in RAID 10.
This is a block diagram of the Swift object storage.
The load balancers are LVS-DSR and Layer 7 HAProxy.
The reason for using HAProxy is that we expect to later put a pure-HTTP caching node, such as Apache Traffic Server, behind it.
We upgraded the Swift environment from Havana to Juno with our own packages.
Juno is the last release that can run on Python 2.6;
for later versions, we need to consider what to do.
The OpenStack Swift cluster uses a 5-zone, 3-copy configuration.
As mentioned at the start, the Swift cluster is shared by more than one service.
As a result, the system has multiple API endpoints.
The multiple swift-proxy configurations are each connected to their own Keystone,
and each starts with a different reseller_prefix as its namespace.
The other nodes of the Swift cluster are completely shared.
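A minimal sketch of how two swift-proxy groups can share one cluster under different namespaces. The prefixes and role names below are illustrative, not the production values.

```ini
# proxy-server.conf for proxy group A (service A's Keystone)
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, _member_
# Accounts for this service become AUTH_A_<project_id>
reseller_prefix = AUTH_A

# proxy-server.conf for proxy group B (a different Keystone endpoint)
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, _member_
reseller_prefix = AUTH_B
```

Because the account namespaces never collide, the object, container, and account nodes behind the proxies can be shared by every service.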
I will explain how the configuration of our compute environments has changed over time.
This is the Diablo cluster.
It is still alive.
We use Nova virtualization with flat nova-network, Keystone, and Glance.
In the Grizzly release, we took on OVS-GRE overlay networking and IPv6/IPv4 dual stack with Quantum.
But we needed many, many fixes to the code.
As a result, we were not able to publish the API to end users.
The first OVS-GRE overlay network was one giant L2 network.
Grizzly's overlay configuration is a full mesh,
but trouble sometimes occurred in this GRE mesh of tunnels.
Game network traffic imposes a heavy load.
Current mobile games require both large data transfers and short-packet forwarding.
We use a Brocade appliance as LBaaS.
We use HPE 3PAR and NetApp FAS for Cinder.
This service was the first where we provided the OpenStack API as a public API to end users.
The Neutron LinuxBridge model is very fast;
simple is best.
This cloud is optimized for game servers.
On a trunked switch port, we allow only the VLANs actually used by LinuxBridge on that compute node.
By controlling the allowed VLANs dynamically,
the performance of the network switches can be used across more L2 networks.
So with Neutron LinuxBridge VLANs,
VM and nova-baremetal instances are handled almost the same way: we control the allowed VLANs on the trunk switch ports
to build the security of the tenant network.
Using this Python library for Cisco NX-OS,
we control the NX-OS switches; processing is serialized, since the API runs on the switch.
In the Havana nova-baremetal environment, the major processing steps were modified to be executed with Ansible.
In our Juno Ironic environment as well, we configure the network with link aggregation using Ansible.
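The switch-port control described above comes down to updating the allowed-VLAN list on a trunk port. A hypothetical sketch of building those NX-OS commands (the interface and function names are illustrative, not the actual driver):

```python
# Hypothetical helper that produces the NX-OS configuration lines for
# allowing or disallowing a tenant VLAN on a compute node's trunk port.
# A real driver would send these serially over the switch's API or SSH.

def trunk_vlan_commands(interface, vlan_id, add=True):
    """Return NX-OS config lines to (dis)allow a VLAN on a trunk port."""
    action = "add" if add else "remove"
    return [
        f"interface {interface}",
        f"switchport trunk allowed vlan {action} {vlan_id}",
    ]

cmds = trunk_vlan_commands("port-channel10", 1234)
```

Issuing the same commands for a VM host and for a baremetal node is what makes the tenant isolation uniform across both.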
This shows all the components and networks.
The differences in the Juno environment are that the OpenStack version is different
and that nova-baremetal has been replaced by Ironic.
I will explain the security and load balancing of the public API.
The public APIs have been running on the same system configuration since Havana:
a three-layer configuration of LVS-DSR (L4), reverse proxy (L7),
and an API wrapper program (L8).
Step 1) LVS-DSR (L4) receives the HTTPS (tcp/443) packets,
then forwards them to the API reverse proxy's real IPs.
Step 2) HAProxy holds the valid API ACLs and backend server configurations.
If HAProxy allows POST "/v2.0/tokens", the request is passed on to ext-api-wrapper0[12].
The Layer 7 reverse proxy checks permitted URIs against ACLs,
verifies methods, and applies request rate limits.
As a result, the API can be protected from DoS and DDoS attacks,
and the backend API servers can be kept in a secure configuration.
Step 3) ext-api-wrapper0[12] is a PHP program.
The request URI, headers, and JSON body input values are validated in PHP,
which then calls the real OpenStack API as the next step.
Step 4) The OpenStack API runs with input that has already been checked.
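The L7 layer in step 2 might look roughly like the HAProxy fragment below. This sketch only whitelists the token-issue call; the real configuration would enumerate an ACL per allowed API route, and all names and addresses are illustrative.

```
# Sketch of the HAProxy (L7) layer: deny anything that is not an
# explicitly whitelisted method + path, then hand off to the wrappers.
frontend public-api
    bind 192.0.2.10:443 ssl crt /etc/haproxy/api.pem
    acl is_tokens  path_beg /v2.0/tokens
    acl is_post    method POST
    http-request deny unless is_tokens is_post
    default_backend api_wrappers

backend api_wrappers
    balance roundrobin
    server ext-api-wrapper01 10.0.0.11:80 check
    server ext-api-wrapper02 10.0.0.12:80 check
```

Rejecting unknown routes at this layer is what keeps DoS traffic away from the backend OpenStack APIs.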
The last topic is the Juno clusters.
We will talk about ConoHa and AppsCloud.
First, ConoHa has three region clusters.
Supporting multiple regions was our first priority out of all the features on the roadmap.
The physical location of the servers means a lot to our users.
The data center locations we chose initially were Tokyo, Singapore, and San Jose.
We successfully built a multi-region OpenStack architecture across the three locations.
The system that manages both the service site and user information exists only in Japan.
Therefore, user registration was only available in Japan.
Alongside these, we built AppsCloud on tested Juno code.
The Juno AppsCloud environment could be built in a minimum of time.
On the left side is ConoHa;
on the right side is AppsCloud.
ConoHa is a low-price cloud.
AppsCloud is a high-performance cloud.
For Cinder, let me mention the differences between ConoHa and AppsCloud.
NexentaStor is a simple SDS product based on ZFS, running on an OpenIndiana system.
Through the collaboration between Dell and Nexenta, we could build this configuration at a low price while taking advantage of SSD performance.
The reason we use NetApp for Cinder is
that its copy-offload function provides high-speed volume creation from Glance images.
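For reference, the copy-offload path is enabled in the NetApp NFS driver's backend section of cinder.conf, roughly as below. The hostnames, credentials, and paths are placeholders.

```ini
# Sketch of a cinder.conf backend for NetApp FAS over NFS. With the
# copy-offload tool configured, image-to-volume copies are cloned on
# the filer instead of streamed through the cinder-volume host.
[netapp-fas]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = filer.example.com
netapp_login = admin
netapp_password = secret
nfs_shares_config = /etc/cinder/nfs_shares
# Enables the rapid Glance-image-to-volume copy offload path
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
```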
Let me also add some information about Ironic in the Juno environment.
In the Juno environment,
we use Ironic and Ansible for the initial construction of nodes.
Ironic sets up the all-in-one OpenStack node and the OS with a GUI,
and the final networking configuration,
bonding, VLANs, and the OpenStack software environment, is done with Ansible.
In AppsCloud, we offer baremetal nodes through Ironic as guest environments.
The security settings of the tenant network are applied by the switch/port driver described earlier.
By executing Ansible through Ironic, settings for PCI-E flash devices such as ioMemory are also possible.
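The post-provisioning step described above could be sketched as an Ansible play like the one below. The host group, template names, and variables are assumptions for illustration, not GMO's actual playbook.

```yaml
# Illustrative sketch: after Ironic deploys the node, configure the
# LACP bond and a tagged VLAN, then restart networking.
- hosts: baremetal_compute
  become: true
  tasks:
    - name: Configure LACP bond over the two 10G NICs
      template:
        src: ifcfg-bond0.j2
        dest: /etc/sysconfig/network-scripts/ifcfg-bond0
    - name: Add tagged VLAN interface on the bond
      template:
        src: ifcfg-bond0.vlan.j2
        dest: "/etc/sysconfig/network-scripts/ifcfg-bond0.{{ tenant_vlan }}"
      notify: restart network
  handlers:
    - name: restart network
      service:
        name: network
        state: restarted
```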
In this way, with Juno we were able to offer public services in different price ranges, at different costs.
In particular, for ConoHa as an OpenStack service, we will continue to improve the compatibility of the OpenStack API we provide for developers.