Improving POD Usage
in Labs, CI and Testing
Jack Morgan, Intel
Fatih Degirmenci, Ericsson
What is the problem
we are trying to solve?
As the OPNFV community grows in the number of
installers, feature projects, and test activities…
…we need to get more out of our current
OPNFV hardware resources.
OPNFV Community labs
Several community labs…
Geographically distributed…
Standard hardware configurations…
Multiple roles…
• CI Production (OPNFV releases)
• Testing
• Development
Organization       Location                 PODs
Linux Foundation   Portland, Oregon, USA    5
China Mobile       Beijing, China           1
Enea               Kista, Sweden            2
Ericsson           Rosersberg, Sweden       2
Huawei             Xi'an, China             1
Huawei             Santa Clara, CA, USA     1
Intel              Portland, Oregon, USA    14
Orange             Lannion, France          1
Orange             Paris, France            1
ZTE                Shanghai, China          1
CENGN              Ottawa, Canada           1
Nokia              Espoo, Finland           1
OOL                Okinawa, Japan           1
BII                Beijing, China           1
Flex               Milpitas, CA, USA        1
Total                                       34
• Pharos Specification
• Jump server – virtualized OpenStack/OPNFV installer
• Controller/Compute nodes – for high availability
• Network topology – LOM, Admin, Public, Private and
Storage networks
• Remote management – OpenVPN + SSH access
• Hardware requirements
• Intel Xeon processor
• Minimum 32GB RAM
• 1TB HDD – OS and additional software/tools
• 1TB HDD – CEPH object store
• 100GB SSD – CEPH journal
• Network requirements
• Option I: 4x1G control, 2x40G data
• 4x1G for LOM, Admin/PXE boot, control plane, storage
• 2x40G (10G) for data network
• Option II: 1x1G control, 2x40G (10G) data
• 1x1G for control via VLANs
• 2x40G (10G) for data/storage via VLANs
• Option III: 2x1G control, 2x10G data, 2x40G storage
• 1x1G LOM, Admin/PXE boot
• 2x10G for control, storage
• 2x40G (10G) for data network
Source: http://artifacts.opnfv.org/pharos/docs/pharos-spec.html
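As a rough illustration of how a lab owner might sanity-check new hardware against these minimums, here is a hedged Python sketch; the node dictionary layout and field names are assumptions, not an OPNFV-defined format.

```python
# Hypothetical sanity check of a node against the Pharos minimums above.
# The node dictionary layout is illustrative, not an OPNFV-defined format.

PHAROS_MINIMUMS = {
    "ram_gb": 32,        # minimum 32GB RAM
    "os_hdd_tb": 1,      # 1TB HDD for OS and additional software/tools
    "ceph_hdd_tb": 1,    # 1TB HDD for Ceph object store
    "ceph_ssd_gb": 100,  # 100GB SSD for Ceph journal
}

def meets_pharos_minimums(node):
    """Return a list of requirements the node fails to meet."""
    failures = []
    for key, minimum in PHAROS_MINIMUMS.items():
        if node.get(key, 0) < minimum:
            failures.append(f"{key}: have {node.get(key, 0)}, need >= {minimum}")
    return failures

node = {"ram_gb": 64, "os_hdd_tb": 1, "ceph_hdd_tb": 1, "ceph_ssd_gb": 100}
print(meets_pharos_minimums(node) or "node meets Pharos minimums")
```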
CI Production: static model
• To support the OPNFV release, each installer is currently allocated 2 PODs
• What is the utilization of these community resources?
Installer   POD 1           POD 2
Apex        intel-pod7      lf-pod1
Compass     intel-pod8      huawei-pod1
Fuel        ericsson-pod2   lf-pod2
Joid        intel-pod5      huawei-pod12
Daisy       pod1            pod2
Armada      pod1            pod2
What else do we have?
• Build servers
• Servers for virtual deployments (vPOD)
• Test result server
How do we make better use of the labs we
currently have to support growing
community needs?
• Replace static CI Production allocation with Dynamic Allocation
• Dynamic Allocation – How-to
[Diagram: POD Descriptor File + Scenario Descriptor File → OPNFV Installer]
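To make the flow concrete, below is a minimal sketch of dynamic allocation, assuming CI tracks a shared pool of PODs; the pool contents, class, and method names are illustrative, not an existing OPNFV API.

```python
# Minimal sketch of dynamic POD allocation, assuming CI tracks a shared
# pool of PODs. Pool contents and locking scheme are illustrative only.
import threading

class PodPool:
    def __init__(self, pods):
        self._free = set(pods)
        self._lock = threading.Lock()

    def acquire(self):
        """Hand out any free POD instead of a statically assigned one."""
        with self._lock:
            if not self._free:
                raise RuntimeError("no free PODs; job must wait in queue")
            return self._free.pop()

    def release(self, pod):
        with self._lock:
            self._free.add(pod)

pool = PodPool(["intel-pod5", "intel-pod7", "lf-pod1"])
pod = pool.acquire()          # e.g. a CI deploy job grabs whatever is free
print(f"deploying on {pod}")  # installer reads that POD's descriptor file
pool.release(pod)             # POD returns to the pool after the run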
POD Descriptor File
• Defines what hardware is in an OPNFV POD (see the example after this list)
• It is common and consistent between all installers
• Goal is for OPNFV installers to use these files natively
• Lab owners are working on creating these for OPNFV community labs
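For illustration only, a POD descriptor file might carry roughly the information below; the YAML layout and field names are assumptions, not the actual OPNFV schema.

```python
# Illustration only: roughly the information a POD descriptor file could
# carry. Field names are assumptions, not the actual OPNFV schema.
import yaml  # PyYAML

POD_DESCRIPTOR = """
pod:
  name: example-pod1
  jumphost:
    remote_management:
      type: ipmi
      address: 10.0.0.1
  nodes:
    - name: node1
      role: controller
      cpu: xeon
      memory_gb: 64
      disks: [{size_tb: 1, type: hdd}, {size_gb: 100, type: ssd}]
"""

pod = yaml.safe_load(POD_DESCRIPTOR)["pod"]
print(f"{pod['name']}: {len(pod['nodes'])} node(s) defined")
```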
POD Descriptor File Converter
• Converts POD descriptor files into installer-specific formats (see the sketch below)
• This effort is in progress and targeted for completion by the end of June
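A minimal sketch of the conversion idea, assuming the illustrative descriptor shape above; the output layout and the installer name are stand-ins, not any real installer's input format.

```python
# Minimal sketch of the converter idea: read the common POD descriptor
# and emit an installer-specific document. The output layout below is a
# stand-in, not any real installer's input format.
import yaml

# Same shape as the descriptor parsed in the previous sketch.
pod = {"name": "example-pod1",
       "nodes": [{"name": "node1", "role": "controller"},
                 {"name": "node2", "role": "compute"}]}

def convert_for_installer(pod, installer):
    """Translate the common descriptor into one installer's format."""
    if installer == "example-installer":  # hypothetical target installer
        return {"nodes": [{"hostname": n["name"], "profile": n["role"]}
                          for n in pod["nodes"]]}
    raise NotImplementedError(f"no converter for {installer}")

print(yaml.safe_dump(convert_for_installer(pod, "example-installer")))
```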
What about the rest of the hardware resources?
• Automate setup, configuration, management
• Infrastructure as Code – use CM tooling, e.g., Ansible (see the sketch after this list)
• Utilize containers
• Dynamically bring up/tear down resources
• Use the infra OPNFV puts together!
• OPNFV Cloud
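As one hedged example of the Infrastructure-as-Code bullet above, a provisioning step could shell out to Ansible; the playbook and inventory paths here are placeholders, and running it requires Ansible to be installed.

```python
# Hypothetical wrapper that provisions a POD with Ansible as part of
# dynamic bring-up. Playbook and inventory paths are placeholders.
import subprocess

def provision(pod_name):
    """Run an Ansible playbook against one POD's inventory."""
    result = subprocess.run(
        ["ansible-playbook",
         "-i", f"inventories/{pod_name}.ini",  # placeholder inventory path
         "playbooks/provision-pod.yml"],       # placeholder playbook
        capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"provisioning {pod_name} failed:\n{result.stderr}")

if __name__ == "__main__":
    provision("example-pod1")  # requires Ansible on the PATH
```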
Conclusions
• We need to follow basic principles and apply best practices
• Have a clear strategy and vision to provide the resources needed for
current and future use cases
• Increase resource utilization by allocating and using PODs dynamically
• Take part in solving problems by providing feedback, contributing, and
using what OPNFV Infra provides
Questions?
Thank you for attending
