
Improving POD Usage in Labs, CI and Testing


Fatih Degirmenci, Ericsson; Jack Morgan, Intel

The OPNFV community relies on our community labs, CI and testing projects to ensure we release quality code. The current strategy for using hardware resources in OPNFV community labs will not be able to sustain the project's current growth, and new strategies need to be implemented to make room for new OPNFV projects. The presenters will look at the current lab usage model and discuss improvements already being worked on: in OPNFV community labs, through the POD descriptor file; in our CI process, through Dynamic CI, Cross Community CI and other initiatives; and in our testing projects, through better use of hardware resources and its importance in the release process. The presenters will also show current tools used to track usage, such as the Bitergia dashboard.



  1. Improving POD Usage in labs, CI and testing. Jack Morgan, Intel; Fatih Degirmenci, Ericsson
  2. What is the problem we are trying to solve? As the OPNFV community grows in the number of installers, feature projects, and test activities, we need to get more out of our current OPNFV hardware resources.
  3. OPNFV Community labs
     Several community labs… Geographically distributed… Standard hardware configurations… Multiple roles…
     • CI Production (OPNFV releases)
     • Testing
     • Development

     Organization      Location               PODs
     Linux Foundation  Portland, Oregon, USA   5
     China Mobile      Beijing, China          1
     Enea              Kista, Sweden           2
     Ericsson          Rosenburg, Sweden       2
     Huawei            Xi'an, China            1
     Huawei            Santa Clara, CA, USA    1
     Intel             Portland, Oregon, USA  14
     Orange            Lannion, France         1
     Orange            Paris, France           1
     ZTE               Shanghai, China         1
     CENGN             Ottawa, Canada          1
     Nokia             Espoo, Finland          1
     OOL               Okinawa, Japan          1
     BII               Beijing, China          1
     Flex              Milpitas, CA, USA       1
     Total                                    34
  4. Pharos Specification
     • Jump server – virtualized OpenStack/OPNFV installer
     • Controller/Compute nodes – for high availability
     • Network topology – LOM, Admin, Public, Private and Storage networks
     • Remote management – OpenVPN + SSH access
     Hardware requirements
     • Intel Xeon processor
     • Minimum 32 GB RAM
     • 1 TB HDD – OS and additional software/tools
     • 1 TB HDD – Ceph object store
     • 100 GB SSD – Ceph journal
  5. Network requirements
     • Option I: 4x1G control, 2x40G data
       • 4x1G for LOM, Admin/PXE boot, control plane, storage
       • 2x40G (10G) for data network
     • Option II: 1x1G control, 2x40G (10G) data
       • 1x1G for control via VLANs
       • 2x40G (10G) for data/storage via VLANs
     • Option III: 2x1G control, 2x10G data, 2x40G storage
       • 1x1G LOM, Admin/PXE boot
       • 2x10G for control, storage
       • 2x40G (10G) for data network
     Source: http://artifacts.opnfv.org/pharos/docs/pharos-spec.html
  6. CI Production: static model
     • To support an OPNFV release, each installer is currently allocated 2 PODs
     • What is the utilization of these community resources?

     Installer  POD            POD
     Apex       intel-pod7     lf-pod1
     Compass    intel-pod8     huawei-pod1
     Fuel       ericsson-pod2  lf-pod2
     Joid       intel-pod5     huawei-pod12
     Daisy      pod1           pod2
     Armada     pod1           pod2
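The utilization question above can be made concrete with a small sketch. Assuming per-POD CI job durations were available (for example, exported from Jenkins history or the Bitergia dashboard; the records below are made up for illustration), a busy-time fraction per POD could be computed like this:

```python
from datetime import timedelta

# Hypothetical CI job records: (pod, job duration).
# Real data would come from Jenkins or the Bitergia dashboard.
jobs = [
    ("intel-pod7", timedelta(hours=4)),
    ("intel-pod7", timedelta(hours=6)),
    ("lf-pod1",    timedelta(hours=5)),
]

def utilization(jobs, window=timedelta(days=1)):
    """Return busy-time as a fraction of wall-clock time per POD."""
    busy = {}
    for pod, duration in jobs:
        busy[pod] = busy.get(pod, timedelta()) + duration
    # timedelta / timedelta yields a plain float ratio
    return {pod: t / window for pod, t in busy.items()}

print(utilization(jobs))
```

A POD sitting near zero in such a report is a candidate for returning to a shared pool rather than staying statically assigned.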
  7. What else do we have?
     • Build servers
     • Servers for virtual deployments (vPOD)
     • Test result server
  8. How do we make better use of the labs we currently have to support growing community needs?
  9. • Replace static CI Production allocation with Dynamic Allocation
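A minimal sketch of what dynamic allocation means in practice: instead of a fixed installer-to-POD mapping, each CI job borrows whichever POD is free from a shared pool and returns it when the run finishes. The `PodPool` class below is illustrative, not an actual OPNFV component:

```python
import threading

class PodPool:
    """Sketch of dynamic POD allocation: CI jobs borrow a POD for one
    run and return it, instead of holding a static assignment."""

    def __init__(self, pods):
        self._free = list(pods)
        self._lock = threading.Lock()  # concurrent CI jobs share the pool

    def acquire(self):
        """Hand out any free POD, or None if all are busy."""
        with self._lock:
            return self._free.pop() if self._free else None

    def release(self, pod):
        """Return a POD to the pool for the next job."""
        with self._lock:
            self._free.append(pod)

pool = PodPool(["intel-pod5", "intel-pod8", "ericsson-pod2"])
pod = pool.acquire()   # e.g. a Fuel CI run grabs whichever POD is free
print("deploying on", pod)
pool.release(pod)      # the POD becomes available to the next installer
```

The same pool can serve all installers, so idle "reserved" PODs stop existing by construction.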
  10. • Dynamic Allocation: how it works
      POD Descriptor File + Scenario Descriptor File → OPNFV Installer
  11. POD Descriptor File
      • Defines what hardware is in an OPNFV POD
      • Common and consistent across all installers
      • Goal is for OPNFV installers to use these files natively
      • Lab owners are working on creating these for OPNFV community labs
      POD Descriptor File Converter
      • Converts POD descriptor files into installer-specific formats
      • This effort is in progress and targeted to be done by end of June
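The converter idea can be sketched as follows. The real POD descriptor files are maintained in the Pharos project; the dictionary layout and field names below are hypothetical stand-ins for illustration, not the actual schema. The function simply flattens the common descriptor into a key/value shape one particular installer might consume:

```python
# Hypothetical generic POD descriptor (illustrative fields only;
# the real files and their schema live in the Pharos project).
pod_descriptor = {
    "name": "intel-pod8",
    "jumphost": {"ip": "10.10.100.2"},
    "nodes": [
        {"name": "node1", "role": "controller", "mac": "aa:bb:cc:dd:ee:01"},
        {"name": "node2", "role": "compute",    "mac": "aa:bb:cc:dd:ee:02"},
    ],
}

def to_installer_format(pod):
    """Sketch of a converter: flatten the common descriptor into the
    flat key/value layout a specific installer expects."""
    out = {
        "pod_name": pod["name"],
        "jumphost_ip": pod["jumphost"]["ip"],
    }
    for i, node in enumerate(pod["nodes"], start=1):
        out[f"node{i}_role"] = node["role"]
        out[f"node{i}_mac"] = node["mac"]
    return out

print(to_installer_format(pod_descriptor))
```

One converter per installer keeps the descriptor itself installer-neutral, which is the point of the shared format: lab owners describe hardware once, and each installer derives its own view.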
  12. What about the rest of the hardware resources?
      • Automate setup, configuration and management
      • Infrastructure as Code – use CM tooling, e.g. Ansible
      • Utilize containers
      • Dynamically bring up/tear down resources
      • Use the infra OPNFV puts together!
      • OPNFV Cloud
  13. Conclusions
      • Follow basic principles and apply best practices
      • Have a clear strategy and vision to provide the resources needed for current and future use cases
      • Increase resource utilization by allocating and using resources dynamically
      • Take part in solving problems by providing feedback, contributing, and using what OPNFV Infra provides
  14. Questions? Thank you for attending
