Tim Rozet, Red Hat, Feng Pan, Red Hat
This session will explore the challenges and lessons learned in integrating accelerated dataplanes into OPNFV deployments. More specifically, the talk will focus on FD.io (VPP) and OVS-DPDK integration into Apex, including the different configuration options, platform requirements, performance tuning, and deployment challenges. The talk will also provide context on how OpenStack functions differently with these types of dataplanes, and how integration with the OpenDaylight controller works.
Testing, CI Gating & Community Fast Feedback: The Challenge of Integration Pr... – OPNFV
Jose Lausuch, Ericsson, Nikolas Hermanns, Ericsson
How can we make sure that new code in OPNFV does not break or stop CI?
How can we ensure quick feedback for each patch-set?
With the new way to snapshot a virtual deployment, it is now possible to get virtual clouds up and running in about 2 minutes. In addition, thanks to low disk/CPU consumption and isolation of the networking, a very high number of virtual deployments can co-exist on the same bare-metal server.
How Many Ohs? (An Integration Guide to Apex & Triple-o) – OPNFV
Dan Radez, Red Hat, Tim Rozet, Red Hat
The OPNFV ecosystem is made up of projects that need to integrate with each other. Project Apex uses Triple-o under the covers, and most people need some assistance to integrate with it.
Come and spend a session with the Apex development team learning the ins and outs of Triple-o.
In this session, participants will learn about the deployment process that runs when an Apex/Triple-o deployment is executed, how to assign services to nodes, and how to generate networking configurations within Triple-o to successfully integrate and deploy a new component in OpenStack.
Come learn how to untangle the learning curve of integrating and using Triple-o, and simplify your future development and deployment endeavors with newfound intimate knowledge of the Apex & Triple-o platform.
Trinath Somanchi, NXP, Prasad Gorja, NXP
Tacker is an OpenStack community project implementing the VNFM and NFVO modules of the ETSI NFV E2E architecture. Moving forward, to make VNFs first-class citizens in the NFV world, more capabilities are being added to the VNFM, such as enhanced service assurance, network-service-level VNF forwarding graphs, and multi-site VNF management. Tacker is now advancing with new features while aligning with the ETSI NFV E2E architecture to provide best-in-class services for telcos.
This session gives an overview of the new features proposed for the Pike release.
Demo how to efficiently evaluate NFVI performance by leveraging OPNFV testi... – OPNFV
Liang Gao, Huawei, Trevor Cooper, Intel
NFV environments are highly flexible and this introduces unique challenges for testing performance of NFVI and Network Services. This presentation introduces OPNFV performance test projects and explains their role as part of the testing ecosystem. Examples from three performance testing categories will be demonstrated showing test results and their interpretation. Test cases discussed will include data-path performance, live migration performance and storage performance.
Run OPNFV Danube on ODCC Scorpio Multi-node Server - Open Software on Open Ha... – OPNFV
Zhiqiang Yu, China Mobile, Huabin Tang, China Mobile
The Open Data Center Committee (ODCC) was co-founded by Baidu, Tencent, Alibaba, China Telecom, China Mobile, Intel, and the China Academy of Information and Communications Technology (CAICT). It is a non-profit industrial organization focused on researching open hardware, such as servers and data centers, and open network technologies, to meet the growing demand for hardware in the Chinese market.
The Scorpio Multi-node Server is an ODCC project sponsored by China Mobile. It is a 4U server chassis holding up to 8 compute nodes or 4 storage nodes, and it can also hold a mixture of node types, such as 4 compute nodes and 2 storage nodes. Compared with traditional ATCA or blade servers, the Multi-node Server's advantages include:
1. It is easier and cheaper to extend.
2. It offers more choices for combining compute and storage nodes.
3. It is easier for engineers to maintain.
4. It achieves even higher density.
5. A 4U chassis is more flexible than a 10–14U blade server.
OPNFV develops an integrated and tested open source platform that can be used to build NFV functionality. We are running the OPNFV Colorado release smoothly on the Scorpio Multi-node Server and will try the recent Danube release on it in China Mobile’s NovoNet (next-generation network) laboratory.
This presentation will introduce how OPNFV and the Scorpio Multi-node Server fit together: a fully open implementation of open software on open hardware.
We are working on KVM enhancements for NFV as a collaborative development project in OPNFV, focusing on three key features: minimal Interrupt latency variation, inter-VM (Virtual Machine) communication, and fast live migration. In this presentation, we introduce and provide an update on the project, and how we plan to work with the upstream KVM project.
Minimal interrupt latency variation is required for data plane VNFs to achieve deterministic execution. We present an update demonstrating how hardware and software enhancements can help reduce latency variations.
We evaluate and compare the options for inter-VM communication (e.g. ivshmem, vhost-user, VMFUNC, etc.) in terms of performance, interface/API, usability/programming model, security, and maintenance.
Finally, we provide an update on fast live migration, including improvements with time to co
My network functions are virtualized, but are they cloud-ready? – OPNFV
Ulas Kozat, Huawei, Yaoguang Wang, Huawei
In the first phase of the telco-cloud vision, physical network functions were targeted for virtualization and became Virtual Network Functions (VNFs) decoupled from specific hardware platforms. As we dive into the second phase of the cloud era, the core need is to provide VNF implementations that can take advantage of what the cloud has to offer in terms of utility-based computing (a.k.a. scaling), availability, data durability, etc. To this end, we have been developing a VNF Performance Modeling framework for automatic characterization of a particular VNF implementation in terms of its cloud-readiness and its bottlenecks towards cloud-readiness. We will present the details of our performance modeling framework and show its utility on existing open source VNF implementations. The next frontier of the telco-cloud vision is to develop cloud-native network functions and services. Thus, in the last part of our talk, we will cover the future evolution of the framework and discuss the needs, requirements, and potential metrics for evaluating the cloud-nativeness of network functions.
Software-defined migration: how to migrate a bunch of VMs and volumes within a... – OPNFV
Kentaro Matsumoto, KDDI Corporation, Hyde Sugiyama, Red Hat, Inc
As a telecom carrier, we at KDDI have been managing thousands of physical servers running various kinds of workloads. In operating such a huge environment, we are frequently required to shut down servers for maintenance, but it is not easy to negotiate downtime with our tenant users. To make this easier, we are developing a framework called "Zone Migration" using the OpenStack project "Watcher". "Zone Migration" makes it possible to migrate tenants’ workloads from the compute nodes and storage devices we want to maintain (the source zone) to new, empty ones (the destination zone) efficiently, automatically, and with minimum downtime.
The following requirements are addressed:
- A large number of VMs and volumes should be migrated within a limited time frame
- Operations should be automated, but should also allow manual control
- The time and load of migration should be kept under control so that tenants’ systems are not affected
We are proceeding with the project in cooperation with NEC and Red Hat, and developing this structure on Red Hat OpenStack Platform.
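The Zone Migration flow described above, draining workloads from a source zone into a destination zone while bounding how many migrations run at once, can be sketched roughly as follows. This is an illustrative sketch only; the function and parameter names are hypothetical and this is not the actual Watcher API.

```python
# Illustrative sketch of a zone-drain scheduler: move VMs from a source
# zone to a destination zone with a cap on concurrent migrations, so
# tenant workloads are not all disturbed at once. All names here are
# hypothetical; the real implementation builds on OpenStack Watcher.

def migrate_zone(vms, destinations, max_parallel=2):
    """Return batches of (vm, destination) migration actions.

    vms           -- list of VM identifiers in the source zone
    destinations  -- list of target hosts in the destination zone
    max_parallel  -- how many migrations may run at the same time
    """
    batches = []
    for start in range(0, len(vms), max_parallel):
        batch = []
        for i, vm in enumerate(vms[start:start + max_parallel]):
            # Spread VMs round-robin over the destination hosts.
            dest = destinations[(start + i) % len(destinations)]
            batch.append((vm, dest))
        batches.append(batch)
    return batches

plan = migrate_zone(["vm1", "vm2", "vm3", "vm4", "vm5"],
                    ["hostA", "hostB"], max_parallel=2)
# Each inner list runs concurrently; batches run one after another, which
# keeps migration load bounded and lets an operator pause between batches
# for manual control.
```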
In this talk, Tim Bird will discuss the recent status of Linux with regard to embedded systems. This will include a review of the last year's worth of mainline kernel releases, as well as topic areas specifically related to embedded, such as boot-up time, security, system size, etc. Tim will also present recent and planned work by the Core Embedded Linux Project of the Linux Foundation, and discuss the current status of Linux in various markets and fields. Tim will go over current areas of work, and discuss remaining challenges faced by Linux in embedded projects.
Learning from ZFS to Scale Storage on and under Containers – inside-BigData.com
Evan Powell presented this deck at the MSST 2017 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io
and
http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Control Your Network ASICs, What Benefits switchdev Can Bring Us – HungWei Chiu
In this slide deck, I will introduce what switchdev is and what problem it solves. To this day, most hardware switch application-specific integrated circuits (ASICs) can only be controlled through the vendor's proprietary binary SDK, which is inconvenient for system administrators and developers. switchdev was designed to break this chip-vendor lock-in: with its help, we can develop a general solution for hardware switch chips and remove the dependence on the vendor's binary-blob SDK.
In other words, the Linux kernel can now communicate directly with the vendor's ASIC, and software programmers and system administrators can easily control that ASIC to provide more flexible, powerful, and programmable network functions.
Challenges in testing for composite VIM platforms – OPNFV
Jose Lausuch, Ericsson
Can I use OPNFV test frameworks on non-OPNFV deployments?
What limitations do they have? What if I have a different VIM than OpenStack? What about Kubernetes (K8s)? We need solutions that address next-generation telco needs.
Monitoring Large-scale Cloud Infrastructures with OpenNebula – NETWAYS
Efficient monitoring is crucial when managing your cloud infrastructure. The metrics collected by OpenNebula can be used to trigger automatic scaling, or to quickly detect failures and automatically restart virtual machines. During this talk, I will show how OpenNebula can be used to efficiently monitor thousands of virtual machines at a sub-one-minute interval. I will show how OpenNebula can be enhanced and optimized, and how different metrics collection tools such as Ganglia and Host-sFlow can be used with OpenNebula to monitor large-scale cloud infrastructures.
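Polling thousands of VMs inside a sub-one-minute interval without load spikes usually means staggering the probes evenly across the interval rather than firing them all at once. A minimal sketch of that scheduling arithmetic (a hypothetical helper, not OpenNebula code):

```python
# Sketch: spread N monitoring probes evenly over a polling interval so
# the collector is not hit by N simultaneous reports. This is a
# hypothetical helper for illustration, not part of OpenNebula itself.

def probe_offsets(num_hosts, interval_seconds=60):
    """Return the start offset (in seconds) for each host's probe."""
    spacing = interval_seconds / num_hosts
    return [round(i * spacing, 3) for i in range(num_hosts)]

offsets = probe_offsets(4, interval_seconds=60)
# With 4 hosts, probes fire at 0s, 15s, 30s and 45s of each 60s cycle;
# with thousands of hosts the same spacing keeps collector load flat.
```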
Deploy TOSCA Network Functions Virtualization (NFV) Workloads in OpenStackSahdev Zala
This talk was given at the OpenStack Austin Summit 2016 and demonstrates how TOSCA Network Functions Virtualization (NFV) workloads can be deployed in an OpenStack cloud.
Supercomputing by API: Connecting Modern Web Apps to HPC – OpenStack
Audience Level
Intermediate
Synopsis
The traditional user experience for High Performance Computing (HPC) centers around the command line, and the intricacies of the underlying hardware. At the same time, scientific software is moving towards the cloud, leveraging modern web-based frameworks, allowing rapid iteration, and a renewed focus on portability and reproducibility. This software still has need for the huge scale and specialist capabilities of HPC, but leveraging these resources is hampered by variation in implementation between facilities. Differences in software stack, scheduling systems and authentication all get in the way of developers who would rather focus on the research problem at hand. This presentation reviews efforts to overcome these barriers. We will cover container technologies, frameworks for programmatic HPC access, and RESTful APIs that can deliver this as a hosted solution.
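A hosted RESTful gateway of the kind described can reduce job submission to a single HTTP call. The sketch below only builds such a request payload; the endpoint shape and every field name are hypothetical, not any specific facility's API.

```python
import json

# Sketch of what programmatic HPC job submission could look like through
# a RESTful gateway. The field names below (and the implied
# POST /v1/jobs endpoint) are hypothetical, for illustration only.

def build_job_request(script, cores, walltime_minutes, container_image=None):
    """Build the JSON body for a hypothetical job-submission call."""
    body = {
        "script": script,
        "resources": {"cores": cores, "walltime_minutes": walltime_minutes},
    }
    if container_image:
        # Containers paper over differences in each facility's software stack.
        body["container_image"] = container_image
    return json.dumps(body, sort_keys=True)

req = build_job_request("run_simulation.sh", cores=64,
                        walltime_minutes=120,
                        container_image="docker://science/sim:1.0")
```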
Speaker Bio
Dr. David Perry is Compute Integration Specialist at The University of Melbourne, working to increase research productivity using cloud and HPC. David chairs Australia’s first community-owned wind farm, Hepburn Wind, and is co-founder/CTO of BoomPower, delivering simpler solar and battery purchasing decisions for consumers and NGOs.
Faster, Higher, Stronger – Accelerating Fault Management to the Next Level – OPNFV
Yujun Zhang, ZTE Corporation, Carlos Goncalves, NEC
Fault management is a component that allows operations teams to monitor, detect, isolate and automate the recovery of faults. With an efficient fault management system, countermeasures can negate the effects of any deployment faults, avoiding bad user experiences or violation of service-level agreements (SLAs). The OPNFV Doctor project has been developing fault management features that increase resiliency for cloud-based mobile platforms and provide system integration.
The OPNFV Doctor team continues to improve its framework, making fault management not only more reliable but also faster, to satisfy telco requirements. The 4G mobile system demonstrated in the OpenStack Summit Barcelona keynote already featured double-digit-millisecond fault notification. The team has identified scalability issues in and between relevant OpenStack projects and in conjunction with other open-source software. We will share performance figures and explain how we continuously profile and red-flag unexpected results (e.g. performance regressions). Finally, we will present solutions to make the overall OpenStack-based fault management framework even faster.
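The "continuously profile and red-flag unexpected results" step can be approximated with a simple statistical gate: flag any new latency sample that sits more than k standard deviations above the historical mean. A sketch of that idea (a hypothetical helper, not the actual Doctor tooling):

```python
import statistics

# Sketch: red-flag a fault-notification latency measurement that is an
# outlier versus the historical baseline. Hypothetical helper for
# illustration, not the actual OPNFV Doctor profiling code.

def is_regression(baseline_ms, new_sample_ms, k=3.0):
    """True if new_sample_ms exceeds mean(baseline) + k * stdev(baseline)."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    return new_sample_ms > mean + k * stdev

history = [42.0, 45.0, 41.0, 44.0, 43.0]   # past notification times, ms
is_regression(history, 95.0)   # a clear outlier gets flagged
is_regression(history, 46.0)   # within the normal spread, not flagged
```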
VSPERF: Benchmarking the Network Data Plane of NFV vDevices and vLinks – OPNFV
Performance of virtual devices (vswitches, vforwarders, VNFs) and virtual connectivity (VNF-to-NIC, VNF-to-VNF, NIC-to-NIC) is a key consideration for any NFV design and infrastructure – both the methodology of benchmarking deterministic performance and the actual test results and their interpretation. The OPNFV VSPERF project addresses this important domain. This session reviews and combines VSPERF results with the results of a Cisco internal benchmarking project that evaluates best-of-breed NFV open-source and commercial technologies. The talk includes lessons learned in VNF benchmarking methodology, an extended RFC 2544 methodology, results highlights, runtime x86 resource analysis, and what-matters conclusions on the state of virtualized networking based on KVM.
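The RFC 2544 throughput methodology referenced above finds the highest offered load the device forwards with zero packet loss via a binary search over the rate. A simplified sketch of that search loop, where `measure_loss` stands in for a real traffic-generator trial:

```python
# Simplified RFC 2544-style throughput search: binary-search the offered
# rate for the highest load with zero packet loss. `measure_loss` is a
# stand-in for a real traffic-generator trial run.

def rfc2544_throughput(measure_loss, line_rate, resolution=0.5):
    """Return the highest rate (same units as line_rate) with zero loss."""
    lo, hi = 0.0, line_rate
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if measure_loss(rate) == 0:
            best = rate          # no loss: try a higher rate
            lo = rate
        else:
            hi = rate            # loss seen: back off
    return best

# Toy device model: starts losing packets above 7.3 Gbps.
throughput = rfc2544_throughput(lambda r: max(0.0, r - 7.3), 10.0)
# The search converges to just under 7.3, within the 0.5 resolution.
```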
OpenStack and OVS: From Love-Hate to Match Made in Heaven – OPNFV
Many OPNFV developers building OpenStack clouds at scale have a “love-hate” relationship with OVS. They love the flexibility and elasticity offered by a distributed virtual switch operating within each server, but hate the reality of first-gen OVS implementations well-known to be a bottleneck for cloud network performance and scalability. As performance-sensitive VNFs keep pushing for higher data forwarding performance across the NFV network infrastructure, it becomes critical to improve OVS performance without compromising flexibility, network programmability, and cost.
In this session, Mellanox will present a novel way to offload the entire OVS dataplane onto the embedded switch in the server NIC. This approach can not only boost server I/O performance to near line rate, be it 10G, 40G, or 100G, but also do so at a fraction of the CPU load needed by existing OVS implementations.
OpenStack & OVS: From Love-Hate Relationship to Match Made in Heaven - Erez C... – Cloud Native Day Tel Aviv
"Many developers building OpenStack clouds have “love-hate” relationship with OVS. They love flexibility and elasticity offered by OVS, but hate the network performance and scalability. As emerging technologies such as NFV keep pushing for higher network performance, it becomes critical to improve OVS performance without compromising flexibility, network programmability, and cost.
In this session, we will present an approach that Mellanox has devised with input from key partners and customers to accelerate the virtual switch dataplane, using the embedded switch implemented in the server Network Interface Card (NIC) hardware. This approach supports both ParaVirt vNIC interfaces and SR-IOV-based vNIC interfaces."
DPACC Acceleration Progress and Demonstration – OPNFV
The session provides an update on the DPACC project within OPNFV, with a brief discussion of APIs and implementation progress. It will review the API definition progress and follow up with a demo highlighting a common application as the VNF running on top of the DPACC-defined layers. The demo will highlight the use of both hardware and software acceleration utilizing the DPACC-defined acceleration layers. The demonstration will highlight the progress in optimizing the performance and latency characteristics of a platform to realize the vision of NFV while meeting the stringent requirements, particularly for certain workloads, imposed by carriers.
Presentation delivered at LinuxCon China 2017.
Open vSwitch (OVS) is a multilayer open source virtual switch. OVS is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces. OVN is a new network virtualization project that brings virtual networking to the Open vSwitch user community. OVN includes logical switches and routers, security groups, and L2/L3/L4 ACLs, implemented on top of a tunnel-based overlay network.
In this presentation, we will provide an overview of the current state of the projects and their future plans, such as:
- The current state of the Linux, DPDK, and Hyper-V ports
- A status update on a portable BPF-based datapath
- The latest stateful and OpenFlow features available in OVS
- Performance and debugging enhancement to OVN
- OVN features under development such as ACL logging and encrypted tunnels
The overall volume of Internet traffic has been growing at a tremendous rate day by day, and it also contains unwanted malicious traffic. It has been a continuous challenge for network operators to effectively identify threats in line-rate traffic. Hyperscan is a pattern-matching (regular expression) software library ideal for applications such as intrusion prevention/detection systems, antivirus, unified threat management, and deep packet inspection systems.
Hyperscan works in two phases. First, the customer patterns are parsed and compiled into databases of bytecode. At runtime, this bytecode is used to search for patterns in blocks or streams of data. The Hyperscan library runs entirely in software and scales with IA processors to provide a maximum throughput of 293 Gbps.
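The two-phase model described above, compiling patterns once into a reusable database and then scanning data blocks against it, has the same shape as Python's `re` module, used here as a stand-in sketch. (Hyperscan's actual API is a C library exposing this split as compile and scan calls, and its throughput comes from matching many patterns simultaneously in one pass, which `re` does not do.)

```python
import re

# Stand-in sketch of Hyperscan's two-phase model using Python's re
# module: phase 1 compiles the patterns into a reusable form, phase 2
# scans incoming data blocks against them. The pattern names below are
# invented for illustration.

# Phase 1: parse and compile customer patterns once, ahead of time.
patterns = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def scan_block(block):
    """Phase 2: return the names of all patterns matching this block."""
    return sorted(name for name, rx in patterns.items() if rx.search(block))

hits = scan_block("GET /item?id=1 UNION SELECT password FROM users")
# hits == ["sql_injection"]
```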
Swimming upstream: OPNFV Doctor project case study – OPNFV
Based on the lifecycle of the OPNFV Doctor project, this case study shows how operator requirements “on paper” have successfully been realized step by step, in close cooperation with upstream community projects, into a mature fault management framework. A demo of the solution was presented in a keynote at the last OpenStack Summit. The talk will describe how we have worked in the OPNFV Doctor project and will provide some lessons learned on this journey. With significant experience now of working OPNFV requirements upstream into OpenStack, we’ll share best practices for submitting contributions upstream, how best to communicate, and how to overcome the primary challenges.
We are working on KVM enhancements for NFV as a collaborative development project in OPNFV, focusing on three key features: minimal Interrupt latency variation, inter-VM (Virtual Machine) communication, and fast live migration. In this presentation, we introduce and provide an update on the project, and how we plan to work with the upstream KVM project.
Minimal Interrupt latency variation is required for data plane VNFs to achieve deterministic execution. We present an update, demonstrating how hardware and software enhancements can help when reducing latency variations.
We evaluate and compare the options for inter-VM communication (e.g. ivshmem, vhost user, VMFUNC, etc.) in terms of performance, interface/API, usability/programing model, security, and maintenance.
Finally we provide and update on fast live migration, including improvements with time to co
My network functions are virtualized, but are they cloud-readyOPNFV
Ulas Kozat, Huawei, Yaoguang Wang, Huawei
In the first phase of telco-cloud vision, the physical network functions are targeted for virtualization and became Virtual Network Functions (VNF) decoupled from the specific hardware platform. As we dive into the second phase of the cloud era, the core need is to provide VNF implementations that can take advantage of what cloud has to offer in terms of utility based computing (a.k.a. scaling), availability, data durability, etc. To this end, we have been developing a VNF Performance Modeling framework for automatic characterization of a particular VNF implementation in terms of its cloud-readiness and its bottlenecks towards cloud-readiness. We will present the details of our performance modeling framework and show its utility based on the existing open source VNF implementations. The next frontier of telco-cloud vision is to develop cloud-native network functions and services. Thus, in the last part of our talk, we will cover the future evolution of the framework and discuss the needs, requirements, potential metrics for evaluating the cloud-nativeness of network functions.
Software-defined migration how to migrate bunch of v-ms and volumes within a...OPNFV
Kentaro Matsumoto, KDDI Corporation, Hyde Sugiyama, Red Hat, Inc
As telecom career, we KDDI have been managing thousands of physical servers and run various kinds of workloads. In our operation of such a huge environment, We are frequently required to shut down our servers for maintenance, but it is not easy to negotiate with our tenant users to allow downtime. To make it easier, we are developing the structure called "Zone Migration", using the framework of OpenStack project "Watcher". "Zone Migration" makes it possible to migrate tenants’ workloads from compute nodes and storage devices we want to maintain (source zone) to new blank ones (destination zone) efficiently, automatically, and with minimum downtime.
These requirements as follows are realized.
-A lot of VMs and volumes should be migrated within a limited time frame
-Operations should be automated, but also can be controlled manually
-Time and load of migration should be under control so that tenants’ systems will not be affected
We are proceeding with the project in cooperation with NEC and Red Hat, and developing this structure on Red Hat OpenStack Platform.
In this talk, Tim Bird will discuss the recent status of the Linux with regard to embedded systems. This will include a review of the last year's worth of mainline kernel releases, as well as topic areas specifically related to embedded, such as boot-up time, security, system size, etc. Tim will also present recent and planned work by the Core Embedded Linux Project of the Linux Foundation, and discuss the current status of Linux in various markets and fields. Tim will go over current areas of work, and discuss remaining challenges faced by Linux in embedded projects.
Learning from ZFS to Scale Storage on and under Containersinside-BigData.com
Evan Powell presented this deck at the MSST 2107 Mass Storage Conference.
"What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. We are implementing in the open source OpenEBS.io some approaches that are in some regards inspired by ZFS to enable much more efficient scale out block storage for containers that itself is containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment."
Watch the video: http://wp.me/p3RLHQ-gPs
Learn more: blog.openebs.io
and
http://storageconference.us
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Control Your Network ASICs, What Benefits switchdev Can Bring UsHungWei Chiu
In this slide, I will introduce what is switchdev and what problem it wants to solve. To this day, most of the hardware switch's application-specific integrated circuit (ASIC) only be controlled by the vendor's proprietary binary (SDK) and it's inconvenient for system administrator/developer. In order to break the chip vendor's lock-in situation, the switchdev had been designed to solve this. With the help of switchdev, we can develop a general solution for hardware switch chips and break the connection with vendor's binary-blob (SDK).
In order words. Linux kernel can directly communicate with the vendor's proprietary ASIC now, and the software programmer/system administrator can easily control that ASIC to provide more flexible, powerful and programmable network function.
Challenges in testing for composite vim platformsOPNFV
Jose Lausuch, Ericsson
Can I use OPNFV test frameworks on non-OPNFV deployments?
What are the limitations they have? What if I have a different VIM than OpenStack? What about K8? We need solutions that address next generation telco needs.
Monitoring Large-scale Cloud Infrastructures with OpenNebulaNETWAYS
Efficient monitoring is crucial when managing your Cloud infrastructure. The metrics collected by OpenNebula can be used to trigger automatic scaling, or quickly detect failures to automatically restart virtual machines. During this talk, I will show how OpenNebula can be used to efficiently monitor thousands of virtual machines at sub-1 minute interval. I will show how OpenNebula can be enhanced and optimized, and how different metrics collection tools such as Ganglia and Host-sFlow can be used with OpenNebula to monitor large-scale Cloud infrastructures.
Deploy TOSCA Network Functions Virtualization (NFV) Workloads in OpenStackSahdev Zala
Talk was given at the OpenStack Austin Summit 2016 and demonstrates how TOSCA Network Functions Virtualization (NFV) workloads can be deployed in OpenStack cloud.
Supercomputing by API: Connecting Modern Web Apps to HPCOpenStack
Audience Level
Intermediate
Synopsis
The traditional user experience for High Performance Computing (HPC) centers around the command line, and the intricacies of the underlying hardware. At the same time, scientific software is moving towards the cloud, leveraging modern web-based frameworks, allowing rapid iteration, and a renewed focus on portability and reproducibility. This software still has need for the huge scale and specialist capabilities of HPC, but leveraging these resources is hampered by variation in implementation between facilities. Differences in software stack, scheduling systems and authentication all get in the way of developers who would rather focus on the research problem at hand. This presentation reviews efforts to overcome these barriers. We will cover container technologies, frameworks for programmatic HPC access, and RESTful APIs that can deliver this as a hosted solution.
Speaker Bio
Dr. David Perry is Compute Integration Specialist at The University of Melbourne, working to increase research productivity using cloud and HPC. David chairs Australia’s first community-owned wind farm, Hepburn Wind, and is co-founder/CTO of BoomPower, delivering simpler solar and battery purchasing decisions for consumers and NGOs.
Faster, Higher, Stronger – Accelerating Fault Management to the Next LevelOPNFV
Yujun Zhang, ZTE Corporation, Carlos Goncalves, NEC
Fault management is a component that allows operations teams to monitor, detect, isolate and automate the recovery of faults. With an efficient fault management system, countermeasures can negate the effects of any deployment faults, avoiding bad user experiences or violation of service-level agreements (SLAs). The OPNFV Doctor project has been developing fault management features that increase the resiliency of cloud-based mobile platforms and provide system integration.
The OPNFV Doctor team continues to improve its framework, making fault management not only more reliable but also faster, to satisfy telco requirements. The 4G mobile system demonstrated at the OpenStack Summit Barcelona keynote already featured double-digit-millisecond fault notification. The team has identified scalability issues in and between relevant OpenStack projects and in conjunction with other open-source software. We will share performance figures and how we continuously profile and red-flag unexpected results (e.g. performance regressions). Finally, we will present solutions to make the overall OpenStack-based fault management framework even faster.
VSPERF: Benchmarking the Network Data Plane of NFV vDevices and vLinksOPNFV
The performance of virtual devices (vswitches, vforwarders, VNFs) and virtual connectivity (VNF-to-NIC, VNF-to-VNF, NIC-to-NIC) is a key consideration for any NFV design and infrastructure – both the methodology of benchmarking deterministic performance and the actual test results and their interpretation. The OPNFV VSPERF project addresses this important domain. This session reviews and combines VSPERF results with the results of a Cisco internal benchmarking project that evaluates best-of-breed NFV open-source and commercial technologies. The talk includes lessons learned in VNF benchmarking methodology, an extended RFC 2544 methodology, results highlights, runtime x86 resource analysis, and conclusions on what matters in the state of virtualized networking based on KVM.
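The RFC 2544 methodology mentioned above centers on a binary search for the highest offered load the device under test forwards with zero packet loss. A rough sketch of that search logic (the `measure_loss` hook is a hypothetical stand-in for a real traffic generator, and the resolution is an invented default):

```python
def rfc2544_throughput(measure_loss, line_rate, resolution=0.5):
    """Binary-search for the highest offered load (as a percentage of
    line rate) that the device under test forwards with zero loss.

    measure_loss(rate_bps) -> packets lost during one trial at that rate;
    a hypothetical hook standing in for a real traffic generator.
    """
    lo, hi = 0.0, 100.0
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if measure_loss(rate * line_rate / 100.0) == 0:
            best, lo = rate, rate   # no loss observed: try a higher rate
        else:
            hi = rate               # loss observed: back off
    return best * line_rate / 100.0

# Toy device that starts dropping packets above 7.3 Gbps on a 10G link
zero_loss = rfc2544_throughput(lambda r: 0 if r <= 7.3e9 else 1, 10e9)
```

Real RFC 2544 runs add fixed trial durations, multiple frame sizes, and repeat counts on top of this core loop.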
OpenStack and OVS: From Love-Hate to Match Made in HeavenOPNFV
Many OPNFV developers building OpenStack clouds at scale have a “love-hate” relationship with OVS. They love the flexibility and elasticity offered by a distributed virtual switch operating within each server, but hate the reality of first-generation OVS implementations, well known to be a bottleneck for cloud network performance and scalability. As performance-sensitive VNFs keep pushing for higher data forwarding performance across the NFV network infrastructure, it becomes critical to improve OVS performance without compromising flexibility, network programmability, or cost.
In this session, Mellanox will present a novel way to offload the entire OVS dataplane onto the embedded switch in the server NIC. This approach can not only boost server I/O performance to near line rate, be it 10G, 40G, or 100G, but also do so at a fraction of the CPU load needed by existing OVS implementations.
OpenStack & OVS: From Love-Hate Relationship to Match Made in Heaven - Erez C...Cloud Native Day Tel Aviv
Many developers building OpenStack clouds have a “love-hate” relationship with OVS. They love the flexibility and elasticity offered by OVS, but hate the network performance and scalability. As emerging technologies such as NFV keep pushing for higher network performance, it becomes critical to improve OVS performance without compromising flexibility, network programmability, or cost.
In this session, we will present an approach that Mellanox has devised with input from key partners and customers to accelerate the virtual switch dataplane, using the embedded switch implemented in the server Network Interface Card (NIC) hardware. This approach supports both para-virtualized vNIC interfaces and SR-IOV-based vNIC interfaces.
DPACC Acceleration Progress and DemonstrationOPNFV
The session provides an update on the DPACC project within OPNFV, with a brief discussion of APIs and implementation progress. This session will review the API definition progress and follow up with a demo highlighting a common application as the VNF running on top of the DPACC-defined layers. The demo will highlight the use of both hardware and software acceleration utilizing the DPACC-defined acceleration layers. The demonstration will highlight the progress in optimizing the performance and latency characteristics of a platform to realize the vision of NFV while meeting the stringent requirements, particularly for certain workloads, required by carriers.
Presentation delivered at LinuxCon China 2017.
Open vSwitch (OVS) is a multilayer open source virtual switch. OVS is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces. OVN is a new network virtualization project that brings virtual networking to the Open vSwitch user community. OVN includes logical switches and routers, security groups, and L2/L3/L4 ACLs, implemented on top of a tunnel-based overlay network.
In this presentation, we will provide an overview of the current state of the projects and their future plans, such as:
- The current state of the Linux, DPDK, and Hyper-V ports
- A status update on a portable BPF-based datapath
- The latest stateful and OpenFlow features available in OVS
- Performance and debugging enhancement to OVN
- OVN features under development such as ACL logging and encrypted tunnels
The overall volume of Internet traffic has been growing at a tremendous rate day by day, and it includes unwanted malicious traffic. It has been a continuous challenge for network operators to effectively identify threats in line-rate traffic. Hyperscan is a pattern-matching (in terms of regular expressions) software library ideal for applications such as intrusion prevention/detection systems, antivirus, unified threat management, deep packet inspection systems, etc.
Hyperscan works in two phases. First, the customer patterns are parsed and compiled into databases of bytecode. At runtime, this bytecode is used to search for the patterns in blocks or streams of data. The Hyperscan library runs entirely in software and scales with IA processors to provide throughput of up to 293 Gbps.
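As a loose analogy only (Hyperscan's real API is in C, with `hs_compile` building the database at compile time and `hs_scan` searching it at runtime), the two-phase workflow looks like this using Python's stdlib `re`:

```python
import re

# Phase 1 (compile time): parse the customer patterns once and build a
# reusable matcher. Hyperscan compiles these into a bytecode database;
# here the compiled regex objects play that role.
patterns = [r"GET /admin", r"\x90{8,}", r"SELECT.+FROM"]
database = [re.compile(p) for p in patterns]

# Phase 2 (run time): scan incoming blocks of traffic against the
# precompiled database, reporting which pattern matched and at what offset.
def scan(block: str):
    hits = []
    for pat in database:
        for m in pat.finditer(block):
            hits.append((pat.pattern, m.start()))
    return hits

scan("GET /admin HTTP/1.1")   # -> [('GET /admin', 0)]
```

Unlike this sequential analogy, Hyperscan matches all patterns simultaneously in a single pass over the data, which is where its throughput comes from.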
Swimming upstream: OPNFV Doctor project case studyOPNFV
Based on the lifecycle of the OPNFV Doctor project, this case study shows how operator requirements “on paper” have been successfully realized step by step, in close cooperation with upstream community projects, into a mature fault management framework. A demo of the solution was presented in a keynote at the last OpenStack Summit. The talk will describe how we have worked in the OPNFV Doctor project and provide some lessons learned on this journey. With significant experience now of working OPNFV requirements upstream to OpenStack, we'll share best practices for submitting contributions upstream, how best to communicate, and how to overcome the primary challenges.
The Open Platform for Network Functions Virtualization (OPNFV) project within the Linux Foundation is uniquely positioned to bring together the work of open source communities and standards bodies, and commercial suppliers to deliver a de facto NFV platform for the industry. Hear the overall vision for OPNFV, learn how the technical community functions, and get an understanding of the areas covered by 50+ active projects.
In this keynote, we will talk about how to transform from virtualization to full-scale cloudification and, in Huawei's view, how OPNFV can develop into a full-scale cloud platform. We will explain this in five aspects: 1) cloudification of the software architecture; 2) cloudification of the networks; 3) cloudification of the network operations; 4) cloudification of the VNFs; 5) NFVI platform cloudification. We will then summarize Huawei's contributions to OPNFV, including code authors, labs, key roles, projects, code commits, etc. Finally, we will briefly introduce our demos at this summit and welcome everyone to our booth.
Summit 16: Open-O Mini-Summit - Open Source, Orchestration, and OPNFVOPNFV
Deng Hui, Chair, OPEN-O Governing Board, China Mobile,
Christopher Donley, Chair, OPEN-O Technical Steering Committee, Huawei,
Marc Cohn, Director, OPEN-O Project, The Linux Foundation
OpenStack has been a part of OPNFV from the start, and the OpenStack and OPNFV communities have strong areas of overlap. We will explain OPNFV from an OpenStack and practical perspective, providing a specific example (the SFC scenario) of how we test different components of OpenStack and other communities (ODL, OVS, etc.) daily. We'll also talk about why OPNFV is useful to OpenStack (hint: telco requirements & testing) and briefly describe several OPNFV projects which have contributed to OpenStack: NetReady, Multisite, Doctor, Cross CI, Copper, etc.
Challenges in positioning OpenStack for NFVI: are we biting off more than w...OPNFV
Hwee Ming Ng, Red Hat, Sadique Puthen, Red Hat
Many service providers and communities like OPNFV see OpenStack as the preferred cloud IaaS platform for NFV. However, OpenStack was not designed with NFV in mind from day one, and it brings a lot of challenges when being adapted to telco environments. These challenges range from product design and development to solution design and architecture, deployment, and support to match telco expectations.
Red Hat has been working with a number of early adopters to roll out NFV solutions. Even though we have many successes, we have our fair share of challenges. With a solution architect and a support engineer on the dais, it is an appropriate moment to recollect these challenges based on our experience from a solution design, architecture, and support perspective. They include distributed NFV, high availability everywhere, fault tolerance, predictive recovery, network performance, interoperability with multiple vendors, accommodating different types of VNFs with different operating systems, troubleshooting, feature availability, and more, from both solution design and support perspectives.
Throughout this session we will touch on these challenges, the possible solutions, and how we overcame them, and open a discussion on challenges which do not yet have an acceptable solution. We will also discuss some of the challenges associated with troubleshooting issues specific to NFV deployments.
Challenges in the Asia region: connecting testbeds and PoCs of distributed NFV ...OPNFV
Shuya Nakama, Okinawa Open Laboratory / NEC Solution Innovators, Eric Chang, Institute for Information Industry, Hideyasu Hayashi, Okinawa Open Laboratory and NEC Solution Innovators, Torii Takashi, NEC Corporation and Okinawa Open Laboratory
Many countries in the Asia region are motivated to innovate their telecom systems and to teach new technologies to young engineers. It is important to encourage and involve these countries in the OPNFV community, and to educate them on contributing to open source activities.
In this session, we will introduce our trial addressing this issue. Okinawa Open Laboratory (OOL) in Japan and the Institute for Information Industry (III) in Taiwan have been conducting joint research on SDN/NFV in recent years, and this year we connected our testbeds using OPNFV. Over the distributed testbed, we have started PoCs of NFV use cases such as vEPC and vCPE. We are also in contact with several research and academic organizations in the Asia region, and we would like to connect each country's testbed and expand our testbed across the region.
There are many challenges, and we have learned from our experience, so in this session we will share the lessons learned from our trial. This will be a good example for the whole community and help advance collaboration in the global ecosystem.
Requirement analysis of VIM platform reliability in a three-layer decoupling ...OPNFV
Gil Hellmann, Wind River, Xuesong Wang, Wind River, Qiao Fu, China Mobile, Jinglong Lv, China Mobile
A traditional non-virtualized environment is fully integrated, with the hardware platform, software platform, and service application forming an indivisible whole. This makes failure detection and HA much easier in the north-south direction. In a virtualized environment, especially with three-layer decoupling, the hardware platform, NFVI/VIM, VNF, and even MANO may all come from separate vendors. This provides operators with more choices and flexibility. However, it also introduces challenges in achieving the same level of HA as in a non-virtualized environment. In this session, we will cover the following:
• Common reliability strategy of traditional telecom network elements
• The general HA mechanisms of VIM (OpenStack) platform today
• What new requirements are brought to a VIM platform in the ETSI-MANO structure for a telecom-level high availability
• The impact of various types of failures on the VMs and business programs, and some fault handling strategies performed by VIM and MANO according to our test results.
MEF's inter-domain orchestration delivering dynamic third networks [presente...OPNFV
Shi Fan, China Telecom
Enterprise customers want on-demand connectivity and cloud services with assured performance and global reach. To deliver that cost-effectively, network operators are transitioning to more automated, virtualized, and interconnected networks powered by LSO (Lifecycle Services Orchestration), SDN, and NFV. Many of the world’s leading service providers are embracing the LSO framework and development of standardized, open APIs to enable end-to-end service orchestration across multiple interconnected provider networks and across various technology domains within a single provider network (e.g., packet WAN, NFV, SD-WAN, and optical transport).
The term 'orchestration' is used widely in a variety of contexts. This presentation will present MEF's view of the orchestration of dynamic services and service components across all internal and external domains from one or more providers. Lifecycle Service Orchestration supports the full lifecycle, not just the configuration and activation phases of the service lifecycle. This presentation will also help put other forms of orchestration in context.
Crossing the river by feeling the stones from legacy to cloud native applica...OPNFV
Doug Smith, Red Hat, Inc, Gergely Csatari, Nokia
There is an anecdote about a tourist lost in the middle of the countryside in Ireland, who pulls over and asks a local, "How can I get to Galway from here?" To which the local, after thinking for some time, responds, "If I was going to Galway, I wouldn't start from here at all."
Cloud native application development can feel like that sometimes, especially in the telecom industry. I have an application, it's running fine on a bare metal server, and now I am expected to make it resilient, scale-out, cloud native, microservice-architected, and buzzword compliant. But how do you get there from where you are?
This presentation follows the hero's quest, identifying the key constraint to cloud resiliency at each stage and measures for addressing it. By showing the evolution story from the perspective of two applications, including a real telecom application, this presentation addresses the practical problems. The approach is not "rewrite your app from scratch"; it is refactoring for incremental improvements.
Doug and Gergely will address the automation of application deployment and configuration, separation of state from behaviour, clustering, handling storage for cloud native applications, monitoring and event management, and container orchestration, so that, at each step along the journey, you know what problem you are solving, and how to get to the next step from where you are.
This presentation is in addition to a series of workshops held at the summit sponsored by the Cloud Native Computing Foundation and organized by Dave Neary, and includes a short summary of the topics presented in those workshops in addition to the perspectives on how to complete the quest to cloud native applications.
NFV solutions that pull from open source projects such as OPNFV, OpenStack, OpenDaylight, and others must be integrated and tested in an environment that fully supports the performance and availability requirements of service provider networks. We'll show how OPNFV performs open source NFV testing, including: methodology; mapping to ETSI NFV use cases; open source project integration; testing dashboards; Continuous Integration and Continuous Deployment (CI/CD); and testing acceleration. We'll provide an overview of the OPNFV Pharos Community Test Lab infrastructure and the new Pharos Lab-as-a-Service, where you can run a test deployment of OPNFV and try it out on your own. We'll also give an overview of how OPNFV is working together with the OpenStack community as part of its Cross Community CI (XCI) effort, which gives OPNFV developers a means to work with the OpenStack master branch, reduces the time it takes to develop new features and test them on OPNFV infrastructure, and more.
Ariel Waizel discusses the Data Plane Development Kit (DPDK), an API for developing fast packet processing code in user space.
* Who needs this library? Why bypass the kernel?
* How does it work?
* How good is it? What are the benchmarks?
* Pros and cons
Ariel worked on kernel development at the IDF, Ben Gurion University, and several companies. He is interested in networking, security, machine learning, and basically everything except UI development. Currently a Solution Architect at ConteXtream (an HPE company), which specializes in SDN solutions for the telecom industry.
DPDK Summit - 08 Sept 2014 - 6WIND - High Perf Networking Leveraging the DPDK...Jim St. Leger
Thomas Monjalon, 6WIND, presents on where/how to use DPDK, the DPDK ecosystem, and the DPDK.org community.
Thomas is the community maintainer of DPDK.org.
Kirill Tsym discusses Vector Packet Processing:
* Linux Kernel data path (in short), initial design, today's situation, optimization initiatives
* Brief overview of DPDK, Netmap, etc.
* Userspace Networking projects comparison: OpenFastPath, OpenSwitch, VPP.
* Introduction to VPP: architecture, capabilities and optimization techniques.
* Basic Data Flow and introduction to vectors.
* VPP Single and Multi-thread modes.
* Router and switch for namespaces example.
* VPP L4 protocol processing - Transport Layer Development Kit.
* VPP Plugins.
Kirill is a software developer at Check Point Software Technologies, part of the Next Generation Gateway and Architecture team, developing proofs of concept around DPDK and FD.IO VPP. He has years of experience in software, Linux kernel, and networking development, and worked for Polycom, Broadcom, and Qualcomm before joining Check Point.
Intel's Out of the Box Network Developers Ireland Meetup on March 29 2017 - ...Haidee McMahon
For details on Intel's Out of The Box Network Developers Ireland meetup, go to https://www.meetup.com/Out-of-the-Box-Network-Developers-Ireland/events/237726826/
Intel Talk : Enhanced Platform Awareness for Openstack to increase NFV performance
By Andrew Duignan
Bio: Andrew Duignan is an Electronic Engineering graduate from University College Dublin, Ireland. He has worked as a software engineer in Motorola and now at Intel Corporation. He is now in a Platform Applications Engineering role, supporting technologies such as DPDK and virtualization on Intel CPUs. He is based in the Intel Shannon site in Ireland.
Stacks and Layers: Integrating P4, C, OVS and OpenStackOpen-NFP
Smart Network Interface Cards (SmartNICs) are increasingly being deployed in cloud data centers to offload inline network processing tasks from server CPUs, thereby improving system throughput while freeing up server CPU cycles for application processing. The match/action and tunnel handling semantics of SmartNIC datapaths can be expressed directly in the P4 language, be defined by virtual switching software like Open vSwitch (implementing the semantics of a specification like OpenFlow), or by using a combination of these. This presentation compares these approaches, considering aspects like the expressiveness and performance of the resulting datapath, as well as how these datapath variants can be integrated into existing cloud management systems (e.g. OpenStack).
Johann Tönsing
Chief Architect & SVP, Software, Netronome
Johann is a recognized industry expert in SDN, Linux-based networking technologies, network virtualization, security, and NFV. Johann has been an active contributing member and has been nominated to leadership roles in multiple standards bodies related to SDN and NFV. As Netronome's Chief Architect, Johann leads all aspects of Netronome's product design and development, with heavy emphasis on advanced and open server-based networking technologies, where he also holds multiple patents. He holds a Master of Engineering in Electronics.
Accelerate Service Function Chaining Vertical Solution with DPDKOPNFV
Service Function Chaining (SFC) is one of the top five NFV use cases. Supporting SFC in provider and enterprise networks requires performance assurance. Specifically, the Classifier and the Service Function Forwarder, which are typically implemented in software such as virtual switches, need to meet line-rate requirements. DPDK (Data Plane Development Kit) is an open source project comprising a set of libraries and drivers for fast packet processing. In this presentation, we will discuss our experiences accelerating SFC with DPDK. In addition, telco and datacenter carriers demand dynamic SFC, which requires support for new SFC wire protocols (e.g. VXLAN-GPE and NSH) in both the data and control planes. We intend to share our experiences and future work on a high-performance, NSH-aware SFC vertical solution built with open-source ingredients: OpenStack, OpenDaylight, and Open vSwitch with DPDK acceleration.
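For context on the NSH protocol mentioned above: NSH carries chain state in a Service Path header made of a 24-bit Service Path Identifier (which chain the packet belongs to) and an 8-bit Service Index that each Service Function Forwarder decrements (per RFC 8300). A minimal sketch of just that encoding, ignoring the NSH base header and metadata:

```python
import struct

def pack_service_path(spi: int, si: int) -> bytes:
    """Pack an NSH Service Path header word: 24-bit SPI + 8-bit SI."""
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    return struct.pack("!I", (spi << 8) | si)

def forward(header: bytes) -> bytes:
    """A Service Function Forwarder decrements the Service Index before
    sending the packet to the next service function in the chain."""
    (word,) = struct.unpack("!I", header)
    spi, si = word >> 8, word & 0xFF
    return pack_service_path(spi, si - 1)

hdr = pack_service_path(spi=42, si=255)   # packet enters chain 42
hdr = forward(hdr)                        # after one hop, SI becomes 254
```

The SPI/SI pair is what lets a dataplane like OVS with DPDK look up the next hop of a chain without re-classifying the packet at every service function.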
Tuning Linux for your database FLOSSUK 2016Colin Charles
Some best practices for tuning Linux for your database workloads. The focus is not just on MySQL or MariaDB Server but also on understanding the OS across hardware/cloud, I/O, filesystems, memory, CPU, network, and resources.
How to Reuse OPNFV Testing Components in Telco Validation ChainOPNFV
Morgan Richomme, Orange
OPNFV provides lots of tooling that can be adopted and adapted for service providers' solutions. These solutions are OpenStack-based but not necessarily OPNFV solutions.
This session will detail how some components developed in OPNFV have been introduced in Orange Integration Center, an OpenStack based vendor solution including Contrail SDN controller and third party elements.
The best practices learned in OPNFV were used to design and build a CI chain including Jenkins, Functest, Yardstick, the Test API, and the Test DB.
Morgan Richomme, Orange
Power consumption is a key driver of NFV. However, very few projects deal with this aspect.
This session will detail a prototype realized in OPNFV Orange labs aiming to track power consumption during CI operations.
We could imagine that, if we generalize this information collection to the Pharos community, we may get significant figures for establishing power consumption profiles, and perhaps even go deeper and derive application-level profiles using statistical tools.
Hands-On Testing: How to Integrate Tests in OPNFVOPNFV
Jose Lausuch, Ericsson
I have developed and integrated a new feature but… how do I write test cases and where do I put them? How do I start?
These are common questions asked by developers bringing new features that need to be tested and verified in our CI pipeline.
Storage Performance Indicators - Powered by StorPerf and QTIPOPNFV
Yujun Zhang, ZTE Corporation, Mark Beierl, Dell EMC
StorPerf uses Heat to create VMs with attached Cinder volumes. The volumes are used *without* a file system (i.e. target=/dev/vdb). The FIO workload is run and stats are collected every minute. When we get 10 samples in a row that fit within a certain range and slope, we say it is a valid measurement. This avoids false numbers due to Ceph balancing or other warm-up effects. The metrics can be read after the job completes, and there is an indicator stating whether the volume metrics stabilized or not.
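The 10-sample range-and-slope check can be sketched as follows (the 20% range and 10% slope thresholds are illustrative assumptions, not StorPerf's actual configuration):

```python
def is_steady(samples, max_range_pct=20.0, max_slope_pct=10.0):
    """Decide whether performance samples have stabilized, StorPerf-style.

    Thresholds are illustrative: the last 10 samples must fit within a
    narrow range around their mean, and the least-squares line through
    them must be nearly flat.
    """
    if len(samples) < 10:
        return False                      # need 10 consecutive samples
    window = samples[-10:]
    mean = sum(window) / len(window)
    # Range check: spread of the window relative to its mean.
    if (max(window) - min(window)) / mean * 100 > max_range_pct:
        return False
    # Slope check: total drift of the best-fit line, as a % of the mean.
    n = len(window)
    xbar = (n - 1) / 2
    slope = (sum((i - xbar) * (y - mean) for i, y in enumerate(window))
             / sum((i - xbar) ** 2 for i in range(n)))
    return abs(slope * (n - 1)) / mean * 100 <= max_slope_pct

# Noisy warm-up readings followed by a stable plateau of IOPS samples
warmup = [900, 1400, 1200, 1600, 1100]
plateau = [1500, 1510, 1495, 1505, 1500, 1498, 1503, 1499, 1502, 1501]
```

With these inputs, `is_steady(plateau)` passes, while any window that still contains warm-up noise fails the range check.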
The storage QPI is calculated by QTIP based on the test results from StorPerf. It aims to be a comparable indicator of storage performance across different platforms.
Big Data for Testing - Heading for Post Process and AnalyticsOPNFV
Yujun Zhang, ZTE Corporation, Donald Hunter, Cisco, Trevor Cooper, Intel
The testing community has created tens of testing projects, hundreds of test cases, and thousands of test jobs, producing a huge amount of testing data. What comes next?
The testing community has put in place tools and procedures to declare test cases and projects, and to normalize and upload results. These tools and procedures have been adopted, so we now have lots of data covering many scenarios, hardware configurations, and installers.
In this presentation, we shall discuss the stakes and challenges of result post-processing:
* How can analytics provide valuable input to the community, end users, or upstream projects?
* How can we produce accurate indicators, reports, and graphs, with a focus on interpreting and consuming test results?
* How can we get the best out of our mine of results?
Fatih Degirmenci, Ericsson, Yolanda Robla Mota, RedHat, Markos Chandras, SUSE
OPNFV has been working with communities such as OpenStack, OpenDaylight, and FD.io as part of its Cross Community CI (XCI) effort in order to give developers a means to work with the latest versions of upstream components, significantly cutting the time it takes to develop new features and test them on the OPNFV infrastructure.
Apart from developing and testing new features, OPNFV XCI will enable developers to identify bugs earlier, issue fixes faster, and get feedback on a daily basis. This is a prerequisite for OPNFV in its CD & DevOps journey.
OPNFV aims to run XCI by reusing what other communities have developed, such as bifrost and openstack-ansible. While doing this, OPNFV intends to develop, maintain, and evolve the OPNFV infrastructure the way the other OPNFV projects do: upstream first. Whatever missing functionality and issues we identify in the components we use as part of our infrastructure and CI/CD toolchain, we strive to fix directly upstream.
During this session, we will talk about the progress we have made so far, contributions we made to our upstream communities, and share our experiences. We will also highlight the key benefits of XCI for the community in order for developers to utilize the mechanisms, work with OpenStack master to implement new features and fix bugs using the toolchain XCI established.
Jose Lausuch, Ericsson
OPNFV provides different test frameworks which help developers to write new test cases. Those frameworks also borrow and integrate a variety of testing tools from other open source communities (OpenStack, OpenDaylight, Open-O, ...).
This session will go through all the tools that have been integrated so far in OPNFV and the cross community collaboration that has already started in Danube time frame.
Enabling Carrier-Grade Availability Within a Cloud InfrastructureOPNFV
Aaron Smith, Red Hat, Pasi Vaananen, Red Hat
The move from vertically integrated hardware and software to distributed execution in a cloud complicates the delivery of highly available services. Vertically integrated systems kept all of the system layers that must communicate and participate in supporting service availability under the control of a single vendor. With NFV, the cloud philosophy of decoupling infrastructure and applications requires new open interfaces to support the necessary flow of information between layers, and a clear separation of fault and availability management responsibilities between the infrastructure and application software subsystems. Even in the cloud environment, traditional availability concepts such as fast detection, correlation, and fault notification still apply. A fast, low-latency fault management platform will be presented that allows cloud-based services to achieve 5NINES of availability and service continuity. Performance measurements from a prototype of the system will be presented, along with a demo of the operation of a service requiring 50 ms fault remediation.
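As a toy illustration of the fast-detection building block described above (the 200 ms timeout and this API are invented for the sketch; a real carrier-grade platform adds correlation, notification fan-out, and remediation):

```python
import time

class HeartbeatMonitor:
    """Toy fault detector: declare a host failed when its heartbeat has
    not been seen within `timeout_ms`. Only the detection step of a
    fault management pipeline is shown here.
    """
    def __init__(self, timeout_ms=200):
        self.timeout = timeout_ms / 1000.0
        self.last_seen = {}

    def heartbeat(self, host, now=None):
        # Record the most recent heartbeat time for this host.
        self.last_seen[host] = time.monotonic() if now is None else now

    def failed_hosts(self, now=None):
        # A host is failed if its last heartbeat is older than the timeout.
        now = time.monotonic() if now is None else now
        return [h for h, t in self.last_seen.items()
                if now - t > self.timeout]

mon = HeartbeatMonitor(timeout_ms=200)
mon.heartbeat("compute-1", now=0.0)
mon.heartbeat("compute-2", now=0.0)
mon.heartbeat("compute-2", now=0.15)   # compute-2 keeps beating
mon.failed_hosts(now=0.25)             # -> ['compute-1']
```

Driving the detection loop at millisecond granularity, and pushing the failure event to subscribers instead of polling, is what enables the double-digit-millisecond notification times discussed in the session.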
Learnings From the First Year of the OPNFV Internship ProgramOPNFV
Ray Paik, Linux Foundation, Serena Feng, ZTE
OPNFV launched its internship program in Q1 2016, and more than 10 interns around the world have contributed to different OPNFV activities ranging from cross-community CI to documentation, infrastructure, and testing. In this talk, there will be an overview of how the OPNFV internship program differs from more traditional internship programs, and a discussion of areas for improvement that were identified. A community member who mentored two interns will also share her experience managing interns remotely and her advice for future interns & mentors. Finally, OPNFV interns will give a quick lightning-round talk on their internship projects, highlighting their contributions to the community. [NOTE: This is designed as a 60-minute session with interns' lightning-round talks, as 6-8 interns could be attending the OPNFV Summit. Presentations from Serena/Ray are expected to take about 20-25 minutes.]
Juha Kosonen, Nokia, Mika Rautakumpu, Nokia
The Open Compute Project (OCP) is a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure. The designs have been optimized to lower cost of infrastructure and operations e.g. by removing non-essential components, disaggregating rack level solution with common resources, and simplifying server serviceability.
OpenStack provides the foundation for the NFVI and MANO components within OPNFV. The OPNFV Colorado and recent Danube releases have been successfully integrated onto OCP hardware and run smoothly. Hardware acceleration is also supported. The concept itself has gained a lot of interest from mobile operators; some of them are running OPNFV on top of OCP hardware in their test laboratories too.
This presentation will introduce how the OpenStack, OCP, and OPNFV open source projects fit perfectly together.
The Return of QTIP, from Brahmaputra to DanubeOPNFV
Yujun Zhang, ZTE Corporation, Julien Zhang, ZTE Corporation
The QTIP project was suspended due to changes in the project team after Brahmaputra. Now it has returned to the community in Danube. Here is the story behind it:
- transfer from original team
- achievements in Danube
- intern projects
- vision for future
Fatih Degirmenci, Ericsson, Jack Morgan, Intel
The OPNFV community relies on our community labs and on the CI and testing projects to ensure we release quality code. The current strategies for using hardware resources in OPNFV community labs will not be able to sustain the community's current growth, and new strategies need to be implemented to make room for new OPNFV projects. The presenters will look at the current lab usage model and discuss improvements already being worked on: in OPNFV community labs through the POD descriptor file; in our CI process through Dynamic CI, Cross Community CI, and other initiatives; and in our testing projects' use of hardware resources and its importance in the release process. The presenters will also show current tools used to track usage, such as the Bitergia dashboard.
Distributed VNF Management Architecture and Use-casesOPNFV
Sridhar Pothuganti, NXP, Trinath Somanchi, NXP
Telco operators are on a journey to discover what virtualization means for the network. The market has long assumed that the NFV architecture elements NFVI and VIM hold complete responsibility for providing virtualized networks with carrier-grade properties.
Telco operators have now concluded that VNFs must take their fair share of responsibility for realizing NFV goals while meeting carrier-grade behavior across the entire NFV architecture. As this trend continues, cloud-native VNFs are emerging as the best citizens of the cloud; the communication path from EMS to VNFM is blurring and may eventually disappear. This requires a better understanding of, and agreement over, the roles of the VNFM and EMS for VNFs.
This presentation describes the evolution of distributed VNF management, architectural design considerations, and use-case scenarios. The proposal is based on a comprehensive study of evolving cloud-native VNF management.
Securing Your NFV and SDN Integrated OpenStack Cloud - Challenges, Use-cases ...OPNFV
Sridhar Pothuganti, NXP, Trinath Somanchi, NXP
Network security and reliability are among the most challenging tasks in any cloud. With NFV and SDN in place, network functions are virtualized and network traffic is managed in separate control and data planes, reducing operational and capital expenditure. Virtualized network functions are tied to software-defined networks to amplify the power of virtualization, which is itself challenging where network services and security are concerned. While OpenStack is the preferred solution for IaaS, many service providers are seeking the best ways to address service delivery and security challenges in an SDN- and NFV-integrated OpenStack cloud.
The presentation outlines these challenges and proposes probable solutions for an NFV- and SDN-integrated OpenStack cloud.
Juha Oravainen, Nokia, Tapio Tallgren, Nokia
In the future, factory robots will communicate wirelessly and cars on the highway will exchange information with each other. This requires extremely low-latency mobile networks, known as 5G. Such networks will run on telco-grade cloud platforms, of which OPNFV is one example.
The first cloud radio access networks have already been deployed by operators. More will be needed with future technologies and networks as additional functionality moves to the cloud. This talk covers what is needed to overcome the low-latency and high-availability challenges of cloud platforms. At Nokia we continuously evaluate the latest OPNFV software on Nokia hardware with radio VNFs to guarantee interoperability with open source components.
Test and Perspectives on NFVI from China Unicom SDN NFV LabOPNFV
Junjie Tong, China Unicom
This presentation explores our experience with NFVI testing in China Unicom's SDN/NFV lab. We have tested both the hardware and the VIM, and we discuss the lessons and pains of NFVI testing. We also discuss the special requirements from an NFV perspective, what further improvements are needed in industry products, and the work in progress and plans for NFVI in China Unicom.
Automatic Integration, Testing and Certification of NFV in China MobileOPNFV
Qiao Fu, China Mobile, Liang Gao, Huawei
As operators expand their deployment of NFV, automatic integration, testing, and compliance certification become more and more important. In this speech, we share our experience and progress in the China Mobile OPNFV Testlab on an automated system for integration, testing, and certification. This system makes full use of OPNFV open source tools, including installers such as Compass, testing projects such as Functest and Yardstick, and compliance testing such as Dovetail. Such an automated system greatly reduces the human cost for operators when deploying and testing the NFV cloud before large-scale deployment.
NFV interoperability, for the success of commercial deploymentsOPNFV
Timo Perala, Nokia, Michael Wiegers, Ericsson
To further enhance Network Functions Virtualization (NFV) industrialization, Cisco, Ericsson, Huawei, and Nokia announced a Memorandum of Understanding (MoU) last December to create the NFV Interoperability Testing Initiative (NFV-ITI).
The main objective of NFV-ITI is to promote competition and create industry alignment on generic principles for NFV interoperability testing and support for specific customer situations. NFV-ITI will focus on testing interoperability configurations of commercial NFV solutions actually used in the communication services providers' networks. The initiative is open for ratification by any NFV vendor subscribing to the objectives of the MoU.
NFV-ITI will address NFV multi-vendor interoperability challenges for communication service providers, enabling them to optimize NFV deployment and integration costs, and reduce time-to-market for new services.
NFV-ITI will complement and heavily build on other industry interoperability activities, including but not limited to those of OPNFV, ETSI NFV Testing WG, and NVIOT Forum.
3. To be covered
•OVS DPDK and FD.IO (VPP)
•Common Dependencies of OVS DPDK and FD.IO (VPP)
•Configuration/Integration Differences between OVS DPDK and FD.IO
•Integration Challenges
•Performance Tuning
•Automated Configuration options
•OpenDaylight Integration - design and operation
4. High Level OVS DPDK
•Runs as an application or system service in Linux userspace
•Packets incoming to the host are not processed by the OVS kernel driver, which handles packets in the non-DPDK scenario
•Ports on the host are bound to DPDK, then assigned to bridges in OVS
•VM ports on the host are created as vhostuser
•Poll Mode Driver (PMD) threads continuously poll for new packets incoming to the DPDK dataplane
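The flow above can be sketched with a few commands. This is illustrative only: it assumes a DPDK-enabled OVS build, and the PCI address, port, and bridge names are placeholders (exact syntax varies by OVS release):

```shell
# Bind the host NIC to a DPDK-compatible driver (it disappears from the kernel)
dpdk-devbind.py --bind=vfio-pci 0000:01:00.0

# Create a userspace (netdev) bridge and attach the DPDK-bound port to it
ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
ovs-vsctl add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk

# VM ports are created as vhost-user sockets that QEMU connects to
ovs-vsctl add-port br-phy vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
```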
5. High Level FD.IO (VPP)
•Runs as an application or system service in Linux userspace
•Ports on the host are bound to DPDK, then assigned to interfaces in VPP
•VM ports on the host are created as vhostuser
•Supports L2 and L3 features (NAT/ACL/NSH/IPsec)
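In VPP the equivalent wiring starts from the startup configuration rather than live commands. A minimal sketch (the PCI address and memory sizes are assumptions to adapt per host):

```
# /etc/vpp/startup.conf (fragment) -- DPDK ports are declared here before
# VPP starts; they then appear as VPP interfaces
dpdk {
  dev 0000:01:00.0      # PCI address of the host NIC (placeholder)
  socket-mem 1024,0     # hugepage memory per NUMA node
}
```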
6. Common Dependencies
•Both rely on PMD drivers, of which there are a few (uio_pci_generic, vfio-pci)
•Both require hugepage configuration (required for DPDK)
•IOMMU is required for vfio-pci
•nova scheduler filters must be set to include NUMATopologyFilter (for KVM)
•Host NIC must support DPDK
•libvirt must be able to read vhostuser sockets
•Though config is different, both require similar performance tuning
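A quick way to check two of these prerequisites on a host; this is a sketch using standard procfs/sysfs paths (a zero hugepage count or missing IOMMU groups means more setup is needed):

```shell
# Hugepages reserved on this host (HugePages_Total must be non-zero for DPDK)
grep '^HugePages' /proc/meminfo

# Number of IOMMU groups (must be non-zero to use vfio-pci)
ls /sys/kernel/iommu_groups 2>/dev/null | wc -l
```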
7. Integration Differences
•OVS must be put into DPDK mode and then restarted:
−ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
•DPDK ports are added live to a “netdev” type bridge in OVS DPDK
•In FD.IO, DPDK ports are specified in the startup config before FD.IO is run
•With OVS DPDK, the host NIC is bound manually, while binding is automatic in FD.IO
8. Integration Challenges
•After binding to DPDK, the NIC essentially no longer exists in the Linux kernel
•Packets flowing in userspace make debugging more difficult
•NUMA topology and the hardware properties of each host should be considered before deploying accelerated dataplanes
•Configuration changes may require a host reboot and additional steps
•Using existing Neutron agents (DHCP/L3) requires patches and uses the kernel data path (allows a VPP veth pair to the agent namespace)
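Because tcpdump on kernel devices no longer sees the traffic, debugging moves to dataplane-level counters. With OVS DPDK, for example, PMD and port statistics can be inspected from userspace (illustrative; requires a running DPDK-enabled OVS, and `dpdk0` is a placeholder port name):

```shell
# Per-PMD-thread statistics: cycles spent processing vs. idle polling
ovs-appctl dpif-netdev/pmd-stats-show

# Per-port rx/tx and drop counters for a DPDK port
ovs-vsctl get Interface dpdk0 statistics
```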
9. Common Performance Tuning
•PMD threads should be isolated and pinned to their own cores using the isolcpus kernel argument
•How many hugepages, and on which socket, should be assigned to the dataplane?
•Which NUMA node does the PCIe bus with the DPDK NIC belong to?
•Nova instances should be pinned to cores on the same socket as the PMD threads
•Use 1GB hugepages if possible
•BIOS settings for maximum performance:
- Disable C state
- Enable Turboboost/Speedstep
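Several of these settings end up on the kernel command line. A grub fragment as a sketch; the core range and hugepage count are examples to adapt to the host's NUMA layout:

```
# /etc/default/grub (fragment): isolate cores 2-7 for PMD/VM use,
# reserve 1GB hugepages at boot, and enable the IOMMU for vfio-pci
GRUB_CMDLINE_LINUX="... isolcpus=2-7 default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on iommu=pt"
```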
10. OVS DPDK Performance Tuning
•PMD threads are pinned to cores via cpu mask:
−ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
−If hyper-threading is enabled, care should be given to core sibling relationships
−Cores should be allocated on the NUMA node that the PCIe bus of the DPDK NIC belongs to
−Allocating more than one core will result in multiple PMD threads being spawned
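The pmd-cpu-mask is simply a bitmask over core IDs; the 0x6 above selects cores 1 and 2. The arithmetic can be checked directly:

```shell
# Build a PMD CPU mask from core IDs 1 and 2: (1<<1)|(1<<2) = 0b110 = 0x6
mask=$(( (1 << 1) | (1 << 2) ))
printf '0x%x\n' "$mask"
```

Running this prints `0x6`, the mask used in the example above.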
11. OVS DPDK Performance Tuning (cont.)
•Dataplane hugepages should be configured to same socket:
−ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
−format is “<MB on socket 0>,<MB on socket 1>,...”
•Pin IRQs away from isolated cores using tuna or tuned
12. VPP Performance Tuning
• Run in multithreading mode
cpu/workers <n>: create n worker threads
cpu/coremask-workers <mask> and cpu/corelist-workers <list>: place worker threads according to a mask or list
cpu/main-core <n>: assign the main thread to a specific core; defaults to the first available core
cpu/skip-cores <n>: leave the low n bits of the process affinity mask clear
dpdk/coremask <mask>: process-level core mask
dpdk/num-mbufs <n>: number of I/O buffers; defaults to 16384
dpdk/socket-mem <list>: buffer memory allocation; defaults to 256MB on each NUMA node
• Tickless Kernel: nohz_full and rcu_nocbs kernel parameters, use with isolcpus
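Putting these knobs together, a minimal startup.conf sketch (core numbers and sizes are examples, not recommendations, and should match the cores isolated via isolcpus):

```
cpu {
  main-core 1
  corelist-workers 2-3    # two worker/PMD threads on isolated cores
}
dpdk {
  socket-mem 1024,0       # hugepage memory per NUMA node
  num-mbufs 32768
}
```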
13. Automated Configuration Options
•puppet-vswitch
−manifests/dpdk.pp
−Capable of configuring OVS with DPDK performance options
−Part of OpenStack Puppet Modules
•puppet-fdio
−Capable of configuring fd.io with performance options
−Can also configure honeycomb agent for OpenDaylight
−Own project in FD.IO repository
14. OpenDaylight + OVS DPDK
[Diagram: Controlnode-0 runs standard (kernel) OVS with br-int and br-ex bridges, a DHCP tap port, OpenStack services, and network control; br-ex connects the external network interface to the Internet. Computenode-0 and Computenode-1 each run OVS DPDK with br-int and br-phy bridges, with VM 1 and VM 2 attached via vhost-user ports. All three nodes are linked by VXLAN tunnels over the tenant network interfaces.]
15. OpenDaylight + FD.IO
[Diagram: The same topology with VPP in place of OVS: Controlnode-0 runs VPP with a bridge domain, a DHCP tap port, OpenStack services, and network control; the external network interface connects to the Internet. Computenode-0 and Computenode-1 each run VPP with a bridge domain, with VM 1 and VM 2 attached via vhost-user ports. All three nodes are linked by VXLAN tunnels over the tenant network interfaces, and each node runs a HoneyComb agent for OpenDaylight.]
16. OpenDaylight + FD.IO current work
● Add support for Nirvana stack (converged GBP+Netvirt)
● Add more complete performance tuning options to deployment
● Add interface bonding support
● Add DVR support
● Distributed DHCP on compute nodes (no VPP needed on the control node anymore)
17. OPNFV Apex support
•Apex is the only installer project that supports both the OVS DPDK and VPP data paths
•All deployment and performance tuning steps can be specified in config files and automated
•Apex supports OpenDaylight with both dataplanes across multiple scenarios
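As an illustration of that config-file approach, a deploy-settings fragment in the spirit of Apex (the field names here are indicative only, not an exact schema; consult the Apex scenario files for the real layout):

```
# deploy_settings.yaml (illustrative fragment)
deploy_options:
  sdn_controller: opendaylight
  dataplane: ovs_dpdk          # or: fdio
  performance:
    Compute:
      kernel:
        hugepagesz: 1G
        hugepages: 16
        isolcpus: 2-7
      ovs:
        socket_memory: "1024,0"
```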