Juniper Networks' vMX is a virtualized routing platform that runs the same Junos operating system as physical MX routers. It separates the control and forwarding planes and uses DPDK-accelerated packet processing in its virtualized Trio (vTrio) forwarding engine for high performance. It supports various hypervisor and container deployments and scales from 100 Mbps up to multiple 10 Gbps of throughput depending on vCPU and core allocation. The vMX is suited for applications such as virtual PE routers, data center gateways, cloud WAN routers, and route reflectors, where service providers need a virtualized solution that leverages their existing Junos feature set.
Slawomir Janukowicz, Juniper Networks
Juniper Day, Praha, 13.5.2015
If SlideShare does not display the presentation correctly, you can download it in .ppsx or .pdf format (by clicking the button in the bottom bar of the slides).
The document discusses network function virtualization (NFV) in telecommunications networks. It provides an overview of NFV goals such as agility, scalability, and the ability to add new services through service chaining. It then discusses specific NFV use cases like virtual customer premises equipment (vCPE), virtual branch offices, virtual routing engines, and virtual route reflectors. It also covers Juniper's virtualized MX (VMX) product for NFV, including its performance, scaling capabilities, and deployment models.
The document discusses Juniper's WANDL and NorthStar solutions for network operators. It provides an overview of the key capabilities of each solution, including:
- WANDL's IP/MPLS View allows operators to design, plan, monitor and optimize multi-vendor Layer 3 networks. It provides network modeling, traffic analysis and automated provisioning capabilities.
- NorthStar combines WANDL's path computation with Juniper's dynamic IP control plane to enable stateful traffic engineering. It provides optimized routing using a centralized path computation approach.
- Both solutions help operators improve network performance, redundancy and efficiency through capabilities like failure simulation, capacity planning, high availability assessment and traffic engineering.
The document provides an overview of updates to Juniper's MX platform, including new line cards and interface options for increased scale and performance. Key points include:
- New MPC5E and MPC6E line cards that provide increased throughput and interface flexibility with options like 100G interfaces.
- Software features for increased routing scale, virtualization support, and packet performance optimization techniques like "hypermode forwarding" and "turbo filters."
- A next generation port extender architecture for simplifying management of satellite devices connected to MX routers.
- EVPN and VXLAN support for using MX routers as data center gateways in multi-tenant cloud environments.
Juniper Networks announced updates to its Junos operating system and release model. Key highlights include:
- Junos will move to a twice-yearly major release schedule focused on quality, along with four innovation releases per year for new features.
- Major releases will receive 3 years of engineering support and 6 months of service support. Innovation releases will receive 6 months of each.
- The new release model is aimed at providing customers more choice and a faster time to market for new features while improving release maturity.
- Programmability enhancements include expanding automation frameworks like Puppet and Chef, as well as enabling Python scripting directly on Juniper devices.
The document provides information about virtual machine extensions (VMX) on Juniper Networks routers. It discusses hardware virtualization concepts including guest virtual machines running on a host machine. It then describes the different types of virtualization including fully virtualized, para-virtualized, and hardware-assisted. The rest of the document goes into details about the VMX product, architecture, forwarding model, and performance considerations for different use cases.
PLNOG16: Handling 100M pps on a PC platform, Przemysław Frasunek, Paweł Mała... (PROIDEA)
Modern CPUs have many cores and advanced instruction sets like AVX that allow performing multiple operations simultaneously. To handle 100 million packets per second, a platform needs network interfaces with speeds of at least 10 Gbps and a PCIe bus and memory fast enough to keep up. The Linux networking stack is not optimized for these speeds, so achieving line rate requires implementing the network processing in userspace using techniques like DPDK that avoid kernel overhead.
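The per-packet budget implied above is worth making concrete. A short back-of-the-envelope calculation (the 3.0 GHz clock and 16-core figures below are illustrative assumptions, not numbers from the talk) shows why the kernel stack cannot keep up and userspace techniques like DPDK are needed:

```python
# Back-of-the-envelope budget for forwarding 100 million packets per second.
# Clock speed and core count are assumed example values.

def per_packet_budget_ns(pps: float) -> float:
    """Wall-clock time available to process one packet, in nanoseconds."""
    return 1e9 / pps

def cycles_per_packet(pps: float, clock_hz: float, cores: int = 1) -> float:
    """CPU cycles available per packet when work is spread across cores."""
    return clock_hz * cores / pps

budget = per_packet_budget_ns(100e6)          # 10 ns per packet overall
cycles = cycles_per_packet(100e6, 3.0e9, 16)  # 480 cycles/packet on 16 cores

print(f"{budget:.0f} ns per packet, {cycles:.0f} cycles/packet across 16 cores")
```

Ten nanoseconds is far less than a single cache miss to DRAM, let alone a kernel syscall or context switch, which is why batching, polling, and kernel-bypass I/O are required at these rates.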
The document provides an agenda for the Juniper Day 2016 campus event in Prague. It discusses upcoming trends in campus LANs including 2.5 and 5 Gbps Ethernet standards, Juniper's new Fusion architecture approach, and new EX switching series products like the EX9200, EX4300, EX3400 and EX2300 that support these trends and Juniper's Fusion Enterprise solution. It also covers timelines for multi-gigabit adoption and Junos Fusion capabilities for unifying campus networks.
This document discusses use cases and requirements for different cloud customer segments using Contrail. It describes Contrail's ability to enable IT as a service, enterprise migration to the cloud with legacy interconnects, public cloud services, and IoT/M2M use cases. It provides an overview of how Contrail works including its components, scale out architecture, and interaction with OpenStack. It also summarizes Contrail's features such as routing, security, analytics, and gateway services.
PLNOG16: IOS XR – 12 years of innovation, Krzysztof Mazepa (PROIDEA)
IOS XR is Cisco's modular, distributed network operating system. In 2004, Cisco introduced IOS XR and the CRS-1 router, the first router to run IOS XR. IOS XR offers innovations such as a distributed architecture, high scalability, and always-on operations. In subsequent years, Cisco continued expanding IOS XR's capabilities with features like 64-bit support and virtualization.
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D... (VMworld)
VMworld 2013
Ben Basler, VMware
Roberto Mari, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses NSX design and deployment considerations including:
1. Physical and logical infrastructure requirements for NSX including IP connectivity and MTU size.
2. Edge cluster design with options for collapsed or separated edge and infrastructure racks.
3. NSX manager and controller placement and sizing within management clusters.
4. Transport zone, VTEP, and VXLAN switching concepts which are fundamental to the NSX overlay architecture.
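The MTU requirement in point 1 comes directly from VXLAN encapsulation overhead: each guest frame gains an outer Ethernet, IP, UDP, and VXLAN header. A minimal sketch of the arithmetic (IPv4 underlay assumed; IPv6 or an outer VLAN tag adds more):

```python
# Why the transport network needs a larger MTU: VXLAN adds 50 bytes of
# encapsulation headers to every guest frame (IPv4 underlay, no options).

OUTER_ETH = 14   # outer Ethernet header
OUTER_IP4 = 20   # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # VXLAN header (flags + VNI)

VXLAN_OVERHEAD = OUTER_ETH + OUTER_IP4 + OUTER_UDP + VXLAN_HDR  # 50 bytes

def required_underlay_mtu(guest_mtu: int, vlan_tagged: bool = False) -> int:
    """Smallest transport MTU that carries a guest frame without fragmentation."""
    return guest_mtu + VXLAN_OVERHEAD + (4 if vlan_tagged else 0)

print(required_underlay_mtu(1500))  # 1550 -- hence the common 1600 recommendation
```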
Cubro provides network visibility solutions including intelligent network interface cards (NICs) with dedicated system-on-chip (SoC) processors that offload networking tasks from server CPUs. Their solutions include packet brokers, probes, and appliances for traffic filtering, de-duplication, decryption, and metadata extraction running customized Linux distributions at speeds up to 20Gbps. Cubro products are designed to simplify network monitoring and analytics collection through open interfaces like DPDK and Open vSwitch.
High-performance 32G Fibre Channel Module on MDS 9700 Directors (Tony Antony)
To better serve new application requirements, Cisco is introducing a new high-performance, analytics-ready 32G Fibre Channel module on MDS 9700 Directors and a new 32G host bus adapter for UCS C-Series. End-to-end 32G FC support across Cisco data center platforms sets a new standard for storage networking and provides customers with choice. Alongside this announcement, Cisco is also announcing NVMe over Fabrics support on the MDS 9000 Series, enabling customers to take advantage of the performance and low-latency benefits of the new technology to scale efficiently in post-flash environments.
VXLAN is a protocol that allows large numbers of virtual LANs to be overlaid on a physical network by encapsulating Ethernet frames within UDP packets and transporting them over an IP network. It addresses the scalability limitations of VLANs in large multi-tenant cloud environments by using a 24-bit segment ID rather than a 12-bit VLAN ID. The document provides an overview of VXLAN, why it is used, key concepts like VTEPs and VNIs, and demonstrations of VXLAN configuration on Cisco and Arista switches.
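The 24-bit versus 12-bit ID difference described above, and the 8-byte VXLAN header itself (per RFC 7348), can be sketched with a few lines of Python; the VNI value used is an arbitrary example:

```python
import struct

VLAN_ID_BITS, VNI_BITS = 12, 24

# The scale difference: 4,096 VLANs vs. ~16.8 million VXLAN segments.
assert 2 ** VLAN_ID_BITS == 4096
assert 2 ** VNI_BITS == 16_777_216

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags, reserved, 24-bit VNI."""
    if not 0 <= vni < 2 ** VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag set: the VNI field is valid
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(5000)
print(hdr.hex())  # 0800000000138800
```

The full encapsulation wraps this header, plus outer UDP/IP/Ethernet headers, around the original tenant frame before transport over the IP underlay.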
This document provides an overview and summary of Cisco's Data Center networking and storage solutions, with a focus on the new Cisco MDS 9710 Director. Some key points:
- Cisco offers a multi-protocol portfolio including Fibre Channel, FCoE, and IP networking solutions to address growing data and connectivity demands in modern data centers.
- The Cisco MDS 9710 is the newest storage director that provides the highest scalability, availability, and investment protection in the industry for large scale data centers.
- It supports up to 384 line-rate 16Gbps Fibre Channel ports or 48-port 10GbE FCoE modules in a single chassis. This provides 3 times the performance of competing
Designing Multi-tenant Data Centers Using EVPN (Anas)
This document describes the design of a multi-tenant data center network fabric using EVPN-IRB. It discusses the objectives of operational simplicity, workload placement flexibility, efficient bandwidth utilization, and multi-tenancy. It then describes the key components of the solution including BGP EVPN for control plane, overlay IRB for inter-subnet routing, distributed anycast gateways for workload mobility, and how the control and data planes interact for host learning and traffic forwarding.
The latest software upgrade gives every Cubro Packetmaster the ability to work as a bypass switch with heartbeat functionality. The Cubro Bypass solution supports data rates from 1 to 100 Gbit/s.
Special features:
Multilink support
Multiple heartbeats for multiple service testing
Input/output traffic comparison option
Monitoring support
Switch-to-spare support
Packet broker and bypass in one unit
Flexibility
Security feature: DDoS protection
Medtronic had challenges virtualizing large workloads over 1Gb connections with vMotion failures in ESX 4.1. Upgrading to ESX 5.0 enabled features like multiple-NIC vMotion and Stun During Page-Send (SDPS) to improve performance for migrating large VMs. Using multiple 10Gb NICs for vMotion provided more bandwidth and reduced migration times. Quality of service (QoS) was important to prioritize traffic and avoid overwhelming switch interconnects when not using dedicated vMotion switches. Medtronic deployed a solution with UCS servers, Nexus 1000v switches, and four 10Gb FCoE NICs per host, achieving a 157:1 consolidation ratio while successfully
The document discusses Ethernet VPN (EVPN) which introduces a new control plane approach for delivery of Ethernet services using MP-BGP. EVPN provides benefits like integrated Layer 2 and Layer 3 services, network efficiency, design flexibility, and greater network control. It describes key EVPN operations like all-active multihoming, split horizon, proxy ARP/ND, MAC mobility, and default gateway inter-subnet forwarding. EVPN can use different data plane encapsulations including MPLS, PBB, and VXLAN. It provides an overview of EVPN status and specifications being standardized in the IETF.
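The MAC mobility operation mentioned above follows the sequence-number procedure of RFC 7432: when a host moves, the new PE re-advertises its MAC with the previous sequence number plus one, and all PEs prefer the advertisement with the highest sequence number (lowest originating PE address breaking ties). A simplified toy model, not a BGP implementation:

```python
# Toy model of EVPN MAC mobility (RFC 7432). Route and PE values are
# illustrative; tie-breaking uses plain string comparison for brevity.

from typing import NamedTuple, Optional

class MacRoute(NamedTuple):
    mac: str
    pe: str   # originating PE address
    seq: int  # MAC mobility sequence number

def next_seq(current: Optional[MacRoute]) -> int:
    """Sequence number a PE advertises when it locally learns a MAC."""
    return 0 if current is None else current.seq + 1

def best_route(a: MacRoute, b: MacRoute) -> MacRoute:
    """Pick the winning advertisement for one MAC."""
    if a.seq != b.seq:
        return a if a.seq > b.seq else b
    return a if a.pe < b.pe else b  # tie-break: lowest originating PE

old = MacRoute("00:11:22:33:44:55", "10.0.0.1", 0)
moved = MacRoute("00:11:22:33:44:55", "10.0.0.2", next_seq(old))
print(best_route(old, moved).pe)  # 10.0.0.2 -- the new location wins
```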
The document provides troubleshooting tips and techniques for Cisco Data center switches including the Cisco Nexus 7000, Catalyst 6500 VSS, and high CPU utilization issues. It discusses using commands like show processes cpu sorted, debug netdr capture, and show ip cef to troubleshoot traffic flow and switching paths. It also covers troubleshooting software upgrades on the Nexus 7000 and gathering core dumps and logs to debug process crashes.
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With Advanced Network and Storage Interconnect Technologies, OpenStack Israel 2015 (Cloud Native Day Tel Aviv)
OpenContrail is an open source SDN platform that provides network virtualization and automation capabilities. It integrates with CloudStack to enable hybrid cloud deployments with workload mobility between private and public clouds. OpenContrail supports dynamic service chaining to provision and chain physical or virtual network services without downtime. It offers a massively scalable and highly available architecture based on proven MPLS VPN technology with multi-vendor interoperability.
Operationalizing EVPN in the Data Center: Part 2 (Cumulus Networks)
In the second of our two-part series on EVPN, Cumulus Networks Chief Scientist Dinesh Dutt dives into more technical details of network routing, EVPN use cases, and best practices for operationalizing EVPN in the data center.
To view the recording of this webinar, visit http://go.cumulusnetworks.com/l/32472/2017-09-23/95t7xh
Install FD.IO VPP On Intel(r) Architecture & Test with Trex* (Michelle Holley)
This demo/lab will guide you through installing and configuring FD.io Vector Packet Processing (VPP) on an Intel® Architecture (IA) server. You will also learn to install TRex* on another IA server to send packets to the VPP, and use VPP commands to forward packets back to the TRex*.
Speaker: Loc Nguyen. Loc is a Software Application Engineer in the Data Center Scale Engineering Team. Loc joined Intel in 2005 and has worked on various projects. Before joining the network group, Loc worked in the high-performance computing area and supported the Intel® Xeon Phi™ product family. His interests include computer graphics, parallel computing, and computer networking.
Presented by Eran Bello at the "NFV & SDN Summit" held March 2014 in Paris, France
Ideal for Cloud DataCenter, Data Processing Platforms and Network Functions Virtualization
Leading SerDes Technology: High Bandwidth – Advanced Process
10/40/56Gb VPI with PCIe 3.0 Interface
10/40/56Gb High Bandwidth Switch: 36 ports of 10/40/56Gb or 64 ports of 10Gb
RDMA/RoCE technology: Ultra Low Latency Data Transfer
Software Defined Networking: SDN Switch and Control End to End Solution
Cloud Management: OpenStack integration
Paving the way to 100Gb/s Interconnect
End to End Network Interconnect for Compute/Processing and Switching
Software Defined Networking
High Bandwidth, Low Latency and Lower TCO: $/Port/Gb
Understanding network and service virtualization (SDN Hub)
This document discusses network and service virtualization technologies. It begins with an overview of challenges with current network architectures and how virtualization addresses them. It then covers three key trends: 1) network virtualization using SDN to program networks dynamically, 2) service virtualization using NFV to virtualize network functions, and 3) new infrastructure tools like Open vSwitch, OpenDaylight, and Docker networking. Finally, it discusses approaches to deploying network and service virtualization and provides a vendor landscape.
VMworld 2013
Lenin Singaravelu, VMware
Haoqiang Zheng, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
6WINDGate™ - Enabling Cloud RAN Virtualization (6WIND)
Traditional mobile networks are based on stand-alone base transceiver stations (BTS), each covering a radio area. BTSs overlap to provide wide coverage to mobile users and are connected to the mobile core network through a backhaul network. Cloud Radio Access Network (C-RAN) is a new architecture for mobile access networks that relies on simple radio front-ends connected to a pool of remote network resources. By leveraging cloud infrastructures, CAPEX and OPEX are lowered substantially.
Summit 16: How to Compose a New OPNFV Solution Stack? (OPNFV)
This session showcases how a new OPNFV solution stack (a.k.a. "scenario") is composed and stood up. We'll use a new solution stack framed around a new software forwarder ("VPP") provided by the FD.io project as the example for this session. The session discusses how an evolution/change of upstream components from OpenStack, OpenDaylight and FD.io is put in place for the scenario, and how installers and tests need to evolve to allow integration into OPNFV's continuous integration, deployment and test pipeline.
Sharing High-Performance Interconnects Across Multiple Virtual Machines (inside-BigData.com)
In this deck from the Stanford HPC Conference, Mohan Potheri from VMware presents: Sharing High-Performance Interconnects Across Multiple Virtual Machines.
"Virtualized devices offer maximum flexibility: sharing of hardware between virtual machines, the use of VMware vMotion to handle migration and take snapshots. However, when performance is the most critical requirement there are other options. VMware Direct Path I/O delivers excellent performance, but only for a single virtual machine. Single root I/O virtualization (SR-IOV), on the other hand, offers the performance of pass-through mode while allowing devices to be shared by multiple virtual machines.
This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations."
Watch the video: https://youtu.be/-iYYmsBw8SU
Learn more: https://www.vmware.com
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
DPDK Summit 2015 - RIFT.io - Tim MortsolfJim St. Leger
DPDK Summit 2015 in San Francisco.
Presentation by RIFT.io's CTO Tim Mortsolf.
For additional details and the video recording please visit www.dpdksummit.com.
Not all networks are created equal. Brocade Ethernet Fabrics, as joined in the IBM Flex EN4023 embedded switch, revolutionizes by automating and optimizing your network, enabling you to reduce total cost of ownership, not just capital expenses. Lab tests have validated the Opex and Capex advantages of VCS Ethernet Fabrics over traditional networking. Learn how customers have reduced network infrastructure requirements by 25% and increases the networks performance by up to 30%. See how Dynamic Ports on Demand can save hardware costs. Learn the dramatic Operational impact VCS Fabric switches have on decreasing time to deploy the network by 79% and decreasing the time to implement network changes by 85%.
SD-WAN Catalyst a brief Presentation of solutionpepegaston2030
The document discusses Cisco's SD-WAN solution which provides flexibility, transport independence, and a unified management plane. It describes the SD-WAN architecture including the management plane with vManage, control plane with vSmart controllers, and data plane with physical/virtual WAN edge routers. The SD-WAN solution uses an overlay protocol and transport locators to establish encrypted tunnels between sites and distribute routing policies and services across the WAN.
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX VMworld
1. NSX brings network virtualization to VMware environments by providing scalable logical switching and distributed logical routing without dependency on physical network hardware or topology.
2. NSX has two consumption models - optimized for vSphere which leverages VMware infrastructure or as a multi-hypervisor, multi-cloud platform.
3. NSX deployment involves three simple steps - deploying the network infrastructure, deploying NSX manager and controllers, and consuming applications on the virtual networks.
The LEGaTO project received funding from the EU's Horizon 2020 program to develop a heterogeneous hardware platform called RECS for cloud to edge computing. RECS uses a modular microserver approach integrating CPUs, GPUs, FPGAs, and SOCs. It allows for flexible node composition through virtual functions to enable different compute and communication topologies.
The document discusses network function virtualization and how 6WIND's Virtual Accelerator solution addresses performance bottlenecks in virtualized network environments. It provides high-speed networking and packet processing capabilities independent of the underlying Linux kernel. This improves throughput for east-west traffic between virtual network functions and north-south traffic, allowing for higher VM and VNF densities. It also enables appliance-based network functions to be virtualized without performance limitations.
Dr. Christos Kolias – Senior Research Scientist
Keynote Title: “NFV: Empowering the Network”
Keynote Abstract: Network Functions Virtualization (NFV) envisions and promises to change the service provider landscape and has emerged as one of one of today’s significant trends. Although less than two years old, NFV has garnered the industry’s full attention and support. Moving swiftly, a number of key accomplishments have already taken place, and a lot more work is currently under way within ETSI NFV while we are embarking on its future phase. Various proofs-of-concepts (ranging from vEPC to vCPE, vIMS and vCDN) are being developed while issues such as open source and SDN are becoming key ingredients as the can play a pivotal role.
Dr. Christos Kolias' Bio: Christos Kolias is a senior research scientist at Orange Silicon Valley (a subsidiary of Orange). Christos is a co-founder of the ETSI NFV group and had led the formation of ONF’s Wireless & Mobile working group. He has lectured on NFV and SDN at several events. Christos has more than 15 years of experience in networking, he is the originator of Virtual Output Queueing (VOQ) used in packet switching. He holds a Ph.D. in Computer Science from UCLA.
---------------------------------------------------
★ Resources ★
Zerista: http://lcu14.zerista.com/event/member/137765
Google Event: https://plus.google.com/u/0/events/cpeksim4hr4ghhuufv5ic4viirs
Video: https://www.youtube.com/watch?v=tFDnj_342n4&list=UUIVqQKxCyQLJS6xvSmfndLA
Etherpad: http://pad.linaro.org/p/lcu14-400a
---------------------------------------------------
★ Event Details ★
Linaro Connect USA - #LCU14
September 15-19th, 2014
Hyatt Regency San Francisco Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
This is a level 200 - 300 presentation.
It assumes:
Good understanding of vCenter 4, ESX 4, ESXi 4.
Preferably hands-on
We will only cover the delta between 4.1 and 4.0
Overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShieldZones, etc
Good understanding of related storage, server, network technology
Target audience
VMware Specialist: SE + Delivery from partners
Development, test, and characterization of MEC platforms with Teranium and Dr...Michelle Holley
Mobile edge computing delivers cloud computing at the edge of the cellular network to drive services quality and innovation. The ability for CSPs and ISVs to effectively develop, deliver, and deploy MEC services on a given platform directly correlates with the availability and maturity of associated tools and test environment. Dronava is a hyper-connected, web-scale network reference design for the 5G mobile network, suitable for use as a test and development socket for cloud applications developed for MEC platforms with tools such as the Intel NEV SDK. With Dronava, developers can drive the application with real traffics from the network edge to the EPC core, and if need be, connect with services in the core network in order to fully characterize the functionalities, latency, and throughput of the platform and application.Teranium is an integrated development environment that simplifies the development, packaging, and deployment/management of cloud applications. Teranium can be utilized to develop and deploy MEC applications on a number of platforms. Together with Dronava, Teranium helps to reduce complexity and improve efficiency in the ability of CSPs and ISVs to adopt and deploy MEC-base services.
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioOPNFV
This document discusses using Vector Packet Processor (VPP) to provide fast and flexible networking capabilities for NFV solution stacks. It introduces VPP as a high-performance virtual switch that can achieve high throughput even at large scale. VPP offers features like IPv4 and IPv6 routing, Layer 2 switching, and VXLAN tunneling with linear performance scaling across multiple CPU cores. The FastDataStacks project aims to integrate VPP into OpenStack-based NFV solution stacks to provide enhanced networking functions.
Learn more about how today's service provider's networks are built to deliver yesterday's services and how the Next generation service require a new approach with our Evolved Programmable Network's offerings will enable business transformation for new service deliveries.
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and moreinside-BigData.com
This document discusses how hardware acceleration can improve the performance of modern data centers and machine learning workloads. It covers several key points:
1) Software-defined networking allows for flexibility but suffers from performance issues without hardware offloading. Hardware acceleration is needed to gain efficiency.
2) Technologies like SR-IOV, overlay networking, and RDMA can provide direct access and high-speed networking to virtual machines and accelerate workloads. Hardware offloads from NICs improve performance.
3) Frameworks like DPDK and ASAP2 can further accelerate workloads by offloading processing to the NIC and bypassing the CPU. This improves performance without additional CPU resources.
По статистике, три из четырех проектов заканчиваются неудачей. Из-за нечетких целей, плохого планирования, недоучета рисков и так далее и тому подобное.
И есть еще одна причина.
Плохое управление людьми. Проекты делают люди, поэтому, все управление проектами – это управление людьми. А вовсе не вырисовывание красивых картинок в MSProject. Об этом вы поговорите с Олегом Вайнбергом, экс CIO и тьютором факультета менеджмента Открытого Университета Великобритании.
Juniper Networks provides a data center solution consisting of Juniper switches, security devices, and Contrail SDN software. The solution addresses challenges of scale and automation needed to build future-proof clouds and data centers. Key aspects of the solution include Juniper's portfolio of data center switches like the QFX10000 line, partnerships with other vendors, and proven reference designs. Juniper helps customers address these challenges and create valuable cloud services.
This document discusses the growing cloud services market and opportunities for network providers. It notes that the cloud services market is expected to grow to $33 billion by 2018 with communication service providers increasing their share of the market. Enterprise adoption of cloud is growing as businesses recognize the benefits of flexibility, cost savings, and efficiency. However, cloud brings challenges around networking, security, and management. The document outlines Juniper's vision and solutions for data centers and cloud, including strategies for private, public and hybrid cloud models. It discusses Juniper's portfolio of switching, routing, security and software-defined networking products that help address the challenges of cloud and create value-added cloud services.
The document discusses Juniper's data center network transformation assessment. The assessment involves reviewing a customer's existing data center environment and goals to evaluate options and plan improvements. It covers key areas like infrastructure, security, applications and management. The methodology consists of gathering requirements, documenting the baseline, analyzing gaps, and providing recommendations. The final report outlines findings, proposed solutions, impacts, and a roadmap. Assessments help customers optimize their networks to meet business needs and take advantage of new technologies.
Juniper presented its Universal Access solution for mobile backhaul and aggregation networks. The solution includes the ACX500 for small cell backhaul, the ACX5000 series for pre-aggregation networks, and security gateway options like the SRX5000 and MX104. This provides operators a seamless end-to-end network for transporting mobile traffic from the radio access network to the core while ensuring security and performance.
This document provides an overview of data center trends and the Juniper MetaFabric architecture. It discusses key market trends in compute, storage, virtualization, networking and orchestration. It then describes the core strengths and foundational technologies that Juniper offers, including their QFX series switches, Virtual Chassis Fabric, and management solutions. Finally, it shows how the MetaFabric architecture provides a unified physical and virtual network that can span multiple data centers and clouds.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
1. Juniper Networks
SDN and NFV Products for Service Provider Networks
Evgeny Bugakov
Senior Systems Engineer, JNCIE-SP
21 April 2015
Moscow, Russia
2. AGENDA
1. Virtualization strategy and goals
2. vMX product overview and performance
3. vMX use cases and deployment models
4. vMX roadmap and licensing
5. NorthStar WAN SDN Controller
4. MX virtualization strategy
[Diagram: end-to-end network from branch office and HQ through a carrier Ethernet switch and cell site router, an aggregation router/metro core, and a DC/CO edge router to the service edge router, core, and mobile & packet GWs]
Virtualization targets per segment:
- Enterprise edge/mobile edge: vCPE, enterprise router
- Aggregation/metro/metro core: virtual PE, hardware virtualization
- Service provider edge/core and EPC: virtual route reflector, MX SDN gateway
- Data center/central office: vPE, vCPE
Control plane and OS: virtual JUNOS; forwarding plane: virtualized Trio.
Leverage R&D effort and JUNOS feature velocity across all physical & virtualization initiatives.
5. Physical vs. Virtual

Physical | Virtual
High throughput, high density | Flexibility to reach higher scale in control plane and service plane
Guarantee of SLA | Agile, quick to start
Low power consumption per throughput | Low power consumption per control plane and service
Scale up | Scale out
Higher entry cost in $ and longer time to deploy | Lower entry cost in $ and shorter time to deploy
Distributed or centralized model | Optimal in centralized, cloud-centric deployment
Well-developed network mgmt system, OSS/BSS | Same platform mgmt as physical, plus same VM mgmt as SW on a server in the cloud
Variety of network interfaces for flexibility | Cloud-centric, Ethernet-only
Excellent price per throughput ratio | Ability to apply "pay as you grow" model

Each option has its own strengths and is created with a different focus.
6. Types of deployment with a virtual platform
Three models: a traditional function as a 1:1 form replacement; new applications where physical is not feasible or ideal; a whole new approach to a traditional concept.
Examples: Cloud CPE, cloud-based VPN, service chaining GW, virtual private cloud GW, multi-function/multi-layer integration with routing as a plug-in, SDN GW, route reflector, services appliances, lab & POC, branch router, DC GW, CPE, PE, wireless LAN GW, mobile security GW, mobile GW.
8. vMX overview
Efficient separation of control and data plane:
- Data packets are switched within vTRIO
- Multi-threaded SMP implementation allows core elasticity
- Only control packets are forwarded to JUNOS
- Feature parity with JUNOS (CLI, interface model, service configuration)
- NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)
[Diagram: two guest VMs on a hypervisor over x86 hardware. The VFP guest runs Linux with virtual TRIO on the Intel DPDK; the VCP guest runs JUNOS with CHASSISD, RPD, DCD, SNMP, and the LC kernel]
9. Virtual and physical MX
[Diagram: the control plane is common to both. In the data plane, the same TRIO microcode (ucode) runs on ASIC/hardware in the physical PFE and is cross-compiled to x86 instructions for the VFP]
Cross-compilation creates high leverage of features between virtual and physical with minimal re-work.
10. Virtualization techniques: deployment with hypervisors
Para-virtualization (VirtIO, VMXNET3):
- Guest and hypervisor work together to make device emulation efficient
- Offers flexibility for multi-tenancy, but with lower I/O performance
- NIC resource is not tied to any one application and can be shared across multiple applications
- vMotion-like functionality is possible
PCI pass-through with SR-IOV:
- Device drivers exist in user space
- Best I/O performance, but has a dependency on NIC type
- Direct I/O path between NIC and the user-space application, bypassing the hypervisor
- vMotion-like functionality is not possible
[Diagram: guest VMs with virtual NICs on a hypervisor (KVM, Xen, VMware ESXi) over physical NICs. VirtIO drivers and device emulation in the para-virtualized case; PCI pass-through/SR-IOV in the direct case]
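The trade-offs listed above can be boiled down to a small selection helper. This is an illustrative sketch only; the rules simply encode the bullets on this slide, not any Juniper or hypervisor API.

```python
# Illustrative sketch: encode the slide's trade-offs between
# para-virtualized NICs (VirtIO/VMXNET3) and SR-IOV pass-through.
def pick_nic_mode(need_max_io: bool, need_vmotion: bool, share_nic: bool) -> str:
    """Return a suggested NIC virtualization mode for a guest VM."""
    if need_vmotion or share_nic:
        # VirtIO keeps the NIC shareable across guests and allows
        # vMotion-like mobility, at the cost of lower I/O performance.
        return "virtio"
    if need_max_io:
        # SR-IOV gives a direct I/O path between NIC and user space,
        # bypassing the hypervisor, but ties the guest to the NIC type.
        return "sr-iov"
    return "virtio"

print(pick_nic_mode(need_max_io=True, need_vmotion=False, share_nic=False))  # sr-iov
print(pick_nic_mode(need_max_io=True, need_vmotion=True, share_nic=False))   # virtio
```

Note that mobility and NIC sharing win over raw performance here, mirroring the slide's point that SR-IOV rules out vMotion-like functionality.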
11. Virtualization techniques: container deployment
Containers (Docker, LXC):
- No hypervisor layer, so much less memory and compute resource overhead
- No need for PCI pass-through or special NIC emulation
- Offers high I/O performance
- Offers flexibility for multi-tenancy
[Diagram: applications with virtual NICs on a container engine (Docker, LXC) over physical NICs]
14. vMX environment

Sample system configuration:
- CPU: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache
- NIC: Intel 82599 (for SR-IOV only)
- Memory: minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
- Storage: local or NAS

Sample configuration for number of CPUs:
- vMX with up to 100 Mbps performance: min 4 vCPUs (1 for VCP, 3 for VFP); min 2 cores (1 for VFP, 1 for VCP); min 8 GB memory; VirtIO NIC only.
- vMX with up to 3 Gbps performance at 512-byte frames: min 4 vCPUs (1 for VCP, 3 for VFP); min 4 cores (2 for VFP, 1 for host, 1 for VCP); min 8 GB memory; VirtIO or SR-IOV NIC.
- vMX with 10 Gbps and beyond (assuming min 2 ports of 10G): min 5 vCPUs (1 for VCP, 4 for VFP); min 5 cores (3 for VFP, 1 for host, 1 for VCP); min 8 GB memory; SR-IOV NIC only.
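The sizing guidance on this slide fits naturally into a small lookup table. A minimal sketch, with values taken directly from the slide (the tier names are our own shorthand):

```python
# Minimal sketch of the slide's vMX sizing guidance as a lookup table.
# Resource figures come from the slide; tier keys are shorthand labels.
SIZING = {
    "100M": {"vcpus": 4, "cores": 2, "mem_gb": 8, "nic": ["virtio"]},
    "3G":   {"vcpus": 4, "cores": 4, "mem_gb": 8, "nic": ["virtio", "sr-iov"]},
    "10G+": {"vcpus": 5, "cores": 5, "mem_gb": 8, "nic": ["sr-iov"]},
}

def min_requirements(tier: str) -> dict:
    """Return the minimum host resources for a vMX performance tier."""
    return SIZING[tier]

print(min_requirements("10G+"))
```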
15. vMX baseline performance

vMX performance in Gbps, by number of cores for packet processing*:

2 x 10G ports:
Frame size (bytes) | 3 cores | 4 | 6 | 8 | 10
256 | 2 | 3.8 | 7.2 | 9.3 | 12.6
512 | 3.7 | 7.3 | 13.5 | 18.4 | 19.8
1500 | 10.7 | 20 | 20 | 20 | 20

4 x 10G ports:
Frame size (bytes) | 3 cores | 4 | 6 | 8 | 10
256 | 2.1 | 4.2 | 6.8 | 9.6 | 13.3
512 | 4.0 | 7.9 | 13.8 | 18.6 | 26
1500 | 11.3 | 22.5 | 39.1 | 40 | 40

6 x 10G ports:
Frame size (bytes) | 3 cores | 4 | 6 | 8 | 10
256 | 2.2 | 4.0 | 6.8 | 9.8 |
512 | 4.1 | 8.1 | 14 | 19.0 | 27.5
1500 | 11.5 | 22.9 | 40 | 53.2 | 60

8 x 10G ports (figures reported for 12 cores):
66 bytes: 4.8 | 128: 8.3 | 256: 14.4 | 512: 31 | 1500: 78.5 | IMIX: 35.3

*Number of cores includes cores for packet processing and associated host functionality. For each 10G port there is a dedicated core not included in this number.
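One practical way to read these tables is backwards: given a throughput target, how many packet-processing cores are needed? A sketch using the 2 x 10G figures above (the lookup is exact-match on the slide's data points; no interpolation is attempted):

```python
# The slide's 2 x 10G performance table (Gbps), indexed by frame size
# and number of packet-processing cores. Figures come from the slide.
PERF_2X10G = {
    256:  {3: 2.0,  4: 3.8,  6: 7.2,  8: 9.3,  10: 12.6},
    512:  {3: 3.7,  4: 7.3,  6: 13.5, 8: 18.4, 10: 19.8},
    1500: {3: 10.7, 4: 20.0, 6: 20.0, 8: 20.0, 10: 20.0},
}

def cores_for_target(frame_size: int, target_gbps: float):
    """Smallest core count in the table that meets the target, or None."""
    for cores, gbps in sorted(PERF_2X10G[frame_size].items()):
        if gbps >= target_gbps:
            return cores
    return None

print(cores_for_target(512, 15))   # 8
print(cores_for_target(1500, 20))  # 4
```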
17. Service provider vMX use case: virtual PE (vPE)
[Diagram: branch offices and SMB CPE connect via pseudowire, L3VPN, and IPSec/overlay technology through L2/L3 PEs and the provider MPLS cloud to a DC/CO gateway and DC/CO fabric hosting the vPE, with Internet peering]
Market requirement:
- Scale-out deployment scenarios
- Low-bandwidth, high control-plane-scale customers
- Dedicated PE for new services and faster time-to-market
vMX value proposition:
- vMX is a virtual extension of a physical MX PE
- Orchestration and management capabilities inherent to any virtualized application apply
18. vMX as a DC gateway: virtual USGW
[Diagram: in the data center/central office, virtualized servers with VTEPs host VMs on virtual networks A and B; ToR switches (IP and L2) and a VXLAN gateway (VTEP) serve the non-virtualized L2 environment; vMX acts as the VPN gateway (L3VPN) with VRFs A and B toward the MPLS cloud and VPN customers A and B]
Market requirement:
- Service providers need a gateway router to connect the virtual networks to the physical network
- The gateway should support the different DC overlay, DC interconnect, and L2 technologies in the DC, such as GRE, VXLAN, VPLS, and EVPN
vMX value proposition:
- vMX supports all the overlay, DCI, and L2 technologies available on MX
- Scale-out control plane to scale up VRF instances and the number of VPN routes
19. Reflection from the physical to the virtual world
Proof-of-concept lab validation or SW certification:
- Perfect mirroring between a carrier-grade physical platform and the virtual router
- Can reflect an actual deployment in a virtual environment
- Ideal to support: proof-of-concept labs; new service configuration/operation preparation; SW release validation for an actual deployment; a training lab for the operational team; a troubleshooting environment for a real network issue
- CAPEX and OPEX reduction for the lab
- Quick turnaround when lab network scale is required
20. Virtual BNG cluster in a data center
[Diagram: a cluster of vMX instances acting as vBNG in a data center or CO, serving 10K-100K subscribers]
- The BNG function can potentially be virtualized, with vMX forming a BNG cluster at the DC or CO (roadmap item, not at FRS)
- Suitable for heavy-load BNG control-plane work where little bandwidth is needed
- Pay-as-you-grow model
- Rapid deployment of a new BNG router when needed
- Scale-out works well due to the S-MPLS architecture; leverages inter-domain L2VPN, L3VPN, VPLS
21. vMX route reflector feature set
Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, and network I/O, all determined by CPU speed).
Memory drives route reflector scaling:
- Larger memory means that RRs can hold more RIB routes
- With more memory, an RR can control larger network segments, so fewer RRs are required in the network
CPU speed drives faster BGP performance:
- A faster CPU clock means faster convergence
- Faster RR CPUs allow larger network segments to be controlled by one RR, so fewer RRs are required in the network
The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs, with more memory, on standard servers/appliances.
22. vRR scaling results
Tested with a 32 GB vRR instance. Convergence numbers also improve with a higher-clock CPU.

Address family | Advertising peers | Active routes | Total routes | Memory util. (all routes received) | Time to receive all routes | Receiving peers | Time to advertise routes (mem util.)
IPv4 | 600 | 4.2M | 42M (10 paths) | 60% | 11 min | 600 | 20 min (62%)
IPv4 | 600 | 2M | 20M (10 paths) | 33% | 6 min | 600 | 6 min (33%)
IPv6 | 600 | 4M | 40M (10 paths) | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2M | 4M (2 paths) | 13% | 3 min | 600 | 3 min (13%)
VPNv4 | 600 | 4.2M | 8.4M (2 paths) | 19% | 5 min | 600 | 23 min (24%)
VPNv4 | 600 | 6M | 12M (2 paths) | 24% | 8 min | 600 | 36 min (32%)
VPNv6 | 600 | 6M | 12M (2 paths) | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2M | 8.4M (2 paths) | 22% | 8 min | 600 | 8 min (22%)
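For a rough feel of these convergence figures, the average receive rate implied by each row can be derived directly from the table. This is back-of-the-envelope arithmetic over the slide's own numbers, not a performance claim; actual convergence depends on policy and CPU clock, as the slide notes.

```python
# Derive average routes-received-per-minute from the vRR scaling rows.
# Tuples: (address family, total routes received, minutes to receive all).
ROWS = [
    ("IPv4",  42_000_000, 11),
    ("IPv4",  20_000_000, 6),
    ("IPv6",  40_000_000, 26),
    ("VPNv4",  4_000_000, 3),
]

def receive_rate(total_routes: int, minutes: int) -> float:
    """Average routes received per minute."""
    return total_routes / minutes

for family, routes, mins in ROWS:
    print(f"{family}: ~{receive_rate(routes, mins):,.0f} routes/min")
```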
23. Cloud-based virtual route reflector design
Solving the best-path selection problem for a cloud virtual route reflector:
[Diagram: VRR 1 and VRR 2 hosted in a data center (cloud overlay with Contrail or VMware), connected over the cloud backbone (GRE, IGP) to regional networks 1 and 2, with routers R1 and R2 and iBGP clients 1-3]
- vRR runs as an "application" hosted in the DC
- A GRE tunnel is originated from gre.X (the control-plane interface)
- The vRR behaves as if it were locally attached to the regional router (requires resolution RIB config), so each vRR selects paths based on its regional router's view (R1 for Region 1, R2 for Region 2)
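A minimal Junos-style sketch of the attachment described above. All interface names, addresses, and group names are hypothetical placeholders; the resolution-rib statement is the piece the slide refers to, letting BGP next hops resolve over the GRE tunnel on a vRR that has no MPLS inet.3 routes.

```
## Hypothetical vRR config sketch -- names and addresses are placeholders.
interfaces {
    gre {
        unit 0 {
            tunnel {
                source 10.0.0.1;         # vRR control-plane address
                destination 10.1.0.1;    # regional router R1
            }
            family inet {
                address 192.168.0.1/30;
            }
        }
    }
}
routing-options {
    resolution {
        # Resolve VPN next hops via inet.0 (reachable over the GRE tunnel),
        # since the vRR has no MPLS/inet.3 entries of its own.
        rib bgp.l3vpn.0 {
            resolution-ribs inet.0;
        }
    }
}
protocols {
    bgp {
        group region-1-clients {
            type internal;
            cluster 10.0.0.1;            # act as route reflector
            neighbor 10.1.0.10;          # Client 1
        }
    }
}
```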
24. vMX to offer managed/centralized CPE
[Diagram: branch offices with simple switches connect via L2 PEs over the provider MPLS cloud to a DC/CO gateway and a DC/CO fabric with Contrail overlay, where vMX as vCPE (IPSec, NAT) is service-chained with vSRX (firewall) and vMX as vPE toward the Internet, under a Contrail controller]
Market requirement:
- Service providers want to offer a managed CPE service and centralize CPE functionality to avoid truck rolls
- Large enterprises want a centralized CPE offering to manage all their branch sites
- Both SPs and enterprises want the ability to offer new services without changing the CPE device
vMX value proposition:
- vMX with service chaining can offer best-of-breed routing and L4-L7 functionality
- Service chaining offers the flexibility to add new services in a scale-out manner
25. Cloud-based CPE with vMX
Simplify the device required on the customer premises and centralize key CPE functions, integrating them into the network edge (BNG/PE in the SP network).
A simplified CPE:
- Removes CPE barriers to service innovation
- Lower complexity & cost
- Typical CPE functions (DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3) are reduced to a simplified L2 CPE (modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3)
In-network CPE functions (DHCP, FW, routing/IP forwarding, NAT):
- Leverage & integrate with other network services
- Centralize & consolidate
- Integrate seamlessly with mobile & cloud-based services
Direct connect:
- Extend reach & visibility into the home
- Per-device awareness & state
- Simplified user experience
26. More use cases? The limit is our imagination
The virtual platform is one more tool for the network provider, and the use cases are up to users to define:
- VPC GW for private, public, and hybrid cloud
- Virtual route reflector
- NFV plug-in for multi-function consolidation
- SW certification, lab validation, network planning & troubleshooting, proof of concept
- Distributed NFV service complex
- Virtual BNG cluster
- Virtual mobile service control GW
- Cloud-based VPN
- vGW for service chaining
- And more...
28. vMX product family

Trial:
- Characteristics: up to 90-day trial; no limit on capacity; inclusive of all features
- Target customer: potential customers who want to try out vMX in their lab or qualify vMX
- Availability: early availability by end of Feb 2015

Lab simulation/education:
- Characteristics: no time limit enforced; forwarding plane limited to 50 Mbps; inclusive of all features
- Target customer: customers who want to simulate a production network in the lab; new customers gaining JUNOS and MX experience
- Availability: early availability by end of Feb 2015

GA product:
- Characteristics: bandwidth-driven licenses; two modes for features: BASE or ADVANCE/PREMIUM
- Target customer: production deployment of vMX
- Availability: 14.1R6 (June 2015)
29. VMX FRS product
• Official FRS for VMX Phase-1 is targeted for Q1 2015 with JUNOS release 14.1R6.
• High level overview of FRS product
• DPDK integration. Min 80G throughput per VMX instance.
• OpenStack integration.
• 1:1 mapping between VFP and VCP
• Hypervisor support: KVM, VMWare ESXi, Xen
• High level feature support for FRS
• Full IP capabilities
• MPLS: LDP, RSVP
• MPLS applications: L3VPN, L2VPN, L2Circuit
• IP and MPLS multicast
• Tunneling: GRE, LT
• OAM: BFD
• QoS: Intel DPDK QoS feature-set
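As a rough illustration of how this FRS feature set surfaces in ordinary Junos configuration on a vMX instance (a sketch only; the interface names and addresses below are hypothetical, not from the slides):

```
# Hypothetical sketch: OSPF with BFD, plus LDP- and RSVP-signaled MPLS on one interface
set interfaces ge-0/0/0 unit 0 family inet address 10.0.0.1/30
set interfaces ge-0/0/0 unit 0 family mpls
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0 bfd-liveness-detection minimum-interval 300
set protocols mpls interface ge-0/0/0.0
set protocols ldp interface ge-0/0/0.0
set protocols rsvp interface ge-0/0/0.0
```

The point is that vMX uses the same JUNOS configuration model as a physical MX, so these feature areas need no vMX-specific syntax.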
31. vMX with vRouter and Orchestration
[Diagram: Contrail controller and NFV orchestrator pushing template-based config to vMX instances]
• vMX with vRouter integration: VirtIO used for para-virtualized drivers
• Contrail OpenStack handles VM management and sets up the overlay network
• An NFV orchestrator (OpenStack Heat templates) is used to easily create and replicate VMX instances
33. vMX Pricing philosophy
Value-based pricing
• Price as a platform and not just on the cost of bandwidth
• Each VMX instance is a router with its own control plane, data plane and administrative domain
• The value lies in the ability to instantiate routers easily
Elastic pricing model
• Bandwidth-based pricing
• Pay-as-you-grow model
34. Application package functionality mapping

BASE
• Functionality: IP routing with 32K IP routes in the FIB; basic L2 functionality (L2 bridging and switching); no VPN capabilities (no L2VPN, VPLS, EVPN or L3VPN)
• Use cases: low-end CPE or Layer 3 gateway

ADVANCED (-IR)
• Functionality: full IP FIB; full L2 capabilities including L2VPN, VPLS and L2Circuit; VXLAN; EVPN; IP multicast
• Use cases: L2 vPE; full IP vPE; virtual DC GW

PREMIUM (-R)
• Functionality: everything in BASE plus L3VPN for IP and multicast
• Use cases: L3VPN vPE; virtual private cloud GW

Note: application packages exclude IPSec, BNG and VRR functionality.
35. Bandwidth License SKUs
• Bandwidth based licenses for each application package for the following processing capacity limits:
100M, 250M, 500M, 1G, 5G, 10G, 40G. Note for 100M, 250M and 500M there is a combined SKU with
all applications included.
• Combined SKU (all application packages included): 100M, 250M, 500M
• Per-package SKUs (BASE, ADVANCE, PREMIUM): 1G, 5G, 10G, 40G
• Application tiers are additive, i.e. the ADV tier encompasses BASE functionality
36. VMX software License SKUs
SKU Description
VMX-100M 100M perpetual license. Includes all features in full scale
VMX-250M 250M perpetual license. Includes all features in full scale
VMX-500M 500M perpetual license. Includes all features in full scale
VMX-BASE-1G 1G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-5G 5G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-10G 10G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-40G 40G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-ADV-1G 1G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-5G 5G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-10G 10G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-40G 40G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-PRM-1G 1G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-5G 5G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-10G 10G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-40G 40G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
38. CHALLENGES WITH CURRENT NETWORKS
How to Make the Best Use of the Installed Infrastructure?
1. How do I use my network resources efficiently?
2. How can I make my network application-aware?
3. How do I get complete & real-time visibility?
39. PCE ARCHITECTURE
A Standards-based Approach for Carrier SDN

What is it?
A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination.

What are the components?
• Path Computation Element (PCE): computes the path
• Path Computation Client (PCC): receives the path and applies it in the network; paths are still signaled with RSVP-TE
• PCE Protocol (PCEP): protocol for PCE/PCC communication
40. ACTIVE STATEFUL PCE
A centralized network controller
The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures. More recently there is a need for a more 'active' and 'stateful' PCE; NorthStar is an active stateful PCE. This fits well with the SDN paradigm of a centralized network controller.
What makes an active stateful PCE different:
• The PCE is synchronized, in real time, with the network via standard networking protocols (IGP, PCEP)
• The PCE has visibility into the network state: bandwidth availability, LSP attributes
• The PCE can take 'control' and create 'state' within the MPLS network
• The PCE dictates the order of operations network-wide
[Diagram: NorthStar receives LSP state reports from, and creates LSP state in, the MPLS network]
41. SOFTWARE-DRIVEN POLICY
NORTHSTAR COMPONENTS & WORKFLOW
• Topology discovery: TE LSP discovery via PCEP; TED discovery (BGP-LS, IGP-TE); LSDB discovery (OSPF, ISIS)
• Path computation: application-specific algorithms (ANALYZE, OPTIMIZE, VIRTUALIZE), exposed via open APIs
• State installation: create/modify TE LSPs via PCEP, one session per LER (PCC); LSPs are signaled in the network with RSVP
42. NORTHSTAR MAJOR COMPONENTS
NorthStar consists of several major components:
• JUNOS Virtual Machine (VM)
• Path Computation Server (PCS)
• Topology Server
• REST Server
Component functional responsibilities:
• The JUNOS VM is used to collect the TE database & LSDB; a new JUNOS daemon, NTAD, is used to remotely 'flash' the lsdist0 table to the PCS
• The PCS has multiple functions: it peers with each PCC using PCEP for LSP state collection & modification, and runs application-specific algorithms for computing LSP paths
• The REST server is the interface into the APIs
[Diagram: JUNOS VM (RPD, NTAD), PCS, Topology Server and REST Server hosted on a KVM hypervisor running CentOS 6.5, connected to PCCs in the MPLS network via BGP-LS/IGP and PCEP]
43. NORTHSTAR NORTHBOUND API
Integration with 3rd-party tools and custom applications
• REST APIs expose each workflow stage: a topology API (topology discovery via IGP-TE / BGP-LS), a path computation API (application-specific algorithms) and a path provisioning API (PCEP)
• Standard, custom & 3rd-party applications build on these APIs
• NorthStar pre-packaged applications: bandwidth calendaring, path diversity, premium path, auto-bandwidth / TE++, etc.
44. NORTHSTAR 1.0 HIGH AVAILABILITY (HA)
Active / standby for delegated LSPs
NorthStar 1.0 supports a high-availability model only for delegated LSPs:
• Controllers are not actively synced with each other
• Active / standby PCE model with up to 16 backup controllers:
• PCE group: all PCEs belonging to the same group
• LSPs are delegated to the primary PCE
• The primary PCE is the controller with the highest delegation priority
• Other controllers cannot make changes to the LSPs
• If a PCC loses the connection to its primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE
• ALL PCCs MUST use the same primary PCE

[configuration protocols pcep]
pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}

[Diagram: a PCC with PCEP sessions to controllers jnc1 and jnc2]
45. JUNOS PCE CLIENT IMPLEMENTATION
New JUNOS daemon, pccd
• Enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs
• PCCD is the relay/message translator between the PCE & RPD
• LSP parameters, such as path & bandwidth, and LSP creation instructions received from the PCE are communicated to RPD via PCCD
• RPD then signals the LSP using RSVP-TE
[Diagram: PCE to PCCD over PCEP, PCCD to RPD over JUNOS IPC; RPD signals LSPs into the MPLS network with RSVP-TE]
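From the PCC side, handing control of an LSP to the PCE is likewise plain JUNOS configuration. A minimal sketch (the controller address and LSP name are hypothetical; the `lsp-external-controller pccd` statement is what delegates LSPs to PCCD):

```
# Hypothetical PCC-side sketch for PCE-controlled LSPs
set protocols pcep pce northstar1 destination-ipv4-address 192.0.2.10
set protocols pcep pce northstar1 pce-type active stateful
set protocols pcep pce northstar1 lsp-provisioning
set protocols mpls lsp-external-controller pccd
set protocols mpls label-switched-path to-PE2 to 198.51.100.2
set protocols mpls label-switched-path to-PE2 lsp-external-controller pccd
```

With this in place, RPD reports the LSP to the PCE via PCCD and accepts path and bandwidth updates back over PCEP.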
46. NORTHSTAR SIMULATION MODE
NorthStar vs. IP/MPLSview

NorthStar (real-time network functions):
• LSP control/modification
• Dynamic topology updates via BGP-LS / IGP-TE
• Dynamic LSP state updates via PCEP
• Real-time modification of LSP attributes via PCEP (ERO, B/W, pre-emption, …)

NorthStar Simulation (MPLS LSP planning & design):
• Topology acquisition via the NorthStar REST API (snapshot)
• LSP provisioning via the REST API
• Exhaustive failure analysis & capacity planning for MPLS LSPs
• MPLS LSP design (P2MP, FRR, JUNOS config'let, …)

IP/MPLSview ('full' offline network planning & management):
• Topology acquisition & equipment discovery via CLI, SNMP and the NorthStar REST API
• Exhaustive failure analysis & capacity planning (IP & MPLS)
• Inventory, provisioning & performance management; FCAPS (PM, CM, FM)
47. DIVERSE PATH COMPUTATION
Automated computation of end-to-end diverse paths
Network-wide visibility allows NorthStar to support end-to-end LSP path diversity:
• Wholly disjoint path computations; options for link, node and SRLG diversity
• Pairs of diverse LSPs with the same endpoints or with different endpoints
• SRLG information learned from the IGP dynamically
• Supported for PCE-created LSPs (at time of provisioning) and delegated LSPs (through manual creation of a diversity group)
[Diagram: primary and secondary links between CE pairs initially share risk (warning); NorthStar re-places the paths so the shared risk is eliminated]
48. PCE CREATED SYMMETRIC LSPS
Local association of the LSP symmetry constraint
NorthStar supports creating symmetric LSPs:
• Does not leverage the GMPLS extensions for co-routed or associated bidirectional LSPs
• Unidirectional LSPs (with identical names) are created from nodeA to nodeZ & from nodeZ to nodeA
• The symmetry constraint is maintained locally on NorthStar (attribute: pair=<value>)
[Diagram: NorthStar driving symmetric LSP creation between two endpoints]
49. MAINTENANCE-MODE RE-ROUTING
Automated path re-computation, re-signaling and restoration
Automate re-routing of traffic before a scheduled maintenance window:
• Simplifies planning and preparation before and during a maintenance window
• Eliminates the risk that traffic is mistakenly affected when a node / link goes into maintenance mode
• Reduces the need for spare capacity through optimum use of the resources available during the maintenance window
• After the maintenance window finishes, paths are automatically restored to the (new) optimum path
1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available
2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break)
3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path
50. GLOBAL CONCURRENT OPTIMIZATION
Optimized LSP placement
NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters:
• CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement
• Net Groom: triggered on demand; the user can choose the LSPs to be optimized; LSP priority is not taken into account; no pre-emption
• Path Optimization: triggered on demand or at scheduled intervals (with the optimization timer); global re-optimization toward all LSPs; LSP priority is taken into account; pre-emption may happen
[Diagram: a new path request hits a bandwidth bottleneck and CSPF failure; global re-optimization re-places the high- and low-priority LSPs]
51. INTER-DOMAIN TRAFFIC-ENGINEERING
Optimal path computation & LSP placement
• LSP delegation, creation and optimization for inter-domain LSPs
• Single active PCE across domains, BGP-LS for topology acquisition
• JUNOS inter-AS requirements & constraints:
http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-for-lsps.html
[Diagrams: inter-AS traffic engineering (NorthStar spanning AS 100 and AS 200) and inter-area traffic engineering (areas 0, 1, 2, 3)]
52. NORTHSTAR SIMULATION MODE
Offline network planning & modeling
NorthStar builds a near real-time network model for visualization and offline planning through dynamic topology / LSP acquisition:
• Export of topology and LSP state to NorthStar simulation mode for 'offline' MPLS network modeling
• Add/delete links/nodes/LSPs for future network planning
• Exhaustive failure analysis; P2MP LSP, LSP and FRR design/planning
• JUNOS LSP config'let generation
[Diagram: NorthStar-Simulation growth scenarios for year 1, year 3, year 5 and a year-1 extension]
53. A REAL CUSTOMER EXAMPLE – PCE VALUE
Centralized vs. distributed path computation
[Chart: link utilization (%) per link, distributed CSPF vs. PCE centralized CSPF, across roughly 172 links]

Distributed CSPF assumptions:
• TE-LSP operational routes are used for distributed CSPF
• RSVP-TE maximum reservable bandwidth set to 92%
• Modeling was performed with the exact operational LSP paths

Centralized path calculation assumptions:
• Convert all TE-LSPs to EROs via the PCE design action
• Objective function is minimizing maximum link utilization
• Only primary EROs & online bypass LSPs
• Modeling was performed with 100% of TE LSPs being computed by the PCE

Result: up to 15% reduction in RSVP reserved bandwidth.
54. NORTHSTAR 1.0
FRS delivery
NorthStar FRS is targeted for March 23rd:
• (Beta) trials / evaluations already ongoing
• First customer wins in place
Target JUNOS releases:
• 14.2R3 Special *
• 14.2R4* / 15.1R1* / 15.2R1*
Supported platforms at FRS:
• PTX (3K, 5K)
• MX (80, 104, 240/480/960, 2010/2020, vMX)
• Additional platform support in NorthStar 2.0
* Pending TRD process
NorthStar packaging & platform:
• Bare-metal application only; no VM support at FRS
• Runs on any 64-bit x86 machine supported by Red Hat 6 or CentOS 6
• Single hybrid ISO for installation
• Based on Juniper SCL 6.5R3.0
Recommended minimum hardware requirements:
• Dual 64-bit x86 processors, 1.8 GHz Intel Xeon E5 family or equivalent
• 32 GB RAM
• 1 TB storage
• 2 x 1G/10G network interfaces