PCIe peer-to-peer communication can reduce bottlenecks between high-performance I/O devices like SSDs and networking cards by allowing them to transfer data directly without going through the CPU. PMC is developing an NVM Express NVRAM card using DRAM cache that is accessible via the NVMe block driver or custom character driver, and can achieve almost 1 million 4KB IOPS or 10 million 64B IOPS. The company has set up a test hardware and software environment using PCIe devices connected directly to CPU lanes running Debian Linux with custom kernel patches to demonstrate peer-to-peer capabilities.
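The quoted IOPS figures imply very different aggregate bandwidths depending on I/O size; a quick back-of-envelope check (pure arithmetic, not taken from the deck itself):

```python
def iops_to_bandwidth_gbps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure at a given I/O size to GB/s (decimal GB)."""
    return iops * io_size_bytes / 1e9

# ~1 million 4KB IOPS moves roughly 4 GB/s of data
print(iops_to_bandwidth_gbps(1_000_000, 4096))   # 4.096
# ~10 million 64B IOPS is dominated by per-operation overhead, not bandwidth
print(iops_to_bandwidth_gbps(10_000_000, 64))    # 0.64
```

This is why small-block workloads stress the command path (and benefit from peer-to-peer transfers that skip the CPU), while large-block workloads stress raw link bandwidth.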
PLNOG 8: Piotr Szolkowski - Secure and Highly Scalable Data Center (PROIDEA)
This document discusses networking solutions from Extreme Networks for data centers. It describes the modular operating system ExtremeXOS which allows for dynamic software uploads and self-healing processes. It also discusses CLEAR-Flow for statistical measurement and security rules, Direct Attach for eliminating virtual switches, virtual machine management capabilities, and technologies like M-LAG for link resiliency. The document provides an overview of product lines like the Summit X670 top-of-rack switch and the BlackDiamond X8 core switch, highlighting their performance, scalability, and virtualization support.
Red Hat GFS (Global File System) is a cluster file system that allows nodes in a cluster to simultaneously access a shared block storage device. It employs distributed metadata and multiple journals to operate optimally in a cluster. GFS uses a lock manager to coordinate I/O and maintain file system integrity. It provides benefits like simplified data infrastructure management, maximized storage resource use, seamless cluster scaling, and high performance access to data. GFS can be deployed with different configurations to suit various needs for performance, scalability, and cost. It provides data sharing, a consistent namespace, and features required for enterprise environments.
Revisiting CephFS MDS and mClock QoS Scheduler - Yongseok Oh
This presentation covers CephFS performance and scalability evaluation results, addressing technical issues such as multi-core scalability, cache size, static pinning, recovery, and QoS.
This document discusses Linux clustering concepts and administration on Red Hat Enterprise Linux 5. It covers cluster types including storage, high availability, load balancing, and performance clusters. It also describes the components of the Red Hat Cluster Suite including the cluster infrastructure, HA service management, Global File System, Cluster Logical Volume Manager, and Linux Virtual Server for load balancing. Administration tools like Conga, System-config-cluster, and command line tools are also summarized.
The document provides information about Brocade SAN switches including their product lines, features, and specifications. It discusses various switch models ranging from 8-port to 384-port configurations supporting 1, 2, 4, 8, and 10Gbps speeds. Features covered include dynamic path selection, ISL trunking, extended fabric, hardware-enforced zoning, advanced performance monitoring, and FCIP tunneling. The document also reviews FOS enhancements, new 10Gbps blades, and concepts like NPIV and NPV.
This presentation introduces clustering and Red Hat clustering. It defines a cluster as two or more computers that work together to perform a task. It distinguishes between hardware and software clusters, with hardware clusters being more expensive. The major software cluster types are high availability, load balancing, and high performance. The presentation concludes by advising attendees to download free documentation from Red Hat's website to get started with Red Hat clustering.
The document provides information on Juniper SRX platform updates, including:
1) vSRX updates - The virtual firewall platform now supports up to 80G firewall throughput on a single server, and a 100G vSRX was announced, along with support for VMware 5.5 with SR-IOV and feature parity with physical SRX firewalls.
2) Physical SRX updates - New SRX3xx and SRX550 series for branches up to 500 users. The SRX1500 provides high performance networking and security for enterprise edge and data center edge. The SRX5400 supports advanced software security services.
3) Software updates - Sky ATP cloud-based malware analysis and SRX User Identity REST API.
RouterOS v6 will include several new features and improvements, including support for new hardware, an updated Linux kernel, additional CPU architecture support, and a reworked QoS system. It features improved performance on multi-CPU systems, enhanced interface drivers, lifted CPU core limits, and simplified simple queue configuration. New capabilities include wireless advanced channels, SCEP protocol support, and more flexible DHCP options handling.
This document discusses DPDK support for new hardware offloads. It describes the Netronome Agilio SmartNIC, which has hardware accelerators and can offload tasks like cryptography and flow processing. It discusses using the SmartNIC with DPDK and OVS for improved performance over kernel-based solutions. Full flow classification and action offloading to the SmartNIC is proposed to reduce CPU usage, along with exploring eBPF/XDP offloading possibilities and virtio offloading to enable VM migration.
This document provides an overview of IxExplorer, a traffic generation and measurement tool from Ixia. It discusses IxExplorer's key features such as generating up to 255 unique packet streams, operating within OSI layers 1-4, and measuring latency and packet sequencing. The document then reviews IxExplorer's operation, including its local and remote access modes, port ownership, generating and configuring packet streams, and transmitting streams. It also covers IxExplorer's statistical views for analyzing received data and its packet group statistic views for measuring latency and sequencing on a per-stream basis.
The document discusses the HP Virtual Connect technology. It describes the key components of a Virtual Connect infrastructure including blade servers, interconnect bays, and Virtual Connect modules. It explains how Virtual Connect simplifies server connectivity and can be managed through the Virtual Connect Manager console either for a single enclosure or multiple enclosures across the datacenter. Screenshots of the Virtual Connect Manager user interface are provided to demonstrate how network and storage resources can be assigned to servers.
The document discusses neutron hybrid mode, which allows virtual machines to use both overlay and physical networks. It provides an overview of network architectures, neutron basics, use cases, and the benefits of physical and overlay networks. Performance test results show that for buffer sizes less than MTU, tunneled VMs have the best throughput both within and across racks. Tunneled and bridged VMs have slightly higher latency than bare metal. Across the network gateway, tunneled VMs have similar or better throughput than bare metal or bridged VMs, though with slightly higher latency. The hybrid approach provides flexibility while maintaining high performance.
Netronome's Nick Tausanovitch, VP of Solutions Architecture and Silicon Product Management, presented at the Linley Data Center Conference in Santa Clara, CA on February 9, 2016.
VMworld 2013
Lenin Singaravelu, VMware
Haoqiang Zheng, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Accelerate Service Function Chaining Vertical Solution with DPDK - OPNFV
Service Function Chaining (SFC) is one of the top five NFV use cases. Supporting SFC in provider and enterprise networks requires performance assurance; specifically, the Classifier and the Service Function Forwarder, which are typically implemented in software such as virtual switches, need to meet line-rate requirements. DPDK (Data Plane Development Kit) is an open source project comprising a set of libraries and drivers for fast packet processing. In this presentation, we discuss our experiences accelerating SFC with DPDK. In addition, telco and datacenter carriers demand dynamic SFC, which requires support for new SFC wire protocols (e.g., VXLAN-GPE and NSH) in both the data and control planes. We share our experiences with, and future work on, a high-performance, NSH-aware SFC vertical solution built from open-source ingredients: OpenStack, OpenDaylight, and Open vSwitch with DPDK acceleration.
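The NSH header that such forwarders match on carries, per RFC 8300, a service path header consisting of a 24-bit Service Path Identifier (SPI) and an 8-bit Service Index (SI) that each service function decrements. A minimal sketch of packing and forwarding that word (an illustration, not the deck's actual code):

```python
import struct

def pack_service_path(spi: int, si: int) -> bytes:
    """Pack the NSH service path header (RFC 8300): a 24-bit Service Path
    Identifier and an 8-bit Service Index in one network-order 32-bit word."""
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path(word: bytes):
    (v,) = struct.unpack("!I", word)
    return v >> 8, v & 0xFF

def forward_hop(word: bytes) -> bytes:
    """Each service function decrements the Service Index before
    re-forwarding; an SI of 0 means the packet must be dropped."""
    spi, si = unpack_service_path(word)
    if si == 0:
        raise ValueError("service index exhausted; drop packet")
    return pack_service_path(spi, si - 1)

hdr = pack_service_path(spi=42, si=255)
print(unpack_service_path(forward_hop(hdr)))  # (42, 254)
```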
Juniper Networks' vMX product provides a virtualized routing platform that can run the same Junos operating system as physical MX routers. The vMX uses virtualized DPDK-accelerated packet processing called vTRIO to separate the control and data planes for high performance. It supports various hypervisor and container deployments and can scale throughput from 100Mbps up to multiple 10Gbps ports depending on vCPU and core allocation. The vMX is suited for applications such as virtual PE routers, DC gateways, cloud WAN routers, and route reflectors where service providers need a virtualized solution that leverages their existing Junos feature set.
This document provides a list of competing features from the HP BladeSystem and Cisco UCS solutions. Its objective is to highlight weaknesses and strengths as a starting point for competing more effectively against UCS in the data center space.
This document provides an introduction to high-performance computing (HPC) including definitions, applications, hardware, and software. It defines HPC as utilizing parallel processing through computer clusters and supercomputers to solve complex modeling problems. The document then describes typical HPC cluster hardware such as computing nodes, a head node, switches, storage, and a KVM. It also outlines cluster management software, job scheduling, and parallel programming tools like MPI that allow programs to run simultaneously on multiple processors. An example HPC cluster at SIU called Maxwell is presented with its technical specifications and a tutorial on logging into and running simple MPI programs on the system.
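The scatter/reduce pattern that MPI programs use on a cluster like Maxwell can be sketched on one machine with Python's multiprocessing; a real cluster would use MPI (e.g. mpi4py or C MPI) across nodes, so this is only a conceptual stand-in:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one 'rank': sum its slice of the data."""
    return sum(chunk)

def parallel_sum(data, nprocs=4):
    """Scatter the data across workers, then reduce the partial results,
    mimicking an MPI scatter + reduce pattern on a single host."""
    chunks = [data[i::nprocs] for i in range(nprocs)]
    with Pool(nprocs) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))  # 499500
```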
HP Virtual Connect technical fundamental101 v2.1 - ผู้ชาย แห่งสายลม
The document discusses challenges with traditional server blade networking approaches that require many cables or switches. It introduces the HP Virtual Connect solution which simplifies networking through the following:
- Reduces cables without adding switches to manage by connecting servers to logical networks defined in software rather than physical network infrastructure.
- Cleanly separates server enclosure connections from LANs and SANs, allowing fast addition, movement, or replacement of servers without affecting networks.
- Eliminates the need for network/storage teams to manage server connections by handling MAC/WWN assignments internally through profile management.
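The internal MAC/WWN assignment idea can be sketched as deriving stable, locally-administered addresses from a pool base plus a profile index; the pool prefix and function name below are illustrative, not HP's actual scheme:

```python
def vc_mac(pool_base: str, index: int) -> str:
    """Derive a locally-administered MAC from a pool base and a profile index,
    the way a Virtual Connect-style manager hands out stable addresses.
    pool_base is the first 3 octets; index fills the remaining 3."""
    prefix = [int(b, 16) for b in pool_base.split(":")]
    # set the locally-administered bit so the address never collides
    # with a burned-in vendor MAC
    prefix[0] |= 0x02
    suffix = [(index >> 16) & 0xFF, (index >> 8) & 0xFF, index & 0xFF]
    return ":".join(f"{b:02x}" for b in prefix + suffix)

# profile 5 in an illustrative pool
print(vc_mac("00:17:a4", 5))  # 02:17:a4:00:00:05
```

Because the address follows the server *profile* rather than the hardware, moving a profile to a replacement blade carries its MACs and WWNs with it, which is what spares the network and storage teams from re-zoning.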
This document summarizes a technical deep dive presentation on vSphere Distributed Switches. It discusses the requirements, construction, alternatives, tips and real world use cases of vSphere Distributed Switches. The presenters were Jason Nash from Varrow and Chris Wahl from AHEAD, and they covered topics such as migration from standard to distributed switches, mixing 1Gb and 10Gb networking, and techniques for bandwidth management.
Open vSwitch - Use it for your day-to-day needs (rranjithrajaram)
Slides on Open vSwitch presented at FUDCon 2015.
The main agenda for this talk: why Open vSwitch is a better alternative to the Linux bridge, and why you should start using it as the bridge for your KVM host.
Virtual Connect Enterprise Manager allows administrators to centrally manage up to 150 Virtual Connect domains from a single console. It provides a central database to administer 65,000 network addresses and uses Virtual Connect Domain Groups to simplify configuration across multiple enclosures. The software also enables movement of server profiles and failover between BladeSystem enclosures.
This document discusses Open vSwitch (OVS) and how using Data Plane Development Kit (DPDK) can improve its performance. It notes that with standard OVS, there are many components between a virtual machine and physical networking that cause scalability and performance issues due to context switches. OVS-DPDK addresses this by using polling, hugepages, pinned CPUs, and userspace I/O to bypass the kernel and reduce overhead. The document shows that using DPDK can increase OVS throughput by over 8x and reduce latency by 30-37% compared to standard OVS.
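The intuition behind those gains is amortization: polling in bursts pays the fixed per-event overhead (interrupt, context switch, syscall) once per batch instead of once per packet. A toy cost model with purely illustrative numbers:

```python
def cost_per_packet(fixed_overhead_ns: float, per_packet_ns: float, batch: int) -> float:
    """Amortized cost per packet when a fixed overhead (interrupt, context
    switch, syscall) is paid once per batch instead of once per packet."""
    return fixed_overhead_ns / batch + per_packet_ns

# interrupt-driven path: overhead paid on every packet (batch of 1)
naive = cost_per_packet(2000, 100, 1)    # 2100 ns/packet
# poll-mode driver pulling 32-packet bursts
dpdk = cost_per_packet(2000, 100, 32)    # 162.5 ns/packet
print(naive / dpdk)
```

With these made-up constants the batched path is over 12x cheaper per packet, the same shape of improvement the document reports for OVS-DPDK.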
Sharing High-Performance Interconnects Across Multiple Virtual Machines - inside-BigData.com
In this deck from the Stanford HPC Conference, Mohan Potheri from VMware presents: Sharing High-Performance Interconnects Across Multiple Virtual Machines.
"Virtualized devices offer maximum flexibility: sharing of hardware between virtual machines, the use of VMware vMotion to handle migration and take snapshots. However, when performance is the most critical requirement there are other options. VMware Direct Path I/O delivers excellent performance, but only for a single virtual machine. Single root I/O virtualization (SR-IOV), on the other hand, offers the performance of pass-through mode while allowing devices to be shared by multiple virtual machines.
This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations."
Watch the video: https://youtu.be/-iYYmsBw8SU
Learn more: https://www.vmware.com
and
http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
1. The document discusses using OpenStack for a 4G core network, including performance issues and solutions when virtualizing the EPC network functions using OpenStack.
2. Key performance issues identified include high CPU usage, competing for CPU resources, latency, throughput, and packet loss. Solutions proposed are CPU pinning, NUMA awareness, hugepages, DPDK, SR-IOV, and offloading processing to smart NICs.
3. Going forward, the next steps discussed are using OVS-DPDK for offloading, SDN, containers, and cloud architectures for 5G.
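The pinning, NUMA, and hugepage fixes in point 2 map directly onto Nova flavor extra specs. The keys below are real Nova extra-spec names; the flavor itself and the helper are illustrative:

```python
# Nova flavor extra specs implementing the fixes above: dedicated pCPUs
# (CPU pinning), a single NUMA node, and hugepage-backed guest RAM.
epc_flavor_extra_specs = {
    "hw:cpu_policy": "dedicated",   # pin vCPUs to host cores
    "hw:numa_nodes": "1",           # keep vCPUs and memory on one NUMA node
    "hw:mem_page_size": "large",    # back guest RAM with hugepages
}

def validate(specs: dict) -> bool:
    """Minimal sanity check that the pinning/NUMA/hugepage trio is present."""
    required = {"hw:cpu_policy", "hw:numa_nodes", "hw:mem_page_size"}
    return required <= specs.keys()

print(validate(epc_flavor_extra_specs))  # True
```

SR-IOV and DPDK datapaths are configured separately (Neutron port types and OVS-DPDK host setup); the flavor only controls guest placement.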
This document discusses several types of computer networks:
- Cloud interconnection networks which connect servers hierarchically and must provide scalability, low cost, low latency and high bandwidth. InfiniBand is commonly used.
- Storage area networks which connect servers to storage devices using the Fibre Channel protocol and provide block-level storage transfers.
- Content delivery networks which replicate and deliver content from origin servers to edge caches for improved performance and scalability.
- Overlay networks which are built on top of physical networks and are used in peer-to-peer, content delivery, and client-server systems. Scale-free networks follow a power law degree distribution and many real-world networks have this property.
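The power-law degree distribution mentioned above emerges from preferential attachment: new nodes link to existing nodes with probability proportional to their degree. A minimal pure-Python sketch of this growth process (Barabasi-Albert style, illustrative parameters):

```python
import random

def preferential_attachment(n: int, m: int = 2, seed: int = 42):
    """Grow a graph where each new node attaches to m existing nodes with
    probability proportional to degree. Returns each node's final degree."""
    random.seed(seed)
    # start from a small clique of m+1 nodes, each with degree m
    degree = {i: m for i in range(m + 1)}
    # repeated-node trick: sampling uniformly from this list is equivalent
    # to sampling nodes proportionally to their degree
    targets = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        degree[new] = m
        for t in chosen:
            degree[t] += 1
            targets.append(t)
        targets.extend([new] * m)
    return degree

degs = preferential_attachment(2000)
# a few hubs accumulate most links: the hallmark of a power-law tail
print(max(degs.values()), min(degs.values()))
```

Most nodes keep a degree near m while a handful of early hubs grow very large, which is exactly the scale-free property the document attributes to many real-world networks.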
This document provides an overview of key concepts in IT infrastructure architecture related to networking. It discusses the presentation and application layers, protocols like SSL/TLS, HTTP, and email protocols. It also covers infrastructure services like DHCP, DNS, NTP, and IPAM systems. Additionally, it summarizes network virtualization techniques like VLANs, VXLANs, virtual NICs, and virtual switches. Finally, it discusses software defined networking, network function virtualization, layered network topologies, spine-leaf architectures, network teaming, and the spanning tree protocol.
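The VXLAN encapsulation mentioned above identifies each virtual segment with a 24-bit VNI carried in an 8-byte header (RFC 7348). A minimal parser, as an illustration of the wire format:

```python
import struct

def parse_vxlan_header(data: bytes):
    """Parse the 8-byte VXLAN header (RFC 7348): an 8-bit flags field
    (0x08 = valid-VNI bit), 24 reserved bits, a 24-bit VNI, 8 reserved bits."""
    if len(data) < 8:
        raise ValueError("VXLAN header is 8 bytes")
    flags_and_rsvd, vni_and_rsvd = struct.unpack("!II", data[:8])
    flags = flags_and_rsvd >> 24
    vni = vni_and_rsvd >> 8
    return {"vni_valid": bool(flags & 0x08), "vni": vni}

# header for VNI 5000 with the I (valid VNI) flag set
hdr = bytes([0x08, 0, 0, 0]) + (5000 << 8).to_bytes(4, "big")
print(parse_vxlan_header(hdr))  # {'vni_valid': True, 'vni': 5000}
```

The 24-bit VNI is what lets VXLAN scale past the 4094-segment limit of 12-bit VLAN IDs.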
LF_DPDK17_OpenNetVM: A high-performance NFV platform to meet future communic... - LF_DPDK
This document discusses software-based networking and network function virtualization (NFV). It introduces NetVM, an NFV platform developed by the author that provides high performance packet delivery across virtual machines using DPDK for zero-copy networking. NetVM enables complex network services to be distributed across multiple VMs while maintaining high throughput. The author also discusses OpenNetVM, an open source version of NetVM, and contributions like Flurries that enable unique network functions to run per flow for improved scalability. NFVnice, a userspace framework for scheduling NFV chains, is also introduced to improve throughput, fairness and CPU utilization.
This document provides an agenda for a presentation on HP Blade technology. The agenda includes introductions to HP Blade chassis, onboard administrators, Virtual Connect Flex-10 modules, Cisco modules, system management tools, different types of Aurora servers, storage servers, and cluster configuration. It also discusses NTP server configuration, questions, and demonstrations of HP Blade hardware components and management modules.
DPDK Summit 2015 - RIFT.io - Tim MortsolfJim St. Leger
DPDK Summit 2015 in San Francisco.
Presentation by RIFT.io's CTO Tim Mortsolf.
For additional details and the video recording please visit www.dpdksummit.com.
Madhu Rangarajan provides an overview of networking trends in the cloud, various network topologies and their tradeoffs, and trends in the acceleration of packet-processing workloads, along with some of the work going on at Intel to address these trends, including FPGAs in the datacenter.
Platforms for Accelerating the Software Defined and Virtual Infrastructure6WIND
As network infrastructures evolve and selected elements shift from physical systems to virtual functions, a new class of network appliance is required that provides high-performance processing, balanced I/O, and hardware or software acceleration. Such a platform must combine standard server technology with modular systems that can be configured to support line-rate performance on network interfaces of up to 100Gbit/s.
This webinar will discuss a class of network appliance that offers performance levels previously requiring more complex and costly architectures while integrating seamlessly with standard software frameworks such as Linux, Open vSwitch (OVS) and Intel® Data Plane Development Kit (DPDK).
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...Cloud Native Day Tel Aviv
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With Advanced Network and Storage Interconnect Technologies, OpenStack Israel 2015
Jakub Pavlik discusses high availability versus disaster recovery in OpenStack clouds. He describes four types of high availability in OpenStack: physical infrastructure, OpenStack control services, virtual machines, and applications. For each type, he outlines concepts like active/passive and active/active configurations, specific technologies used like Pacemaker, Corosync, HAProxy, and MySQL Galera, and considerations for shared and non-shared storage. Finally, he provides examples of high availability architectures and methods used by different OpenStack vendors.
Virtual SAN is VMware's hyper-converged infrastructure storage solution that is integrated with vSphere. It provides a software-defined, distributed storage platform that offers policy-based placement and management of virtual machine storage. Version 6.1 introduced new features like stretched clusters for disaster recovery between sites, support for high-density flash devices, and health monitoring and troubleshooting tools through integration with vRealize Operations. Future enhancements may include RAID 5 and 6 functionality over the network to improve storage efficiency as well as data deduplication and compression.
The document discusses upgrading an office network infrastructure to support increased size and implement centralized data storage and sharing. It includes specifications for hardware like routers, switches, firewalls and servers needed for the Local Area Network and connections to a remote branch office and separate office building. Diagrams show the network layout connecting 20 existing PCs and new servers through fiber optic cables, switches and routers with firewall protection to access the internet and remote offices.
A computer network allows computers to share resources and exchange information. There are several types of networks including local area networks (LANs) within a building, metropolitan area networks (MANs) within a city, and wide area networks (WANs) that span large geographical areas. Networks provide benefits like resource sharing, reliability, reduced costs, and improved communication. They connect using various wired and wireless technologies and different network topologies.
Here are the key steps to run the Ryu controller with a sample application on the Mininet virtual machine topology:
1. Ensure no other controllers are running with `killall controller`
2. Clear any existing Mininet components with `mn -c`
3. Start the Ryu controller with `ryu-manager --verbose ./simple_switch_13.py`
4. In a new terminal, start the Mininet topology with `mn --controller remote`
5. Use Mininet commands like `pingall` and `net` to test connectivity and explore the network
6. You can install additional Ryu applications and restart the controller to add new functionality
The `--verbose` flag prints debug information from the controller, and `simple_switch_13.py` provides basic OpenFlow switch (MAC-learning) functionality. Once `ryu-manager` is running, the SDN environment is initialized with Ryu acting as the controller.
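The simple_switch_13 application implements a MAC-learning layer 2 switch. Its core forwarding decision can be sketched independently of the Ryu API (the class, method names, and port numbers here are illustrative, not Ryu's actual interface):

```python
# Sketch of the MAC-learning logic at the heart of a simple OpenFlow
# switch app such as simple_switch_13.py (Ryu API details omitted).

FLOOD = -1  # stand-in for the OpenFlow OFPP_FLOOD port constant

class LearningSwitch:
    def __init__(self):
        # dpid -> {source MAC -> ingress port}
        self.mac_to_port = {}

    def handle_packet_in(self, dpid, src_mac, dst_mac, in_port):
        """Learn the source location, then decide where to send the packet."""
        table = self.mac_to_port.setdefault(dpid, {})
        table[src_mac] = in_port          # learn/refresh source location
        # Forward to the known port, or flood if the destination is unknown.
        return table.get(dst_mac, FLOOD)

sw = LearningSwitch()
# First packet from A to B: B is unknown, so the switch floods.
assert sw.handle_packet_in(1, "aa:aa", "bb:bb", in_port=1) == FLOOD
# B replies: A was learned on port 1, so the packet goes straight there.
assert sw.handle_packet_in(1, "bb:bb", "aa:aa", in_port=2) == 1
```

In the real application the same learn-then-lookup step also installs a flow entry on the switch so subsequent packets bypass the controller.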
This document discusses network virtualization and its history. It provides the following key points:
1) Network virtualization aims to decouple virtual networks from physical infrastructure through techniques like tunneling and encapsulation, allowing independent address spaces and topologies.
2) Early work included overlay networks for deployment and experimentation. Virtualization is now used in data centers to isolate tenant traffic and connect virtual machines across sites.
3) The OpenVirteX project aims to advance network virtualization by exposing the entire physical topology to virtual network controllers and allowing independent address spaces and topologies through header rewriting. This would provide more flexibility than existing solutions.
Similar to CC-4153, Verizon Cloud Compute and the SM15000, by Paul Curtis (20)
This document discusses new graphics APIs like DX12 and Vulkan that aim to provide lower overhead and more direct hardware access compared to earlier APIs. It covers topics like increased parallelism, explicit memory management using descriptor sets and pipelines, and best practices like batching draw calls and using multiple asynchronous queues. Overall, the new APIs allow more explicit control over GPU hardware for improved performance but require following optimization best practices around areas like parallelism, memory usage, and command batching.
AMD’s math libraries can support a range of programmers from hobbyists to ninja programmers. Kent Knox from AMD’s library team introduces you to OpenCL libraries for linear algebra, FFT, and BLAS, and shows you how to leverage the speed of OpenCL through the use of these libraries.
Review the material presented in the AMD Math libraries webinar in this deck.
For more:
Visit the AMD Developer Forums:http://devgurus.amd.com/welcome
Watch the replay: www.youtube.com/user/AMDDevCentral
Follow us on Twitter: https://twitter.com/AMDDevCentral
This is the slide deck from the popular "Introduction to Node.js" webinar with AMD and DevelopIntelligence, presented by Joshua McNeese. Watch our AMD Developer Central YouTube channel for the replay at https://www.youtube.com/user/AMDDevCentral.
This presentation accompanies the webinar replay located here: http://bit.ly/1zmvlkL
AMD Media SDK Software Architect Mikhail Mironov shows you how to leverage an AMD platform for multimedia processing using the new Media Software Development Kit. He discusses how to use a new set of C++ interfaces for easy access to AMD hardware blocks, and shows you how to leverage the Media SDK in the development of video conferencing, wireless display, remote desktop, video editing, transcoding, and more.
An Introduction to OpenCL™ Programming with AMD GPUs - AMD & Acceleware WebinarAMD Developer Central
This deck presents highlights from the Introduction to OpenCL™ Programming Webinar presented by Acceleware & AMD on Sept. 17, 2014. Watch a replay of this popular webinar on the AMD Dev Central YouTube channel here: https://www.youtube.com/user/AMDDevCentral or here for the direct link: http://bit.ly/1r3DgfF
This document discusses AMD's DirectGMA technology, which allows direct access to GPU memory from other devices. It introduces DirectGMA and explains how it enables peer-to-peer transfers between GPUs, and between GPUs and FPGAs. It then provides details on implementing DirectGMA in APIs like OpenGL, OpenCL, and DirectX 9, 10 and 11 to enable efficient data transfers without CPU involvement.
This webinar explores a variety of new and updated features in Java 8, and discusses how these changes can positively impact your day-to-day programming.
Watch the video replay here: http://bit.ly/1vStxKN
Your Webinar presenter, Marnie Knue, is an instructor for Develop Intelligence and has taught Sun & Oracle certified Java classes, RedHat JBoss administration, Spring, and Hibernate. Marnie also has spoken at JavaOne.
The Small Batch (and other) solutions in Mantle API, by Guennadi Riguer, Mant...AMD Developer Central
This presentation discusses the Mantle API, what it is, why choose it, and abstraction level, small batch performance and platform efficiency.
Download the presentation from the AMD Developer website here: http://bit.ly/TrEUeC
The document is about an AMD and Microsoft Game Developer Day event held in Stockholm, Sweden on June 2, 2014. It provides the date and location of the event multiple times but no other details.
This document discusses the TressFX hair and fur rendering technique. It begins by stating that next-gen quality hair is expected in current generation titles. It then covers the key components needed for high quality hair, including antialiasing, self-shadowing, and transparency. The document discusses isoline tessellation versus a vertex shader approach and describes TressFX's deferred rendering pipeline with selective shading of only the closest fragments. It demonstrates that TressFX can achieve next-gen quality hair and fur at real-time performance through techniques like variable ratio hair simulation, extrusion into triangles in the vertex shader, selective shading, and distance-based level of detail.
Mantle allows Battlefield 4 to significantly improve CPU and GPU performance compared to DirectX 11. The game utilizes Mantle's low-level access to optimize shader compilation, pipeline state management, asynchronous compute and memory handling. Multi-GPU rendering is supported through Alternate Frame Rendering where resources are duplicated and updated asynchronously across GPUs.
Low-level Shader Optimization for Next-Gen and DX11 by Emil PerssonAMD Developer Central
The document discusses low-level shader optimization techniques for next-generation consoles and DirectX 11 hardware. It provides lessons from last year on writing efficient shader code, and examines how modern GPU hardware has evolved over the past 7-8 years. Key points include separating scalar and vector work, using hardware-mapped functions like reciprocals and trigonometric functions, and being aware of instruction throughput and costs on modern GCN-based architectures.
The document summarizes a presentation given by Stephan Hodes on optimizing performance for AMD's Graphics Core Next (GCN) architecture. The presentation covers key aspects of the GCN architecture, including compute units, registers, and latency hiding. It then provides a top 10 list of performance advice for GCN, such as using DirectCompute threads in groups of 64, avoiding over-tessellation, keeping shader pipelines short, and batching draw calls.
The document repeatedly states that AMD and Microsoft held a Game Developer Day event in Stockholm, Sweden on June 2, 2014 to work with game developers.
Direct3D12 aims to address issues with existing APIs by providing a more direct mapping to hardware capabilities. It features command buffers that allow work to be built in parallel threads and scheduled more efficiently. Pipeline state objects avoid runtime compilation overhead. Descriptor tables provide bindless resources through pointers and reduce state changes. While this gives more control and efficiency, it also means applications have more responsibility to avoid errors. Overall, Direct3D12 is designed to better expose the capabilities of modern graphics hardware.
Direct3D 12 aims to reduce CPU overhead and increase scalability across CPU cores by allowing developers greater control over the graphics pipeline. It optimizes pipeline state handling through pipeline state objects and reduces redundant resource binding by introducing descriptor heaps and tables. Command lists and bundles further improve performance by enabling parallel command list generation and reuse of draw commands.
Holy smoke! Faster Particle Rendering using Direct Compute by Gareth ThomasAMD Developer Central
The document discusses faster particle rendering using DirectCompute. It describes using the GPU for particle simulation by taking advantage of its parallel processing capabilities. It discusses using compute shaders to simulate particle behavior, handle collisions via the depth buffer, sort particles using bitonic sort, and render particles in tiles via DirectCompute to avoid overdraw from large particles. Tiled rendering involves culling particles, building per-tile particle indices, and sorting particles within each tile before shading them in parallel threads to composite onto the scene.
Computer Vision Powered by Heterogeneous System Architecture (HSA) by Dr. Ha...AMD Developer Central
Computer Vision Powered by Heterogeneous System Architecture (HSA) by Dr. Harris Gasparakis, AMD, at the Embedded Vision Alliance Summit, May 2014.
Harris Gasparakis, Ph.D., is AMD’s OpenCV manager. In addition to enhancing OpenCV with OpenCL acceleration, he is engaged in AMD’s Computer Vision strategic planning, ISVs, and AMD Ventures engagements, including technical leadership and oversight in the AMD Gesture product line. He holds a Ph.D. in theoretical high energy physics from YITP at SUNYSB. He is credited with enabling real-time volumetric visualization and analysis in Radiology Information Systems (Terarecon), including the first commercially available virtual colonoscopy system (Vital Images). He was responsible for cutting edge medical technology (Biosense Webster, Stereotaxis, Boston Scientific), incorporating image and signal processing with AI and robotic control.
Productive OpenCL Programming An Introduction to OpenCL Libraries with Array...AMD Developer Central
This document provides an overview of OpenCL libraries for GPU programming. It discusses specialized GPU libraries like clFFT for fast Fourier transforms and Random123 for random number generation. It also covers general GPU libraries like Bolt, OpenCV, and ArrayFire. ArrayFire is highlighted as it provides a flexible array data structure and hundreds of parallel functions across domains like image processing, machine learning, and linear algebra. It supports JIT compilation and data-parallel constructs like GFOR to improve performance.
Rendering Battlefield 4 with Mantle by Johan Andersson - AMD at GDC14AMD Developer Central
Johan Andersson will show how the Frostbite 3 game engine is using the low-level graphics API Mantle to deliver significantly improved performance in Battlefield 4 on PC and future games from Electronic Arts in this presentation from the 2014 Game Developers Conference in San Francisco March 17-21. Also view this and other presentations on our developer website at http://developer.amd.com/resources/documentation-articles/conference-presentations/
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect their personal devices and information.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
3. Verizon cloud development goals
• Very few different hardware components
• Consistent predictable performance
• Secure
• High performance
• Highly available
• No modification to customer applications
• No special purpose hardware
4. Verizon Cloud Differentiation
• Value for Performance
– User defined availability and performance
– User defined resources
• Reserved Performance
– Network, Storage and Compute
• Workload Simplicity
– Seamless integration with other deployments
– Single point of control
• Security
– Market-leading security capabilities
– Embedded into every aspect of the platform
• Continuum of Services
– Bridging private, public and hybrid clouds
– Allow blending with colocation, managed services, and networking
5. SM15000 SYSTEM
10 rack units, draws 3–3.5 kW
• Compute
– Up to 512 Opteron, Xeon or Atom cores in 10 RU
– 2,048 cores in a rack
– Up to 64 GB DRAM/socket = 4 terabytes/system
• Networking
– 10 Gbps half-duplex bandwidth to each CPU socket
– 16 x 10GbE line-rate uplinks to the network
• Storage
– Up to 1,408 disks: HDD or SSD
– Up to 128 terabytes of internal SSD storage
– Up to 5.3 petabytes of storage
• Fabric
– 1.28 Tbps Freedom Supercompute Fabric
• Software
– Off-the-shelf OS, hypervisors
6. Hardware architecture
• There are only three hardware component types, which simplifies maintenance:
– Arista 7508, a 384-port 10GbE non-blocking L2 switch
– AMD SeaMicro SM15000
– SSDs
• Network connections
7. Hardware diagram
• Juniper MX960 for external connectivity
• Arista 7508
• 4x10Gb links from the Arista to each chassis
• Up to 90 AMD SeaMicro SM15000s
8. Verizon’s use of SeaMicro chassis
• 160 Gb/s of external bandwidth (network and storage)
• 54 server cards for customer loads
• 2 server cards for Verizon orchestration
• 8 server cards for storage services
• ~1,000,000 IOPS
• 96 TB usable SSD storage
9. Combine hardware and Verizon software to get
• A flat layer 2 Ethernet switch
– ~12,000 ports at 1 Gb/s
– ~1,500,000 VLANs
– 8.5M MAC address table entries
– 11.5M traffic flows
– Software configurable
• A storage array
– 90M IOPS
– 8.6 PB of SSD storage
• Scalable router firewall, 1 Gb/s–400 Gb/s
• Scalable load balancers, 1 Gb/s–400 Gb/s
• Configurable I/O performance
10. Network Packet flow
[Diagram: packet path VM → hypervisor → NPU → layer 2 switching → Arista switch → NPU → hypervisor → VM]
• The hypervisor presents a NIC of the specified speed to each VM and applies back pressure to the VM’s NIC queue.
• The hypervisor fairly mixes flows from different VMs, limited to the maximum NIC speed.
• The NPU polices ingress flows and shapes traffic to the maximum speed of the receiving NIC before layer 2 switching.
• The Arista switch uses prioritized queues; random packet drop provides back pressure from a congested destination queue.
• On the receive path, traffic passes through the 10G NICs and NPU, is again shaped to the speed of the receiving NIC, and is layer-2-switched to the destination hypervisor and VM.
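The back pressure described in the packet flow above can be modeled as a bounded per-VM NIC queue that refuses new packets once full, forcing the sender to slow down. This is a minimal sketch with an illustrative queue depth, not Verizon's actual implementation:

```python
# Minimal model of NIC-queue back pressure: the per-VM queue is
# bounded, and a full queue rejects the sender instead of buffering
# without limit. Names and the queue depth are illustrative.
import queue

nic_queue = queue.Queue(maxsize=4)   # the VM's NIC queue, 4 packets deep

def vm_send(pkt):
    """Try to enqueue; report back pressure when the queue is full."""
    try:
        nic_queue.put_nowait(pkt)
        return "queued"
    except queue.Full:
        return "backpressure"        # the VM must slow down

results = [vm_send(f"pkt{i}") for i in range(6)]
# Only the first 4 packets fit; the rest see back pressure.
assert results == ["queued"] * 4 + ["backpressure"] * 2
```

A real hypervisor would drain this queue toward the NPU at the NIC's configured rate, so the queue bound translates directly into a rate limit on the VM.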
11. Networking Layer 2
• Hypervisor
– Shapes egress traffic
• NPU
– Provides true layer 2 ethernet switching
– Polices ingress flows
– Shapes egress flows
• Arista 7508
– Lots of bandwidth
• Remote congestion control
– Switch learns speeds of remote flows
– Switch performs remote drop if destination is congested
• Hardware-based security
– Each customer network is on its own VLAN
• Software configurable
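The policing and shaping the NPU performs is classically implemented with a token bucket: tokens accrue at the configured rate, and a packet is forwarded only if enough tokens are available. A minimal sketch with illustrative parameters, not the NPU's actual mechanism:

```python
# Token-bucket policer/shaper sketch: tokens refill at the configured
# rate up to a burst limit; a packet passes only if it fits in the
# available tokens. All parameters below are illustrative.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # refill rate, bits/second
        self.capacity = burst_bits    # maximum burst size, bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = 0.0               # timestamp of last update

    def allow(self, now, packet_bits):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True               # conforming: forward the packet
        return False                  # exceeds the rate: police (drop)

tb = TokenBucket(rate_bps=1_000_000, burst_bits=12_000)  # 1 Mb/s, 12 kb burst
assert tb.allow(0.0, 12_000) is True      # a full burst fits
assert tb.allow(0.0, 1) is False          # bucket now empty
assert tb.allow(0.012, 12_000) is True    # 12 ms at 1 Mb/s refills 12 kb
```

A policer drops non-conforming packets as above; a shaper would instead queue them until enough tokens accrue.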
12. Data I/O Stack
[Diagram: the data I/O path from VM to disk]
VM (paravirtualized block device, xvdb) → hypervisor AoE initiator on the storage VLAN → Ethernet → NPU → Arista → NPU → storage server (AoE target, AIO, ZFS, block layer, AoE initiator) → Ethernet → storage card (SCARD) AoE target → block device(s)
13. Storage
• Hypervisor
– Shapes disk traffic (IOPS and bandwidth)
– Participates in disk replication
• AoE
– Storage over layer 2 Ethernet
– Allows storage targets to be anywhere in the world
– Shared volumes
• Replication
• NPU
– Shapes read and write bandwidth
• Storage service
– Snapshots
– RAID
• Storage card
– AoE target
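AoE carries ATA commands directly in layer 2 frames (EtherType 0x88A2), which is what lets targets sit anywhere on the flat L2 network. A sketch of packing and parsing the common AoE header, following the public AoE specification (the helper names are ours):

```python
# The AoE common header, per the public AoE specification: one byte of
# version/flags, an error byte, a 16-bit major (shelf) address, an
# 8-bit minor (slot) address, a command byte, and a 32-bit tag used to
# match responses to requests. Helper names are illustrative.
import struct

AOE_ETHERTYPE = 0x88A2
HDR = ">BBHBBI"   # ver/flags, error, major, minor, cmd, tag (10 bytes)

def pack_aoe(major, minor, cmd, tag, version=1, flags=0):
    return struct.pack(HDR, (version << 4) | flags, 0, major, minor, cmd, tag)

def unpack_aoe(data):
    vf, err, major, minor, cmd, tag = struct.unpack(HDR, data[:10])
    return {"version": vf >> 4, "flags": vf & 0xF, "error": err,
            "major": major, "minor": minor, "cmd": cmd, "tag": tag}

hdr = pack_aoe(major=7, minor=2, cmd=0, tag=0xDEADBEEF)  # cmd 0 = ATA
parsed = unpack_aoe(hdr)
assert parsed["major"] == 7 and parsed["minor"] == 2
assert parsed["tag"] == 0xDEADBEEF and parsed["version"] == 1
```

The (major, minor) pair addresses a specific target on the L2 segment, which is why putting each volume on its own VLAN (as described later) cleanly isolates storage traffic.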
14. Networking Layer 3+
Layer 3 and above network services just work, since they are all based on layer 2 networking.
• Soft routers
• Load balancers
• Public IP (no NAT)
• Tunnels
• WAN optimizers
15. Inter-data center features
• Single user interface
• Networks can span multiple data centers
• Replicated disks can span multiple data centers
• Taking advantage of being part of a network company
16. Availability
• No single point of failure for network traffic
– “Bonded” NICs
– “Bonded” NPUs
– Fabric reroutes itself
– Multiple paths through Arista switches
• No single point of failure for replicated storage
– RAID 1 on SSDs
– Multiple storage servers
– Option to have replicated volumes span data centers
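The RAID 1 replication above can be illustrated with a toy mirrored volume: every write goes to all live replicas, and a read succeeds as long as any replica survives. This is a conceptual sketch, not the actual storage service:

```python
# Toy RAID 1 mirror: writes fan out to every live replica; reads fall
# back to any surviving replica. The in-memory dicts stand in for the
# real SSD-backed replicas and are purely illustrative.
class MirroredVolume:
    def __init__(self):
        self.replicas = [dict(), dict()]   # two replica block maps
        self.alive = [True, True]

    def write(self, block, data):
        for i, rep in enumerate(self.replicas):
            if self.alive[i]:
                rep[block] = data          # mirror to every live replica

    def read(self, block):
        for i, rep in enumerate(self.replicas):
            if self.alive[i] and block in rep:
                return rep[block]          # any live replica will do
        raise IOError("no live replica holds this block")

vol = MirroredVolume()
vol.write(0, b"data")
vol.alive[0] = False                        # one replica fails...
assert vol.read(0) == b"data"               # ...reads still succeed
```

Spanning the two replicas across data centers, as the slide mentions, gives the same failure model at site granularity.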
17. Security
• Physical security
• DDOS
• Network security
– Customer traffic on independent VLANs
– Untrusted entities (Hypervisors) firewalled from rest of system
• Storage security
– Each volume on a separate VLAN
– Storage VLANs firewalled (only AoE traffic, no target-to-target traffic)
• Management software
– Audit logs
– Security alerts
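The per-customer and per-volume isolation above rests on 802.1Q VLAN tags. A sketch of packing and parsing the 4-byte tag (the TPID followed by a TCI holding the PCP, DEI, and 12-bit VID fields); the helper names are illustrative:

```python
# 802.1Q VLAN tag: 16-bit TPID (0x8100) followed by a 16-bit TCI made
# of 3 bits priority (PCP), 1 drop-eligible bit (DEI), and a 12-bit
# VLAN ID (VID). Helper names and values are illustrative.
import struct

TPID = 0x8100   # 802.1Q tag protocol identifier

def pack_vlan_tag(vid, pcp=0, dei=0):
    assert 0 <= vid < 4096, "VID is a 12-bit field"
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack(">HH", TPID, tci)

def unpack_vlan_tag(tag):
    tpid, tci = struct.unpack(">HH", tag)
    return {"tpid": tpid, "pcp": tci >> 13,
            "dei": (tci >> 12) & 1, "vid": tci & 0x0FFF}

tag = pack_vlan_tag(vid=1500, pcp=5)
parsed = unpack_vlan_tag(tag)
assert parsed == {"tpid": 0x8100, "pcp": 5, "dei": 0, "vid": 1500}
```

The 12-bit VID caps a single 802.1Q domain at 4,094 usable VLANs, which is why supporting ~1.5M VLANs (slide 9) requires the custom switching software layered on top.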
18. Possible Applications
• Move a current three-tier app, with your choice of soft router/firewall/load balancer, into the cloud
• Bridge a network from your data center to one in the cloud
• Move Xen and VMware VMs into the cloud without modification
• Write a clustered app using shared storage
• Configure an application’s performance so that you know it won’t fall over when it is 3:00 in the afternoon and the cloud gets busy
• Write and test a new L3 protocol
• Voice
• Storage arrays
• Network devices