This document proposes EYWA, a virtual network architecture for cloud environments that aims to overcome scalability limitations of conventional architectures. EYWA uses virtual routers distributed across hypervisor hosts to provide high availability and load balancing for public networks without bottlenecks. It employs VxLAN to provide large private IP subnets for tenants by eliminating issues like VLAN limits and MAC flooding. Key to EYWA is an agent on each hypervisor that monitors virtual routers, caches ARP entries, and controls ARP packets according to rules to enable multiple virtual routers per tenant with a single IP address.
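The ARP-control mechanism described above can be sketched roughly as follows. This is a hypothetical illustration, not EYWA's actual implementation: the gateway IP, router MACs, and the hash-based mapping rule are all assumptions made for the example. The point it demonstrates is how one shared gateway IP can be answered by different virtual-router MACs per VM.

```python
# Hypothetical sketch of EYWA's per-hypervisor agent logic: several virtual
# routers share one gateway IP, and the agent answers ARP requests so that
# each VM is mapped to one live router (all names and the hashing rule here
# are assumptions for illustration).

GATEWAY_IP = "10.0.0.1"

# MACs of the tenant's virtual routers, kept fresh by health monitoring.
virtual_router_macs = ["02:00:00:00:00:01", "02:00:00:00:00:02",
                       "02:00:00:00:00:03"]

arp_cache = {}  # requester MAC -> chosen virtual-router MAC


def handle_arp_request(requester_mac: str, target_ip: str):
    """Return the router MAC to put in the ARP reply, or None to ignore."""
    if target_ip != GATEWAY_IP:
        return None  # not a request for the shared gateway address
    if requester_mac not in arp_cache:
        # Deterministically spread VMs across the currently live routers.
        idx = hash(requester_mac) % len(virtual_router_macs)
        arp_cache[requester_mac] = virtual_router_macs[idx]
    return arp_cache[requester_mac]


def on_router_failure(dead_mac: str):
    """Evict a failed router so affected VMs are re-mapped on the next ARP."""
    virtual_router_macs.remove(dead_mac)
    for vm, mac in list(arp_cache.items()):
        if mac == dead_mac:
            del arp_cache[vm]
```

The cache makes replies stable per VM, while eviction on failure gives the load-balancing and high-availability behavior the abstract claims, without any VM ever seeing more than one gateway IP.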
This research paper investigates the problem of seamless VM migration in the data center network (DCN). Leveraging the decoupling of a service from its physical location offered by the emerging Named Data Networking (NDN) technique, the authors propose a named service framework to support seamless VM migration. In comparison with other approaches, theirs has the following advantages: 1) VM migration is interruption-free; 2) the overhead of maintaining routing information is lower than that of classic NDN; 3) the routing protocol is robust to both link and node failures; 4) the framework inherently supports a distributed load-balancing algorithm, via which requests are evenly distributed across VMs. Analysis and simulation results verify these benefits.
Improving Quality of Service and Reducing Power Consumption with WAN accele...IJCNC
The widespread use of cloud computing services is expected to degrade Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems, such as load balancing and power saving, in cloud computing environments.
Report for the Network subject at my college, May 2017; we were supposed to present the operation of MPLS inside the service provider's core network while the customer uses a VPN connection.
Fundamentals Of Transaction Systems - Part 2: Certainty suppresses Uncertaint...Valverde Computing
The document discusses transaction systems and consistency models. It summarizes that:
- Brewer's CAP theorem states that distributed systems can only achieve two of consistency, availability, and partition tolerance.
- Many financial systems achieve all three by using private networks and 3-phase commit, challenging assumptions of the CAP theorem.
- Workflow systems can help achieve consistency across inconsistent distributed systems by driving them into acceptable states.
The DEUS project aims to develop an easy to deploy and use versatile wireless network infrastructure for dynamic environments. It identifies four network domains: a wireless backbone, wireless sensor networks, access points, and backend servers. The backbone mesh provides a secure, self-organizing transport network between components. Wireless sensor networks share features with the mesh but support multiple routing protocols. Global routing optimizes paths between sensors and connects different network domains in a transparent way. Access points deployed on the mesh provide seamless client mobility. The architectural concept forms the basis for DEUS proof of concept implementations across different use cases.
This document describes virtual local area networks (VLANs), how they work, and their advantages over traditional LANs. It discusses how VLANs allow logical segmentation of networks without requiring physical relocation of devices. VLANs use tagging of frames to associate them with broadcast domains, avoiding the need for routers in many cases. This reduces costs and improves performance by limiting unnecessary broadcast traffic compared to traditional LANs.
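The frame tagging mentioned above can be made concrete with a few lines of code. The sketch below parses an IEEE 802.1Q tag to recover the VLAN ID that associates a frame with its broadcast domain; the TPID value 0x8100 and the 12-bit VLAN ID field are from the 802.1Q standard, while the sample frame contents are made up.

```python
# Minimal illustration of how an IEEE 802.1Q VLAN tag associates a frame with
# a broadcast domain: 4 tag bytes sit after the source MAC, holding a 12-bit
# VLAN ID (hence the classic limit of 4094 usable VLANs).

import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q-tagged frame


def vlan_id(frame: bytes):
    """Return the VLAN ID of an Ethernet frame, or None if untagged."""
    ethertype, = struct.unpack_from("!H", frame, 12)  # after dst+src MACs
    if ethertype != TPID_8021Q:
        return None
    tci, = struct.unpack_from("!H", frame, 14)  # PCP(3) | DEI(1) | VID(12)
    return tci & 0x0FFF


# Build a tagged frame: dst MAC, src MAC, 802.1Q tag with VID 42, payload.
frame = (bytes(6) + bytes(6) +
         struct.pack("!HH", TPID_8021Q, 42) +     # TPID + TCI (PCP=0, VID=42)
         struct.pack("!H", 0x0800) + b"payload")  # inner EtherType: IPv4
```

A switch performing the segmentation the document describes does essentially this lookup in hardware, flooding broadcasts only to ports whose configuration matches the extracted VLAN ID.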
Data center interconnect seamlessly through SDNFelecia Fierro
Data Center Interconnect Seamlessly through SDN
Data Center Interconnect, or DCI, has become a hot topic as IT infrastructures transform from islands of connectivity to pools of resources for efficiency purposes. Properly deployed, DCI enables all computing and storage resources to be pooled, regardless of where they physically reside, and it is the quality of this abstraction and the associated visibility that counts.
Enter Pluribus Networks running on Broadcom chipsets
Join us for this On-Demand Webinar where we will discuss why Data Center Interconnect is a key opportunity to simplify any network and how Broadcom chipsets enable this with their industry-leading VXLAN capabilities.
Pluribus Netvisor®-powered switches running on Broadcom include the industry’s most powerful and open DCI technology, VXLAN, which enables all resources across the entire planet to be shared. Along with VXLAN itself, we will explain the role of visibility in a widely distributed VXLAN based environment.
In this On-Demand Webinar, we'll discuss how DCI running on Broadcom VXLAN can:
Share IT resources and increase utilization of those resources
Provide enterprise scale, simplicity and agility – reducing the cost and complexity of IT
Support modern applications including Converged Infrastructure and VDI
Mpls vpn using vrf virtual routing and forwardingIJARIIT
Multi-Protocol Label Switching (MPLS), introduced by the Internet Engineering Task Force (IETF), is widely used in communication networks and has attracted internet service provider (ISP) networks with features that provide quality of service (QoS) guarantees to traffic carried from one network to another directly through labels.
The Virtual Private Network (VPN) is one of the most useful MPLS applications, allowing a service provider or a large enterprise network to offer network-layer VPN services that carry traffic securely and privately from one customer site to another through the service provider's network. To support multiple customers that request secure, reliable, private, and ultrafast connections over the internet, MPLS VPN standards include the concept of a virtual router, realized as a VRF table. Virtual Routing and Forwarding (VRF) is a technology that permits a router to hold multiple independent routing tables, one per VPN, at the same time within the same device. The VRF feature also allows different customers connected to the same ISP to use the same IP addresses. A VRF exists inside a single MPLS router, and a router typically needs at least one VRF for each customer attached to it.
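The key property of VRF, overlapping customer address space on one router, can be sketched in a few lines. The VRF names, prefixes, and next hops below are invented for the example; the point is that the lookup is scoped to one customer's table, so identical prefixes in different VRFs never conflict.

```python
# Illustrative sketch of the VRF idea: one router holds several independent
# routing tables, so two customers may reuse the same private prefixes
# (all names and routes here are made up for the example).

import ipaddress

# VRF name -> list of (prefix, next hop); one VRF per attached customer.
vrfs = {
    "CUST_A": [("10.1.0.0/16", "192.0.2.1")],
    "CUST_B": [("10.1.0.0/16", "198.51.100.1")],  # same prefix, no conflict
}


def lookup(vrf: str, dst: str):
    """Longest-prefix match confined to a single customer's VRF table."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in vrfs[vrf]:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None
```

The same destination address resolves to a different next hop depending on which VRF the ingress interface belongs to, which is exactly what lets customers keep their own addressing plans behind one shared PE router.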
Nexus 1000V Support for VMWare vSphere 6Tony Antony
The document discusses Cisco Nexus 1000V virtual networking software. It provides details on:
1. Nexus 1000V now supports VMware vSphere 6.0 and has increased scalability, security, and simplified installation/upgrade/monitoring features in version 3.1.
2. Version 3.1 provides micro-segmentation using the Virtual Security Gateway for distributed firewall capabilities. It also simplifies management using the Cisco Virtual Switch Update Manager plug-in for vSphere.
3. Cisco is committed to supporting Nexus 1000V across multiple hypervisors including VMware vSphere, Microsoft Hyper-V, and Red Hat/Canonical KVM.
IBM WebSphere MQ: Using Publish/Subscribe in an MQ NetworkDavid Ware
The publish/subscribe model can be used across a network of IBM WebSphere MQ queue managers, whether in a manually configured topology or in an MQ cluster. This session looks in depth at designing such systems, covering a wide range of requirements from availability to scalability and how they can be solved. A basic understanding of publish/subscribe in MQ would be beneficial, such as in "IBM WebSphere MQ: Using the Publish/Subscribe messaging paradigm"
This has been superseded by http://www.slideshare.net/DavidWare1/ame-2272-mq-publish-subscribe-network-pdf
This whitepaper features the transition from traditional networking to software-defined networking or SDN. Find outlines of next-generation architectures.
A document discusses various Oracle Service Bus components including pipelines, split joins, static split joins, dynamic split joins, and technologies like AQ, MQ, MSMQ, REST, SB, Socket, Tuxedo, UMS, WS, AS/400, BAM, Coherence, databases, direct, file, FTP, HTTP, JEJB, JMS Transport, and LDAP. Pipelines route and process messages, while split joins split payloads and aggregate responses. Static split joins have a fixed number of branches, whereas dynamic split joins have a variable number based on payload contents.
M&S Global LLC aims to design an efficient WAN network to connect its metropolitan locations across large geographical areas like cities and countries. The WAN network will connect multiple LANs using Ethernet and act as a single LAN segment. It will utilize high-speed layer 2 switches and provide full mesh connectivity independent of higher layers. The WAN design will offer secure, segmented networking with services from 128 kbps to 1Gbps using technologies like Ethernet, ATM, DSL and more.
The enterprise differentiator of mq on zosMatt Leming
IBM MQ is renowned for its enterprise qualities, and this presentation will show you how this is taken to the next level when running on IBM's enterprise platform, z/OS. Learn how its integration with the z/OS platform provides the perfect solution for your enterprise needs, whether through its unique shared-queue HA capability or its integration with the latest z/OS security capabilities.
This document provides an overview of Network Functions Virtualization (NFV), including its technical requirements and challenges. NFV aims to improve network flexibility and reduce costs by using virtualization to separate network functions from dedicated hardware and deploy them as software on commercial off-the-shelf servers. While NFV may lower costs and speed up service provisioning, challenges include ensuring virtual network functions meet performance requirements, efficiently managing their dynamic instantiation and migration, and addressing security and reliability issues.
This document discusses the concept of Seamless MPLS, which aims to provide end-to-end MPLS connectivity across an entire network from access to core. The key benefits of Seamless MPLS include convergence, true service freedom allowing services to be deployed and moved freely, and enabling a network architecture that exists to enable services rather than constrain them. The document outlines a network architecture for Seamless MPLS that divides the network into autonomous regions connected by border nodes, with intra-region and inter-region MPLS connectivity established using labeled BGP. This architecture supports very large scale networks while providing robust and resilient connectivity to facilitate flexible service delivery.
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
VMM provides several options for connecting virtual machines to a physical network including VLAN-based configuration, no isolation configuration, and network virtualization. It uses logical networks, logical switches, and VM networks to abstract the physical network and provide isolated virtual networks for tenants. Extensibility options allow connecting to external management consoles and using extensions to configure networking features.
Marvell Enhancing Scalability Through NIC Switch Independent PartitioningMarvell
Marvell FastLinQ 3400 and 8400 Series 10GbE Adapters Unleash the Power of Data Center Servers
Network interface card (NIC) Switch Independent Partitioning can simplify end-to-end networking by dividing a network controller port into as many as four partitions, enabling dynamic allocation of bandwidth as needed while reducing the total cost of ownership.
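The partitioning scheme described above can be modeled in a few lines. This is a toy illustration: the four-partition limit reflects the adapters described in the brief, but the proportional weighting scheme is an assumption made for the example, not Marvell's actual allocation algorithm.

```python
# Toy model of NIC Switch Independent Partitioning: a 10GbE port is split
# into up to four partitions whose bandwidth shares can be changed on the
# fly (the weighting scheme is an assumption for illustration).

PORT_GBPS = 10
MAX_PARTITIONS = 4


def allocate(weights):
    """Divide the port's bandwidth among partitions in proportion to weights."""
    if not 1 <= len(weights) <= MAX_PARTITIONS:
        raise ValueError("a port supports 1 to 4 partitions")
    total = sum(weights)
    return [PORT_GBPS * w / total for w in weights]


# Start with a storage-heavy weighting, then rebalance dynamically.
shares = allocate([50, 30, 10, 10])       # 5.0 / 3.0 / 1.0 / 1.0 Gbps
rebalanced = allocate([25, 25, 25, 25])   # even 2.5 Gbps per partition
```

Because the split is done in the adapter rather than an external switch, rebalancing is just a reconfiguration of these weights, which is the "dynamic allocation of bandwidth" the brief highlights.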
This paper reviews the trends and technologies in Unified Computing, describes the Datacenter Ethernet technologies for implementing Fibre Channel over Ethernet, and describes Cisco\'s Unified Computing System (UCS)
Optimizing JMS Performance for Cloud-based Application ServersZhenyun Zhuang
IEEE CLOUD 2012
http://dl.acm.org/citation.cfm?id=2353798
Many business-oriented services will be gradually offered in the Cloud. Java Message Service (JMS) is a critical messaging technology in Java-based business applications, particularly those based on the Java Enterprise Edition (Java EE) open standard. Maintaining high performance in the horizontally scaled, elastic cloud environment is critical to the success of these business applications. In this paper, we present practical considerations in optimizing JMS performance for cloud deployment, where some of the findings may also serve to improve the design of JMS containers so that they adapt well to cloud computing. Our work also includes a performance evaluation of the proposed strategies.
This document discusses and compares layer-3 and layer-2 approaches to implementing IP/MPLS-based VPNs. MPLS layer-3 VPNs use a routed approach defined in RFC 2547, where customer routes are exchanged between provider edge (PE) routers using BGP. MPLS layer-2 VPNs can provide point-to-point or multi-point connectivity using virtual circuits or virtual private LAN service. The document evaluates aspects of each approach like supported traffic, scalability, and complexity to help service providers determine the best fit for their network.
Role of Virtual Machine Live Migration in Cloud Load BalancingIOSR Journals
Abstract: Cloud computing has touched almost every field of life. As a result, the number of cloud application consumers is increasing every day, and with it the number of application requests to the cloud provider, increasing the workload on many cloud nodes. The motive for using load-balancing concepts in a cloud environment is to utilize available resources efficiently, ensuring that no single system is heavily loaded and no single system is idle during the active phase of request completion. Although cloud computing is most often a software facility, the paper discusses how it actually performs at the processor level in a heavily loaded environment. This paper aims to throw some light on what cloud load balancing is and what role virtual machine migration plays in improving it.
Keywords: Cloud load balancing, Live Migration, Migration, Virtualization, Virtual machine.
Virtualization is the solution to the underutilization problem, and the essence of virtualization is an abstraction layer of software called the hypervisor.
High performance and flexible networkingJohn Berkmans
This document summarizes a paper presented at the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI '14) held from April 2-4, 2014 in Seattle, WA. The paper proposes NetVM, a platform that uses virtualization to run complex network functions at line speed on commodity servers while providing flexibility. NetVM leverages the DPDK framework to allow virtual machines to directly access packets from the NIC without kernel involvement. It introduces innovations such as inter-VM communication through shared memory, a hypervisor switch for state-dependent packet routing, and security domains. Evaluation shows NetVM can process packets at 10Gbps throughput across multiple VMs, over 250% faster than existing SR
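NetVM's shared-memory idea can be sketched in miniature. The code below is a deliberate single-process simplification of the huge-page design the paper describes: packets live in one shared buffer, and the "VMs" exchange only (offset, length) descriptors rather than copying payloads. The buffer size and descriptor format are assumptions for illustration.

```python
# Rough sketch of NetVM's zero-copy idea: packet bytes stay in one shared
# pool, and components hand each other small descriptors instead of payloads
# (a simplification of NetVM's actual DPDK huge-page shared memory).

shared_pool = bytearray(4096)   # stands in for DMA-able huge-page memory
view = memoryview(shared_pool)  # zero-copy window onto the pool


def nic_receive(offset: int, payload: bytes):
    """DMA-like write of a packet into the pool; returns its descriptor."""
    view[offset:offset + len(payload)] = payload
    return (offset, len(payload))  # only this tuple crosses VM boundaries


def vm_process(descriptor):
    """A 'VM' reads the packet in place, without a per-hop payload copy."""
    offset, length = descriptor
    return bytes(view[offset:offset + length])


desc = nic_receive(128, b"hello-packet")
```

Avoiding the per-hop copy (and the kernel, via DPDK) is what lets NetVM chain network functions across VMs while sustaining line-rate throughput.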
IT Brand Pulse industry brief describing a new approach to configuring virtual networks for virtual machines...layering hypervisor-based virtual networking services on top of hardware based virtual networking services. The result is more efficient management and lower costs.
Connecting Docker for Cloud IaaS (Speech at CSDN-Oct18DaoliCloud Ltd
SDN and network virtualization connect Docker containers to provide truly large-scale and elastic cloud IaaS services. By eliminating network boundaries among Docker hosts, worldwide distributed containers managed by Docker orchestrators can be connected through a Tenant SDN Controller (TSC). The Docker platform's network thus becomes virtualized, so this great platform no longer has to be limited to PaaS use (unlike IaaS use, PaaS uses are user-specific and hence do not demand scalability).
Nexus 1000V Support for VMWare vSphere 6Tony Antony
The document discusses Cisco Nexus 1000V virtual networking software. It provides details on:
1. Nexus 1000V now supports VMware vSphere 6.0 and has increased scalability, security, and simplified installation/upgrade/monitoring features in version 3.1.
2. Version 3.1 provides micro-segmentation using the Virtual Security Gateway for distributed firewall capabilities. It also simplifies management using the Cisco Virtual Switch Update Manager plug-in for vSphere.
3. Cisco is committed to supporting Nexus 1000V across multiple hypervisors including VMware vSphere, Microsoft Hyper-V, and Red Hat/Canonical KVM.
IBM WebSphere MQ: Using Publish/Subscribe in an MQ NetworkDavid Ware
The publish/subscribe model can be used across a network of IBM WebSphere MQ queue managers, whether in a manually configured topology or in an MQ cluster. This session looks in depth at designing such systems, covering a wide range of requirements from availability to scalability and how they can be solved. A basic understanding of publish/subscribe in MQ would be beneficial, such as in "IBM WebSphere MQ: Using the Publish/Subscribe messaging paradigm"
This has been superseded by http://www.slideshare.net/DavidWare1/ame-2272-mq-publish-subscribe-network-pdf
This whitepaper features the transition from traditional networking to software-defined networking or SDN. Find outlines of next-generation architectures.
A document discusses various Oracle Service Bus components including pipelines, split joins, static split joins, dynamic split joins, and technologies like AQ, MQ, MSMQ, REST, SB, Socket, Tuxedo, UMS, WS, AS/400, BAM, Coherence, databases, direct, file, FTP, HTTP, JEJB, JMS Transport, and LDAP. Pipelines route and process messages, split joins split payloads and aggregate responses. Static split joins have a fixed number of branches while dynamic split joins have a variable number based on payload contents.
M&S Global LLC aims to design an efficient WAN network to connect its metropolitan locations across large geographical areas like cities and countries. The WAN network will connect multiple LANs using Ethernet and act as a single LAN segment. It will utilize high-speed layer 2 switches and provide full mesh connectivity independent of higher layers. The WAN design will offer secure, segmented networking with services from 128 kbps to 1Gbps using technologies like Ethernet, ATM, DSL and more.
The enterprise differentiator of mq on zosMatt Leming
IBM MQ is renowned for its enterprise qualities and this presentation will show you how this is taken to the next level
when running on IBM's enterprise platform, z/OS. Learn how its integration with the z/OS platform provides the perfect
solution for your enterprise needs, whether that's through its unique shared queue HA capability or its integration to
the latest z/OS security capabilities.
This document provides an overview of Network Functions Virtualization (NFV), including its technical requirements and challenges. NFV aims to improve network flexibility and reduce costs by using virtualization to separate network functions from dedicated hardware and deploy them as software on commercial off-the-shelf servers. While NFV may lower costs and speeds up service provisioning, challenges include ensuring virtual network functions meet performance requirements, efficiently managing their dynamic instantiation and migration, and addressing security and reliability issues.
This document discusses the concept of Seamless MPLS, which aims to provide end-to-end MPLS connectivity across an entire network from access to core. The key benefits of Seamless MPLS include convergence, true service freedom allowing services to be deployed and moved freely, and enabling a network architecture that exists to enable services rather than constrain them. The document outlines a network architecture for Seamless MPLS that divides the network into autonomous regions connected by border nodes, with intra-region and inter-region MPLS connectivity established using labeled BGP. This architecture supports very large scale networks while providing robust and resilient connectivity to facilitate flexible service delivery.
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
call for paper 2012, hard copy of journal, research paper publishing, where to publish research paper,
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
VMM provides several options for connecting virtual machines to a physical network including VLAN-based configuration, no isolation configuration, and network virtualization. It uses logical networks, logical switches, and VM networks to abstract the physical network and provide isolated virtual networks for tenants. Extensibility options allow connecting to external management consoles and using extensions to configure networking features.
Marvell Enhancing Scalability Through NIC Switch Independent PartitioningMarvell
Marvell FastLinQ 3400 and 8400 Series 10GbE Adapters Unleash the Power of Data Center Servers
Network interface card (NIC) Switch Independent Partitioning can simplify end-to-end networking by dividing a network controller port into as many as four partitions, enabling dynamic allocation of bandwidth as needed while reducing the total cost of ownership.
This paper reviews the trends and technologies in Unified Computing, describes the Datacenter Ethernet technologies for implementing Fibre Channel over Ethernet, and describes Cisco\'s Unified Computing System (UCS)
Optimizing JMS Performance for Cloud-based Application ServersZhenyun Zhuang
IEEE CLOUD 2012
http://dl.acm.org/citation.cfm?id=2353798
Many business-oriented services will be gradually
offered in the Cloud. Java Message Service (JMS) is a critical
messaging technology in Java-based business applications, particularly
to those that are based on the Java Enterprise Edition
(Java EE) open standard. Maintaining high performance in
the horizontally scaled, and elastic, cloud environment is
critical to the success of the business applications. In this
paper, we present practical considerations in optimizing JMS
performance for the cloud deployment, where some of the
findings may also serve to improve the design of JMS container
so it adapts well to cloud computing. Our work also includes
performance evaluation on the proposed strategies.
This document discusses and compares layer-3 and layer-2 approaches to implementing IP/MPLS-based VPNs. MPLS layer-3 VPNs use a routed approach defined in RFC 2547, where customer routes are exchanged between provider edge (PE) routers using BGP. MPLS layer-2 VPNs can provide point-to-point or multi-point connectivity using virtual circuits or virtual private LAN service. The document evaluates aspects of each approach like supported traffic, scalability, and complexity to help service providers determine the best fit for their network.
Role of Virtual Machine Live Migration in Cloud Load BalancingIOSR Journals
Abstract: Cloud computing has touched almost every field of life. The number of cloud application consumers is increasing every day, and with it the number of application requests to cloud providers, which increases the workload on many cloud nodes. The motive for using load-balancing concepts in a cloud environment is to utilize available resources efficiently, ensuring that no single system is heavily loaded and no single system sits idle during the active phase of request completion. Although cloud computing is most often a software facility, the paper discusses how it actually performs at the processor level in a heavily loaded environment. This paper aims to shed some light on what cloud load balancing is and what role virtual machine migration plays in improving it.
Keywords: Cloud load balancing, Live Migration, Migration, Virtualization, Virtual machine.
Virtualization is the solution to the underutilization problem, and the essence of virtualization is an abstraction layer of software called the hypervisor.
High Performance and Flexible Networking (John Berkmans)
This document summarizes a paper presented at the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI '14) held from April 2-4, 2014 in Seattle, WA. The paper proposes NetVM, a platform that uses virtualization to run complex network functions at line speed on commodity servers while providing flexibility. NetVM leverages the DPDK framework to allow virtual machines to directly access packets from the NIC without kernel involvement. It introduces innovations such as inter-VM communication through shared memory, a hypervisor switch for state-dependent packet routing, and security domains. Evaluation shows NetVM can process packets at 10Gbps throughput across multiple VMs, over 250% faster than existing SR-IOV approaches.
IT Brand Pulse industry brief describing a new approach to configuring virtual networks for virtual machines...layering hypervisor-based virtual networking services on top of hardware based virtual networking services. The result is more efficient management and lower costs.
Connecting Docker for Cloud IaaS (speech at CSDN, Oct 18) (DaoliCloud Ltd)
SDN and network virtualization connect Docker containers to provide truly large-scale and elastic cloud IaaS services. By eliminating network boundaries among Docker hosts, worldwide distributed containers over Docker orchestrators can be connected in a Tenant SDN Controller (TSC). The Docker platform's network thus becomes virtualized, so that this great platform no longer has to be limited to PaaS use (unlike IaaS use, PaaS uses are user-specific and hence do not demand scalability).
DEF CON 23 - Ronny Bull and Jeanna Matthews - Exploring Layer 2 - DOCUMENT (Felipe Prado)
The document discusses exploring Layer 2 network security in virtualized environments. It examines whether common Layer 2 attacks on physical switches also apply to virtual switches used in virtualized environments. The study tests MAC flooding and rogue DHCP attacks across four hypervisors (Open vSwitch, XenServer, Hyper-V, vSphere) and finds that all platforms showed degraded network performance from MAC flooding. It was also possible to eavesdrop on other VMs for Open vSwitch and XenServer. All platforms also allowed manipulation of co-resident VMs through rogue DHCP attacks. The study aims to help users understand network security risks when VMs from different customers share physical resources.
The paper explores network virtualization issues related to the Cloud Computing paradigm (mainly intended as IaaS). Finally, we consider this framework from a network monitoring perspective.
The paper is an outcome of the CoreGRID working group at ERCIM.
In computer networking, a single layer-2 network may be partitioned to create multiple distinct
broadcast domains, which are mutually isolated so that packets can only pass between them via one or
more routers; such a domain is referred to as a virtual local area network, virtual LAN or VLAN.
A virtual local area network (VLAN) is a logical group of workstations, servers and network devices that
appear to be on the same LAN despite their geographical distribution. A VLAN allows a network of
computers and users to communicate in a simulated environment as if they exist in a single LAN and are
sharing a single broadcast and multicast domain.
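As a concrete illustration of the tag that carries this partitioning, the sketch below extracts the 12-bit VLAN ID from an 802.1Q-tagged Ethernet frame (the frame bytes are hypothetical, built just for this example):

```python
import struct

def parse_vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    # Bytes 12-13 hold the EtherType; 0x8100 marks an 802.1Q tag.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:
        return None
    # The next two bytes are the TCI: 3 bits PCP, 1 bit DEI, 12 bits VLAN ID.
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF

# A hypothetical tagged frame: dst MAC, src MAC, 0x8100 tag, TCI with VID 100.
frame = bytes(6) + bytes(6) + b"\x81\x00" + struct.pack("!H", 100)
print(parse_vlan_id(frame))  # 100
```

Because the VLAN ID field is only 12 bits wide, at most 4,094 usable broadcast domains can share one physical layer-2 network.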
Scheduling Wireless Virtual Network Functions (redpel dot com)
This document discusses scheduling virtual network functions (VNFs) in wireless networks. It formalizes the problem of placing VNFs requested by mobile virtual network operators in a radio access network as an integer linear programming problem. It then proposes a heuristic called Wireless Network Embedding (WiNE) to solve the VNF placement problem. The goal is to ensure performance isolation between different network slices while efficiently utilizing resources. A proof-of-concept implementation of a management framework for enterprise WLANs is also presented.
Optimizing the placement of cloud data center in virtualized environment (IJECE/IAES)
In cloud mobile networks, precise assessment of the position of the virtualization-powered cloud center would improve the capacity limit, latency and energy efficiency (EEf). This paper utilizes Monte Carlo-oriented particle swarm optimization (PSO) and a genetic algorithm (GA) to, first, obtain the optimal number of virtual machines (VMs) that maximizes the EEf of the mobile cloud center and, second, optimize the position of the mobile data center. To support this examination, a power evaluation framework is proposed to model the power utilization of a virtualized server hosting a number of VMs. In addition, the total power consumption of the network is examined, including the data center and radio units (RUs). This evaluation is based on linear modelling of the network parameters, such as resource blocks, number of VMs, transmitted and received powers, and overhead power consumption. Finally, the EEf is constrained by several quality of service (QoS) metrics, including number of resource blocks, total latency and minimum user data rate.
Network Design Un.docx (eugeniadean34240)
The document provides details on designing a wide area network (WAN) to connect the locations of an organization. It recommends using point-to-point radio or leased line connections between sites. To ensure high availability, it also recommends redundant VPN connections over the internet. The document then discusses determining bandwidth requirements for each connection based on the number of users and applications. It provides specifications for routers, switches, firewalls, and cabling to implement the WAN design across the five locations.
This presentation gives a basic overview of networking and in-depth insights into the OpenStack Neutron component.
It covers VLAN, VxLAN, and Open vSwitch in OpenStack.
Avaya Fabric Connect: The Right Foundation for the Software-Defined Data Center (Avaya Inc.)
This paper focuses on a specific real-world use case for SDN - the Software-Defined Data Center. It provides Avaya’s perspective on the characteristics of the Software-Defined Data Center and the value of its Fabric Connect technology as the foundation for this solution. It also talks about how combining Avaya Fabric Connect with open-source cloud orchestration capabilities (that are being defined by OpenStack) can enable a graceful migration to the Software-Defined Data Center.
Azure Networking: Innovative Features and Multi-VNet Topologies (Marius Zaharia)
Are you looking to deploy a more complex structure of resources in Azure, all secured and segregated by precise boundaries while communicating closely with each other? Following the arrival of the advanced IaaS networking features in Azure (network security groups, routing, multi-NIC, ...) and their maturation over the last months, now is the moment to find a modern architectural vision of networking in Azure, with a focus on multi-VNet/VPN topologies, based on the ARM deployment model.
The document discusses static LANs, VLANs, VSANs and their benefits. A VLAN allows devices to be connected virtually as if on the same LAN, segmenting a network for improved security, reliability and efficiency. A VSAN is a software-based virtual storage area network that provides shared storage to VMs over a network, converging traditional storage hardware into a single virtual appliance.
MidoNet Overview - OpenStack and SDN Integration (Akhilesh Dhawan)
The document provides an overview of MidoNet's network virtualization platform. It discusses MidoNet's distributed architecture as an alternative to the single network node approach of the OpenStack Neutron OVS plugin. MidoNet's distributed logical switching, routing, firewalling and load balancing are performed across multiple nodes for high performance, availability and scalability without relying on hardware appliances. The document also demonstrates MidoNet's integration with OpenStack Neutron and its capabilities for overlay networking, distributed logical topologies and load balancing as a service.
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization... (Dan Mihai Dumitriu)
OpenStack deployments for public or private clouds require overlay networking. Due to the scale and rate of change of virtual resources, it isn't practical to rely on traditional network constructs and isolation mechanisms. Today's deployments require performance, resilience, and high availability to be considered truly production-ready. In this session, we deep dive into the MidoNet architecture, and process of sending a data packet across an OpenStack environment through a network overlay. A distributed architecture implements logical constructs that are used to build networks without a single point of failure, all while adding network functionality in a highly-scalable manner. Network functions are applied in a single virtual hop. By applying network services right at the ingress host, the network is free from unnecessary clogging and bottlenecks by avoiding additional hops. Packets reach their destination more efficiently with the single virtual hop. After this session, the audience will understand how distributed architectures allow efficient networking with routing decisions and network services applied at the edge. Also, the audience will understand how it is easier to scale clouds when the network intelligence is distributed.
From IaaS to PaaS to Docker Networking to … Cloud Networking Scalability (DaoliCloud Ltd)
This document discusses the scalability challenges of cloud networking and proposes a solution using Network Virtualization Infrastructure (NVI) technology. It reasons that NVI can provide scalable cloud networking by connecting independently orchestrated clouds, each with a modest physical scale, using network patching. This is presented as a solution to the lack of scalability in today's cloud networking practices, which are discussed and shown to involve non-virtualization and scale-up approaches that limit overall size.
The document discusses different network architectures for virtualized environments including single tenant/router, multi-tenant/multi-router, and high availability configurations using multiple routers. It specifically examines the challenges of handling ARP requests and responses when a virtual machine has the same internal IP across multiple routers due to ARP tables potentially conflicting and causing traffic to be sent to the wrong router/virtual machine.
This document describes EYWA, a virtual network architecture for IaaS that provides elastic load balancing, high availability, and scalability. It addresses problems with conventional architectures like single points of failure, limited resources and poor connectivity. EYWA uses technologies like MVRRP and VxLAN to create highly available virtual routers that provide load balancing and isolation across large layer 2 networks. The key components are virtual routers, a guest virtual network that isolates traffic, and a controller that monitors network state and proxies ARP requests.
This document describes an SDN test suite that can be run using Vagrant and VirtualBox. It lists several SDN platforms and technologies that can be tested including ONOS, OpenDaylight, RouteFlow, VXLAN with OVS, and more. For each test, it provides a link to more information and sometimes includes screenshots or diagrams of example test setups and configurations. The goal is to provide an easy way to test and experiment with different SDN controllers and technologies in a virtualized environment.
EYWA: Elastic load-balancing & high-availabilitY Wired virtual network Architecture
Wookjae Jeong
wjjung11@gmail.com
Jungin Jung
call518@gmail.com
ABSTRACT
Infrastructure as a Service (IaaS) for cloud environments provides compute processing, storage, networks, and other fundamental computing resources. To support multi-tenant cloud environments, IaaS utilizes the various advantages of virtualization, but conventional virtual (overlay) network architectures for IaaS have been a direct cause of scalability limitations in multi-tenant cloud environments. In other words, IaaS's virtual networks are limited by problems of high availability, load balancing, and so on. To solve these problems, we present EYWA, a virtual network architecture that scales to support huge data centers with high availability, load balancing and large layer-2 semantics. The design of EYWA overcomes the limitations by accommodating (1) a large number of tenants (about 2^24 = 16,777,216) by using virtual LANs, i.e., logically isolated networks each with its own IP range, in the cloud service provider's view, and providing (2) public network service per tenant without throughput bottleneck and single point of failure (SPOF) on Source and Destination Network Address Translation (SNAT/DNAT), and (3) a single large IP subnet per tenant by using large layer-2 semantics in the consumer's view. EYWA combines existing techniques into a decentralized scale-out control and data plane. The only component of EYWA is an agent in every hypervisor host that can control packets, and the agents act as a distributed controller. As a result, EYWA can be deployed into all of today's multi-tenant cloud environments. We have implemented a proof of concept (POC) and evaluated the advantages of the EYWA design using measurement, analysis and experiments in our lab.
Categories and Subject Descriptors
C.2.1 [Network Architecture and Design]: Distributed networks,
Network topology
General Terms
Design, Performance, Reliability
Keywords
Cloud, IaaS, Virtual network, Overlay network, Data center network, OpenStack, Load balancing, Load sharing
1. INTRODUCTION
After cloud computing emerged, various service providers [20, 22, 24, 25, 26, 27] and solutions [16, 17, 18, 19, 21, 23] for IaaS appeared, and the various technical limitations of IaaS have been improved and developed by them. The physical (underlay) network architectures for data centers in cloud environments have been researched [1, 2, 3, 8, 9] and standardized [13, 14] to solve the problems of the conventional three-tier model and to find optimal network models, whereas the virtual network architectures for cloud environments still have various problems and progress has been slow, so we regret that there are not yet outcomes as varied as those in the field of physical networks.
Cloud computing environments can now provide more Virtual Machines (VMs) more quickly and at low cost, resulting from the increased computing power and decreased price of devices, but fundamental solutions for connecting a large number of VMs to the networks without throughput bottlenecks and SPOFs, while ensuring security and traffic isolation, have not yet appeared.
2. BACKGROUND
In this section, we first explain the models and dominant issues of today's virtual network architectures. Typically, when a user creates VMs in a multi-tenant cloud environment, the corresponding tenant is assigned a virtual LAN for the private network, plus a Virtual Router (VR) and a public IP address for the public network, and each VM is assigned a private IP address. Additionally, the VR can also be assigned public IP addresses for individual VMs. Looking in detail, a VM's private IP address is assigned directly to the VM's network interface, but public IP addresses for the tenant or its VMs are assigned indirectly to the VR (implemented, for example, as a VM instance or Linux network namespace), and VMs communicate with external networks through the VR. That is, a VM's outbound traffic flows using SNAT of the VR, which is set as the default gateway inside the VM, and inbound traffic flows using DNAT of the VR. Figure 1 [16, 18] and Figure 2 [17] show how public traffic flows through the VRs in multi-tenant cloud environments using the shared network service and hypervisor network service models. Inter-VM communication inside a virtual LAN happens directly using layer-2 protocols or indirectly through a centralized shared network service host.
This just moves the conventional physical network architecture into the virtual environment. As a result, the virtual networks have the same scalability limitations, such as high availability and load balancing, as the physical networks.
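The VR's NAT role described above can be sketched as a toy translation table (all addresses and names below are illustrative, not taken from the paper): outbound packets have their private source rewritten to the tenant's public IP (SNAT), and inbound packets have the public destination mapped back to a private VM address (DNAT).

```python
PUBLIC_IP = "203.0.113.10"                  # hypothetical tenant public IP
DNAT_TABLE = {"203.0.113.10": "10.0.0.5"}   # public IP -> private VM IP

def snat(pkt: dict) -> dict:
    # Outbound: rewrite the private source address to the public IP.
    return {**pkt, "src": PUBLIC_IP}

def dnat(pkt: dict) -> dict:
    # Inbound: rewrite the public destination to the mapped private VM.
    return {**pkt, "dst": DNAT_TABLE[pkt["dst"]]}

out_pkt = snat({"src": "10.0.0.5", "dst": "198.51.100.1"})
in_pkt = dnat({"src": "198.51.100.1", "dst": "203.0.113.10"})
print(out_pkt["src"], in_pkt["dst"])  # 203.0.113.10 10.0.0.5
```

Because every translation passes through this one table, whichever host holds it (the VR) sits on the path of all public traffic, which is the source of the bottleneck and SPOF problems discussed next.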
The Public Network has the following problems because VMs communicate with external networks through the SNAT/DNAT of a single VR.
High Availability: Due to a shared network service host (e.g., OpenStack/Nova Network/Network Node), a SPOF exists as illustrated in Figure 1. In the worst case, all the VMs in the cluster (not just one tenant) cannot communicate with external networks if the shared network service host fails. In comparison, a hypervisor network service model limits the failure domain to a single tenant as illustrated in Figure 2; in the worst case, only the VMs in the corresponding tenant (not the whole cluster) cannot communicate with external networks if the hypervisor host running the VR fails. To solve these problems, high availability structures using protocols such as VRRP [12] have been proposed, but an Active-Standby or Active-Active structure with one or two more VRs or network service hosts cannot be a perfect solution for high availability.
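The VRRP-style Active-Standby behavior mentioned above can be sketched as a simple priority election (an illustrative model only, not the actual protocol): the highest-priority live router owns the virtual IP, and when it fails a backup takes over, but only one router forwards traffic at any moment.

```python
def elect_master(routers: dict) -> str:
    """Pick the highest-priority router that is still up (VRRP-style)."""
    live = {name: r for name, r in routers.items() if r["up"]}
    return max(live, key=lambda name: live[name]["priority"])

routers = {
    "vr-a": {"priority": 200, "up": True},   # preferred master
    "vr-b": {"priority": 100, "up": True},   # standby
}
print(elect_master(routers))    # vr-a owns the virtual IP
routers["vr-a"]["up"] = False   # master fails
print(elect_master(routers))    # vr-b takes over
```

Note what the model makes visible: failover restores connectivity, but the standby's capacity is idle until then, and the elected master remains a single forwarding point, which is why the paper argues this is not a complete answer.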
Load Sharing and Balancing: A single VR or shared network service host becomes a throughput bottleneck for SNAT/DNAT and layer-4 load balancing. Eventually, due to the performance and bandwidth limitations of the single VR or host, the number of VMs must be limited even if a large number of VMs could otherwise be provided in a single virtual LAN. One facet of this problem is layer-4 load balancing to support scale-out public services such as web services. In order to improve load balancing, some IaaS services [27] provide additional physical load balancers (scale-up) instead of virtual instances, and others provide an additional load balancing service such as Amazon Web Services (AWS) Elastic Load Balancing (ELB) [30], but none of them can provide unlimited scalability.
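The bottleneck behind layer-4 load balancing can be seen in a toy round-robin dispatcher (backend addresses are hypothetical): however many backends are added, every flow still crosses the single dispatch point.

```python
import itertools

class RoundRobinLB:
    """Toy layer-4 balancer: one dispatch point, many backends."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Every new flow passes through here, so this object's host
        # caps the aggregate throughput no matter how many backends exist.
        return next(self._cycle)

lb = RoundRobinLB(["10.0.0.5", "10.0.0.6", "10.0.0.7"])
print([lb.pick() for _ in range(4)])
# ['10.0.0.5', '10.0.0.6', '10.0.0.7', '10.0.0.5']
```

Adding backends spreads the work of serving requests but not the work of dispatching them; scaling the dispatcher itself is what EYWA's distributed VRs aim at.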
Traffic Engineering: In a hypervisor network service model, most of the VMs' public traffic (blue line in Figure 2) consumes additional network bandwidth and incurs increased latency by traversing a single remote VR, except for the traffic of the few VMs (green line) that reside on the same hypervisor host as the VR. In a shared network service model, all the public traffic consumes additional bandwidth and incurs increased latency by traversing a remote network service host, as illustrated in Figure 1.
A private network is provided because a cloud tenant demands that its
VMs be in a layer-2 subnet or layer-3 network separate from other
tenants' in multi-tenant cloud environments. A virtual LAN per tenant
has mainly been provided by using VLAN (802.1Q) [11].
VLAN (802.1Q) limit: A cloud data center can quickly exceed the
VLAN ID limit of 4,094 with enough top-of-rack switches connected
to multiple physical servers hosting VMs, each belonging to at least
one virtual LAN. This limit hinders tenant expansion, restricts
layer-2 communication between VMs, and constrains VM mobility.
A single large IP subnet (large layer-2 network): To take full
advantage of layer-2 communication, a large number of VMs should be
deployable in a single virtual LAN, but layer-2 issues such as
Address Resolution Protocol (ARP) broadcast, MAC flooding and
Spanning Tree Protocol (STP) must be resolved first.
3. EYWA
We propose EYWA, illustrated in Figure 3, an architecture that
accommodates a large number of tenants, provides a public network per
tenant without throughput bottlenecks or SPOF, and provides a single
large IP subnet (private network) per tenant by eliminating all of
the issues described in Section 2.
Before describing the design: a VR in EYWA is a single VM instance
that integrates SNAT/DNAT and a layer-4 load balancer. The design is
based on the hypervisor network service model and assumes DNS-based
load balancing such as AWS Route 53 [31], as AWS ELB [30] does, in
addition to the VR's layer-4 load balancer. Finally, each VM may or
may not have a local VR according to policy or VR failure, and
therefore EYWA defines two modes, illustrated in Figure 3. One is the
Normal Mode, in which a VR exists with a VM in the same internal
Virtual Switch (VSi, virtual software switch), that is, the default
gateway VR for a VM resides with the VM in the same hypervisor host.
The other is the Orphan Mode, in which no VR exists with a VM in the
same VSi, that is, the default gateway VR for a VM resides in a
different hypervisor host.
Figure 1: Traffic Flows in a shared network service host
Figure 2: Traffic Flows in hypervisor network service hosts
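The two modes reduce to a simple per-host, per-tenant classification. The following sketch is illustrative only; the `Mode` type and the `has_healthy_local_vr` flag are our own names, not part of EYWA:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"  # the default-gateway VR shares the VSi/host with the VM
    ORPHAN = "orphan"  # no local VR; a remote VR must act as the default gateway

def classify_mode(has_healthy_local_vr: bool) -> Mode:
    """Per-tenant, per-host mode as defined above: Normal if the tenant's
    default-gateway VR runs on this hypervisor host, Orphan otherwise."""
    return Mode.NORMAL if has_healthy_local_vr else Mode.ORPHAN
```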
3.1 Public Network
Each VM in a tenant can have, as its default gateway, one of a large
number of physically different VR instances that all share the same
private IP address (e.g., 10.0.0.1), as illustrated in Figure 3.
High Availability: An Active-Active structure using multiple VRs per
tenant eliminates the SPOF and meets high availability requirements.
The VRs share the same private IP address and can be scaled out to an
unlimited number of instances.
Load Sharing and Balancing: VMs' default gateways are distributed
across the unlimited VRs according to policies such as latency and
performance, and therefore the throughput bottleneck for outbound
traffic disappears. Inbound traffic is likewise distributed by the
software load balancers inside the unlimited VRs and by external
DNS-based load balancing.
Traffic Engineering: VMs that have a local VR save network bandwidth
and latency because they use the local VR as their default gateway.
3.2 Private Network
Virtual Extensible LAN (VxLAN) [5]: EYWA will provide a large number
of virtual LANs (about 2^24 = 16,777,216) using VxLAN, another
overlay networking solution that eliminates the VLAN ID limit.
Figure 3: Agent and Mode in EYWA environments
Resources: IP addresses are also a resource. Since all VRs in a
tenant share a single address, additional VRs do not consume the
private IP address pool.
STP only supports 200 to 300 VMs in a virtual LAN, and its
requirement for a single active path between switches also limits
multi-pathing and networking resiliency. EYWA has a simple network
fabric without multi-path and relies on virtualization features such
as fault tolerance, so STP is disabled on all the software switches
and the issue no longer needs to be considered.
MAC flooding: When MAC flooding occurs, the switch floods all ports
with incoming traffic because it cannot find the port for a
particular MAC address in the MAC table; the switch, in essence, acts
like a hub. With VLAN, VMs' MAC addresses consume the limited memory
set aside in the physical switch for the MAC table, whereas with
VxLAN they do not, because each VM's address is encapsulated behind
its host's address.
ARP broadcast: The agents act as a distributed Proxy ARP, so ARP
broadcasts are greatly reduced.
3.3 VM Migrations
There is also the need to migrate VM to another host to opti-
mize usage of the underlying physical server hardware and reduce
energy costs. EYWA has the most significant advantage of live-
migrations. For example, let’s assume that a VM live migrates to
another hypervisor host because of the VR or hypervisor host’s
overload. A default gateway IP address set inside the VM is diffi-
cult to change during operation and therefore the migrated VM
will continue to use the default gateway VR as a default gateway
in most environments. In EYWA environments, there is also no
change in the migrated VM’s default gateway IP address, but the
VM can use a physically different and underloaded VR as a de-
fault gateway as illustrated in Figure 4 and therefore this get the
advantage that reduce network bandwidth and latency. All the
advantages are additionally obtained by the agent’s packet control
that allows the unlimited VRs exist simultaneously with a same
private IP address.
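The migration behavior can be sketched as follows: the gateway IP configured inside the VM never changes, but the agent on each host resolves it to that host's own VR. All host names and MAC addresses below are hypothetical illustrations, not values from EYWA:

```python
# Hypothetical per-host view: the gateway IP is identical everywhere,
# but it resolves to a physically different VR depending on where the
# VM currently runs.
GATEWAY_IP = "10.0.0.1"

LOCAL_VR_MAC = {                 # host -> MAC of the VR running on that host
    "host-A": "fa:16:3e:00:00:0a",
    "host-B": "fa:16:3e:00:00:0b",
}

def resolve_gateway(host: str) -> str:
    """The agent on `host` answers an ARP request for the gateway IP with
    the local VR's MAC, so a migrated VM transparently switches VRs."""
    return LOCAL_VR_MAC[host]

# Before migration the VM on host-A reaches host-A's VR; after
# live-migrating to host-B it keeps GATEWAY_IP but now reaches host-B's VR.
before = resolve_gateway("host-A")
after = resolve_gateway("host-B")
```

In effect, live migration rebinds the VM to a nearer, less loaded VR without touching the VM's own network configuration.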
3.4 EYWA Agent
To achieve these advantages, an agent in every hypervisor host
monitors each tenant's Virtual Router Port (vport) and VxLAN Tunnel
End Point (VTEP) in the hypervisor host, as illustrated in Figure 3.
The agent does not communicate with any servers or other agents; it
operates entirely independently.
Figure 4: SNAT Traffic Flows after VM live-migration
3.4.1 VR Monitoring
The agent monitors the vport to check the local VR's state and
bandwidth, and performs health checks on the local VR through the
vport connected to the VSi. That is, the agent can check the local
VR's state by monitoring ARP sessions and Gratuitous ARP (GARP) and
by performing periodic health checks. The local VR's state can also
be determined passively by observing the VR's ARP reply to a VM's
ARP request. Based on this, the agent selects the Normal Mode if the
local VR is running normally, and the Orphan Mode if not. Finally,
the agent acquires QoS information by monitoring the local VR's
bandwidth usage through the vport.
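The monitoring logic can be sketched as a periodic probe loop. This is a simplified model, not EYWA's implementation; `probe()` is a hypothetical stand-in for the ARP-based health check through the vport, and the miss tolerance is our own assumption:

```python
import time
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"   # local VR is healthy and acts as the default gateway
    ORPHAN = "orphan"   # local VR missing or failed; a remote VR must serve

def monitor_local_vr(probe, interval=1.0, failures_allowed=3):
    """Periodic health check on the local VR, as a generator yielding the
    current mode after each check. `probe()` stands in for sending an ARP
    request through the vport and waiting for the VR's reply; it returns
    True when the VR answered. A few missed probes are tolerated before
    the agent declares the Orphan Mode."""
    misses = 0
    while True:
        if probe():
            misses = 0
        else:
            misses += 1
        yield Mode.ORPHAN if misses >= failures_allowed else Mode.NORMAL
        time.sleep(interval)
```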
3.4.2 ARP Caching
The agent maintains an ARP cache mapping IP addresses to MAC
addresses, which it uses for its Proxy ARP function until a cache
entry times out. For this purpose, the agent stores the addresses of
the local VR, local VMs and remote VMs in the ARP cache by monitoring
all ARP sessions and GARP packets through the vport and VTEP. It also
sends ARP requests to the local VR and local VMs (but not to remote
VMs) at regular intervals before their cache entries time out, so the
ARP cache is kept consistently up to date.
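A minimal sketch of such a cache, assuming a fixed TTL and an injectable clock for testing; the class and method names are our own, not EYWA's:

```python
import time

class ArpCache:
    """Sketch of the agent's ARP cache: IP -> (MAC, expiry, locality).
    Entries learned from observed ARP/GARP traffic expire after `ttl`
    seconds; local entries (the local VR and local VMs) are the ones the
    agent re-probes before they expire, as described above."""

    def __init__(self, ttl=60.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._entries = {}          # ip -> (mac, expiry, is_local)

    def learn(self, ip, mac, is_local=False):
        # Called whenever an ARP session or GARP packet is observed.
        self._entries[ip] = (mac, self.clock() + self.ttl, is_local)

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None:
            return None
        mac, expiry, _ = entry
        if self.clock() >= expiry:   # stale entry behaves as a cache miss
            del self._entries[ip]
            return None
        return mac

    def needs_refresh(self, margin=5.0):
        """Local entries whose TTL is about to run out; the agent would
        send them a unicast ARP request to keep the cache warm."""
        now = self.clock()
        return [ip for ip, (_, exp, local) in self._entries.items()
                if local and exp - now <= margin]
```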
3.4.3 ARP Filtering & Proxy ARP
As explained above, a large number of VRs per tenant have the same
private IP address. The agent therefore filters ARP packets through
the VTEP, both to let VMs discover a single gateway VR and to prevent
IP address conflicts among the multiple VRs, and it acts as a Proxy
ARP through the VTEP to reduce ARP broadcasts; it does not handle ARP
broadcasts between local instances such as the VR and VMs in the same
VSi.
In the Normal Mode, the local VR is assigned as the VMs' default
gateway; in the Orphan Mode, the fastest underloaded remote VR is
assigned as the VM's default gateway. For these reasons, all the
agents control ARP packets through the VTEP according to the ARP
Packet Control Rules shown in Table 1.

Table 1: ARP Packet Control Rules on VTEPs

ARP Request, VR(10.0.0.1) -> VM:
  Normal/Outbound: 1-1 Pass, 1-2 Filtering, 1-3 Proxy
  Normal/Inbound:  2 Filtering
  Orphan/Outbound: 3 N/A
  Orphan/Inbound:  4-1 Filtering, 4-2 Proxy
ARP Request, VM -> VR(10.0.0.1):
  Normal/Outbound: 5 Filtering
  Normal/Inbound:  6-1 Filtering, 6-2 Proxy
  Orphan/Outbound: 7 Pass
  Orphan/Inbound:  8 Filtering
ARP Reply (unicast), VR(10.0.0.1) -> VM:
  Normal/Outbound: 9 N/A
  Normal/Inbound:  10 N/A
  Orphan/Outbound: 11 N/A
  Orphan/Inbound:  12 Pass & Filtering
ARP Reply (unicast), VM -> VR(10.0.0.1):
  Normal/Outbound: 13 N/A
  Normal/Inbound:  14 Pass
  Orphan/Outbound: 15 N/A
  Orphan/Inbound:  16 N/A
GARP, VR(10.0.0.1) -> VR(10.0.0.1):
  Normal/Outbound: 17 Filtering
  Normal/Inbound:  18 N/A
  Orphan/Outbound: 19 N/A
  Orphan/Inbound:  20 N/A
ARP Request, VM -> VM:
  Normal/Outbound: 21-1 Pass, 21-2 Filtering, 21-3 Proxy
  Normal/Inbound:  22-1 Filtering, 22-2 Proxy
  Orphan/Outbound: 23-1 Pass, 23-2 Filtering, 23-3 Proxy
  Orphan/Inbound:  24-1 Filtering, 24-2 Proxy

The detailed descriptions are as follows:
1. Normal Mode, outbound ARP request (Sender VR -> Target VM): an
outgoing ARP request from the local host, sent by the local VR to
discover a local or remote orphan VM. If the target IP address is a
local VM, 1-2 Filtering prevents the ARP broadcast because the local
VM sends the ARP reply itself. If the target IP address is a remote
VM, 1-3 Proxy prevents the ARP broadcast or 1-1 Pass is applied,
depending on the presence or absence of the MAC entry in the agent's
ARP cache.
2. Normal Mode, inbound ARP request (Sender VR -> Target VM): an
incoming ARP request (forwarded by a remote host's 1-1 Pass), sent by
a remote VR to discover a remote orphan VM. 2 Filtering applies
because the ARP broadcast must be prevented and local VMs should not
be visible to remote VRs in the Normal Mode.
3. Orphan Mode, outbound ARP request (Sender VR -> Target VM): an ARP
request sent by a local VR to discover a local or remote orphan VM,
but 3 N/A because there is no local VR in the Orphan Mode.
4. Orphan Mode, inbound ARP request (Sender VR -> Target VM): an ARP
request (forwarded by a remote host's 1-1 Pass), sent by a remote VR
to discover an orphan VM. If the target IP address is a local VM,
4-2 Proxy prevents the ARP broadcast from spreading inside the host.
If not, 4-1 Filtering applies.
5. Normal Mode, outbound ARP request (Sender VM -> Target VR): an ARP
request sent by a local VM to discover a local VR. 5 Filtering
applies because remote VRs should not be visible to local VMs in the
Normal Mode and the ARP broadcast must be prevented.
6. Normal Mode, inbound ARP request (Sender VM -> Target VR): an ARP
request (forwarded by a remote host's 7 Pass), sent by a remote
orphan VM to discover a VR. If the local VR is overloaded,
6-1 Filtering applies for QoS; if not, 6-2 Proxy.
7. Orphan Mode, outbound ARP request (Sender VM -> Target VR): an ARP
request sent by a local orphan VM to discover a remote VR. 7 Pass
lets it discover a remote VR.
8. Orphan Mode, inbound ARP request (Sender VM -> Target VR): an ARP
request (forwarded by a remote host's 7 Pass), sent by a remote
orphan VM to discover a VR, but 8 Filtering applies because there is
no local VR in the Orphan Mode and the ARP broadcast must be
prevented.
9. Normal Mode, outbound ARP reply (Sender VR -> Target VM): an ARP
reply (to 7 Pass) from the local VR answering a remote orphan VM, but
9 N/A because the corresponding ARP request was already handled by
6-1 Filtering or 6-2 Proxy.
10. Normal Mode, inbound ARP reply (Sender VR -> Target VM): an ARP
reply from a remote VR answering a local VM, but 10 N/A because the
ARP request was already handled by 5 Filtering.
11. Orphan Mode, outbound ARP reply (Sender VR -> Target VM): an ARP
reply from a local VR answering a remote orphan VM, but 11 N/A
because there is no local VR in the Orphan Mode.
12. Orphan Mode, inbound ARP reply (Sender VR -> Target VM): ARP
replies (to 7 Pass) from multiple remote VRs answering a local orphan
VM. Normally the ARP mechanism does not produce more than one reply
to a single ARP request; here, a large number of underloaded VRs can
each send an ARP reply, that is, an ARP flux problem occurs. To solve
this, the agent simply accepts only the fastest ARP reply and filters
the rest (12 Pass & Filtering).
13. Normal Mode, outbound ARP reply (Sender VM -> Target VR): an ARP
reply from a local VM answering a remote VR, but 13 N/A because the
ARP request was already handled by 2 Filtering.
14. Normal Mode, inbound ARP reply (Sender VM -> Target VR): an ARP
reply (to 1-1 Pass) from a remote orphan VM answering the local VR;
therefore 14 Pass.
15. Orphan Mode, outbound ARP reply (Sender VM -> Target VR): an ARP
reply (to 1-1 Pass) from a local orphan VM answering a remote VR, but
15 N/A because the ARP request was already handled by 4-2 Proxy.
16. Orphan Mode, inbound ARP reply (Sender VM -> Target VR): an ARP
reply from a remote orphan VM answering a local VR, but 16 N/A
because there is no local VR in the Orphan Mode.
17. Normal Mode, outbound GARP (Sender VR -> Target VR): a GARP
request from the local VR to update all VMs' and switches' caches and
to detect IP conflicts. When a local VR starts (Orphan Mode -> Normal
Mode), it sends a GARP request. 17 Filtering confines the update to
the local VMs' ARP caches and prevents IP address conflicts with all
remote VRs.
18. Normal Mode, inbound GARP (Sender VR -> Target VR): a GARP
request from a remote VR to update all VMs' ARP caches and detect IP
conflicts when the remote VR starts, but 18 N/A because the GARP
request was already handled by the remote agent's 17 Filtering.
19. Orphan Mode, outbound GARP (Sender VR -> Target VR): a GARP
request sent by a local VR. 19 N/A because there is no local VR in
the Orphan Mode.
20. Orphan Mode, inbound GARP (Sender VR -> Target VR): a GARP
request sent by a remote VR. 20 N/A because the GARP request was
already handled by the remote agent's 17 Filtering.
21-24. Inter-VM communication: the packet is an ARP request from one
VM to discover another VM. For an ARP request from a local VM to
discover a remote VM, 21-3 Proxy or 21-1 Pass (23-3 Proxy or
23-1 Pass) is applied, depending on the presence or absence of the
MAC entry in the agent's ARP cache. For an ARP request from a local
VM to discover another local VM, 21-2 Filtering or 23-2 Filtering
prevents the ARP broadcast. For an ARP request from a remote VM to
discover another remote VM, 22-1 Filtering or 24-1 Filtering prevents
the ARP broadcast. For an ARP request from a remote VM to discover a
local VM, 22-2 Proxy or 24-2 Proxy is applied.
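The ARP-request rows of Table 1 can be encoded as a lookup keyed on (sender role, target role, direction, mode). This is a simplified sketch with our own identifiers; cells that contain several sub-rules are collapsed into a string naming the alternatives, since the choice among them depends on runtime state (target locality, cache contents, VR load):

```python
# Simplified encoding of the ARP-request rows of Table 1.
# Key: (sender role, target role, direction, mode) -> action.
# Comments give the rule numbers and the condition that selects among
# sub-rules where a cell has several.
RULES = {
    # ARP request, VR(10.0.0.1) -> VM
    ("VR", "VM", "out", "normal"): "pass/filter/proxy",  # 1-1/1-2/1-3: local vs remote target, cache hit
    ("VR", "VM", "in",  "normal"): "filter",             # 2
    ("VR", "VM", "out", "orphan"): "n/a",                # 3: no local VR
    ("VR", "VM", "in",  "orphan"): "filter/proxy",       # 4-1/4-2: remote vs local target
    # ARP request, VM -> VR(10.0.0.1)
    ("VM", "VR", "out", "normal"): "filter",             # 5
    ("VM", "VR", "in",  "normal"): "filter/proxy",       # 6-1/6-2: local VR overloaded or not
    ("VM", "VR", "out", "orphan"): "pass",               # 7: discover a remote VR
    ("VM", "VR", "in",  "orphan"): "filter",             # 8
}

def arp_request_action(sender, target, direction, mode):
    """Look up the agent's action for an observed ARP request."""
    return RULES[(sender, target, direction, mode)]
```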
4. EVALUATION
In this section we evaluate EYWA using a prototype running on a
10-server testbed and 1 commodity switch (2 10 Gbps ports and 24
1 Gbps ports) in a single rack, where the 10 servers are connected at
1 Gbps. The layer-4 load balancer inside the VR is based on the open
source HAProxy. Our goals are first to show that EYWA can be built
from components that are available today, and second, that our
implementation solves the public network problems described in
Section 2. Private network issues such as east-west traffic within a
single tenant are excluded from the evaluation because the outcome
follows directly from the design.
4.1 North-South Traffic
In this section, we show that all the VMs can utilize all the
physical bandwidth without throughput bottleneck when the VMs
communicate with external servers.
4.1.1 Outbound communications
First, there is a single VR and a single VM of the same tenant in
every hypervisor host, that is, all of them are in the Normal Mode.
All the VMs send packets to external servers at full bandwidth. The
total outbound bandwidth of all the VMs equals the sum of each VM's
physical bandwidth, as illustrated in Figure 5(a), and the same
pattern holds for the total inbound bandwidth.
4.1.2 Outbound communications in the Auto-Scaling
scenario of VRs and VMs
We evaluate under the assumption that the cloud platform's
auto-scaling can launch or terminate a VR and VM according to scaling
policies such as network bandwidth; for the evaluation we launch and
terminate instances manually. The test environment is the same as in
Section 4.1.1, except that there is at most one VM and one VR per
hypervisor host.
Figure 6 shows how the total outbound bandwidth of all the VMs
increases and decreases as the auto-scaling launches or terminates a
VR and a VM separately. The total outbound bandwidth rises and falls
accordingly, and the same pattern holds for the total inbound
bandwidth.
4.2 East-West (Inter-Tenant) Traffic
In this section, we also show that all VMs can utilize all the
physical bandwidth without throughput bottleneck when the VMs
communicate with another tenant’s VMs.
4.2.1 1-to-1 communications
The test environment is the same as in Section 4.1.1, except that the
10 hypervisor hosts are divided equally between tenants A and B.
There is again one VR and one VM of the same tenant in every
hypervisor host. Each VM of tenant A sends packets to an idle VM of
tenant B at full bandwidth. The average outbound bandwidth per VM of
tenant A equals each VM's physical bandwidth, as illustrated in
Figure 5(b).
4.2.2 1-to-N communications
The test environment is the same as in Section 4.2.1, except that
each VM of tenant A sends packets to all the VMs of tenant B, and
each VM of tenant B also sends packets to all the VMs of tenant A, at
full bandwidth. The total outbound bandwidth of each host pair equals
the sum of the VM pair's physical bandwidth, as illustrated in
Figure 5(c).
4.3 North-South and East-West Traffic
In this section, we also show that all VMs can utilize all the
physical bandwidth without throughput bottleneck when the VMs
communicate simultaneously with external servers and another
tenant’s VMs.
4.3.1 Outbound and 1-to-N communications
The 10 hypervisor hosts are again divided equally between tenants A
and B. There are a single VR and 2 VMs (an e-w VM and an n-s VM) of
the same tenant in every hypervisor host. Each e-w VM of tenant A
sends packets to all the e-w VMs of tenant B at full bandwidth, and
each n-s VM of tenant A sends packets to external servers at full
bandwidth; tenant B does the same. The total outbound bandwidth of
each host pair equals the sum of the VM pair's physical bandwidth, as
illustrated in Figure 7. In a single-router environment, by contrast,
the total outbound bandwidth of a host pair would be capped at the
sum of 2 VMs' physical bandwidth under every condition.
5. RELATED WORK
Virtual network designs for multi-tenant cloud environments:
OpenStack/Neutron/Distributed Virtual Router (DVR) [33] provides a
virtual LAN per tenant and distributed DNAT, but SNAT remains
centralized in the Network Node. It also has no large layer-2
semantics.
OpenStack/MidoNet [32] is an open network virtualization system. An
agent in every host is responsible for setting up new network flows
and for controlling the kernel fastpath to provide distributed
networking services (switching, routing, NAT, etc.). All traffic from
the external network is handled (routing, security groups, firewalls,
and load balancing) by Gateways on dedicated servers, and a Network
State Database on dedicated servers stores high-level configuration
information such as topology, routes and NAT settings. MidoNet
requires additional servers or components beyond an agent in every
host, and it has no large layer-2 semantics.
Figure 5: Network bandwidth on north-south or east-west communication
Figure 6: Total north-south network bandwidth increased and decreased by auto-scaled VMs and VRs
Figure 7: Network bandwidth competition on north-south and east-west communication
CloudStack [17] provides a HAProxy-based VR and a virtual LAN per
tenant as illustrated in Figure 3, but the VR is a throughput
bottleneck and a SPOF. It also has no large layer-2 semantics.
AWS/Virtual Private Cloud (VPC) [29] provides a logically iso-
lated section of the AWS Cloud. Internet Gateway is a redundant
and highly available VPC component that allows communication
between instances in VPC and the Internet. Internet Gateway
serves three purposes: to provide a target in VPC route tables for
Internet-routable traffic, perform NAT for instances that have
been assigned public IP addresses, and proxy all ARP requests
and replies. Internet Gateway can be a bottleneck and cannot
proxy all ARP packets by itself in the large layer-2 network.
Physical network designs for Data Centers: Monsoon [1] implements a
large layer-2 network in data centers. It is designed on top of layer
2, reinvents fault-tolerant routing mechanisms already established at
layer 3, and relies on centralized directory servers that store all
address information.
SEATTLE [3] also implements a large layer-2 network in data
centers. It stores the location at which each server is connected to
the network in a one hop DHT distributed across the switches.
VL2 [2] provides hot-spot-free routing and scalable layer-2 se-
mantics using forwarding primitives available today and minor,
application-compatible modifications to host operating systems,
and the centralized directory servers store all address information.
Fat-tree [9] relies on a customized routing primitive that does
not yet exist in commodity switches.
Transparent Interconnection of Lots of Links (TRILL) [13] is a
layer-2 forwarding protocol that operates within one IEEE 802.1-
compliant Ethernet broadcast domain. It replaces the STP by us-
ing Intermediate System to Intermediate System (IS-IS) routing to
distribute link state information and calculate shortest paths
through the network.
802.1aq Shortest Path Bridging (SPB) [14] allows for true
shortest path forwarding in a mesh Ethernet network context
utilizing multiple equal cost paths. This permits it to support
much larger layer 2 topologies, with faster convergence, and
vastly improved use of the mesh topology.
Layer-4 Load Balancing: Ananta [4] is a distributed layer-4 load
balancer and NAT for multi-tenant cloud environments. Its components
are an agent in every hypervisor host that can take over the packet
modification function from the load balancer, a virtual switch in
every hypervisor host that provides the NAT function, Multiplexers on
dedicated servers that handle all incoming traffic, and a Manager on
dedicated servers that implements Ananta's control plane. It does not
use DNS-based load balancing.
OpenStack/Neutron/Load Balancing as a Service (LBaaS) [40] allows
proprietary and open source load balancing technologies to drive the
actual load balancing of requests. Thus, an OpenStack operator can
choose which back-end technology to use.
AWS/Elastic Load Balancing (ELB) [30] automatically distributes
incoming application traffic across multiple Amazon EC2 instances
using DNS-based load balancing in the AWS cloud.
HAProxy [35] is an open source solution offering load balancing and
proxying for TCP and HTTP-based applications.
Tunneling protocols: Network Virtualization using Generic Routing
Encapsulation (NVGRE) [6] uses GRE as the encapsulation method and
uses the lower 24 bits of the GRE header to represent the Tenant
Network Identifier (TNI). Like VxLAN, this 24-bit space allows for
16 million virtual networks.
Stateless Transport Tunneling (STT) [7] is another tunneling
protocol. A further advantage of STT is its use of a 64-bit network
ID rather than the 24-bit IDs used by NVGRE and VxLAN.
6. CONCLUSION
We presented the design of EYWA, a new virtual network architecture
that accommodates a large number of tenants using isolated virtual
LANs, provides a public network per tenant without bottlenecks or
SPOF on SNAT/DNAT, and provides a single large IP subnet per tenant
using large layer-2 semantics in multi-tenant cloud environments,
benefiting both cloud service providers and consumers.
EYWA is a simple design that can be realized with available
networking technologies, without changes to hosts' kernels or to
physical and software switches. It also requires no additional
servers or components except a distributed, independent agent in
every hypervisor host; that is, there is nothing additional and
centralized to manage. In future work, EYWA will be integrated with
open source IaaS solutions to support real, huge cloud data centers
with unlimited scalability.
The limits of the virtual environment should be imposed by the
physical environment rather than by the architecture itself: EYWA
scales with the size of the physical network and compute farm.
7. REFERENCES
[1] A. Greenberg, P. Lahiri, D. A. Maltz, P. Patel, and S.
Sengupta. Towards a next generation data center architec-
ture: Scalability and commoditization. In PRESTO Work-
shop at SIGCOMM, 2008.
[2] A. Greenberg, James R. Hamilton, Navendu Jain, Srikanth
Kandula, Changhoon Kim, Parantap Lahiri, David A. Maltz,
Parveen Patel and Sudipta Sengupta. VL2: A Scalable and
Flexible Data Center Network. In SIGCOMM, 2009.
[3] C. Kim, M. Caesar, and J. Rexford. Floodless in SEATTLE:
a scalable ethernet architecture for large enterprises. In
SIGCOMM, 2008.
[4] Parveen Patel, Deepak Bansal, Lihua Yuan, Ashwin Murthy,
Albert Greenberg, David A. Maltz, Randy Kern, Hemant
Kumar, Marios Zikos, Hongyu Wu, Changhoon Kim,
Naveen Karri. Ananta: Cloud Scale Load Balancing. In
SIGCOMM, 2013.
[5] M. Mahalingam, D. Dutt, K. Duda, P. Agarwal, L. Kreeger,
T. Sridhar, M. Bursell and C. Wright. VXLAN: A Frame-
work for Overlaying Virtualized Layer 2 Networks over Lay-
er 3 Networks, IETF Internet Draft.
[6] M. Sridharan, K. Duda, I. Ganga, A. Greenberg, G. Lin, M.
Pearson, P. Thaler, C. Tumuluri, N. Venkataramiah, Y.
Wang. NVGRE: Network Virtualization using Generic Rout-
ing Encapsulation, IETF Internet Draft.
[7] B. Davie, Ed., J. Gross. A Stateless Transport Tunneling Pro-
tocol for Network Virtualization (STT), IETF Internet Draft.
[8] Radhika Niranjan Mysore, Andreas Pamboris, Nathan Far-
rington, Nelson Huang, Pardis Miri, Sivasankar Radhakrish-
nan, Vikram Subramanya, and Amin Vahdat. PortLand: A
Scalable Fault-Tolerant Layer 2 Data Center Network Fabric.
In SIGCOMM, 2009
[9] Mohammad Al-Fares, Alexander Loukissas and Amin
Vahdat. A Scalable, Commodity Data Center network Archi-
tecture. In SIGCOMM, 2008
[10] N. Mckeown, T. Anderson, H. Balakrishnan, G. M. Parulkar,
L. L. Peterson, J. Rexford, S. Shenker, and J. S. Turner.
OpenFlow: Enabling Innovation in Campus Networks. In
SIGCOMM, 2008
[11] IEEE 802.1Q VLANs, Media Access Control Bridges and
Virtual Bridged Local Area Networks
[12] S. Nadas, Ed. Ericsson, Virtual Router Redundancy Protocol
(VRRP) Version 3 for IPv4 and IPv6. IETF RFC 5798.
[13] R. Perlman et al. TRILL: Transparent Interconnection of
Lots of Links. IETF RFC
[14] IEEE 802.1aq Shortest Path Bridging
[15] D. Allan, N. Bragg, P. Unbehagen. IS-IS Extensions Sup-
porting IEEE 802.1aq Shortest Path Bridging, IETF RFC
[16] Openstack, http://www.openstack.org
[17] Apache Cloudstack, http://cloudstack.apache.org
[18] Eucalyptus, http://www.eucalyptus.com
[19] OpenNebula, http://opennebula.org
[20] Amazon Web Services, http://aws.amazon.com
[21] Microsoft, https://www.microsoft.com
[22] Microsoft Azure, http://azure.microsoft.com
[23] Vmware, http://www.vmware.com
[24] Rackspace Open Cloud, http://www.rackspace.com/cloud
[25] Google Compute Engine, http://cloud.google.com/compute
[26] IBM Cloud, http://www.ibm.com/cloud-computing
[27] Ucloud biz, https://ucloudbiz.olleh.com
[28] OpenFlow, http://archive.openflow.org
[29] AWS Virtual Private Cloud (VPC),
http://aws.amazon.com/vpc
[30] AWS Elastic Load Balancing (ELB),
http://aws.amazon.com/elasticloadbalancing
[31] AWS Route 53, http://aws.amazon.com/route53
[32] MidoNet, http://www.midokura.com/midonet
[33] Openstack/Neutron/Distributed Virtual Router (DVR),
https://wiki.openstack.org/wiki/Neutron/DVR
[34] NetScaler VPX Virtual Appliance, http://www.citrix.com
[35] HAProxy Load Balancer, http://www.haproxy.org
[36] Linux Virtual Server, http://www.linuxvirtualserver.org
[37] Vyatta Virtual Router, http://www.brocade.com
[38] OVS Virtual Switch, http://openvswitch.org
[39] Linux Bridge, http://www.linuxfoundation.org
[40] OpenStack/Neutron/LBaaS,
https://wiki.openstack.org/wiki/Neutron/LBaaS
[41] EYWA simple POC, https://goo.gl/A1dMJ0
[42] EYWA Presentation, https://goo.gl/wMjCgI