This paper reviews the trends and technologies in Unified Computing, describes the Data Center Ethernet technologies for implementing Fibre Channel over Ethernet, and describes Cisco's Unified Computing System (UCS).
Unified Computing In Servers
Wendell Wenjen
Wendell.wenjen@gmail.com
June 8, 2010
UCSC Extension, Network Storage Essentials, 21940
Abstract:
Unified Computing (UC) is an industry term which refers to a common network fabric for both data
(TCP/IP) and storage (Fibre Channel or iSCSI) networking. Servers which have implemented Unified
Computing use a converged network adapter (CNA) which can manage the transport of multiple
network protocols including TCP/IP, Fibre Channel over Ethernet (FCoE) and iSCSI. In addition, practical
implementation of UC also includes 10Gb Ethernet (10GbE), support for network virtualization and
implementation of Data Center Bridging (DCB). Cisco’s implementation of Unified Computing is
examined.
Converged Network Adapter
A Converged Network Adapter (CNA) is a Network Interface Card (NIC) that supports IP data as well as storage network protocols such as Fibre Channel over Ethernet (FCoE) and iSCSI. CNAs replace the Host Bus Adapter (HBA) for Fibre Channel, thereby reducing the amount of network cabling, the number of add-in adapter cards, and the power those cards require. Since Fibre Channel cards have a data rate of 4 Gbps or 8 Gbps, CNAs typically use 10GbE. Like an HBA, a CNA implements a Fibre Channel initiator, which can reside either in the CNA hardware controller or in a software-based initiator in the driver. While hardware-based initiators have traditionally been used to minimize host-server CPU loading [CHE], faster CPUs and more efficient software initiators have also been implemented by Intel and other CNA providers [INT].
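The multiplexing role a CNA plays can be illustrated with a small sketch (Python; the frame handling is simplified and the function names are illustrative, not any vendor's API). Incoming Ethernet frames are steered either to the host IP stack or to the FC initiator based on the EtherType field: 0x0800 for IPv4 and 0x8906, the registered EtherType for FCoE.

```python
import struct

# Registered EtherType values: IPv4 and FCoE.
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_FCOE = 0x8906

def classify_frame(frame: bytes) -> str:
    """Steer an Ethernet frame the way a CNA would: read the EtherType
    at bytes 12-13 of the header and hand the frame to the matching stack."""
    if len(frame) < 14:
        raise ValueError("runt frame")
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETHERTYPE_IPV4:
        return "ip-stack"          # TCP/IP traffic -> host network stack
    if ethertype == ETHERTYPE_FCOE:
        return "fc-initiator"      # storage traffic -> FC initiator (hw or sw)
    return "drop"                  # anything else is ignored in this sketch

# Minimal frames: 6B dst MAC + 6B src MAC + 2B EtherType + payload.
dst, src = b"\xff" * 6, b"\x02" * 6
ip_frame = dst + src + struct.pack("!H", ETHERTYPE_IPV4) + b"payload"
fcoe_frame = dst + src + struct.pack("!H", ETHERTYPE_FCOE) + b"fc-frame"
```

Whether "fc-initiator" then means a hardware engine (as in the Chelsio design below) or a software driver path (as in Intel's Open FCoE) is exactly the design choice the two diagrams contrast.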
In the diagram above from [INT], the architecture of Intel's CNA using an "open" FCoE approach is described. Intel uses a software-based FC initiator rather than the hardware-based approach used in FC HBAs. The diagram shows that the adapter implements both FCoE and iSCSI off-load in the adapter hardware, for functions such as checksum calculation. Above the adapter, the functions implemented by the host server are shown. Note that most of the FCoE logic, including the protocols, is implemented in software by the host system.
In contrast, the diagrams above from [CHE] show Chelsio's implementation of a CNA with FCoE implemented in hardware. The top diagram shows that the "T4" hardware in the CNA handles most of the protocol functions, with a lower-level driver allowing communication with the iSCSI, FCoE, and RDMA protocols. In the lower diagram, Chelsio also implements a TCP/IP Offload Engine (TOE) to accelerate iSCSI protocol traffic as well.
Data Center Bridging
Data Center Bridging (DCB), also called Converged Enhanced Ethernet (CEE) or Data Center Ethernet (DCE), is a set of extensions to the Ethernet specification (IEEE 802.1) that optimize the Ethernet protocol for large-scale data centers and unified computing. These enhancements include:
802.1aq -- Shortest Path Bridging
802.1Qau -- Congestion Notification
802.1Qaz -- Enhanced Transmission Selection
802.1Qbb -- Priority-based Flow Control
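The behaviour of 802.1Qbb can be sketched with a toy model (Python; the class and method names are illustrative, not from any standard API): traffic is queued per 802.1p priority, and a PFC pause holds only the paused priority's queue while the other queues keep transmitting.

```python
class PfcPort:
    """Toy model of 802.1Qbb Priority-based Flow Control: eight queues,
    one per 802.1p priority. A PFC pause stops one queue, not the link."""
    def __init__(self):
        self.queues = {p: [] for p in range(8)}   # one FIFO per priority
        self.paused = set()                        # priorities under pause

    def enqueue(self, priority, frame):
        self.queues[priority].append(frame)

    def receive_pause(self, priority):
        """Peer signals congestion for one priority only."""
        self.paused.add(priority)

    def receive_resume(self, priority):
        self.paused.discard(priority)

    def transmit(self):
        """Drain all unpaused queues, highest priority first."""
        sent = []
        for p in sorted(self.queues, reverse=True):
            if p not in self.paused:
                sent.extend(self.queues[p])
                self.queues[p].clear()
        return sent

port = PfcPort()
port.enqueue(0, "bulk")   # low-priority LAN traffic
port.enqueue(3, "fcoe")   # storage class, commonly mapped to priority 3
port.receive_pause(0)     # congestion on priority 0: only "bulk" is held
```

This per-priority pause is what lets lossy LAN traffic and lossless storage traffic share one wire: the storage priority is never dropped, only paused.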
802.1Qbb (Priority-based Flow Control) implements the eight priority levels specified in the existing 802.1p while adding the capability of stopping the flow of lower-priority traffic while allowing higher-priority traffic to pass [JAC]. 802.1Qaz (Enhanced Transmission Selection) allows 802.1p priorities requiring the same class of service to be grouped together, with specific network bandwidth allocated to each group. A priority group can also be defined that overrides the priority of all other groups and consumes all of the network bandwidth if required. 802.1Qau provides a rate-matching mechanism that allows an end-point nearing overflow of its receive buffers to notify all sending nodes of the congestion and request throttling of their transmissions. When the congestion clears, the receiving node notifies all sending nodes to resume their full transmission rate. 802.1aq and the related Transparent Interconnection of Lots of Links (TRILL) improve on the Spanning Tree protocol by using link-state information to quickly determine the optimal route through the network, to react quickly to network changes, and to take advantage of multiple routes by spreading traffic among them.
In order to support the lossless nature of Fibre Channel networking using FCoE, an end-to-end DCB-enabled network is required. Deployment of DCB in a unified computing environment can be done in three different ways [MAR2].
In the first method, a server with a converged network adapter (CNA) is connected to a DCB-enabled unified computing switch, which routes FCoE traffic to a traditional Fibre Channel switch. The FC switch routes FC traffic to the disk storage array over Fibre Channel.
In the second method, the same server with the CNA routes FCoE storage traffic to the same DCB-
enabled UC switch. The switch routes the FC storage traffic directly to the target storage array.
In the third method, the CNA-enabled server is connected to a DCB-enabled UC switch which is connected to a CNA-enabled disk array. Storage traffic using FCoE is routed over the Ethernet network until it is received by the FCoE-enabled storage array, where the Ethernet headers are stripped off and the native FC commands are delivered to the storage array. [Diagrams from MAR2].
The figure above from [GAI2] illustrates end-to-end transmission of an FC packet from a storage array with FC ID 7.1.1 to a server with FC ID 1.1.1. In this example, the packet first transits an FC fabric switch, which looks up the destination 1.1.1 and forwards the packet using the Fabric Shortest Path First (FSPF) routing algorithm. Switch 3, an Ethernet switch capable of routing FCoE traffic, encapsulates the FC packet with Ethernet headers and forwards it to Switch 4, which de-encapsulates the packet and then re-encapsulates it for forwarding to the end destination server.
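The wrap/unwrap symmetry performed by Switch 3 and Switch 4 can be sketched as follows (Python; the frame layout is simplified, since real FCoE also carries a version field and SOF/EOF delimiters around the FC frame, but the EtherType 0x8906 and the encapsulate/de-encapsulate pairing are the essential points):

```python
import struct

ETHERTYPE_FCOE = 0x8906  # registered EtherType for FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a native FC frame in an Ethernet header, as the FCoE-capable
    switch does before forwarding onto the Ethernet fabric. Simplified:
    the real encapsulation adds FCoE version and SOF/EOF markers too."""
    return dst_mac + src_mac + struct.pack("!H", ETHERTYPE_FCOE) + fc_frame

def fcoe_decapsulate(frame: bytes) -> bytes:
    """Strip the Ethernet header and recover the native FC frame, as done
    at the last FCoE hop before delivery toward the FC end node."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != ETHERTYPE_FCOE:
        raise ValueError("not an FCoE frame")
    return frame[14:]

# Illustrative FC frame from the storage array (FC ID 7.1.1) toward the
# server (FC ID 1.1.1); MAC addresses are placeholders.
fc_frame = b"\x01\x01\x01" + b"\x07\x01\x01" + b"scsi-data"
wire = fcoe_encapsulate(b"\x0e" * 6, b"\x0c" * 6, fc_frame)
```

Note that the FC frame itself, including its source and destination FC IDs, travels through the Ethernet segment untouched; only the outer Ethernet header changes hop by hop.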
Cisco’s Unified Computing System (UCS)
In March 2009, Cisco introduced the Unified Computing System (UCS), previously called "Project California". UCS is one of the first implementations of a unified computing platform in both blade and rack server form. UCS is based on Intel's "Nehalem" server platform, also called the Xeon 5500. As described in [GAI], UCS implements the CNA using a custom ASIC called "Palo" which provides FCoE and iSCSI functionality over a 10GbE transport. The UCS system also supports virtualization features including network, storage, and workload virtualization.
The blades in the UCS 5000 chassis are connected to Cisco's UCS 2000 series or Nexus 2000 fabric extenders, which provide a switched fabric between blades, aggregate traffic to and from the blade chassis, and provide a 10GbE uplink to the top-of-rack switch, which can be a UCS 6100 Series or a Nexus 2000 series switch. The Nexus 5000 switch aggregates traffic from the top-of-rack switch and forwards traffic to a core data center switch, typically a Nexus 7000.
References:
[CHE] Noureddine, W., "Converged LANs, SANs and Clusters on a Unified Wire", Chelsio, presented at the Linley Group Spring Conference on Ethernet in the Data Center, May 11, 2010.
[GAI] Gai, S., Salli, T., Andersson, R., "Project California: a Data Center Virtualization Server – UCS (Unified Computing System)", Cisco Systems, 2009, ISBN 978-0-557-05739-9.
[GAI2] Gai, S., Lemasa, G., "Fibre Channel over Ethernet in the Data Center: An Introduction", Fibre Channel Industry Association, 2007.
[INT] Dhawan, M., "Open FCoE: Cost Effective Access to Storage", Intel, presented at the Linley Group Spring Conference on Ethernet in the Data Center, May 11, 2010.
[JAC] Jacobs, D., "Converged Enhanced Ethernet: New protocols enhance data center Ethernet", April 15, 2009, http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1353927_mem1,00.html
[MAR] Martin, D., "Unified data center fabric primer: FCoE and data center bridging", January 11, 2010, http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1378613_mem1,00.html
[MAR2] Martin, D., "Unified fabric: Data center bridging and FCoE implementation", January 22, 2010, http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1379716_mem1,00.html
[McG] McGillicuddy, S., "FCoE network roadmap: Do you need a unified fabric strategy?"