Ethernet technology has emerged as a cost-effective, mature, robust, high-speed, and versatile choice for MAN/WAN networking of critical defense establishments and military installations – e.g., army, navy, and air force bases, mission commands, remote war centers, the Pentagon, and other security agencies. Intelligent Ethernet helps to achieve IP-centric service requirements, while enabling wireless and fixed-line networks to evolve to a fast, economical, packet-switched infrastructure. The last few years have seen tremendous advancements in Ethernet architecture, its features, switch/router system design, and its integration with optical technologies. This tutorial provides a clear conceptual overview of optical Ethernet technology advances, network architectures, and benefits for military and defense network planners, network architects, and system engineers.
43. Evolved Backhaul Network – common transmission infrastructure for different technologies (TDM and packets)
46. MEF Services for Mobile Backhaul
- EVPL Service for Backhaul using Metro Ethernet Networks (services muxed at the RNC UNI)
- EVP-LAN Service for Backhaul using Metro Ethernet Networks (needed when inter-BS communication is permitted, as in LTE/802.16m (WiMAX))
47. MEF Services for Mobile Backhaul – EVP-Tree Service for Backhaul using Metro Ethernet Networks
48. Key Developments Valuable for Military Adoption of Optical Ethernet Metanoia, Inc. Critical Systems Thinking™
50. Recent Advances in Optical Ethernet Standards: Snapshot (Area – Standards Organization(s) – Standard and/or Activity)
- Higher Speeds – IEEE – fast last-mile access (EPON, 802.11n); high-speed interfaces (40G, 100G)
- New Services – MEF – E-Tree (p2mp communication for multicast)
- Interworking – IETF, MEF – FCoE, Ethernet PWs, Circuit Emulation over Ethernet (MEF 8)
- Scalability – IEEE – hierarchy via Shortest Path Bridging (PLSB); Provider Backbone Bridging (802.1ah)
- Security – IEEE – LinkSec, MACsec, authentication
- OAM – IEEE, ITU-T SG 15 – Connectivity Fault Management (802.1ag); Performance Management (Y.1731)
- New Capabilities – IEEE, MEF, IETF – SyncE (link-layer clock distribution); 1588v2 (network-level time & clock distribution); demarcation devices (MEF NID); automatic SLA negotiation (MEF E-LMI); Ethernet as transport (PBB-TE); MPLS-TP (Transport Profile, applicable for COE)
- Reliability/Protection – ITU-T SG15 – linear (G.8031) & ring (G.8032) protection
52. MACsec Packet Format – TCI = Tag Control Information; AN = Association Number; SL = Short Length (i.e. no SCI inserted); PN = Packet Number; SCI = Secure Channel Identifier (optional)
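The SecTAG fields listed above can be pulled out of a frame in a few lines. The sketch below is a hypothetical parser, assuming the IEEE 802.1AE SecTAG layout (EtherType 0x88E5, a TCI/AN byte, an SL byte, a 32-bit PN, and an optional 8-byte SCI present when the TCI's SC bit is set); it is illustrative, not a validated implementation.

```python
import struct

MACSEC_ETHERTYPE = 0x88E5

def parse_sectag(frame: bytes):
    """Parse a MACsec SecTAG from an Ethernet frame (a sketch;
    field layout per IEEE 802.1AE: TCI/AN, SL, PN, optional SCI)."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != MACSEC_ETHERTYPE:
        raise ValueError("not a MACsec frame")
    tci_an, sl = frame[14], frame[15]
    sc = bool(tci_an & 0x20)                   # SC bit: SCI explicitly present
    an = tci_an & 0x03                         # Association Number (2 bits)
    pn = struct.unpack_from("!I", frame, 16)[0]  # 32-bit Packet Number
    sci = frame[20:28].hex() if sc else None   # optional Secure Channel ID
    return {"AN": an, "SL": sl & 0x3F, "PN": pn, "SCI": sci}
```

When the SC bit is clear (e.g. for point-to-point links), the SCI is omitted and the secure data begins 8 bytes earlier.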
54. Ethernet OAM & Maintenance Domains – Independent OAM can be run in each OAM domain for the same VLAN. IEEE 802.1ag provides for 8 levels of Maintenance Domains, allowing a level to be assigned to each entity – customer, provider, operator. [Figure: nested customer, provider, and operator OAM domains spanning the access and core segments of the service provider network]
55. Ethernet OAM: Loopback (LB) Example for Provider & Operator Domains – Independent OAM can be run in each OAM domain for the same VLAN. We show operator, provider, and customer loopback examples over the e2e Ethernet path. [Figure: customer LB, provider LB, and operator LBs running in their respective OAM domains across the access and core segments]
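To make the maintenance-level idea concrete, here is a toy model (not a protocol implementation; the node names and level numbers are illustrative) of how a loopback message sent at one maintenance level is answered only by maintenance points configured in that domain, while other domains pass it through transparently.

```python
# A toy model of 802.1ag maintenance-domain levels: loopback messages
# are answered only by maintenance points at the same MD level.
class MaintenancePoint:
    def __init__(self, name, md_level):
        self.name, self.md_level = name, md_level

    def on_lbm(self, md_level):
        # Respond with an LBR only if the message is at our own level;
        # messages for other domains pass through transparently.
        return f"LBR from {self.name}" if md_level == self.md_level else None

def loopback(path, md_level, target):
    """Send an LBM along `path` at `md_level`; return the reply (if any)
    from the targeted maintenance point."""
    for mp in path:
        if mp.name == target:
            return mp.on_lbm(md_level)
    return None

# Customer domains typically use a higher level than provider/operator
OPERATOR, PROVIDER, CUSTOMER = 2, 4, 6
path = [MaintenancePoint("op-edge", OPERATOR),
        MaintenancePoint("prov-edge", PROVIDER),
        MaintenancePoint("cust-edge", CUSTOMER)]
print(loopback(path, PROVIDER, "prov-edge"))   # provider-domain loopback succeeds
print(loopback(path, PROVIDER, "op-edge"))     # wrong level: no reply
```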
57. IEEE 1588 Synchronization Operation & Clock Offset Computation – MS_delay = t2 – t1; SM_delay = t4 – t3; offset = (MS_delay – SM_delay)/2
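The slide's offset computation can be checked with a few lines. The sketch below assumes a symmetric path (as the basic 1588 delay request-response computation does); the timestamp values are illustrative.

```python
def ptp_offset(t1, t2, t3, t4):
    """Compute slave clock offset from the four 1588 timestamps:
    t1 = Sync sent by master,     t2 = Sync received by slave,
    t3 = Delay_Req sent by slave, t4 = Delay_Req received by master."""
    ms_delay = t2 - t1            # master-to-slave delay (includes +offset)
    sm_delay = t4 - t3            # slave-to-master delay (includes -offset)
    offset = (ms_delay - sm_delay) / 2
    one_way = (ms_delay + sm_delay) / 2   # mean path delay
    return offset, one_way

# Slave clock 5 units ahead of master, true one-way delay 3 units:
# t1=100 -> t2=100+3+5=108; t3=200 -> t4=200+3-5=198
print(ptp_offset(100, 108, 200, 198))   # -> (5.0, 3.0)
```

Note how the symmetric-path assumption is what lets the offset and the one-way delay be separated from only four timestamps; path asymmetry shows up directly as offset error.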
58. How Optical Ethernet Meets Key Technology Requirements of Military Networks
67. Glossary (1)
ACL – Access Control List
B-MAC – Backbone MAC
B-VID – Backbone VLAN ID
BCB – Backbone Core Bridge
BEB – Backbone Edge Bridge
BSC – Base Station Controller
BTS – Base Transceiver Station
CAC – Connection Admission Control
CE – Customer Edge
COI – Communities of Interest
COTS – Common Off-The-Shelf
DA – Destination Address
DCN – Data Communication Network
DoD – Department of Defense
DPI – Deep Packet Inspection
DWDM – Dense Wavelength Division Multiplexing
e2e – End to End
ECMP – Equal Cost Multi-Path
ELMI – Ethernet Local Management Interface
EPON – Ethernet Passive Optical Network
EVC – Ethernet Virtual Connection
GPON – Gigabit-capable PON
H-QoS – Hierarchical QoS
I-SID – Individual Service ID
IEEE – Institute of Electrical and Electronics Engineers
IETF – Internet Engineering Task Force
IGMP – Internet Group Management Protocol
LAG – Link Aggregation Group
LC – Line Card
LDP – Label Distribution Protocol
MEF – Metro Ethernet Forum
MEN – Metro Ethernet Network
mp2mp – Multi-point to Multi-point
MPLS – Multi-Protocol Label Switching
MPLS-TP – Multi-Protocol Label Switching – Transport Profile
68. Glossary (2)
MSTP – Multiple Spanning Tree Protocol
N-PE – Network-facing Provider Edge device
NGN – Next-Generation Network
NMS – Network Management System
NSF – Non-Stop Forwarding
NSR – Non-Stop Routing
OAM – Operations, Administration, and Maintenance
ODU – Optical Data Unit
OOB – Out of Band
OTN – Optical Transport Network
p2mp – Point to Multi-point
PB – Provider Bridging
PBB – Provider Backbone Bridging
PBB-TE – Provider Backbone Bridging – Traffic Engineering
PE – Provider Edge
PHY – Physical Layer
PLSB – Provider Link State Bridging
PON – Passive Optical Network
POTS – Plain Old Telephone Service
PSN – Packet Switched Network
PW – Pseudowire
QoS – Quality of Service
RNC – Radio Network Controller
RSTP – Rapid Spanning Tree Protocol
RSVP-TE – Resource Reservation Protocol – Traffic Engineering (RSVP with MPLS traffic-engineering extensions)
SA – Source Address
SDH – Synchronous Digital Hierarchy
SONET – Synchronous Optical Network
SPT – Shortest Path Tree
STP – Spanning Tree Protocol
TDM – Time Division Multiplexing
TRILL – Transparent Interconnection of Lots of Links (https://datatracker.ietf.org/wg/trill/charter/)
69. Glossary (3)
UNI – User Network Interface
U-PE – User-facing Provider Edge device
VLAN – Virtual LAN
VPN – Virtual Private Network
70. Appendix: Word on Provider Bridging (PB) and Provider Backbone Bridging (PBB)
72. Provider Bridge (IEEE 802.1ad) Architecture – CE: Customer Equipment; UNI: User-to-Network Interface; CES: Core Ethernet Switch/Bridge; P-VLAN: Provider VLAN. [Figure: customer networks CE-A, CE-B, and CE-C attach via UNI-A, UNI-B, and UNI-C to a provider network of CES nodes running a spanning tree]
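The 802.1ad idea – pushing a provider S-tag outside the customer's own C-tag (Q-in-Q) – can be sketched as a simple frame manipulation. The field values below are illustrative; the TPIDs 0x88A8 (S-tag) and 0x8100 (C-tag) are the standard ones.

```python
import struct

STAG_TPID = 0x88A8   # 802.1ad service (provider) tag
CTAG_TPID = 0x8100   # 802.1Q customer tag

def add_provider_tag(frame: bytes, s_vid: int, pcp: int = 0) -> bytes:
    """Push an 802.1ad S-tag onto a customer frame (a sketch).
    The provider VLAN (S-VID) sits outside any customer C-tag,
    so provider switches forward on the S-tag alone."""
    tci = (pcp << 13) | (s_vid & 0x0FFF)
    s_tag = struct.pack("!HH", STAG_TPID, tci)
    return frame[:12] + s_tag + frame[12:]   # insert right after DA + SA

# A customer frame that already carries a C-tag (VID 10):
cust = b"\xaa"*6 + b"\xbb"*6 + struct.pack("!HH", CTAG_TPID, 10) + b"payload"
tagged = add_provider_tag(cust, s_vid=200)
print(hex(struct.unpack_from("!H", tagged, 12)[0]))   # -> 0x88a8
```

The customer's C-tag (and MAC addresses) ride through untouched; only the outer S-tag is examined in the provider network.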
74. Provider Backbone Bridging (PBB) Architecture [Figure: 802.1q customer sites (CPE A–D) attach to 802.1ad provider backbone networks, which are in turn interconnected by an 802.1ah provider backbone network]
Editor's Notes
Over the last few years, Ethernet technology has seen tremendous advancements in features, network architecture, switch/router system design, and integration with optical technologies. Our goal in today’s tutorial is to provide a clear, conceptual overview of optical Ethernet technology advances, network architectures, systems evolution, and benefits for military and defense network planners, network architects, and system engineers. [Do a poll to ask: 1. How many people are familiar with Ethernet technology? 2. How many are familiar with optical Ethernet technology? 3. How do they define “optical Ethernet”? (Write it down, for later reference) 4. How many are involved in Ethernet-based systems or network design? This is to get a feel for the background and knowledge-level of the audience, so we can structure our discussion accordingly.]
Today’s tutorial has 4 main parts. 1) We will first discuss the key attributes of the DoD’s net-centric data strategy and the desirable features of military-grade networks, and show what this means for the packet-switched and transport technologies themselves, the network architecture, and the systems deployed in them. Our goal will be to show that optical Ethernet technology meets many of these requirements, making it a viable candidate for use in military environments and networks. 2) To do so, we will first discuss the benefits of Ethernet technology for the military and military networks, and outline several applications of Ethernet in the military context. Next our discussion will focus on the basics of optical Ethernet – starting with the 3 roles of Ethernet in the network, through to the relationship between “Carrier Ethernet” and “optical Ethernet.” 3) Having done this, we will examine several macro architectural models for building MAN/WAN networks using optical Ethernet, and some of the key network design principles involved in each model. 4) And, finally, we will focus on the major recent developments in optical Ethernet technology that facilitate its adoption by the military/defense community. Towards the end, we will bring this all together by demonstrating how optical Ethernet technology meets the requirements and possesses the attributes outlined at the beginning of the tutorial.
Yammer is the military equivalent of Twitter. The DoD’s net-centric data strategy was originally formulated in the 2003-2004 time frame, initially in the form of high-level direction to various departments, and, over the years, became increasingly specific about implementation and policy. The past several years have been spent in both refining and propagating this strategy through the military establishment. Here are its key attributes: 1) Visible: implies it is available to a large audience of both known and un-anticipated users. 2) Rapid Discovery: implies that data should be quickly found. This happens by categorizing and tagging data assets with structured metadata. 3) Understandability: facilitated by rich, descriptive metadata. 4) Shared spaces: data should be accessible by ultra-large user groups (multiple departments or agencies in parallel), and so should be posted in spaces where quick and efficient access is possible. 5) Post-and-Process: this was a reversal of the military’s long-standing task-process-exploit-disseminate paradigm and its one-to-one, need-basis communication. Indeed, the net-centric strategy encourages the task-post-process-use paradigm, by encouraging the posting of raw/unprocessed data (with the right metadata), so it is available to many users in parallel. E.g. data from the Afghan war front collected by the military may now be available to the FBI, CIA, and numerous regional terror-fighting agencies. E.g. insurgents attack a unit using specific methods – this info. is usable by the Pentagon to create strategy, the PR dept. to see casualties, the FBI to capture criminals, and the CIA to see the flow of smuggled goods – opium. 6) Repurposing of data: with the data-sharing paradigm comes the need to keep the data distinct from the applications (and agencies) that may use it. This allows users to choose multiple applications to analyze, dissect, and exploit the same data.
By increasing the # of users and the # of ways in which the data is utilized, the DoD increases its ROI on data gathering. It also reduces the cost and effort entailed in duplicating data-gathering efforts across different agencies. 7) Handle Info. Only Once: as soon as data and applications are separated, it is natural to reduce duplicative data production/handling. By validating a user’s identity, different users may be given varying levels of access to the same data, thus requiring it to be processed, tagged, and handled only once (or minimally). 8) An important precept of the strategy was to actively collect user feedback on the process, and continually refine/improve it over time.
The specific strategic goals that emerge from the key attributes laid down by the DoD were to make data visible, accessible, understandable, trusted, interoperable, and responsive; to make it available to various communities of interest; and to institutionalize the strategy in various defense establishments. So, what do these goals mean? Visible: implies that the data is easily discoverable by potential users, who can manipulate the data, analyze it, and make decisions based on it. Accessible: implies a large # of users should be able to consume it, and the data location should be easily reachable. Understandable: implies using descriptive metadata to clarify the meaning and purpose of the data. Trusted: implies that the integrity and quality of the data must be vouched for by a reliable organization that gathered it/made it available. Interoperable: implies data should be usable by many, while keeping its accuracy & integrity intact. Communities of Interest: implies that data management is decentralized to dynamic groups of interest, and used/analyzed by them based on immediate operational needs. Plus, the underlying infrastructure (networks, systems) must support the ability to dynamically convene and synchronize the COI. Finally, two user-oriented goals are: Responsive: implies being attentive to user needs, especially those wrt content coverage, quality, and performance of data access. Institutionalized: implies allowing for procedures/policies to be established in each dept. for efficient data sharing.
We now look at the requirements of military-grade networks to understand what additional features are needed in networks that are designed for military/defense use.
Here is a chart summarizing the 9 key requirements/attributes of military-grade networks, from our perspective. I’m going to take a little time to explore each of these in detail, since they will impact the choice of technology, and the design of networks and systems deployed for the military. i) Rugged: military networks and network gear must operate in harsh environments – extreme weather (heat, cold, moisture) or demanding conditions (confined spaces, vibration, shock, corrosion, gases) and EMI/RFI (electromagnetic/radio-frequency interference). As a result, the gear must be able to withstand these extremes. ii) Secure: the military needs high-integrity data that is un-tampered and uncorrupted, since critical mission or tactical decisions depend on it, or sensitive info. (radar data, video images, communications) is being conveyed on it. iii) Reliable: critical and time-sensitive info. implies networks should be highly reliable, with built-in redundancy, and have the ability to recover quickly from network failures. iv) Hard QoS: warfighters increasingly use rich video/sensor data, and military communication is moving to multi-media – this increases b/w requirements and needs data movement with the right fidelity. Applications like cluster computing, grid computing, and virtualization all rely on moving data with the right quality through the network. Since timing/sync. info. is also transmitted over the network, it needs tight tolerances on delay, jitter, and loss. v) Fast connection setup: an important capability needed is the ability to do dynamic and quick call setup. This is needed to build COIs over the underlying infrastructure. (One needs signaling and ELMI extensions, for example, to implement this.)
vi) Manageable: with mixed communications, multiple media, large user bases, and dynamic content, having the ability to manage networks, diagnose problems, and control resources is key. vii) Highly Available: critical defense communications need six-nines (99.9999%) availability, or near-zero downtime, because loss of real-time info. from the battlefield, command centers, or between warfighters can mean the difference between victory and defeat! viii) Diverse Access: defense networks (Army, Navy, Air Force, Marines) utilize a very wide variety of technologies, from wireline to wireless, ranging from 64 Kbps dial-up links, to VSAT and satellite links, to optical fiber, microwave, Wi-Fi, etc. This mix of last-mile technologies allows the network to extend to the individual serviceman, vehicle, airborne end-node, or an aircraft, ship, or armoured truck. Plus, defense establishments are spread over widely varying geographic regions, with very different access technologies – copper, fiber, coax, wireless. So, clearly a wide variation in access types must be accommodated. ix) Support of legacy and advanced services/applications: again, defense networks utilize technologies ranging from the legacy to the ultra-modern, from T1 lines to multi-gigabit links, and from simple devices to complex Ethernet/IP routers. Thus, any technology adopted for military networks must support both legacy and advanced services, and be able to transition the legacy services to more advanced services over a period of time.
Having outlined the goals of the DoD’s net-centric strategy, as well as the key attributes of military-grade networks, we now map these attributes to the features/requirements imposed on the underlying technology, the system and network architecture. We look at this in two parts – focusing first on the implications of the net-centric strategy, and then on the implications of military-grade requirements.
1. Scalability – the need for many locations, many end-nodes, and ultra-large user populations implies scalability is a must. Technology: for the underlying technology this means having a large address space, the ability to create/manage hierarchy, and the presence (preferably) of an automated control plane that facilitates discovery, topology learning & control. Systems: must have large memory and processing for their address/routing tables on line cards and control cards, and the capacity to support a large number of tunnels to create paths between many locations/nodes. Architecture: the network architecture should support many end-nodes, and be designed to easily support hierarchy and traffic engineering (for tunnels). Additionally, the architecture must be conducive to a wide geographic reach, from access to metro to core. 2. Security – the need for data integrity/trust implies support of security mechanisms/techniques. Technology: must preferably support link, segment, and e2e security, the ability to isolate groups/classes of users, and the ability to detect breaches of configured policies designed to enforce security. Systems: must support encryption, authentication, and access control lists; operations like DPI on the line cards; user isolation via queueing mechanisms; intelligent memory management to both isolate and prevent contamination if one user misbehaves; and, finally, the ability to guard against attacks on security, e.g. DoS. 3. Manageability – with a large user population accessing remote data, both the network and data need to be manageable. Technology: must provide robust OAM tools and management interfaces/protocols. System: must support OAM tools/mechanisms, must be designed so that invoking OAM mechanisms does not tax forwarding performance, and must allow for remote access/management in a flexible way. Architecture: must provide for OOB control, such as that over an independent, robust DCN.
4. Dynamic Setup & Control – the need for multiple COIs to dynamically access data, and the use of multiple applications to process common data, requires the ability to dynamically set up communities and paths between end-nodes. Technology: should have a signaling mechanism to enable automated path setup, and/or an ability to do so efficiently in the management plane via the NMS. System: should support discovery – to facilitate creation of the topology – and a signaling protocol stack in its control plane; support for dynamically joining/leaving multicast groups is valuable for COIs. Architecture: allow for an OOB signaling network, and the setup of diverse virtual topologies over the same infrastructure (implies rich connectivity, many alternate paths, the ability to provide sufficient physical redundancy, and quick recovery from failures/errors).
Let us now look at what implications military-grade network requirements have on the trio of technology, system design, and network architecture. Rugged – the technology should be operable over multiple media, in different environments. Where possible it should be deliverable over robust media – e.g. fiber, but should also work over wireless or copper. Technology should be robust to harsh environments – so it should either not be affected by extreme conditions, or have means to recover from their impact – heat, temperature, moisture (that is, the media used for the technology should be robust to this). System: designed to work with conduction cooling, have a means of off-loading complex processing to a centralized engine (to reduce load on main line cards, thus making them compact, simpler, low power, and hence easy to cool). Architecture: should be able to integrate diverse media, and where possible use the most robust media for the need – e.g. fiber, where feasible, wireless where needed, etc. 2. Secure Technology: must have widely available standards for encryption, tunnels for traffic/user isolation, and ability to raise alarms when tampering occurs System: build data plane and control plane robust to DDoS and security attacks, incorporate hardware-based, fast encryption, have memory partitioning and queue management to isolate users, and segregate the impact of one non-performing, compromised or rogue user on others. Architecture: The network architecture and the DCN must both be robust to hacking/tampering. Be able to detect breaches, and rapidly propagate alarms. 3. Reliable Technology: Must have signaling to enable restoration, allow for setup/control of multiple/alternative paths, and have OAM capabilities for loopback, connectivity check, and path tracing. System: must have hardware/software redundancy and support NSF, NSR, and hitless software upgrades, and also quick failure/defect detection and propagation.
4. Hard QoS Technology: support network virtualization – tunnels, VLANs; provide packet headers that can mark priority to segregate, prioritize, and group/aggregate traffic (via technology-specific mechanisms); and support performance-measurement OAM to track perf. metrics. System: isolate traffic via queues and scheduling, with separate tables/memory to keep traffic of different priorities, classes, and applications segregated. Have the ability to signal, control, and manage tunnels via the system – allowing virtual networks (e.g. voice, best-effort, video) to run on the same infrastructure. Support advanced traffic management – queueing, marking, dropping, policing, shaping, hierarchical scheduling. Architecture: support provisioning and dimensioning, CAC to regulate traffic input, and TE to support smart traffic placement.
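As a concrete illustration of the "policing" bullet above, here is a minimal single-rate token-bucket policer (a sketch; the rate and burst values are illustrative, and a real policer would often re-mark out-of-profile packets to a lower priority rather than silently drop them):

```python
# Packets conforming to the committed rate/burst pass; the rest fail.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.burst = burst_bytes        # bucket depth (max burst)
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0

    def conforms(self, pkt_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True      # in-profile: forward
        return False         # out-of-profile: drop or re-mark

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)   # 1 KB/s, 1500 B burst
print(tb.conforms(1500, now=0.0))   # -> True (burst allowance)
print(tb.conforms(1500, now=0.1))   # -> False (only ~100 B refilled)
```

Hierarchical scheduling (H-QoS) composes stages like this one per class, per user, and per port.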
7. Diverse Last Mile Access Technology: should be cheap, easily upgradable, and support the capability to aggregate, yet keep segregated, different traffic types, using techniques such as PWs, VLANs, and aggregation. System: the systems should be multi-service, able to support a variety of interfaces – TDM, ATM, FR, IP, and other protocols – at a number of different data rates, and be able to aggregate this traffic appropriately. This requires not only the corresponding processing h/w and software, but also the ability to process/examine this data and queue/route it appropriately. Architecture: the architecture should provide for aggregation points or on-ramps, where diverse traffic can be terminated and eventually transferred to a common IP/MPLS or Ethernet core, thus simplifying the management and operation of the core networks.
At this point, we have discussed the requirements for military-grade networks and what that means for the technology, the system design, and the network architecture. Now we will focus on Ethernet: we first give the characteristics of Ethernet and optical Ethernet and how it is used in different parts of the network, and at the end we focus on how optical Ethernet meets the requirements outlined earlier. Before we delve into the details of optical Ethernet, it is useful to look at Ethernet itself, examining why it is a beneficial technology to use, and its various applications in the military environment today.
These are pretty self-explanatory and we walk through them.
Switched Ethernet has 3 roles – inside systems/devices, as networking infrastructure in the network, and as a fabric/interconnect within military and warfare systems. E.g. 1-10 Gb/s Ethernet is used between subsystems. Ethernet transport is used to evolve wireless backhaul and support IP-centric networks. Lately it is also used to transport sensor/radar and video data, and for distribution of timing and synchronization info. Real-time video can be transmitted over Ethernet by tagging packets, classifying traffic, and managing queueing. Examples of Ethernet used in the military: Marines/Air Force – Ethernet switches used in the Apache helicopter provide LAN connectivity to on-board IP-enabled computing devices. Navy – used aboard aircraft carriers (e.g. the Ronald Reagan) to provide sophisticated remote monitoring capabilities and an IP-based networking platform for Raytheon’s Ship Self-Defense System (SDSS). Army – the Future Combat Systems Integrated Computer Systems program used switched Ethernet to control the NLOS-LS (Non Line-of-Sight Launch System) platform using Gigabit Ethernet switches; WIN-T (Warfighter Information Network – Tactical) uses ruggedized off-the-shelf Ethernet switches and IP routers from Cisco, NET, etc.
Thus, we have so far looked at the key requirements of military networks and the DoD’s net-centric strategy, and what they mean for the underlying technology, its corresponding systems, and the network architecture, and we have looked at Ethernet in a general way. We will now move to discussing the characteristics of optical Ethernet and how it is used in different parts of the network (network architecture), and then we will wrap up by demonstrating how it meets the requirements outlined earlier.
Before defining the term “optical Ethernet,” it is useful to point out that the term “Ethernet” itself can apply to any one of the three roles of Ethernet technology: as a service, as a transport technology, and as a PHY layer. At the PHY layer we have the 1/10/100 G interfaces, wireless interfaces, and optical interfaces, standardized in the IEEE and ITU. Ethernet PHY refers to the framing and timing of the actual bits of the Ethernet frame, and their transmission over a physical medium – copper wire, coaxial cable, or optical fiber – to connect switches at the physical layer. Some common Ethernet PHYs are the 1 GE (IEEE 802.3z), 10 GE (IEEE 802.3ae), and 100 GE (IEEE 802.3ba) Ethernet PHYs. Note that Ethernet frames can also be embedded in other PHY framing standards, such as those in the ITU-T’s G.709 OTN (Optical Transport Network) standard.
Ethernet transport makes it possible to realize connection-oriented Ethernet (COE). COE, in essence, refers to the collection of control-plane protocols and data-plane settings that create a connection-oriented capability for transferring the frames of an Ethernet service. We mention that Ethernet transport could be provided either by enhancing Ethernet technology (e.g. as is done in Provider Backbone Bridging with Traffic Engineering, PBB-TE, in the IEEE 802.1Qay standard) or by a different technology (e.g. using MPLS-TP technology being developed jointly by the IETF & the ITU-T). Both of these forms of transport involve switching/routing data frames, and are, therefore, referred to as Layer 2 (or L2) transport. It is also possible to embed Ethernet frames in a different transport networking layer, such as the one provided by the ITU-T’s G.709 OTN (Optical Transport Network) standard. This form of transport involves switching/routing traffic at the optical channel data unit (ODU) level and is therefore referred to as Layer 1 (or L1) transport.
We define an optical Ethernet network as a network spanning a MAN/WAN that offers a carrier-grade Ethernet service, running over a connection-oriented Ethernet (COE) transport infrastructure over an optical PHY (cf. Figure 2). The optical PHY could be provided either by the OTN’s optical channel (OCh), or by an Ethernet PHY running over optics, and may be multiplexed onto a given fiber using CWDM/DWDM technology. (L1 can have the Ethernet MAC media-dependent layer as well; that is, the media access control of the Ethernet MAC is really part of L1.) This shows how we take Ethernet and enhance it to make it behave as a transport technology – e.g. PBB-TE is an enhancement to Ethernet to make it transport oriented. Or, we can use other technologies to make it connection oriented, e.g. MPLS-TP. We can have Ethernet ride directly over dark fiber (Ethernet PHY), or we can put it on widely available technologies such as OTN ODU, so that WDM can be used. (Thus, we can have Ethernet ride over transport and PHY, or we can have Ethernet go over L2 transport, which then rides on L1 transport and PHY.)
Carrier Ethernet was formalized by the work of the MEF in 2004-05, and is, in fact, the service component of Optical Ethernet. MEF defines it with 5 characteristics.
Standardized services refers to having a uniformly accepted definition of core services that serve as the building block for applications running atop them (more on these below). Scalability refers to a service that scales to millions of UNIs (end-points) and MAC addresses, spanning access, local, national, and global networks, with the ability to support a wide bandwidth granularity and versatile QoS options. Reliability refers to the ability to detect and recover from errors/faults without impacting customers, typically with rapid recovery times, as low as 50ms. Hard QoS implies providing end-to-end performance based on rates, frame loss, delay, and delay variation, and the ability to deliver SLAs that guarantee performance that matches the requirements of voice, video, and data traffic over heterogeneous converged networks. Service management implies having carrier-class OAM, and standards-based, vendor-independent implementations to monitor, diagnose, and manage networks offering Carrier Ethernet service.
The services defined by the MEF are in terms of an Ethernet Virtual Connection (EVC), which is defined as an association of two or more User Network Interfaces (UNIs) at the edge of a metro Ethernet network (MEN [1] ) cloud (i.e. subscriber sites), where the exchange of Ethernet service frames is limited to the UNIs in the EVC. The MEF defines 3 standardized services: E-Line (a point-to-point EVC), E-LAN (a multipoint-to-multipoint EVC), and E-Tree (a point-to-multipoint “rooted” EVC, where the root(s) can communicate with any of the leaves, but the leaves must communicate with each other only via the root). Thus, an Ethernet Private Line service is built using point-to-point EVCs, while an Ethernet Private LAN service is built using mp2mp EVCs. [1] Even though the MEF specifications refer to MENs (metro Ethernet networks), this is now a generic term that refers to the Carrier-Ethernet service enabled network, which can span a variety of access, metro, and long-haul networks.
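The E-Tree root/leaf connectivity rule reduces to a one-line predicate. Here is a toy model (the endpoint names are hypothetical; E-LAN would permit any-to-any, and E-Line is simply a two-endpoint association):

```python
# In an E-Tree EVC, roots may exchange frames with any endpoint, but
# leaf-to-leaf frames are not delivered directly (only via a root).
def etree_allowed(src_role: str, dst_role: str) -> bool:
    return src_role == "root" or dst_role == "root"

endpoints = {"hq": "root", "site-a": "leaf", "site-b": "leaf"}
print(etree_allowed(endpoints["hq"], endpoints["site-a"]))      # -> True
print(etree_allowed(endpoints["site-a"], endpoints["site-b"]))  # -> False
```

This is what makes E-Tree a natural fit for hub-and-spoke patterns such as mobile backhaul or multicast distribution, where spokes should not see each other's traffic.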
Here we illustrate the 3 services defined by the MEF, explained earlier.
Here we are illustrating the service, transport, and PHY components of an optical Ethernet network.
We just described the characteristics of optical Ethernet. These optical Ethernet technologies can be used in different parts of the network – access, aggregation, and core – to provide e2e connectivity.
There are many permutations and many types of networks one can build with optical Ethernet. In this section, we focus on some key examples of building networks using optical Ethernet, starting with use cases where only part of the network is Ethernet and progressing to cases where all parts of the network are built using Ethernet. Along the way, we assess where Ethernet fits best and where it makes sense.
Here is a high-level look at the applicability of optical Ethernet to the access, metro/aggregation, and core network segments. For each, we look at what optical Ethernet means, how it applies, and how it relates to the others. Note: briefly explain for the audience what LAG really is: a way to increase link capacity and/or guard against link failure. [E-LMI has 3 versions. V1 is fully configured, where you just tell the PE node what you need. V2 is an option where you learn the preconfigured options from the PE and you, as the CE, just pick one of the preconfigured profiles. V3 is fully negotiable (or has the maximum options), and was defined with OIF assistance.]
IEEE 802.1 is also working on LAG across multiple systems, i.e., multi-chassis LAG.
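The LAG idea mentioned above (spreading traffic over member links while surviving a member failure) can be sketched with a simple flow-hashing function. This is an illustrative sketch, not any vendor's actual hashing algorithm; the link names are invented:

```python
import zlib

def select_lag_member(src_mac, dst_mac, active_links):
    """Pick one physical member link of a LAG for a flow. Hashing on the
    MAC pair keeps each flow on a single link (avoiding reordering) while
    spreading different flows across the members."""
    key = (src_mac + dst_mac).encode()
    return active_links[zlib.crc32(key) % len(active_links)]

links = ["eth0", "eth1", "eth2", "eth3"]
flow = select_lag_member("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", links)

# If one member fails, the same hash simply redistributes flows over the
# surviving links, so capacity degrades gracefully instead of dropping out.
links.remove("eth1")
flow_after = select_lag_member("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", links)
```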
Here we show examples of how Ethernet is used. Assume that initially only the access is Ethernet, while the metro/core are IP/MPLS. We can do this today via MPLS LSPs, using labels. So, e2e it looks like an Ethernet Q-in-Q service with IP/MPLS transport.
This can be done today with existing gear, but it is not scalable, and we do not take advantage of Ethernet aggregation here. You need IP/MPLS LSPs e2e between each pair of customer end-points, leading to potentially millions of PWs/tunnels, which becomes a configuration and management issue.
Here we push Ethernet into the aggregation/metro. What we are doing is adding another provider header on top of the Ethernet header, thus hiding the customer MAC addresses completely from the provider network. So, we have a PBB service end-to-end. This is attractive because customer traffic is aggregated in the metro/core (yellow part), hiding the customer addresses. All the core sees are the u-PE addresses (not the customer addresses), which are potentially an order of magnitude or more fewer than the customer addresses. Meanwhile, IP/MPLS is confined to the core (blue). Because there are fewer u-PE boxes than CE boxes, the number of LSPs and PWs needed in the core is also substantially reduced. Thus, this shows how Ethernet moves into the metro.
Access+metro Ethernet solves the manageability and scalability problems by using PBB to aggregate the customer MACs into provider MACs, restricting PWs to the core alone. It also restricts the core to seeing only provider MACs (which is not the case with H-VPLS, which also solves the PW-explosion problem, but not the MAC-explosion problem).
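The scaling win is easy to quantify. A full mesh among n end-points needs n(n-1)/2 pseudowires; the numbers below are illustrative assumptions (not figures from this tutorial), but they show why meshing u-PEs instead of customer end-points changes the picture by orders of magnitude:

```python
def full_mesh_pws(n):
    """Number of pseudowires needed for a full mesh among n endpoints."""
    return n * (n - 1) // 2

# Assumed example: 2,000 customer end-points vs. 100 u-PE aggregation nodes.
ce_endpoints, u_pes = 2000, 100
print(full_mesh_pws(ce_endpoints))  # 1999000 PWs if IP/MPLS runs e2e
print(full_mesh_pws(u_pes))         # 4950 PWs when PBB confines MPLS to the core
```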
Here is the example where it is all Ethernet, from access and metro to the core. The all-Ethernet core here is a traffic-engineered core using PBB-TE, where the paths are pinned via configuration, and no MAC learning or STP is needed in the core. This allows OAM to run e2e and in segments, with uniform technology throughout.
We also discuss Ethernet use in mobile technology.
Today, mobile backhaul uses the mix of technologies shown, most of them based on TDM or ATM, but gradually evolving to IP/Ethernet.
Before we look at the architectures, here are some definitions of the backhaul components. The key observation is that as wireless services themselves move to packet-based data, this automatically creates a push for the transport technology to be packet-based rather than TDM, since that is a better match for the packet-based data that mobile networks now carry.
This shows the traditional backhaul, where the access is TDM/ATM, transported from the cell site on a TDM link over a SONET/SDH network, split at the back end into voice and data (ATM) traffic, and directed to the BSC and/or the RNC respectively. The key point is that the backhaul network is all TDM, and there are separate transmission facilities for packet/data and voice/TDM at the back end.
The new evolved network will look like the following: the access links can be TDM/ATM or, for 4G networks, can themselves be Ethernet connections, as shown. This traffic is all aggregated at the cell-site gateway, which is now a CE router/switch. The transmission to the backhaul network is now Ethernet, and the traffic is carried via PWs or via circuit emulation. The network itself becomes a Carrier Ethernet network, and the cross-connects are replaced by switches and routers. At the back end, the packet and TDM data is sent to the BSC, and from there to the wireless core network. Thus, all traditional legacy and new packet traffic can be transported over a common packet Ethernet network, which is scalable, configurable, flexible, and uses standard technology.
The steps required for signaling a PW in VPLS are the following: (1) bind the AC to the forwarder, and configure the VPN ID to be globally unique; (2) discover the remote PEs in the same VPLS, either via BGP, via configuration, or via a discovery server using RADIUS; (3) establish targeted LDP sessions to the peers; (4) issue label mappings, perform checks at reception, and record the labels.
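The steps above can be sketched schematically. This is not a real LDP implementation; all names, structures, and the label-assignment scheme are illustrative only:

```python
# Schematic sketch of the VPLS PW signaling steps (illustrative names only).

def setup_vpls_pws(local_pe, vpn_id, attachment_circuit, discovered_pes):
    # Step 1: bind the attachment circuit to the VPLS forwarder for this VPN ID
    forwarder = {"vpn_id": vpn_id, "acs": [attachment_circuit], "pws": {}}
    # Step 2: remote PEs in the same VPLS are assumed already discovered
    #         (via BGP, static configuration, or a RADIUS-based server)
    for peer in discovered_pes:
        # Step 3: establish a targeted LDP session to the peer
        session = ("targeted-ldp", local_pe, peer)
        # Step 4: issue a label mapping; a real implementation would validate
        #         the received mapping (VPN ID match, MTU, etc.) before
        #         recording the label. The label value here is a placeholder.
        local_label = hash((vpn_id, peer)) % 0xFFFFF
        forwarder["pws"][peer] = {"session": session, "label": local_label}
    return forwarder

fwd = setup_vpls_pws("PE1", 100, "ac-ge0/0/1", ["PE2", "PE3"])
assert set(fwd["pws"]) == {"PE2", "PE3"}
```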
Here is the detail of how the TDM or packet data is actually transported over the Ethernet packet network. Here is a case where we have TDM access, and the TDM data is carried over a PW across the Carrier Ethernet network. The PW can be used in the mobile backhaul scenario as a “wire” to carry all different types of data: TDM (chopped into packets via SAToP or CESoPSN), ATM, Ethernet, IP, ... The advantage now is that we can use the same packet network to perform synchronization, via IEEE 1588, which allows the distribution of clock all the way to the cell site/remote site. Note that this architecture and example, while cast in the mobile context, can in fact apply to any backhaul. The cell site could be a remote radar-array location or a missile site, and the backhauled data could be radar data or commands to operate the missile. So this architecture is actually for any data backhaul!
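The "chopping" idea behind structure-agnostic circuit emulation can be shown with a toy sketch. This is not wire-accurate SAToP (we omit the real control word and RTP fields); the 192-byte payload size is an assumption borrowed from typical E1 configurations:

```python
# Toy illustration of structure-agnostic TDM circuit emulation: the TDM byte
# stream is chopped into fixed-size payloads, each carried in a PW packet with
# a sequence number so the far end can detect loss/reordering and replay the
# stream at a constant rate.

def chop_tdm_stream(tdm_bytes, payload_size=192):
    packets = []
    for seq, offset in enumerate(range(0, len(tdm_bytes), payload_size)):
        packets.append({
            "seq": seq,  # used by the receiver for loss detection/reordering
            "payload": tdm_bytes[offset:offset + payload_size],
        })
    return packets

stream = bytes(range(256)) * 3          # 768 bytes of stand-in "TDM" data
pkts = chop_tdm_stream(stream)
assert len(pkts) == 4 and pkts[0]["seq"] == 0
```

At the far end, the payloads are reassembled in sequence-number order and played out against the recovered clock, which is exactly why clock distribution (IEEE 1588 or SyncE) matters in this architecture.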
Here we are putting the MEF services in the mobile context.
New advancements make optical Ethernet even more attractive for military applications, as there have been developments in all of these key areas over the last few years.
Here we simply walk them through the table, giving them an overview of what is happening in each area, which standards are being developed in each area, and which industry body is leading those developments. Now that we’ve summarized the key developments, we will talk about a few key ones that we think are extremely relevant for military needs – e.g. security, timing & synchronization, and OAM.
Ethernet standards provide security via LinkSec, which comprises MACSec and KeySec. KeySec is the key distribution/exchange protocol. MACSec defines the communication protocol between the 2 end nodes of a link, and provides 3 properties between the trusted nodes: authentication, data integrity, and confidentiality. The security association is established via KeySec and is used by MACSec.
The MACSec format is illustrated here. The LinkSec header sits just before the VLAN tag. It has its own Ethertype (akin to a protocol ID) to announce that LinkSec is in use. All of the fields after the LinkSec header are encrypted, and thus unreadable by intermediate nodes. At the end of the frame is the Integrity Check Value (ICV), which, like a CRC checksum, makes the transmission tamper-evident: any bit changed in transit is detected. The ICV ensures that no false packets can be inserted on the wire, since you would know if your communication was corrupted by the insertion of bits.
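The integrity-check idea can be demonstrated conceptually. Note this is only a stand-in: MACSec actually uses AES-GCM, which produces the encryption and the ICV in one pass; here HMAC-SHA256 plays the ICV's role just to show why a flipped or inserted bit is detected:

```python
import hmac, hashlib

# Conceptual stand-in for the MACSec ICV (real MACSec uses AES-GCM).
# Any modified or inserted bit changes the tag, so tampering is detected.

def add_icv(frame, key):
    """Append a 32-byte integrity tag computed over the frame."""
    return frame + hmac.new(key, frame, hashlib.sha256).digest()

def verify_icv(tagged_frame, key):
    """Recompute the tag and compare in constant time."""
    frame, icv = tagged_frame[:-32], tagged_frame[-32:]
    expected = hmac.new(key, frame, hashlib.sha256).digest()
    return hmac.compare_digest(icv, expected)

key = b"shared-secret"                 # key agreement is KeySec's job
tagged = add_icv(b"frame payload", key)
assert verify_icv(tagged, key)

tampered = bytes([tagged[0] ^ 1]) + tagged[1:]   # flip a single bit
assert not verify_icv(tampered, key)
```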
Ethernet today is a technology with some of the most comprehensive OAM capabilities, in our view. It has tools for a number of different critical areas, such as continuity check, link trace, loopback, etc. Might be good to explain link trace on the whiteboard, if time permits, and to contrast it with IP ping/MPLS ping: in Ethernet, link trace is done with one packet carrying a multicast MAC address, from which you then construct the path. You need to know a priori the spanning tree and the MAC addresses of the nodes along the path, so LT is used to verify the path. You know where to stop via the TTL and the target MAC address placed inside the LT message (this is why you need a priori information about the target). In addition, Ethernet OAM has performance measurement to ensure that SLAs can be met.
The specialty of Ethernet OAM is that it allows independent OAM to be run in different domains, even for the same flow (or VLAN). So you can run OAM e2e between customer boxes, at the operator level, or at the service-provider level. Thus, each entity (operator, customer, provider) can run its own OAM, which is very valuable. The 8 maintenance-domain levels (0-7) can be assigned to the entities, e.g., levels 5-7 to the customer, 3-4 to the provider, and 0-2 to the operator, as specified in 802.1ag.
Provider: gives services to the end customer. Operator: provides the physical infrastructure for the provider in a given segment, and can run its own OAM. Here is an example of a loopback running at multiple levels on the same service.
A boundary clock is like a regenerator: after many hops, the boundary node generates a new clock using a PLL; before that point, you are using transparent clocking. The boundary clock becomes a “master” for the downstream segment of the network. With Ethernet, we can now use packet-based synchronization, so we do not need to run a dedicated wire everywhere; the same packet network can synchronize both clock and time-of-day. How it works is that a grandmaster (GM) clock provides synchronization messages, which are received by the slave or boundary clocks running their own PLL loops. These clocks use the sync messages and other messages (detailed later) to compensate for the jitter within the systems and the network. Thus, they all keep their clocks in sync with the master. The master/grandmaster is the one that gets the GPS reference. Note that the delay request and delay response can actually traverse different paths, which is why you take their average when computing the offset adjustment.
How this works is that the master sends a Sync message at time t1 and encodes that time into the packet and/or into a subsequent Follow-Up message (used in 2-step operation, when the MAC in the master cannot timestamp the Sync message at the exact instant it leaves the master). The slave receives it at time t2 and records this time. The slave then sends a Delay Req. message, recording its departure time t3; the master records the arrival time t4 and returns it in a Delay Resp. message. At this point, we know the master-to-slave and slave-to-master delays, and take their average to compute the offset or correction. This explanation glosses over some details, such as how to account for the jitter accumulated within each node, but the standard has provisions for that, allowing the slave to subtract this total jitter from its calculations, which permits up to nanosecond-level accuracy between the two clocks. We must be able to tell them how accurate IEEE 1588 synchronization is; I believe it reaches nanosecond levels precisely because of the correction factors built into the protocol.
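The offset and delay computation from the four timestamps just described can be written down directly. This assumes a symmetric path delay (the standard's correction fields refine that assumption); the numbers in the example are invented for illustration:

```python
# IEEE 1588 offset/delay calculation from the four timestamps:
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req
# Assumes symmetric one-way path delay.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs. master
    delay  = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Invented example: slave clock runs 5 units fast, true one-way delay is 10.
# Sync leaves at t1=100 (master time), arrives at t2=100+10+5=115 (slave time);
# Delay_Req leaves at t3=200 (slave time), arrives at t4=200-5+10=205 (master).
offset, delay = ptp_offset_and_delay(t1=100, t2=115, t3=200, t4=205)
assert (offset, delay) == (5.0, 10.0)
```

The slave then subtracts the computed offset from its clock, bringing it into agreement with the master.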
In this concluding part, we will now bring all of what we’ve previously discussed together, and demonstrate how optical Ethernet is able to meet a majority of the military network requirements we started with.
This is just walking them through the table, and pointing out the capabilities that support each requirement.
MEF 8 puts the circuit traffic directly into Ethernet frames (after chopping it up and adding a specific Ethertype).
Ethernet has many capabilities, and more are continually being added, making it a very versatile technology usable in many applications, from access to aggregation to core, in data centers, and for mobile backhaul. Further, you can mix and match it with other technologies, thanks to its interworking capabilities. It is a well-established, well-known technology that has been around for many years and is cheaper than the alternatives. It is therefore very suitable for net-centric military applications. We are not saying that it must be used everywhere, but where it makes sense, and where its characteristics fit the application or network segment, it is certainly a strong candidate. It therefore adds value in many applications.