International Journal of Computer Engineering and Technology (IJCET), ISSN 0976 – 6367(Print), ISSN 0976 – 6375(Online), Volume 1, Number 1, May–June (2010), pp. 92–102, © IAEME

SHARED BANDWIDTH RESERVATION OF BACKUP PATHS OF MULTIPLE LSPs AGAINST LINK AND NODE FAILURES

P. Praba, Research Scholar, Anna University of Technology Coimbatore, Coimbatore – 641 047

Dr. S. Balasubramanian, Research Supervisor, Anna University of Technology Coimbatore, Coimbatore – 641 047

ABSTRACT

Resiliency has always been a main concern in network engineering and maintenance. The main task of the Multi Protocol Label Switching (MPLS) recovery mechanism lies in controlling the traffic flow in the network and finding a backup path to reroute traffic in the event of a network failure. The bandwidth reserved on each link along the backup paths can be shared by all the affected service LSPs, provided the protected failure points are not expected to fail simultaneously. To support backup path selection, some algorithms require extra information to be carried by routing and signaling protocols. In this paper, the efficiency of the distributed bandwidth management of restoration backup path selection algorithms used for additional information distribution and collection is analyzed.

INTRODUCTION

As service providers move all their applications to IP/MPLS backbone networks, dynamic provisioning of bandwidth-guaranteed paths with fast restoration capability becomes more and more crucial. In recent times, MPLS has gained a lot of attention due to its increased restoration flexibility and high reliability for services. In MPLS [1], the ingress LSR encapsulates the packet with labels and forwards the packets along
label switched paths (LSPs). These LSPs behave like virtual traffic trunks that carry the flow aggregates belonging to the same "forwarding equivalence class". Flow aggregates, when combined with explicit routing of bandwidth-guaranteed LSPs, enable service providers to traffic engineer their networks [2] and to dynamically provision bandwidth-guaranteed paths. Fast restoration allows backup paths to be set up simultaneously with the active path, thus ensuring quick restoration of an LSP upon failure.

Recently there has been a great deal of work addressing restoration functionality and schemes in IP/MPLS networks [3]–[6], [11]–[14], [16], as well as how to manage restoration bandwidth in a network and select optimized restoration paths [7]–[10]. However, all of them deal with path-based end-to-end LSP restoration and routing protocol fast re-convergence [13], [14]. When MPLS restoration times must be comparable to SONET restoration times, MPLS local restoration [2] is a faster alternative to path restoration. Local restorability means that upon a link or node failure, the first node upstream from the failure must be able to switch the path to an alternate preset outgoing link, so that path continuity with bandwidth guarantees is restored by a strictly local decision. Local restoration implies that to successfully route a path set-up request, an active path and a bypass backup path for every link and node used by the active path must be determined. The best related paper is [6], which deals with backup bandwidth sharing for local restoration. Since the backup LSPs are pre-established, even though they do not consume bandwidth before a failure happens, the network has to reserve enough restoration bandwidth to guarantee LSP restoration for any failure. Without bandwidth sharing among the backup LSPs of different service LSPs, the network may need to reserve much more bandwidth on its links than necessary.

The Internet Engineering Task Force (IETF) has published RFC 4090 [2] to address the necessary signaling extensions for supporting MPLS fast reroute, and how to use the path merging technique to share bandwidth on common links among backup paths of the same service LSP. However, no solution has been provided so far on how to share bandwidth among backup paths of different service LSPs. The restoration model considered in this paper is based on the observation that restoration bandwidth can be shared by multiple backup LSPs so long as their protected service LSP path segments are not susceptible to simultaneous failure [6], [8], [10], [21], [22].
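The sharing condition just stated, that backup LSPs may share reserved bandwidth only if their protected segments cannot be hit by the same single failure, can be sketched as follows. The path encoding and function names are ours, for illustration only:

```python
def failure_events(path):
    """Single-failure events a path segment is exposed to: each link it
    traverses, plus each intermediate (non-endpoint) node."""
    links = {frozenset(pair) for pair in zip(path, path[1:])}
    nodes = set(path[1:-1])
    return links | nodes

def failure_disjoint(seg_a, seg_b):
    """True if no single link or node failure affects both protected
    segments, i.e. their backup LSPs may share reserved bandwidth."""
    return not (failure_events(seg_a) & failure_events(seg_b))
```

For example, service links A-B and E-F are failure disjoint, so their backups may share; two segments that traverse the same link or the same intermediate node are not.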
This paper discusses the efficiency of the distributed bandwidth management of restoration path selection algorithms. The paper is organized as follows. Section II gives an overview of the options and features of restoration models. Section III gives a review of restoration path selection algorithms.

RESTORATION MODEL – AN OVERVIEW

A. Online Routing

Dynamic routing of LSP segments is handled by on-line algorithms that route both the active path and the backup path for each link or node while simultaneously optimizing network resource utilization. The objective of the algorithm is to compute an active path and a backup path for each node or link in the active path. The connection setup fails if sufficient bandwidth is not available to set up the active path or the backup path. Only single link or node failures are considered in this paper.

B. Restoration modes

The two modes of restoration are end-to-end (or path) restoration and local restoration. In end-to-end restoration, a backup path disjoint from the active path is provided from the source to the destination for each request. When a link or node fails, the failure information has to propagate back to the source, which in turn switches all the demands to the backup path, as illustrated in Figure 1 and Figure 2. This information propagation to the source makes it unacceptable for many applications. In the case of local restoration, the backup paths are set up locally and therefore failure information does not have to propagate back to the source before connections are switched to the backup path.

Figure 1 Path Restoration. Figure 2 Path Restoration on Link Failure (primary path, backup path, and fault indication signal between S and T).
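The operational difference between the two modes can be sketched with a small example. The path encoding and helper names below are illustrative, not from the paper; the point is only who performs the switch-over and how far the failure indication must travel:

```python
def switching_node(active_path, failed_link, mode):
    """Return the node that redirects traffic onto the backup path.

    active_path: list of nodes, e.g. ["S", "a", "b", "T"]
    failed_link: (u, v) with u immediately upstream of v
    mode: "path" (end-to-end) or "local"
    """
    u, _ = failed_link
    if mode == "path":
        # Failure indication propagates back to the source, which switches.
        return active_path[0]
    # Local restoration: the first node upstream of the failure switches.
    return u

def notification_hops(active_path, failed_link, mode):
    """Hops the failure indication travels before the switch-over,
    a rough proxy for the restoration delay difference."""
    u, _ = failed_link
    return active_path.index(u) if mode == "path" else 0
```

On the path S-a-b-T with link (b, T) failed, path restoration switches at S after a two-hop notification, while local restoration switches at b immediately.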
Figure 3 Backup Path for Single Element Failure (backup path for the failure of node a and links i and j; backup path for the failure of link l).

Restoration has to protect against single link or single element (node or link) failures. The amount of resources needed to provide backup for the second model is greater, since a node failure results in multiple link failures. In the single link failure model, when a link fails, the two nodes at the end points of the link detect the failure and immediately switch all the demands onto the backup path. In the single element failure model, when a node fails, it is assumed that all the links incident on that node fail. This is detected as in the link failure case and the demands are routed around the failed node. The algorithms discussed in this paper can handle all single link and node failures, but do not provide a backup if the source or the destination of the traffic fails, as illustrated in Figure 3.

C. Shared Reservation

A protected LSP in an MPLS network has an active path and bandwidth reserved along the backup paths for each link or node along the active path. The bandwidth reserved on each link along the backup paths can be shared across multiple backup paths of the same LSP and multiple backup paths of different LSPs, provided the protected failure points are not expected to fail simultaneously. These two types of bandwidth sharing are referred to as inter-demand sharing and intra-demand sharing, respectively.
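The reservation rule implied by this sharing is that each backup link needs only enough reserved bandwidth to cover the worst single failure, i.e. the maximum, not the sum, of the traffic any one failure would reroute onto it. A minimal sketch, with an assumed input layout:

```python
from collections import defaultdict

def shared_reservation(backups):
    """backups: list of (failure_event, backup_links, bandwidth) tuples,
    one per protected element of each service LSP.

    Returns the bandwidth to reserve on each backup link: the maximum,
    over single-failure events, of the traffic rerouted onto that link."""
    # per-link, per-failure rerouted traffic
    per_failure = defaultdict(lambda: defaultdict(int))
    for failure, links, bw in backups:
        for link in links:
            per_failure[link][failure] += bw
    return {link: max(by_f.values()) for link, by_f in per_failure.items()}
```

For two failure-disjoint one-unit demands whose backup paths overlap on one link, the shared link is reserved at max(1, 1) = 1 unit, so five units suffice in total where unshared reservation would need six.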
The design objective is that the bandwidth reserved on the backup paths must be sufficient to recover all affected service LSPs in the event of any link/node failure. Figure 4 shows a simple example illustrating a shared reservation, which comes from [7]. The figure shows a network with 6 nodes and 7 links. Suppose an LSP 1 request asking for one unit of bandwidth from A to B arrives at node A. Node A selects A-B for the service path and A-C-D-B for the backup path. When these LSPs are established, one unit of bandwidth is allocated on link AB, and one unit of bandwidth is reserved on each of links AC, CD, DB. Subsequently, another protected LSP 2 request asking for one unit of bandwidth from E to F arrives at node E. Node E selects E-F for the service path and E-C-D-F for the restoration path. In this example, when setting up the second protected LSP, it is unnecessary to reserve an additional unit of bandwidth on link CD, because the two service paths AB and EF are failure disjoint, i.e., they are not subject to simultaneous failure under the single failure assumption. Thus, a total of five units of bandwidth are reserved to protect both service paths against any single link failure, whereas without shared reservations a total of six units would be needed. Sharing the reserved bandwidth among the backup paths of different protected LSPs reduces the total reserved bandwidth required.

Figure 4 Shared Reservation Example

D. Partial and Complete Routing Information

In MPLS networks, a link state routing protocol, such as the Open Shortest Path First protocol (OSPF) or the Intermediate System-Intermediate System protocol (IS-IS), is used to distribute network topology information to each node in a particular network routing domain. Traditionally, these protocols have only propagated link up/down status among the nodes. To support path computation in MPLS networks, both OSPF and IS-IS have been extended to propagate additional information about each link, such as available bandwidth, bandwidth in service, etc. [17], [18]. The following link state information is flooded by the routing protocol; this is the so-called partial information of paper [8]. Note that this information is distributed via the OSPF/IS-IS traffic engineering extensions.

1) Reserved bandwidth R[e]: this bandwidth is not used on unidirectional link e when the network is in a non-failure condition, but may be used by the restoration process upon failure;

2) Available bandwidth A[e]: this bandwidth is free to be allocated to service bandwidth or reserved bandwidth as new LSP requests arrive. Note that the total bandwidth on unidirectional link e is R[e] + A[e] + S[e], where S[e] is the total bandwidth consumed by service paths on link e. The three types of bandwidth share a common bandwidth pool;

3) Administrative weight W[e]: this quantity is set by the network operator and may be used as a measure of the heuristic "cost" to the network. A path with a smaller total weight is typically preferable.

This information is used to select the service path for each LSP request. However, to support backup path selection, some algorithms require extra information to be carried by routing and signaling protocols [7], [10]. There are three possible scenarios based on the information available to the routing algorithm for achieving maximum bandwidth sharing on the backup paths.

The first scenario that we consider is what we call the no information scenario.
In this case, we assume that the only information the routing algorithm has about the network is the residual (available) bandwidth on each link. In this scenario, the amount of bandwidth utilized separately by the active and the backup paths on each link is not known; only the total used bandwidth is known.

In the second scenario, we assume that the routing algorithm has complete information, i.e., it knows the routes of the active and backup paths of all the connections currently in progress.
In the third, partial information, scenario the information available to the routing algorithm is slightly more than in the no information scenario. The additional information is that for each link, instead of knowing only the total bandwidth usage, we now separately know the total bandwidth used by active paths and the total bandwidth used by backup paths.

FAST RESTORATION ALGORITHMS

MPLS fast reroute is a relatively new scheme and it is still undergoing the IETF standardization process [2]. The Internet draft [2] focuses on the fast reroute scheme definition and the backup establishment signaling protocol, leaving backup path selection and implementation open for vendor/carrier innovation. Much work has been done on end-to-end and segment shared backup path selection, but few papers have been published on MPLS fast reroute backup path selection. The basic idea of backup path sharing and of backup path control messages carrying service path information has been used before [3], [22], but both apply to end-to-end backup path sharing only. A practical algorithm should maximize restoration bandwidth sharing among the backup paths of the same or different service paths without sacrificing performance and scalability.

Papers [6], [8] classified the routing information models as minimal information, partial information and complete information in end-to-end restoration. In [6], a few algorithms to compute backup paths for link/node failures on the service path, with the goal of maximizing backup bandwidth sharing, were presented. The key idea of [6] is that restoration problems can be formulated as integer programming problems, which are then solved using heuristic algorithms.
The cost of backup paths can be determined by solving shortest path problems, one for each link in the network. Let θ(u, v) be the cost of using link (u, v) on the backup path if link (i, j) is used in the active path. Let F_ij represent the total amount of bandwidth reserved for the demands that use (i, j) on the active path, and G_uv the total amount of bandwidth reserved for the demands that use (u, v) on the backup path. For a current demand of b units of bandwidth, θ(u, v) is defined as

    θ(u, v) = 0                   if F_ij + b ≤ G_uv and (i, j) ≠ (u, v)
    θ(u, v) = F_ij + b − G_uv     if F_ij + b > G_uv, R_uv ≥ F_ij + b − G_uv and (i, j) ≠ (u, v)
    θ(u, v) = ∞                   otherwise

The backup for link (i, j) can start at node i but can end at any node on the path from j to t. This case is handled by executing the shortest path algorithm backwards, starting at the sink. To maintain intra-demand sharing, each node maintains an m-vector that gives the amount of bandwidth reserved for the current demand for backing up all the links from the given node to the destination. Modifying the cost of the links in the computation of the backup costs can be used to account for node failures. The cost of using link (u, v) in the partial information case is defined by the same piecewise expression, with F_ij, G_uv and R_uv taken as the aggregate quantities flooded by the routing protocol.

However, these algorithms assume that partial aggregated information and complete information [6] are available. Papers [6], [8] also find that the complete information model is able to achieve optimized backup selection, but complete information dissemination may not be scalable; thus, the complete information model fits a centralized algorithm. Partial information can be easily collected via current routing information extensions; thus, the partial information model fits a distributed algorithm.

A distributed algorithm under the complete information model for MPLS fast reroute has been proposed in [15]. The distributed bandwidth management method assumes the use of RSVP-TE extensions [19] instead of the routing protocol to collect the extra information, and achieves restoration bandwidth sharing among the backup paths of different service paths independent of the backup path selection algorithm. CR-LDP [20] also requires similar extensions.
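The piecewise backup-link cost defined above can be implemented directly. A sketch with illustrative names, where INF marks a link that cannot host the backup:

```python
INF = float("inf")

def backup_link_cost(F_ij, G_uv, R_uv, b, same_link=False):
    """Cost of using link (u, v) on the backup path when link (i, j)
    carries the active path and the new demand needs b units.

    F_ij: bandwidth of demands using (i, j) on their active path
    G_uv: bandwidth already reserved on (u, v) for backup paths
    R_uv: residual (available) bandwidth on (u, v)
    """
    if same_link:                # the backup may not reuse the protected link
        return INF
    extra = F_ij + b - G_uv      # reservation needed beyond what is shared
    if extra <= 0:               # existing backup reservation suffices
        return 0
    if R_uv >= extra:            # residual bandwidth covers the increment
        return extra
    return INF                   # link cannot protect this demand
```

Running a shortest path computation with these link costs then yields the cheapest backup path for the protected element, which is the role the cost plays in the algorithms of [6].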
The paper also designed the internal node data structures and described how to maintain them at each node during LSP creation and deletion. This scheme scales well due to two facts: (1) it does not require each LSR to maintain detailed information on each LSP's service path and restoration paths; only two arrays (denote them here B_l[f] and T_f[l]; the original symbols were lost in extraction) are maintained at each node. B_l[f] is maintained by the responsible node of each unidirectional link l along a backup path, to keep track of the bandwidth on l that has been reserved to protect against failure f; it is updated during the signaling process of LSP creation and deletion. T_f[l] is maintained by the responsible node of each failure f along a service path, to keep track of the bandwidth on l that has been reserved to protect against the failure f itself. The sharable restoration bandwidth on each unidirectional link l with respect to failure f is then R[l] − CM[l][f], where CM[l][f] is a conflict matrix that represents the amount of traffic that will be rerouted onto unidirectional link l when failure f occurs. The first part of this information is disseminated via routing protocol exchange. To distribute and collect the second part, three new signaling messages, TELL, ASK and REPLY, were added. The algorithm then resets the link weights based on the sharable bandwidth and the bandwidth requirement b of the traffic request, and selects the disjoint shortest path as the restoration path. A similar idea has been applied in [7], [10], which assumed end-to-end shared mesh restoration.

CONCLUSION

We analyzed the problem of fast local restoration, which requires efficient distributed bandwidth management to set up a bypass path for every node or link traversed by the service path. The objective of routing message or signaling extensions is to manage the bandwidth usage of each connection and to optimize the use of network resources while protecting against single link or node failures. Approaches based on efficient distributed routing and signaling protocols were compared, and we conclude that significant cost savings can be achieved through sharing of bandwidth among the backup paths of different service LSPs.

REFERENCES

[1] Rosen et al., "Multiprotocol Label Switching Architecture", IETF RFC 3031, 2001.

[2] P. Pan, G. Swallow, and A. Atlas, "Fast Reroute Extensions to RSVP-TE for LSP Tunnels", IETF RFC 4090, 2005.
[3] S. Kini et al., "Shared Backup Label Switched Path Restoration", IETF Internet draft, work in progress, May 2001.

[4] V. Sharma and F. Hellstrand, "Framework for Multi-Protocol Label Switching (MPLS)-Based Recovery", IETF RFC 3469, 2003.

[5] C. Gruber, "Bandwidth requirements of MPLS protection mechanisms", 7th INFORMS TELECOM, Boca Raton, FL, 2004.

[6] M. Kodialam and T. Lakshman, "Dynamic routing of locally restorable bandwidth guaranteed tunnels using aggregated link usage information", IEEE INFOCOM, 2001.

[7] G. Li, D. Wang, C. Kalmanek, and R. Doverspike, "Efficient distributed path selection for shared restoration connections", IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 761–771, October 2003.

[8] M. Kodialam and T. Lakshman, "Dynamic routing of bandwidth guaranteed tunnels with restoration", IEEE INFOCOM, 2000.

[9] Y. Liu, D. Tipper, and P. Siripongwutikorn, "Approximating optimal spare capacity allocation by successive survivable routing", IEEE INFOCOM, 2001.

[10] C. Qiao and D. Xu, "Distributed Partial Information Management (DPIM) for survivable networks", IEEE INFOCOM, 2002.

[11] R. Doverspike and J. Yates, "Challenges for MPLS in optical network restoration", IEEE Communications Magazine, February 2001.

[12] Y. Xiong and L. Mason, "Restoration strategies and spare capacity requirements in self-healing ATM networks", IEEE/ACM Transactions on Networking, vol. 7, no. 1, February 1999.

[13] A. Basu and J. Riecke, "Stability issues in OSPF routing", ACM SIGCOMM, 2001.

[14] C. Alaettinoglu, V. Jacobson, and H. Yu, "Towards Millisecond IGP Convergence", IETF Internet draft, draft-alaettinoglu-isis-convergence-00.txt, November 2000.

[15] Dongmei Wang and Guangzhi Li, "Efficient Distributed Bandwidth Management for MPLS Fast Reroute", IEEE/ACM Transactions on Networking, vol. 16, no. 2, April 2008.

[16] R. Sabella, M. Settembre, G. Oriolo, F. Razza, F. Ferlito, and G. Conte, "A multilayer solution for path provisioning in new generation optical/MPLS networks", J. Lightw. Technol., vol. 21, no. 5, pp. 1141–1155.

[17] D. Katz et al., "Traffic Engineering (TE) Extensions to OSPF Version 2", IETF RFC 3630, September 2003.

[18] H. Smit et al., "Intermediate System to Intermediate System (IS-IS) Extensions for Traffic Engineering (TE)", IETF RFC 3784, June 2004.

[19] D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels", IETF RFC 3209, December 2001.

[20] J. Ash et al., "LSP Modification Using CR-LDP", IETF RFC 3214, January 2002.

[21] P. Ho, J. Tapolcai, and H. Mouftah, "On achieving optimal survivable routing for shared protection in survivable next-generation internet", IEEE Trans. Reliability, vol. 53, no. 2, pp. 216–225, June 2004.

[22] S. Han and K. G. Shin, "Fast restoration of real-time communication service from component failure in multi-hop networks", ACM SIGCOMM, 1997.