Fast Convergence in IP Network


  1. Fast Convergence in IP Network
     Md. Abdullah-Al-Mamun (mamun@fiberathome.net)
     Sumon Ahmed Sabir (sumon@fiberathome.net)
  2. Why?
     Ensure uninterrupted service for:
     1. Voice calls
     2. Video calls
     3. Business-critical traffic (Flexiload, banking transactions)
     4. Mission-critical traffic (…)
  3. Target Network
     • Transport networks carrying voice and video traffic
     • Transport networks carrying business- and mission-critical traffic
     • High-volume backbone networks
     • Aggregation points of any network
  4. Design Considerations
     • Network topology
     • IGP selection
     • IP plan
     • Type of service delivery
     • Service takeover and handover points
  5. Better IP Plan, Better Convergence
     • A domain/area-based IP plan should be in place to minimize the number of prefixes
     • Prefix or area summarization is very effective for aggregating individual small prefixes within an area
     • OSPF area-based route filtering can be used to keep the FIB table smaller

     router ospf 65000
      router-id 10.255.255.15
      ispf
      area 1 range 10.10.0.0 255.255.252.0
      area 1 filter-list prefix BB-ROUTE-IN in
     ip prefix-list BB-ROUTE-IN seq 5 permit 10.255.255.0/24 le 32
     ip prefix-list BB-ROUTE-IN seq 10 permit 10.0.0.0/22 le 32
     ip prefix-list BB-ROUTE-IN seq 150 deny 0.0.0.0/0 le 32
  6. IGP Fast Convergence
     • Failure detection: time to detect the network failure, e.g. an interface-down condition
     • Event propagation: time to propagate the event, i.e. flood the LSA across the topology
     • SPF run: time to perform SPF calculations on all routers upon reception of the new information
     • RIB/FIB update: time to update the forwarding tables on all routers in the area
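     The deck tunes each of these stages on Cisco IOS in the slides that follow. As a minimal sketch, the four stages map onto configuration knobs roughly like this; the interface name is illustrative, and the values are the ones quoted later in the deck:

     ! Failure detection: physical-layer detection and/or BFD
     interface GigabitEthernet0/0/0
      carrier-delay msec 0
      ip ospf bfd
      bfd interval 50 min_rx 50 multiplier 3
     ! Event propagation and SPF run: throttle LSA generation and SPF scheduling
     router ospf 65000
      ispf
      timers throttle lsa all 10 100 1000
      timers throttle spf 10 100 1000
     ! RIB/FIB update: keep the prefix count small, e.g. with area summarization
      area 1 range 10.10.0.0 255.255.252.0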
  7. Fast Failure Detection Process
     There are three methods to detect a link failure:
     1. IGP keepalive timers / fast hellos, with a dead/hold interval of one second and sub-second hello intervals. This is CPU hungry.
     2. carrier-delay msec 0 (physical-layer detection).
     3. BFD, an open standard that is more reliable than IGP keepalives / fast hellos.

     interface GigabitEthernet0/0/0
      ip ospf bfd
      bfd interval 50 min_rx 50 multiplier 3
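     For comparison, the other two detection methods from the list can be configured roughly as follows; the interface name and the hello multiplier are illustrative, not taken from the deck:

     interface GigabitEthernet0/0/0
      ! Method 1: OSPF fast hellos, sub-second hellos with a one-second dead interval (CPU hungry)
      ip ospf dead-interval minimal hello-multiplier 4
      ! Method 2: physical-layer detection, no debounce delay when the link goes down
      carrier-delay msec 0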
  8. Event Propagation (after a link-down event)
     • LSA generation delay. Command: timers throttle lsa <initial> <hold> <max_wait>, e.g. timers throttle lsa 0 20 1000
     • LSA reception delay: the sum of the ingress queueing delay and the LSA arrival delay. Command: timers pacing retransmission 100
     • Processing delay: timers pacing flood (ms), with a default value of 55 ms. Command: timers pacing flood 15
     • Packet propagation delay: about 12 usec for a 1500-byte packet over a 1 Gbps link. Command: N/A
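     Collected under a single OSPF process, the propagation-related timers from this slide would look roughly as follows. This is a sketch using the values quoted above; the process number is illustrative, and on some IOS releases the LSA throttle requires the all keyword (timers throttle lsa all 0 20 1000):

     router ospf 65000
      timers throttle lsa 0 20 1000
      timers pacing retransmission 100
      timers pacing flood 15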
  9. SPF Calculation
     • Incremental SPF (iSPF) further minimizes the amount of calculation needed when partial changes occur in the network
     • iSPF is enabled with the ispf command under the OSPF process

     router ospf xxx
      ispf

     show ip ospf statistics
       OSPF Router with ID (10.255.255.15) (Process ID 65000)
       Area 0: SPF algorithm executed 18130 times
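     iSPF reduces the work done per SPF run; how soon a run is scheduled is controlled separately by the SPF throttle timers. A minimal sketch combining the two, using the 10/100/1000 ms values from the configuration template later in the deck (the process number is illustrative):

     router ospf 65000
      ispf
      timers throttle spf 10 100 1000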
  10. RIB/FIB Update
      Link/node down → SPF calculation → RIB update → FIB update → communication restored
      The fewer the prefixes, the less time it takes to converge the RIB and FIB.
  11. RIB/FIB Update
      • After completing the SPF computation, OSPF performs a sequential RIB update to reflect the changed topology. The RIB updates are then propagated to the FIB table.
      • The RIB/FIB update process may contribute the most to convergence time in topologies with a large number of prefixes, e.g. thousands or tens of thousands.
      • The platform also matters: a higher-capacity CPU and more RAM will deliver better performance.
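      Since the RIB/FIB update time scales with the number of prefixes, one complementary knob (beyond the area summarization shown earlier) is OSPF prefix suppression, which the configuration template later in the deck describes as suppressing transit link prefixes. A minimal sketch, with an illustrative process number; the show commands are standard ways to check table size:

      router ospf 65000
       prefix-suppression      ! do not advertise transit-link prefixes, so fewer routes reach the RIB/FIB
      !
      show ip route summary    ! number of routes feeding the RIB
      show ip cef summary      ! number of entries programmed into the FIB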
  12. Final Calculation
      Event                                                Time (ms)   Remarks
      Failure detection delay (carrier-delay msec 0)       0           about 5-10 ms worst case to detect
      Failure detection delay (BFD)                        150         multiplier 3 with a 50 ms interval: 3 x 50 ms = 150 ms
      Maximum SPF runtime                                  64          doubled for safety margin
      Maximum RIB update                                   20          doubled for safety margin
      OSPF interface flood pacing timer                    5           does not apply to the initial LSA flooded
      LSA generation initial delay                         10          enough to detect multiple link failures resulting from an SRLG failure
      SPF initial delay                                    10          enough to hold SPF to allow two consecutive LSAs to be flooded
      Network geographical size / physical media (fiber)   0           signal propagation is negligible

      Total with BFD: about 270 ms. Total with carrier-delay msec 0: about 120 ms.
      (Each total is the sum of the applicable rows: roughly 150+64+20+5+10+10 ≈ 260 ms, budgeted as 270 ms, with BFD; and roughly 10+64+20+5+10+10 ≈ 120 ms with carrier-delay.)
      Final FIB update time: maximum 500 ms, i.e. sub-second convergence.
  13. Configuration Template
      router ospf 10
       prefix-suppression              ! suppress transit link prefixes
       timers lsa arrival 50
       timers throttle lsa all 10 100 1000
       timers throttle spf 10 100 1000
       timers pacing flood 5
       timers pacing retransmission 60
       ispf
  14. Packet Drops Are Still Observed
      Further fine-tuning is needed. Many methods are available, most of them based on MPLS traffic engineering; the two below are the best options:
      i.  RSVP-based link/node protection
      ii. LDP-based LFA-FRR
  15. Why LFA-FRR
      • Provides local sub-100 ms convergence times and complements any other fast-convergence tuning techniques that have been employed
      • LFA-FRR is easily configured on a router with a single command and calculates everything automatically
      • Easier and less complex than RSVP-based traffic engineering
  16. Prerequisites
      • MPLS LDP configuration
      • BFD configuration to trigger fast reroute
      • Fast-reroute configuration under the OSPF process
      • Some platform-specific configuration

      mpls ldp discovery targeted-hello accept
      router ospf Y
       router-id xxxxx
       ispf
       prefix-priority high route-map TE_PREFIX
       fast-reroute per-prefix enable area y prefix-priority high
       fast-reroute per-prefix remote-lfa tunnel mpls-ldp
      !
      ip prefix-list TE_PREFIX seq 5 permit a.b.c.d/32
      !
      route-map TE_PREFIX permit 10
       match ip address prefix-list TE_PREFIX
  17. How It Works
      1. Initially, prefix 172.16.1.0/24 is reached via the path B-A-B1-B3.
      2. When the link between B and A fails, the pre-computed LFA tunnel is triggered by BFD.
      3. The protected prefix(es) are immediately switched onto the B-D LFA tunnel within 150 ms.
      4. No packet drop is observed, because router B does not wait for IGP convergence.
  18. LFA-FRR Design Considerations
      • Works in a ring topology
      • Fewer prefixes give quicker convergence
      • Specific prefixes with higher priority show the best performance, with no service interruption or packet drop

      ROBI39-DHKTL25#sh ip int brief
      Loopback1            10.253.51.91   YES NVRAM   up   up
      MPLS-Remote-Lfa124   10.10.202.69   YES unset   up   up

      show ip cef 10.255.255.29
      10.255.255.29/32
        nexthop 10.10.202.65 Vlan10 label [166|1209]
          repair: attached-nexthop 10.253.51.94 MPLS-Remote-Lfa124
  19. Before/After LFA-FRR
      Xshell:> ping 10.252.51.111 -t
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=4ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Request timed out.
      Reply from 10.252.51.111: bytes=32 time=61ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=86ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=70ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=147ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=1ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=1ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=27ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=32ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=1ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=2ms TTL=253
      Reply from 10.252.51.111: bytes=32 time=1ms TTL=253
  20. BGP Convergence
      BGP Prefix Independent Convergence (PIC) is the best choice for fast convergence.
  21. Why?
      The BGP PIC Edge for IP and MPLS-VPN feature improves BGP convergence after a network failure.
  22. Prerequisites
      • BGP and the IP or Multiprotocol Label Switching (MPLS) network is up and running, with the customer site connected to the provider site by more than one path (multihomed).
      • Ensure that the backup/alternate path has a unique next hop that is not the same as the next hop of the best path.
      • Enable the Bidirectional Forwarding Detection (BFD) protocol to quickly detect link failures of directly connected neighbors (see the sketch below).
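      The deck does not show the BFD configuration for the BGP sessions themselves. A minimal Cisco IOS sketch might look like the following; the interface, timers, neighbor address, and AS numbers are illustrative and not taken from the deck:

      ! BFD timers on the directly connected PE-CE link
      interface GigabitEthernet0/0/1
       bfd interval 50 min_rx 50 multiplier 3
      ! Tear down the eBGP session as soon as BFD declares the neighbor down
      router bgp 64512
       neighbor 192.0.2.1 remote-as 100
       neighbor 192.0.2.1 fall-over bfd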
  23. How It Works: PE-CE Link/PE Failure
      • eBGP sessions exist between the PE and CE routers.
      • Traffic from CE1 uses PE1 to reach network x.x.x.x/24 towards router CE2. CE1 has two paths: PE1 as the primary path and PE2 as the backup/alternate path.
      • CE1 is configured with the BGP PIC feature. BGP computes PE1 as the best path and PE2 as the backup/alternate path and installs both routes into the RIB and the CEF plane. When the CE1-PE1 link or PE1 goes down, CEF detects the failure and points the forwarding object to the backup/alternate path. Traffic is quickly rerouted thanks to local fast convergence in CEF.
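      What makes CE1 install the backup/alternate path into the RIB and CEF is the additional-paths install knob, which the configuration template later in the deck enables under the VPNv4 and VRF address families. A roughly equivalent CE1-side sketch for plain IPv4 follows; the AS numbers and neighbor addresses are illustrative, not taken from the deck:

      router bgp 64512
       neighbor 192.0.2.1 remote-as 100    ! eBGP to PE1, primary path
       neighbor 192.0.2.5 remote-as 100    ! eBGP to PE2, backup/alternate path
       address-family ipv4
        bgp additional-paths install       ! install a backup path alongside the best path in RIB/CEF
        neighbor 192.0.2.1 activate
        neighbor 192.0.2.5 activate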
  24. How It Works: Dual CE-PE Link/Node Failure
      • eBGP sessions exist between the PE and CE routers. Traffic from CE1 uses PE1 to reach network x.x.x.x/24 through router CE3.
      • CE1 has two paths: PE1 as the primary path and PE2 as the backup/alternate path.
      • An iBGP session exists between the CE1 and CE2 routers.
      • If the CE1-PE1 link or PE1 goes down and BGP PIC is enabled on CE1, BGP recomputes the best path, removing the next hop PE1 from the RIB and reinstalling CE2 as the next hop into the RIB and Cisco Express Forwarding. CE1 automatically gets a backup/alternate repair path in Cisco Express Forwarding, and traffic loss during forwarding is now sub-second, thereby achieving fast convergence.
  25. How It Works: IP MPLS PE Down
      • The PE routers are VPNv4 iBGP peers with route reflectors in the MPLS network.
      • Traffic from CE1 uses PE1 to reach network x.x.x.x/24 towards router CE3. CE3 is dual-homed to PE3 and PE4. PE1 learns two paths to reach CE3 from the route reflectors: PE4 is the primary path, with a PE4 address as the next hop.
      • PE3 is the backup/alternate path, with a PE3 address as the next hop.
      • When PE4 goes down, PE1 learns about the removal of the host prefix from the IGP within sub-seconds, recomputes the best path, selects PE3 as the best path, and installs the routes into the RIB and the Cisco Express Forwarding plane. Normal BGP convergence happens while BGP PIC is already redirecting traffic towards PE3, and packets are not lost.
  26. Configuration Template
      router bgp 3
       no synchronization
       neighbor 10.6.6.6 remote-as 3
       neighbor 10.6.6.6 update-source Loopback0
       no auto-summary
       !
       address-family vpnv4
        bgp additional-paths install
        neighbor 10.6.6.6 activate
        neighbor 10.6.6.6 send-community both
       exit-address-family
       !
       address-family ipv4 vrf blue
        import path selection all
        neighbor 10.11.11.11 remote-as 1
        neighbor 10.11.11.11 activate
       exit-address-family
  27. Conclusion
      IGP fine-tuning
      • 100% dynamic and simple, but on its own does not meet the required convergence expectations
      LFA
      • LFA tunnels are pre-computed and pre-installed
      • Prefix-independent
      • Simple, deployment friendly, scales well
      • But: topology dependent, and the IP FRR IGP computation is a very CPU-intensive task
      BGP PIC
      • An additional second-best path is installed into the RIB
      • BGP installs the best and backup/alternate paths for the selected prefixes into the BGP RIB
  28. Thank You
