Notes
  • Using two ingress interfaces and one egress interface provides a simple test topology. Though both ingress and egress ports are on the same card, the CRS ingress and egress paths are completely separate, and the packets still traverse the switch fabric, so this single-card test has the same forwarding path as a multiple-card test. IPv6 QOS header bits are set by the test equipment; no marking or remarking is performed by the router.
  • At the start of the test, HPU, HPM, and LPM are all at 10%, while LPU is at 70%. The test cycles by ratcheting up the next-highest priority (LPM) in 10% increments. These graphs (and the next two) show that as the higher-priority traffic increases, the lower-priority traffic decreases proportionately, without any packet loss in the higher-priority traffic.
  • Once HPM reaches 100% (though it is clipped at 90% due to the presence of 10% HPU), all traffic returns to the original load, then the highest-priority traffic (HPU) is ratcheted up to 20%, and the other lower-priority rates are cycled through as before…
  • …continuing until HPU, the highest priority, is at 100% and all other traffic is dropped, without ever dropping a single HPU packet. Identical results to test 1.
  • Port 4 is one of the mcast-only egress interfaces. The results show that even though Port 1 on the same linecard is significantly oversubscribed, the adjacent interface(s) are unaffected. The output represents the results of strict-priority queueing between the only two classes destined to this interface: HPM and LPM.
  • HPU starts at 55%, LPM at 30%, LPU at 5%. HPM begins at 10%, then bursts up to 90% and back to 10% repeatedly, BUT the 90% burst of HPM is clipped to 45% due to the 55% steady state of HPU, WITHOUT ANY packet loss of the HPU traffic.
    1. IPTV / IPMulticast End-to-End - Greg Shepherd (shep@cisco.com)
    2. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Test Data
       • Summary
       • Future Challenges
    3. IPTV Deployments today
       • Two schools of thought in deployments today:
         • I think I need 50ms cvg
         • IPMulticast is fast enough
       • IPMulticast is UDP
         • The only acceptable loss is 0ms
         • How much is “reasonable”?
       • The 50ms “requirement” is not a video requirement
         • Legacy telco voice requirement
         • Efforts for 50ms only cover a limited portion of network events
       • Where to put the effort?
         • Make IPMulticast better?
         • Improve the transport?
         • Add layers of network complexity to improve core convergence?
    4. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Convergence Test Data
       • Summary
       • Future Challenges
    5. Impact of Packet Loss on MPEG Stream
       [Screenshots: 0%, 0.5%, and 5% packet loss]
       Video is very susceptible to IP impairments
    6. Impact of Packet Loss on MPEG Stream
       • Compressed digitized video is sent as I, B, P frames
       • I-frames: contain full picture information
         • Transmit I-frames approximately every 15 frames (GOP interval)
       • P-frames: predicted from past I or P frames
       • B-frames: use past and future I or P frames
       I-frame loss “corrupts” P/B frames for the entire GOP
    7. Impact of Packet Loss on MPEG Stream
       • Example assumptions:
         • MPEG2 stream CBR = 4.8828Mbps
         • MPEG2 IP stream pps = 427.35pps
         • L3 pkt_size = 1487 Bytes (encap IP + UDP + RTP)
         • GOP size in msec: 480
         • GOP size in pkts: 205
       Network events create correlated packet loss, not random single-packet loss. What’s the relationship between network CVG time and I-frame loss?
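The GOP-in-packets figure above follows directly from the stated packet rate and GOP duration; a quick sketch of that arithmetic:

```python
# Derive the GOP figures on slide 7 from the stated stream assumptions.
PPS = 427.35              # IP packets per second for the stream (from the slide)
GOP_MS = 480              # GOP duration in milliseconds (from the slide)

gop_pkts = PPS * GOP_MS / 1000          # packets carrying one GOP
pkt_interval_ms = 1000 / PPS            # spacing between packets on the wire

print(f"GOP size: {gop_pkts:.0f} packets")        # ~205, matching the slide
print(f"packet interval: {pkt_interval_ms:.2f} ms")
```

The ~2.34ms inter-packet spacing is what ties a convergence outage measured in milliseconds to a count of consecutive lost packets.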
    8. MPEG Frame Impact from Packet Loss
       [Chart: 32% probability of I-frame loss]
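The deck does not show the model behind the 32% figure, but a plausible back-of-the-envelope reading is: a correlated outage hits the I-frame whenever the outage window overlaps the I-frame's transmission window inside the GOP. The I-frame share of the GOP used below is a hypothetical value chosen to reproduce the slide's number, not a figure from the source:

```python
# Hypothetical model reproducing the slide's ~32% figure: a correlated
# outage of length T hits the I-frame iff the outage window overlaps the
# I-frame's transmission window, assumed uniformly placed within the GOP.
GOP_MS = 480.0            # GOP duration (from slide 7)
OUTAGE_MS = 50.0          # the "50ms convergence" outage
I_FRAME_SHARE = 0.22      # assumed share of GOP bits in the I-frame (hypothetical)

i_frame_ms = I_FRAME_SHARE * GOP_MS
# For a uniformly placed outage, P(overlap) = (i_frame + outage) / GOP
p_hit = (i_frame_ms + OUTAGE_MS) / GOP_MS
print(f"P(I-frame hit by a 50ms outage): {p_hit:.0%}")
```

Under these assumptions even a "perfect" 50ms repair leaves roughly a one-in-three chance of losing the I-frame, and with it the whole GOP.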
    9. MPEG Frame Impact from Packet Loss
       • P/B-frame loss is less noticeable
         • Error concealment techniques in the receiver can mask some
       • I-frame loss is more problematic
         • I-frame loss can result in an entire GOP loss
         • A single packet lost from an I-frame corrupts the entire I-frame
         • I-frame (GOP) loss can result in a blank screen for 1-2 secs
       • 50ms is a phantom goal
         • 32% chance of I-frame loss
         • ..put another way..
         • 32% of your streams will have a 1-2 sec blank-screen outage
         • Why then is this a goal for some?
    10. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Convergence Test Data
       • Summary
       • Future Challenges
    11. What are the Impairment Contributors?
       • Link failures
       • Node failures
       • Random uncorrected bit errors
       • Congestion
       • How do we measure these?
    12. What are the Impairment Contributors?
       • First: need to quantify impairments
         • Need some “standard”
         • Relevant to viewers’ experience
       • # Impairments per 2 hours
         • Representative of a typical movie duration
         • Allows for comparing contributions over a standard window of time
    13. What are the Impairment Contributors?
       • Some assumptions / some industry-standard data / some customer-experience data
       • Total value across a typical provider network
       • Trunk failures - .0010 Imp/2hr
       • HW card failures - .0003 Imp/2hr
       • SW failures - .0012 Imp/2hr
         • NSF/SSO reduces the realized amount of this contribution
       • SW upgrades - .0037 Imp/2hr
         • Modular code (IOS-XR) reduces the realized amount of this contribution
    14. What are the Impairment Contributors?
       • Uncorrected bit errors - 11.4629 Imp/2hrs
         • "Video over IP" by Wes Simpson (page 238) - 10^-10 per trunk
       Trunk failures:  .0010
       HW failures:     .0003
       SW failures:     .0012
       Maintenance:     .0037
       Total:           .0062 Imp/2hrs
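A rough per-stream estimate shows why residual bit errors dwarf the failure contributions; this sketch uses the 10^-10 per-trunk error rate cited above and the 4.8828Mb/s stream from slide 7 (the slide's 11.46 total presumably aggregates several trunks in the end-to-end path):

```python
# Per-stream uncorrected bit errors over a 2-hour movie, per trunk,
# assuming a 10^-10 residual bit error rate (per Wes Simpson, p. 238).
BER = 1e-10               # residual bit error rate per trunk
STREAM_BPS = 4.8828e6     # MPEG2 stream rate from slide 7
WINDOW_S = 2 * 3600       # 2-hour comparison window

errors_per_trunk = BER * STREAM_BPS * WINDOW_S
print(f"~{errors_per_trunk:.2f} bit-error impairments/2hr per trunk")
print("vs. .0062 Imp/2hr for all HW/SW/link failures combined")
```

Even a single trunk contributes roughly three orders of magnitude more impairments than every failure mode combined.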
    15. Network Impairment Contributors
       • All HW/SW/link failures combined do not compare to uncorrected bit errors
       • Last-mile networks are often the most significant contributors
       • SW failures/maintenance each contribute much more than link failures
         • Stable, modular software with NSF/SSO can reduce this contribution even further
       • Fast convergence in the core is a worthy goal
         • Improves core-contributed artifacts
         • Need to consider the balance of a solid platform vs. layered complexity
       • A solid-performing platform is more important than complex protocol solutions
    16. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Convergence Test Data
       • Summary
       • Future Challenges
    17. Internal CRS Multicast Convergence Test
       • Typical customer topologies
         • Most tree repairs are single-hop repairs
       • Typical customer network configurations
         • 2500 IGP routes
         • 250k BGP routes
         • Tests of 400 - 4000 (S,G) SSM entries
       • ISIS Prefix Prioritization
         • Loopbacks
         • Multicast source prefixes
         • Not yet in OSPF
    18. Summary across all available CRS1 data
       • And most important… CRS1 SSM convergence is rather quick!
       [Chart: convergence for 400, 800, and 4000 IPTV channels; 2500 IGP, 250k BGP routes]
    19. Summary across all available CRS1 data
       • UC has negligible impact on MC behavior, whether 2500 or 5000 ISIS prefixes
    20. MPEG Frame Impact from Packet Loss
       [Chart: competitor systems can take 1 sec or MORE to converge; 400 (S,G) entries; 100%]
    21. IPMcast QOS Requirements
       • Network congestion is not a significant impairment contributor
         • ..normally..
       • BUT it is a necessary safety net
       • On-network multicast traffic is well known
         • Flows, rates, sources..
       • Access can synchronize/spike
         • “Mother’s Day” events
       • Real-time (VoIP) traffic should not suffer from other traffic events
    22. Triple-Play QOS test assumptions
       • Support 4 classes of service with a strict-priority relationship between these classes, as follows:
         • Unicast High > Multicast High > Multicast Low > Unicast Low
         • i.e.: VoIP > Premium Vid > Broadcast Vid > Access
       • Full line-rate performance is expected for all traffic transmitted in each class when uncongested.
       • No effects observed on higher-class performance due to traffic transmission on a lower class.
       • No effect on unicast traffic, nor should the unicast traffic affect the multicast traffic.
       • A congested interface should not affect the same multicast flow(s) destined to adjacent uncongested interfaces.
    23. QOS Test Configuration 2
       [Diagram: SPIRENT AX-4000 Ports 1-3 feed CRS-1 interfaces TenGigE0/0/0/0, 0/0/0/1, 0/0/0/3;
        CRS-1 interfaces TenGigE0/0/0/4, 0/0/0/5, 0/0/0/6 feed Ports 4-6;
        classes HPU, HPM, LPM, LPU with HPU + HPM + LPM + LPU = 100%, then > 100%]
       Strict-priority order for dropping: HPU > HPM > LPM > LPU
    24. QOS Test 2 - 1 of 3
    25. QOS Test 2 - 2 of 3
    26. QOS Test 2 - 3 of 3
    27. QOS Test 2 - Port 4
       The QOS profile is maintained on the adjacent interface with zero packet loss of the higher-priority traffic.
    28. QOS Test Configuration 3
       [Diagram: SPIRENT AX-4000 Ports 1-3 and CRS-1 interfaces TenGigE0/0/0/0, 0/0/0/1, 0/0/0/2;
        classes HPU, HPM, LPM, LPU with HPU + HPM + LPM + LPU = 100%, then > 100%]
       HPU = 55%, LPM = 30%, LPU = 5%; HPM steps from 10% to 90%
    29. QOS Test Configuration 3
       [Diagram: same topology as Configuration 3 above]
       Test parameters:
       • Rate: data traffic for HPU, LPM, and LPU is fixed at 5.5Gb/s, 3.0Gb/s, and 0.5Gb/s respectively.
         HPM traffic follows a square-wave pattern from 1.0Gb/s for 30 secs to 9Gb/s for 20 secs, to represent bursty HPM traffic
       • Packet size: 220 bytes for HPU; 1496 for HPM, LPM, and LPU
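The strict-priority behavior these tests exercise can be sketched in a few lines: each class is served in priority order out of the egress line rate, and whatever remains goes to the next class. This is a minimal model of the scheduler's allocation, not the CRS implementation:

```python
# Minimal sketch of strict-priority allocation (HPU > HPM > LPM > LPU).
def strict_priority(offered, capacity=100.0):
    """offered: dict of class -> offered load (% of line rate), listed
    in priority order. Returns the served share of each class."""
    served, remaining = {}, capacity
    for cls, load in offered.items():
        served[cls] = min(load, remaining)  # serve up to what is left
        remaining -= served[cls]
    return served

# Test 3 during the burst: HPU 55%, HPM bursting to 90%, LPM 30%, LPU 5%.
burst = strict_priority({"HPU": 55, "HPM": 90, "LPM": 30, "LPU": 5})
print(burst)  # HPM's 90% burst is clipped to 45%; LPM and LPU are starved
```

This reproduces the observed result in the speaker notes: with HPU steady at 55%, the 90% HPM burst is clipped to 45% while HPU loses nothing.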
    30. QOS Test 3
    31. CRS IPMulticast Test Summary
       • CRS is the IPTV / IPMcast industry leader
       • IPMcast convergence is exceptionally fast
         • And improving through constant engineering efforts
         • No need to chase phantom numbers
       • QOS performance is unmatched
    32. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Test Data
       • Summary
       • Future Challenges
    33. IPTV Deployments today
       • Native IPMulticast performance in the CRS is a driving factor
         • Mcast PPS
         • Fan-out
         • IPMcast convergence
         • Unicast/Mcast QOS
       • 200 - 1000 (S,G)s is typical
       • Cable, DSL, and Satellite
       • The largest is testing 10,000 (S,G)s as its goal
         • Has chosen native IPMcast over PtMP
         • High performance of the CRS (exceeded requirements)
         • Simple to configure, maintain, operate, etc..
    34. IPTV Deployments Today
       • Beginning to see the 50ms “requirement” abandoned
         • Scalability and stability of CRS/XR
         • Simplicity of IPMcast
       • Overlay solutions do not reduce the frequency of impairments
         • Never reach 0 loss (never really reach the 50ms goal)
         • May not address link-up transitions
         • Add excessive network and operational complexity for little gain
       • Selecting the right platform is the best solution
    35. Agenda
       • IPTV Deployments
       • Impact of Packet Loss on MPEG2 Frames
       • Network Impairment Contributors
       • CRS IPMulticast Test Data
       • Summary
       • Future Challenges
    36. Future Challenges
       • Current IPTV is a value-added service
         • On-net injection
         • PPV or local advertising revenue
       • Walled garden
         • Edge provider “owns” the customer
       • Will this last?
    37. Future Challenges
       • Access bandwidth is driven by competition
       • Access bandwidth is rapidly surpassing video bandwidth
       • Video bandwidth is semi-bounded
    38. Future Challenges
       • IPTV works as a value-added service today
       • Access bandwidth growth opens up new applications
       • Over-the-top video is already here - in some form..
         • Joost, MacTV, YouTube, BitTorrent, AMT
       • More available bandwidth will only improve these applications
       • DVRs are changing how people watch TV
       • Consumers don’t care how their DVRs are populated
       • Will live TV be relevant in the future?
    39. Future Challenges
       • How does a provider stay in the food chain?
       • Continue to expand the content offering
         • Stay ahead of the curve
       • Open IPMcast transport to off-net content
         • Look for key strategic content partners
       • Integrated directory API
         • Cisco/SciAtl
    40. Thank You
