IPTV Forum Eastern Europe 2009


             Ensuring QoS/QoE
             throughout the network




       Andrey Alekseyev
       Director IP service network
       COMSTAR-UTS

Wednesday, October 21, 2009
The company

              Leading supplier of integrated telecom solutions – Russia, CIS
              50.9% of COMSTAR-UTS owned by MTS, Russia’s top mobile operator
              Subsidiaries in Ukraine, Armenia
              Listed on the London Stock Exchange (LSE: CMST)
              Largest broadband customer base in Moscow
               Owns 56% of MGTS (Moscow PSTN), with 3.6 million telephony subscribers
              Operates its own multi-service network across Russia and Europe
              The first Russian operator to launch commercial IPTV services in 2005
              Number of subscribers (Q2 2009):
               791,000 broadband Internet / 133,000 IPTV subscribers in Moscow
               48,000 corporate customers in Moscow
               324,000 broadband customers / 1,953,000 pay-TV subscribers in the regions




Hello, my name is Andrey Alekseyev and I work as the Director of the IP service network for COMSTAR-United TeleSystems. First of all, I'd like to apologize to the audience for not being able to attend this conference in person: I ran into some unfortunate complications with my European visa. About the company: COMSTAR-United TeleSystems is one of the biggest telecommunication companies in Russia, providing a wide range of services to residential customers across the country. The main customer base of COMSTAR-UTS is in Moscow, where, in tandem with MGTS, the local PSTN company belonging to the same group, we provide broadband Internet and IPTV services. We were the first Russian operator to launch IPTV, in late 2005, and our IPTV customer base grew quickly.
The network

                   AS8359 – fully converged, IPv6 enabled IP/MPLS network
                   Core nodes in Moscow and Saint Petersburg (Russia)
                   Main backbone nodes/PoP’s in Moscow, Saint Petersburg, Rostov-on-Don, Kiev,
                   Stockholm, Frankfurt (am Main), Amsterdam, London, Paris, New York, Los
                   Angeles
                   Redundant n*10GE inter/intra-city links
                   Best of breed, proven network hardware
                   Tens of gigabits of Internet and media traffic in the network
                   Aggregating Internet traffic at multiple Internet Exchanges
                   Access network in Moscow built and operated by MGTS (local PSTN)
                   Access based mostly on ADSL in Moscow and Ethernet in the regions




About the network: COMSTAR-UTS has built its own powerful IP/MPLS network over the past 10 years. Obviously, being an ISP, we must continuously develop it to meet our customers' expectations on quality of service, so we typically utilize the latest technology and state-of-the-art solutions. We have points of presence across Russia, Europe and the US and aggregate lots of traffic. Our customers consume as much as they can, often plainly saturating their ports (but not our uplinks!) with 24-hour downloads. In regards to access technologies, we are mostly ADSL, with a steadily growing percentage of Ethernet-based networks we're building across Russia. On the slide you may find more specific information about autonomous system 8359, which is our flagship AS.
Moscow network – key elements
              Over 200 access nodes (MGTS central offices), redundant 10GE to every node
              Over 4000 DSLAMs installed
              ADSL access 6-20 Mbps
              80% of subscribers at 6 Mbps, 20% at 10 Mbps and up
              Internet traffic and IPTV unicast isolated and end-to-end prioritized in the network




               [Diagram: one ring segment – COMSTAR backbone (IP/MPLS) and core connected to MGTS access via redundant n*10G links, with 10G links down to each CO]
The Moscow part of the network is probably the most complicated one. That's because, to connect our subscribers in Moscow, we need a PoP/node at every central office in the city. What's quite unique about Moscow is that we have over 200 central offices. The network topology is mostly ring-based with a double star at the heart of it. On this slide I put a brief outline of a single ring segment connected to the network core. Basically, every central office is connected to the core via redundant 10GigE links. We have over 4000 DSLAMs installed in the network, with a typical access speed of 6 Megabits per second. It is essential for a network like this to utilize all the means available in the core and access equipment to enforce the strictest quality-of-service policies. We isolated Internet and IPTV traffic from each other throughout the entire network, all the way down to the subscriber's CPE. We apply vendor-specific solutions like prioritized queues and the Differentiated Services architecture to achieve QoS in the network.
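As a concrete illustration of the Differentiated Services idea, a sender can mark its media packets with a DSCP value so routers along the path can queue them preferentially. A minimal Python sketch follows; the actual code points and marking locations used in our network are not part of this talk, and EF is chosen here purely as an example:

```python
import socket

DSCP_EF = 46  # Expedited Forwarding -- an illustrative choice, not necessarily ours

def open_marked_socket(dscp=DSCP_EF):
    """Open a UDP socket whose outgoing IPv4 packets carry the given DSCP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The IPv4 TOS byte carries the DSCP in its upper six bits (RFC 2474),
    # so the value is shifted left past the two ECN bits.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock
```

In practice such marking is usually done (or re-done) at trust boundaries by the routers themselves, with per-class queues configured along the path.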
Challenges in the network
              Several administrative domains (MGTS access, COMSTAR-UTS core/backbone/
              transport)
              ADSL infrastructure based on traditional telephony copper lines to homes passed
              Vast majority of DSLAMs are at CO’s, typical distance to residential customer
              around 1.5 km
               Old copper infrastructure introduces erratic behaviour into a service with high quality demands
               No point monitoring the service close to the DSLAM's – the most problematic part is the subscriber line (and CPE)
              CPE’s unmanaged from the very beginning, no enforcement on CPE model/type
              True end-to-end monitoring solution required for IPTV
               Without proactive QoE measurements, service delivery and support turn into a nightmare




We have many challenges to solve on the network side. The toughest ones are probably the quality of the copper lines and the fact that we deliberately chose unmanaged CPE's from the very beginning. The fact that we have a few companies inside the group working closely to deliver the services also adds to the complexity of the service delivery and support chains. To do things right, we had to adopt a number of process models, procedures and tools to monitor, analyze and solve problems.
IPTV infrastructure elements
             Key IPTV infrastructure elements very similar to the vast majority of deployments
             Major differences in streaming bitrates, VOD setup, middleware

              [Diagram: content creation/ingest, headend (receivers, encoders), middleware, VOD central node and content encryption/DRM feed the network through access/aggregation/load balancing; traffic crosses the core network and backbone; VOD edge nodes at CO A, CO B, ... serve subscribers through the access network]
In regards to the IPTV infrastructure, we don't have anything unusual in the architecture. We have antennas, satellite receivers, encoders, middleware and accompanying elements, just as every other operator on the market does. However, in our opinion, what differentiates our setup now is the current middleware platform and probably also our VOD infrastructure enhancements.
Challenges in the headend
             Full re-coding for most channels and VOD titles to ensure predictable quality on the
             majority of ADSL lines
             Over 100 TV channels plus 70 radio channels to broadcast
             SDI input to encoders/streamers
             MPEG2 is still the main format with the legacy/new STB’s ratio of 95%
             CBR due to legacy issues (primarily because of the initial CAS/DRM setup)
             MPEG4 gradually introduced, requirement to maintain both MPEG2 and MPEG4 to
             ensure compatibility with legacy STB’s
             Most channels are from satellite (already compressed, with the obvious
             complications when re-coding to MPEG2 CBR)
             Even the channels produced locally suffer from compression issues
             A whole variety of input formats for VOD content ingest (MPEG PS files, DVD’s,
             HDCAM etc.)




Taking a closer look at our headend infrastructure, you may find a number of factors that contribute to the complexity of ensuring the necessary quality of experience for subscribers. First of all, being mostly an ADSL ISP, we need to provide certain guarantees on the quality of the streamed content, and we can't do this unless we re-code our content before streaming. All of our receivers output video through SDI to the encoding part. We are also still on MPEG2, with the MPEG4 format gradually introduced alongside the growing number of new set-top-box models. We also struggle with a number of problems related to VOD content management, especially the whole variety of formats we get our VOD content in. It is no surprise that we came to understand that ensuring QoE is not easy, but also that it is an inevitable thing to do.
Headend – encoding
               Standard definition video
                  MPEG2
                  Profile MP@ML
                  Around 2.5s compression delay
                  Video bitrate 3.5 Mbps, IP stream 4 Mbps, ADSL bandwidth around 4.5 Mbps
                  Resolution 544x576
                  Constant IBBBP GOP structure and length

                     MPEG4/H.264
                     Profile Main@Level3
                     Around 2s compression delay
                     Video bitrate 1.8 Mbps, IP stream 2 Mbps, ADSL bandwidth around 2.5 Mbps
                     Resolution 544x576
                     IBBBP GOP structure, GOP length 24 (+adaptive GOP)
                     Hierarchical B

               High definition video
                  MPEG4/H.264
                  High@Level4 profile
                  Around 2s compression delay
                  Video bitrate 7.3 Mbps, IP stream 8 Mbps
                  Resolution 1440x1080
                  IBBBP GOP structure, GOP length 48 (adaptive GOP)

                Sound is streamed without re-coding, at the same bitrate as ingest: typically MPEG1 Layer 2 at 96-256 Kbps
                for stereo; AC3 is passed through as AC3


This slide is probably purely technical. I will leave the thorough reading to the curious audience; maybe it will be helpful for comparison with the setups you employ. Just a few things are worth mentioning. We use a short GOP structure for MPEG2, which helps us avoid a number of problems related to channel change timings. We were once approached by a then-major IPTV vendor soliciting their solution for fast channel change; the salesperson actually failed to explain to us how to beat the channel change time we already had. For high definition content we have to stick with around 8 Megabits per second of IP streaming, mostly because of limitations in the access technology.
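For reference, the gap between the video bitrate and the IP stream rate quoted on the slide comes from transport-stream multiplexing (audio, tables, TS headers) plus per-datagram IP/UDP overhead. The sketch below covers only the IP/UDP part, assuming the common packing of seven 188-byte TS packets per UDP datagram and no RTP; the exact encapsulation in our network is not stated in this talk:

```python
TS_PACKET = 188          # bytes in one MPEG-TS packet
TS_PER_DATAGRAM = 7      # common packing: 7 TS packets fit a 1500-byte MTU
IP_UDP_HEADERS = 20 + 8  # IPv4 + UDP header bytes (no RTP assumed)

def ip_stream_rate(ts_rate_bps):
    """Scale a transport-stream bitrate up by the per-datagram IP/UDP overhead."""
    payload = TS_PACKET * TS_PER_DATAGRAM          # 1316 bytes of TS per datagram
    factor = (payload + IP_UDP_HEADERS) / payload  # ~2% header overhead
    return ts_rate_bps * factor
```

The remaining headroom up to the quoted ADSL bandwidth (for example, a 4 Mbps IP stream on a line provisioned around 4.5 Mbps) is left for ATM/PPPoE framing and safety margin.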
Headend – evolution, findings
               First deployed in 2004-2005 (MPEG2 only, receiver-encoder/streamer setup)
              Fully revamped in 2007-2008 (MPEG2/MPEG4/PiP of the same channel from a
              single encoder/streamer)
              Multiple vendor solutions tested before choosing a new setup/equipment
               Contrary to what was expected from testing, the default factory settings for encoding
               turned out to be the most effective ones
              Reasons for the above
                   most of the channels contain very different content (for instance, an old color
                   movie followed by an old black-and-white movie, followed by a sport programme)
                   most of the channels we receive are already compressed – often not very well,
                   even if not coming from a satellite
                   SD channels/VOD content, still the biggest part of the streamed content, come
                   in 544x576 resolution – not much additional processing to do
                   static picture enhancement options like noise reduction fail to maintain a
                   sustained level of acceptable video quality when channel content changes
                   frequently and unpredictably
                   with so much old/edited/re-mastered content on most channels, any specific
                   picture enhancement option might actually degrade video/audio quality in
                   comparison to a well-balanced factory set of encoding options


We never stopped improving things, though. During 2007-2008 we completely revamped the headend encoding. We replaced all of the bulky streaming encoders we had with the most recent chassis, which currently streams MPEG2, MPEG4 and a QCIF picture-in-picture copy of every channel simultaneously. What is quite funny, and may seem an unwise thing to do, is that we still use the factory default settings (well, mostly) for almost all of the channels streamed. The reasons are simple. The quality of the content varies so much that we just cannot match all of the enhancement features we have in the hardware to this content. We tried a variety of hardware from different vendors, only to find out that most of the time any enhancement feature we looked into fails to qualify as the cure for mid-to-low quality content. There are certain specifics about locally available content which the vendors just cannot fix. Or maybe they decided to fix it with the factory defaults :)
VOD load balancing
              Fault-tolerant central node at the headend premises
              Edge nodes across the city
              RTSP dynamically redirected to edge nodes based on the routing information
              Subscribers get content from the nearest edge node – unicast video traffic localized
              in the access network


              [Diagram: VOD central node at the headend; VOD edge nodes attached at the edge of the core, serving subscribers locally through the access network]
With respect to what we have done differently in the VOD part, there is one thing particularly worth mentioning: fault-tolerance and scalability. As I said, the access network in Moscow is huge, and we didn't quite like the idea of making the IPTV unicast flow through the entire network to reach subscribers. Instead we chose a decentralized model, with a dozen VOD nodes located at the edge of the network. To make it fault-tolerant, we devised our own algorithm to ensure subscribers get the content from the closest node. It is based on BGP prefix information and it is dynamic. This way we allow our neighbour company, MGTS, to manage the load on the access network without our intervention, and we also provide enough means to fail over a damaged node. The redirector doing the smart load distribution is actually part of the middleware.
Fault and performance management
              With the variety of services provided, we obviously need an industry-standard
              solution for fault management
              The deployed fault management solution allows us to monitor network and service
              issues like link/router failures, application failures, IPTV infrastructure outages
              etc. This includes, for instance, monitoring service dependencies and multicast
              trees end-to-end with root cause analysis
             We monitor everything! Moreover, every other day we typically find more things to
             monitor :)
             We have our engineering development/operations people for both IP/MPLS
             network and IPTV completely freed from day-to-day monitoring activities – the work
             is done by our NOC staff who do 24x7x365 monitoring of the network and the
             headend
             We have tools to free people from hard and boring tasks and keep them
             entertained with the new technologies
             For performance management we currently use a number of open source tools to
             mostly gather historical data and visualize it for reference purposes, planning and
             trend analysis
             For the entire network we also use a separate real-time traffic analyzer
             We supply all new CPE with TR-069 software enabled and can now monitor them

With this many complexities, issues and this much sophistication in the network and IPTV infrastructure, we have been continuously evolving our monitoring solutions. We actually use the best fault management software available now, and we make the most of it. We consider it very important to develop our people and to free them from abundant manual monitoring operations. On the other hand, we don't want them to be just plain watchers. We always try to provide them with tools that are fun to study and operate, and at the same time convenient, scalable and robust. We still have lots of work to do on performance management, though. Also worth mentioning is that we try to use every mechanism available to manage and control each element in the service delivery chain. That's why we reconsidered the paradigm of unmanaged CPE's and implemented a TR-069 solution, with all new CPE's shipped TR-069 enabled.
Headend monitoring
               Industry standard solutions for fault and performance management are good,
               though still not sufficient for monitoring problems specific to IPTV
               We complement them with a set of additional IPTV-related tools
              We do additional monitoring/management of the headend equipment with the
              proprietary vendor software
              We monitor MDI (DF and MLR) using hardware probes at the headend and inside
              the access network to keep a clear distinction between areas of responsibility in
              different administrative domains and to have a single metric to compare when there
              is a service degradation/loss
               We intentionally keep MDI monitoring simple yet efficient – it should be easily
               recognized and accepted by the NOC engineers, it should work around the clock, and it
               has to provide clear visualization
               With the deployed MDI probes we can partially detect even more complicated
               problems which stem from the receiving side due to weather conditions etc. (and
               are then reflected in the streams output from the headend)




In regards to monitoring the IPTV infrastructure, it is essential to complement the well-known and proven fault and performance management solutions with a number of carefully selected instruments. We picked a solution for MDI monitoring which we found to be the most effective for our NOC engineers: essentially, MDI probes installed in the network, and since then analyzing daily MDI reports has been routine. What helps us a lot in resolving a number of complicated issues in cooperation with our neighbour engineers is that we clearly separate the responsibility for keeping the service intact. And the MDI probes contribute immensely.
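The two MDI components mentioned above, Delay Factor (DF) and Media Loss Rate (MLR), are defined in RFC 4445: DF reflects how much buffering the arrival jitter demands at the nominal media rate, and MLR counts lost or out-of-order packets per second. A deliberately simplified single-interval computation, assuming fixed-size packets and a pre-counted number of losses (real probes track continuity counters and sliding intervals):

```python
def media_delivery_index(arrival_times, packet_bits, media_rate_bps,
                         lost_packets, interval_s):
    """Simplified MDI (DF in ms, MLR in packets/s) over one measurement interval."""
    # Virtual buffer: bits received so far minus the nominal drain at media rate.
    t0 = arrival_times[0]
    received_bits = 0
    vb = []
    for t in arrival_times:
        received_bits += packet_bits
        vb.append(received_bits - media_rate_bps * (t - t0))
    df_ms = (max(vb) - min(vb)) / media_rate_bps * 1000.0  # Delay Factor
    mlr = lost_packets / interval_s                        # Media Loss Rate
    return df_ms, mlr
```

A perfectly paced stream yields a DF near zero; bursty arrivals inflate DF long before any packet is actually lost, which is what makes it useful for early degradation warnings.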
MDI monitoring
              We have probes deployed across headend infrastructure and access network for
              MDI monitoring


              [Diagram: MDI probes with a NOC-side controller tap GE/FE links at the headend encoders and at VOD edge nodes; streams traverse the core (unicast VPN and GRT) and edge nodes to the access network (DSLAM's, BRAS) and subscribers; a field engineer's portable probe measures multicast and unicast at the subscriber end]
This is an illustration of the MDI monitoring process in our network. All of the key elements are shown here, so you can envision how we do it.
Set-top-box monitoring
              We never stop monitoring so we decided to go even further in order to be able to
              drill down to the source of the problems by doing service monitoring on the STB
              side
               The traditional approach of monitoring IPTV services at the DSLAM port doesn't work
               for us – our problems are mostly at the very end of the service delivery chain
               We found an innovative and clever solution on the market to do what we need – a
               tiny monitoring agent embedded inside the STB, sending live statistics to
               centralized server software in real time
               Allows us to collect and analyze the following data per STB in real time:
                    Performance data (memory, CPU, network utilization)
                    User actions (navigation, channel change etc.)
                    STB actions in response to user activity (when actual IGMP join occurred, what
                    came in response to RTSP request, what are the errors etc.)
              Allows us to do network wide metrics analysis (e.g. actual mean channel change
              time across the entire subs base)
               Does correlation between the infrastructure and the data collected from STB's –
               can visualize which CO's are prone to service degradation and why
              Does a variety of historical reports, including proactive suggestions (e.g. jeopardy
              of churn)

A year ago we accidentally found a solution we'd been thinking of, to monitor things even further. The problem with our service model is that we have lots of trouble tickets related to the quality of the ADSL line and/or the CPE or set-top-box configuration. The solution is actually simple: we now monitor the set-top-boxes too! A tiny piece of software inside all of our set-top-boxes gathers all the data necessary for analysis. What is important, the data is available to us in real time. We can select any of our STB's and check if there is anything wrong with it. Or was. We can now quickly correlate what we get from the other monitoring tools with what we see from the subscriber's side. Basically, we now have the complete picture, seeing the network both as the operator and as one big subscriber. Flexible reporting is available, of course. We can now do day-to-day analysis of things like overall service quality, channel popularity and the most important quality indicators.
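To illustrate the kind of network-wide metric these per-STB events enable, here is how a collector might derive the mean channel change time from join-to-first-frame timestamps. The event schema below is invented for the sketch; it is not the actual product's format:

```python
# Hypothetical per-STB events: when the IGMP join was sent and when the
# first frame of the new channel was decoded (seconds on a common clock).
events = [
    {"stb": "stb-a", "igmp_join": 10.00, "first_frame": 10.85},
    {"stb": "stb-b", "igmp_join": 42.10, "first_frame": 43.30},
    {"stb": "stb-c", "igmp_join": 77.40, "first_frame": 78.15},
]

def mean_channel_change_s(events):
    """Mean time from IGMP join to first decoded frame across the subscriber base."""
    delays = [e["first_frame"] - e["igmp_join"] for e in events]
    return sum(delays) / len(delays)
```

Grouping the same delays by central office instead of averaging globally is what lets the system flag which CO's are prone to degradation.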
Set-top-box monitoring
               Live channel status allows us to quickly evaluate how well STB's in the field
               receive multicast/VOD
              Network history tracks outages at CO’s
              Helps to correlate monitoring data
              Intuitive to use




A couple of screenshots to illustrate what it looks like to check the live channel status and outage tracking.
Set-top-box monitoring
                STB summary page lets us watch a particular STB's state/behaviour over time




               KPI statistics provide
               useful information on indicators like channel change times and similar figures




These are all about drilling down into the gory details of set-top-box statistics and also the overall KPI statistics.
Set-top-box monitoring
               Event logs let us check exactly what an STB was doing
              Proven reporting mechanisms provide historical overview and easily read status
              information
              Reports help manage content too!
              Easily integrated with the existing
              solutions (OSS/BSS and middleware)




We can analyze it more thoroughly with the detailed log of what was going on inside an STB: when it sent the IGMP join, when the stream started to flow and be decoded, and all the things like that. The reports are based on industry standard tools and are clearly structured and ready to use.
IPTV service performance – middleware
              Transition from a basic web-based application to a feature-rich UI client
              Dramatic boost of interface performance
              Ensures great usability even with
              the navigation pane minimized
              Provides the same experience on
              legacy and new STB’s
              Does bells and whistles like PiP (QCIF)
              even on legacy STB’s




Last but not least come the middleware enhancements we made in the past few years. In the beginning we had only a basic web-based client in the set-top-box, and within two years it became really obsolete. We opted for a change, and in close cooperation with a local company a new middleware appeared. It is actually the solution we needed. It changed usability and interface performance so drastically that we were really afraid the migration would be a shock to our subscribers. Fortunately, it wasn't. Of course we got complaints, but then again, an operator always gets some. What was really surprising is that we managed to migrate around 150,000 STB's over a two-month period completely transparently. Now, with the new middleware in place, we're free to introduce new services (which we already do), and the subscribers really experience better quality: navigation is extremely fast, the interface is more intuitive, and there are lots more things to come out of their TV screens too.
Well, that's about all I had to say today. To summarize: QoE is becoming really important these days, and lots of tools should be utilized wisely. Often, revamping the infrastructure is unavoidable too. Many thanks to the audience for their attention to this presentation. I'm ready for questions if any of you have them. I'm generally reachable via e-mail, and occasionally I visit Europe too.
Products and partners








Here are the logos of our partners, as a courtesy to them for being most influential and helpful.

  • 5.
    Challenges in thenetwork Several administrative domains (MGTS access, COMSTAR-UTS core/backbone/ transport) ADSL infrastructure based on traditional telephony copper lines to homes passed Vast majority of DSLAMs are at CO’s, typical distance to residential customer around 1.5 km Old copper infrastructure introduces erratic behaviour to the service with high demand on quality No use to monitor service close to DSLAM’s – most problematic is the subscriber line (and CPE) CPE’s unmanaged from the very beginning, no enforcement on CPE model/type True end-to-end monitoring solution required for IPTV Without ensuring proactive QoE measurements service delivery and support turns into a nightmare Wednesday, October 21, 2009 We have many challenges to solve on the network side. The toughest ones are probably the quality of the copper lines and the fact we have deliberately chosen unmanaged CPEʼs from the very beginning. The fact we have a few companies inside the group working closely to deliver the services also attributes to the complexity of the service delivery and service support chains. To do things right we definitely should have adopted a number of process models, procedures and tools to monitor, analyze and solve problems.
  • 6.
    IPTV infrastructure elements Key IPTV infrastructure elements very similar to the vast majority of deployments Major differences in streaming bitrates, VOD setup, middleware Content creation/ Headend ingest (receivers, encoders) Middleware Network access/ aggregation/load Core network Backbone balancing VOD central node Network access/ aggregation/load Content balancing encryption/DRM VoD CO A Access VoD network CO B VoD CO ... Subscriber VOD edge nodes Subscriber Wednesday, October 21, 2009 In regards to the IPTV infrastructure we donʼt have anything unusual in the architecture. We have antennas, satellite receivers, encoders, middleware and accompanying elements just as every other operator on the market does. However, in our opinion what differentiates our setup now is the current middleware platform and also probably our VOD infrastructure enhancements.
  • 7.
    Challenges in theheadend Full re-coding for most channels and VOD titles to ensure predictable quality on the majority of ADSL lines Over 100 TV channels plus 70 radio channels to broadcast SDI input to encoders/streamers MPEG2 is still the main format with the legacy/new STB’s ratio of 95% CBR due to legacy issues (primarily because of the initial CAS/DRM setup) MPEG4 gradually introduced, requirement to maintain both MPEG2 and MPEG4 to ensure compatibility with legacy STB’s Most channels are from satellite (already compressed, with the obvious complications when re-coding to MPEG2 CBR) Even the channels produced locally suffer from compression issues A whole variety of input formats for VOD content ingest (MPEG PS files, DVD’s, HDCAM etc.) Wednesday, October 21, 2009 Taking a closer look at our headend infrastructure you may find a number of factors which contribute to the complexity of ensuring the necessary quality of experience for the subscribers. First of all, being mostly an ADSL ISP we need to provide certain guarantees on the quality of the streamed content. And we canʼt do this unless we re-code our content before streaming. We have all of our receivers outputting video through SDI to the encoding part. We are also still on MPEG2 with the MPEG4 format gradually introduced with the increasing number of the new models of set-top-boxes. We also struggle with a number of problems related to the VOD content management and especially with the whole variety of formats we are getting our VOD content in. It is not a surprise that we have understood that ensuring QoE is not that easy but also that it is inevitable thing to do.
  • 8.
    Headend – encoding Standard definition video MPEG2 Profile MP@ML Around 2.5s compression delay Video bitrate 3.5 Mbps, IP stream 4 Mbps, ADSL bandwidth around 4.5 Mbps Resolution 544x576 Constant IBBBP GOP structure and length MPEG4/H.264 Profile Main@Level3 Around 2s compression delay Video bitrate 1.8 Mbps, IP stream 2 Mbps, ADSL bandwidth around 2.5 Mbps Resolution 544x576 IBBBP GOP structure, GOP length 24 (+adaptive GOP) Hierarchical B High definition video MPEG4/H.264 High@Level4 profile Around 2s compression delay Video bitrate 7.3 Mbps, IP stream 8 Mbps Resolution 1440x1080 IBBBP GOP structure, GOP length 48 (adaptive GOP) Sound streamed w/o re-coding, same bitrate as ingest. Typically MPEG1 Layer2 96-256 Kbps for stereo, AC3 for AC3 Wednesday, October 21, 2009 This slide is probably purely technical. I will leave the thorough reading to the curious audience, maybe this one will be helpful for comparison to the setups you employ. Just a few things worth mentioning. We use a short GOP structure for MPEG2 and this kind of helps us to avoid a number or problems related to the channel change timings. We were once approached by then a major IPTV vendor soliciting their solution for fast channel change. The sales person actually failed to explain us how to beat the channel change time we already had at that time. For the high definition content we have to stick with around 8 Megabits per second of ip streaming mostly because of the limitations in the access technology.
  • 9.
    Headend – evolution,findings First deployed in 2004-2005 (MPEG2 only, receiver-encoder/streamer setup) Fully revamped in 2007-2008 (MPEG2/MPEG4/PiP of the same channel from a single encoder/streamer) Multiple vendor solutions tested before choosing a new setup/equipment Contrary to what was expected from testing, the result turned out to be that default factory settings for encoding are most effective ones Reasons for the above most of the channels contain very different content (for instance, old color movie followed by an old black and white movie, followed by a sport programme) most of the channels we receive are already compressed – often not very well, even if not coming from a satellite SD channels/VOD content, which is still the biggest part of the streamed content come in 544/576 resolution, not much additional processing to do static picture enhancement options like noise reduction fail to maintain sustained level of acceptable video quality in a situation when channel content changes frequently and in unpredictable manner with so much of an old/edited/re-mastered content on most channels any specific picture enhancement options might actually degrade video/audio quality in comparison to a well-balanced factory set of options for encoding Wednesday, October 21, 2009 We never stopped to improve things, though. During 2007-2008 we actually completely revamped the headend encoding. We changed all of the bulky streaming encoders we had to the most recent chassis which currently does simultaneous streaming of MPEG2, MPEG4 and QCIF Picture-in-picture copies of every channel. What is quite funny and what may seem to be an unwise thing to do, is we still use the factory default settings (well, mostly) for almost all of the channels streamed. The reasons are simple. We have so different quality of the content - we just cannot quite match all of the enhancement features we have in the hardware to this content. 
We have tried a variety of the hardware from different vendors only to find out that most of the time any enhancement feature we looked into, fails to qualify as the cure for a mid to low quality content. There are certain specifics about locally available content which the vendors just cannot fix. Or maybe they decided to fix it with the factory defaults :)
  • 10.
    VOD load balancing Fault-tolerant central node at the headend premises Edge nodes across the city RTSP dynamically redirected to edge nodes based on the routing information Subscribers get content from the nearest edge node – unicast video traffic localized in the access network VOD central node Core VOD edge node VOD edge node Access Wednesday, October 21, 2009 In respect to what we have done differently in the VOD part there is one thing which is particularly worth to mention. This one is about fault-tolerance and scalability. As I said the access network in Moscow is huge and we didnʼt quite like the idea of making the IPTV unicast flow through the entire network to reach the subscribers. Instead we chose a decentralized model with a dozen of VOD nodes located at the edge of the network. To make it fault-tolerant we invented our own algorithm to ensure the subscribers will be getting the content from the closest node. It is based on the BGP prefixes information and it is dynamic. This way we allow our neighbour company, MGTS, to manage the load on the access network without our intervention and we also provide enough means to fail over the damaged node. The redirector doing smart load distribution is actually part of the middleware.
  • 11.
    Fault and performancemanagement With the variety of services provided obviously we need an industry standard solution for fault management Deployed fault management solution allows to monitor network and service issues like link/router failures, application failures, IPTV infrastructure outages etc. This includes, for instance, monitoring service dependancies and multicast trees end-to- end with the root cause analysis We monitor everything! Moreover, every other day we typically find more things to monitor :) We have our engineering development/operations people for both IP/MPLS network and IPTV completely freed from day-to-day monitoring activities – the work is done by our NOC staff who do 24x7x365 monitoring of the network and the headend We have tools to free people from hard and boring tasks and keep them entertained with the new technologies For performance management we currently use a number of open source tools to mostly gather historical data and visualize it for reference purposes, planning and trend analysis For the entire network we also use a separate real-time traffic analyzer We supply all new CPE with TR-069 software enabled and can now monitor them Wednesday, October 21, 2009 With these many complexities, issues and sophistication in the network and IPTV infrastructure, we have been continuously evolving in our monitoring solutions. We actually use now the best fault management software available. And we make the most of it. We consider it very important to develop our people and to free them from abundant manual operations in regards to monitoring. On the other side we donʼt want them to be just plain watchers. We always try to provide them with the tools which are both fun to study and operate and at the same time convenient, scalable and robust. We still have lots of work to do in regards to the performance management, though. 
Also worth mentioning is the fact we try to use all of the mechanisms available to manage and control every element in the service delivery chain. Thatʼs why we reconsidered the paradigm of unmanaged CPEʼs and implemented TR-069 solution with all of the new CPEʼs shipped TR-069 enabled.
  • 12.
    Headend monitoring Industry standard solutions for fault and performance management are good though still not sufficient to monitor problems specific to IPTV We complement it with a set of additional IPTV-related tools We do additional monitoring/management of the headend equipment with the proprietary vendor software We monitor MDI (DF and MLR) using hardware probes at the headend and inside the access network to keep a clear distinction between areas of responsibility in different administrative domains and to have a single metric to compare when there is a service degradation/loss We intentionally keep MDI monitoring simple yet efficient – it should be easily recognized and accepted by the NOC engineers, it should work around the clock, it has to do clear visualization With the deployed MDI probes we can partially detect even more complicated problems which stem from the receiving part due to weather conditions etc. (which are then reflected in the streams output from the headend) Wednesday, October 21, 2009 In regards to monitoring IPTV infrastructure it is essential to complement the well-known and proven fault and performance management solutions with a certain number of carefully selected instruments. We picked up a solution for MDI monitoring which we found to be the most effective for our NOC engineers. These are essentially MDI probes which we installed in the network and since that time analyzing daily MDI reports is a routine. What helps us a lot to resolve a certain number of complicated issues in the process of cooperation with our neighbour engineers is the fact we clearly separate the responsibility of keeping the service intact. And MDI probes contribute immensely.
  • 13.
    MDI monitoring We have probes deployed across headend infrastructure and access network for MDI monitoring Encoder Encoder GE Access MDI probe (DSLAM's, BRAS) FE Edge node NOC CORE Unicast vpn GRT Controller Headend Sub Probe Field engineer n*GE FE Access GE (DSLAM's, Multicast Probe BRAS) Unicast mgmt VOD edge node Edge node Wednesday, October 21, 2009 This is the illustration of the process of MDI monitoring in our network. All of the key elements are seen here and you may envision how we do that.
  • 14.
    Set-top-box monitoring We never stop monitoring so we decided to go even further in order to be able to drill down to the source of the problems by doing service monitoring on the STB side Traditional approach of monitoring IPTV services at DSLAM port doesn’t work for us – our problems are mostly at the very end of the service delivery chain We found an innovative and clever solution on the market to do what we need – a tiny monitoring agent embedded inside STB and sending live statistics data in the real-time to the centralized server software Allows us to collect and analyze the following data per STB in the real-time: Performance data (memory, CPU, network utilization) User actions (navigation, channel change etc.) STB actions in response to user activity (when actual IGMP join occurred, what came in response to RTSP request, what are the errors etc.) Allows us to do network wide metrics analysis (e.g. actual mean channel change time across the entire subs base) Does the correlation between the infrastructure and the data collected from STB’s – can visualize what CO’s are prone to service degradation and why Does a variety of historical reports, including proactive suggestions (e.g. jeopardy of churn) Wednesday, October 21, 2009 A year ago we accidentally found a solution weʼve been thinking of, to monitor things even further. The problem with our service model is we have lots of trouble tickets related to the quality of the ADSL line and/or CPE or set-top-box configuration. The solution is actually simple - we now monitor set-top-boxes too! A tiny piece of software inside all of our set-top-boxes gather all the necessary data to analyze. What is important, the data is available to us in the real-time. We can select any of our STBʼs and check if there is anything wrong with it. Or was. We can now do fast correlation between what we get from the other monitoring tools and what we see from the subscribers side. 
Basically, we now have complete picture, seeing the network as the operator and also as one big subscriber as well. A flexible reporting is available of course. We can now do day-to-day analysis of the things like overall service quality, channel popularity and most important quality indicators as well.
  • 15.
    Set-top-box monitoring Live channel status allows to quickly evaluate how well STB’s in the field receive multicast/VOD Network history tracks outages at CO’s Helps to correlate monitoring data Intuitive to use Wednesday, October 21, 2009 A couple of screenshots to illustrate how it looks like to check the live channel status and outages tracking.
  • 16.
    Set-top-box monitoring STB summary page allows to watch a particular STB state/behaviour in time KPI statistics provide useful information on indicators like channel change times and similar figures Wednesday, October 21, 2009 This ones are all about drilling down to the gory details of set-top-box statistics and also, overall kpi statistics.
  • 17.
    Set-top-box monitoring Event logs allow to check exactly what an STB was doing Proven reporting mechanisms provide historical overview and easily read status information Reports help manage content too! Easily integrated with the existing solutions (OSS/BSS and middleware) Wednesday, October 21, 2009 We can analyze it more thorougly with the detailed log of what was going on inside an STB. Like when it sent IGMP join, when the stream started to flow to be decoded and all the things like that. The reports are based on an industry standard tools and are clearly structured and ready to use.
  • 18.
    IPTV service performance– middleware Transition from a basic web-based application to a feature-rich UI client Dramatic boost of interface performance Ensures great usability even with the navigation pane minimized Provides the same experience on legacy and new STB’s Does bells and whistles like PiP (QCIF) even on legacy STB’s Wednesday, October 21, 2009 Last but not least come the middleware enhancements we did in the past few years. In the beginning we had only a basic web-based client in the set-top-box and in two years it became really obsolete. We opted to change and in close cooperation with a local company a new middleware appeared. It is actually the solution we needed. It changed the usability and interface performance so drastically we were really afraid it will be a shock to our subscribers with the migration. Fortunately it wasnʼt. Of course we got complaints, but then again, an operator always gets some. What was really surprising we managed to migrate around 150,000 of STBʼs in two months period completely transparently. Now with the new middleware in place weʼre free to introduce new services (which we already do) and the subscribers really experience a better quality because their navigation is now extremely fast, they have more intuitive interface, they have lots of more things to come out of their TV screens too. Well, thatʼs about all I had to say today. To summarize: QoE becomes really important these days and lots of tools should be utilized wisely. Often, revamping of the infrastructure is unavoidable too. Many thanks to the audience for the attention to this presentation. Iʼm ready for the questions too if any of you have them. Iʼm generally reachable via e-mail and occasionally I visit Europe too.
  • 19.
    Products and partners 7 Wednesday, October 21, 2009 Here are the logos of our partners as the courtesy to them being most influential and helpful.