
Consumer communication paradigm shifts

IEEE Communications Magazine • November 2012 • 0163-6804/12/$25.00 © 2012 IEEE

TOPICS IN CONSUMER COMMUNICATIONS AND NETWORKING

The Need for Inter-Destination Synchronization for Emerging Social Interactive Multimedia Applications

Fernando Boronat and Mario Montagud, Universitat Politècnica de València (UPV)
Hans M. Stokking and Omar Niamut, TNO

ABSTRACT

Currently, the media consumption paradigm is changing from a single end user to a group shared experience. Now, social communication opportunities may be exploited (e.g., conferencing while watching television), facing many technological (e.g., synchronization, universal session handling, scalability) and perceptual (e.g., presence awareness, QoE) challenges. This article focuses on one of these major challenges in new emerging social interactive multimedia applications: the synchronization of different media streams across multiple locations, known as inter-destination multimedia synchronization (IDMS). We describe the three kinds of temporal multimedia synchronization, and summarize some related work and examples of applications in which IDMS is needed. The article includes a discussion of an RTP/RTCP-based IDMS solution the authors have been working on, as well as the standardization status regarding this kind of synchronization.

INTRODUCTION

The future media Internet will carry high-quality multimedia content, integrating social communications with other applications. There will be cross-domain shared experiences, where groups of consumers will interact and share services independent of their location. Currently, multimedia-based services are designed for one specific device and network, but very soon friends and family in different domains will be able to converse and interact while watching video content together. The paradigm is changing from single end-user media consumption to a group shared experience. In this new model, social communication opportunities may be exploited (e.g., conferencing while watching television), facing many challenges, such as shared experience modeling, universal session handling, synchronization, quality of service (QoS), scalability, noise reduction, presence awareness, design guidelines, privacy concerns, and social networking integration [1].

This article focuses on one of these challenges in new emerging social multimedia applications: the synchronization of different media streams across multiple locations, known as inter-destination multimedia synchronization (IDMS). It presents a discussion of the evolution and technical aspects of our own RTP/RTCP-based IDMS solution, as well as its current standardization process.

IDMS DEFINITION, EXAMPLES, AND CHALLENGES

IDMS DEFINITION

There are three kinds of temporal multimedia synchronization techniques: intra-stream, inter-stream, and IDMS [2]. Figure 1 shows an example of each kind, in which a group of distributed receivers is playing video, audio, and scene information (e.g., subtitles, chat, or advertisements) streams simultaneously, all of them from the transmission of an online football match.

Intra-stream synchronization deals with the maintenance, during playout, of the temporal relationships between the media units (MUs) within each media stream. In Fig. 1, we can observe a proper and continuous playout process of each media stream in all the receivers (e.g., the video sequence showing the ball crossing the field). Inter-stream synchronization refers to the preservation of the temporal dependences between the playout processes of the different media streams involved in the application (e.g., the synchronization of the audio, video, and data streams in Fig. 1). These two kinds of synchronization are usually implemented in typical multimedia applications. IDMS, however, is essential in new emerging social multimedia applications. It involves the simultaneous synchronization of one or more playout processes of one or several media streams at dispersed receivers. For instance, it can be noticed in Fig. 1 how, at any moment during the session, all the receivers are playing the same MUs of each media stream synchronously. In this article we focus on IDMS.

EXAMPLES OF APPLICATIONS IN WHICH IDMS IS NEEDED

Nowadays, we can find a large number of multimedia applications in which the lack of IDMS may affect the user's quality of experience (QoE) in many different ways [3, 4]. Here, we highlight three representative use cases:

1) Synchronous e-learning. An instructor distributes a multimedia lesson to a group of distributed students (attending it from different locations), and he/she can occasionally make comments and ask questions about its content. Hence, it is important that a multimedia question is rendered almost simultaneously at each of the students and, as a result, has a fair chance of being answered.

2) Networked real-time multiplayer games. Multiple players usually collaborate with each other and fight against other multiple players. If distributed players do not perceive a consistent evolution of the game state, the fairness or efficiency of the collaborative work can be seriously spoiled.

3) Social TV. Disjoint groups of viewers can interact and share services within the context of simultaneous media content consumption, by using instant chat messaging, audio/video conferencing services, or any other sort of shared experience that has yet to appear [1]. An example is the Watching Apart Together case, as when various groups of friends are watching a live online football match at separate fixed or mobile locations (Fig. 1); in an extreme case, some other friends could be watching the match live at the stadium and communicating with them using their smartphones. In such cases, significant events (e.g., goals) should be perceived by all the users almost simultaneously (IDMS), in all involved time-dependent media streams (inter-stream synchronization), to guarantee a pleasant shared experience.
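As an illustration only (not taken from the article), the three kinds of synchronization discussed above can be phrased as simple checks over playout timestamps. All function names and tolerance values below are assumptions for the sketch; the 1 s IDMS tolerance echoes the perceptibility figure cited later in this article.

```python
# Sketch: the three kinds of temporal multimedia synchronization expressed
# as checks over playout timestamps (seconds). Illustrative only.

def intra_stream_ok(playout_times, period, jitter_tol=0.010):
    """Intra-stream: consecutive media units of one stream keep their
    nominal spacing (`period` seconds) within a small jitter tolerance."""
    gaps = [b - a for a, b in zip(playout_times, playout_times[1:])]
    return all(abs(g - period) <= jitter_tol for g in gaps)

def inter_stream_ok(stream_playouts, skew_tol=0.080):
    """Inter-stream: at one receiver, the playout points of related streams
    (audio/video/data) stay within a small skew of each other."""
    return max(stream_playouts) - min(stream_playouts) <= skew_tol

def idms_ok(receiver_playouts, async_tol=1.0):
    """IDMS: across receivers, the playout points of the *same* stream stay
    within a tolerable asynchrony (about 1 s per the studies cited here)."""
    return max(receiver_playouts) - min(receiver_playouts) <= async_tol
```

In this phrasing, the first two checks are local to one receiver, while the IDMS check needs playout reports gathered from all receivers, which is exactly what the signaling schemes discussed below provide.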
Without such simultaneity, the involved interaction patterns would be broken.

Some platform (e.g., the one in [5]) will be needed to involve all the friends watching the match, creating an ad hoc group in a cross-domain session through which media and interactions can be shared, synchronized, adapted, recorded, played back, and analyzed (with the users' consent). Once the match begins, the friends could talk to each other and discuss the match, including watching each other. Friends at the stadium could send videos to give remote friends a view of the match from the spectators' point of view, whereas friends at home could also send the recorded TV edited highlights (e.g., to clarify offside situations). More friends could join the shared session late (e.g., those travelling by train, using a cellular network, as in Fig. 1).

IDMS CHALLENGES

As far as we know, the exact ranges of asynchrony levels tolerated by users in specific IDMS use cases have not been sufficiently determined yet. Here, we present some conclusions from preliminary studies, but we consider that they must be followed up with more exhaustive subjective assessments in the future. In [3], it is concluded that the requirements on inter-destination content synchronicity may vary between 15 and 500 ms, depending on the type of interactive service offered. More recently, in [6] it is concluded that delay differences of up to 1 s might not be perceptible by users in a distributed video watching scenario while they communicate with each other (text and voice chat), but playout differences above 2 s become really annoying for most users. However, those differences can be orders of magnitude larger in practical content distribution networks [3, 6], mainly due to various factors, some related to either the distribution network or the user equipment's features, such as capturing, coding, packetization, distribution (traffic load, transcoding or format conversion, fragmentation and reassembly of packets, multicast or dynamic routing strategies, improper queuing policies, etc.), processing, depacketization, decoding, buffering, rendering, and presentation delays, or packet losses. These factors can seriously disturb the original media timing and result in different (and time-variant) end-to-end (or playout) delays when multicasting media content from a media server to one or multiple receivers (Fig. 2).

Figure 1. Inter-destination multimedia synchronization (IDMS). [Figure: a content provider multicasts video, audio, and data streams over IP broadcast and mobile networks to N receivers (office, home, train); each receiver's playout controller aligns a "goal" event at the same initial playout instant t1, despite differing network delays of, e.g., 10 ms, 50 ms, and 200 ms.]

Hence, we conclude that existing distribution technologies do not handle the IDMS problem in an optimal way. Delay variability becomes a problematic barrier when interaction between the user and the media content, or between different users in the context of specific content, is needed. It may prevent the inclusion of advanced forms of interactivity in group shared media experiences. Thus, additional adaptive IDMS solutions must be provided.

IDMS RELATED WORK AND COMMERCIAL APPLICATIONS

RELATED WORK

A lot of research on intra-stream and inter-stream synchronization can be found, but not on IDMS. A comparative survey of intra-stream synchronization techniques was presented in [7], whilst the most recent inter-stream synchronization and IDMS techniques were compiled in [2].

Regarding IDMS, three main signaling schemes can be identified: two centralized ones (the master/slave or M/S scheme, and the synchronization maestro scheme or SMS) and a distributed one (the distributed control scheme or DCS).

In the M/S scheme (e.g., [1]), receivers are classified into a master receiver and slave receivers (the rest). Only the master sends feedback information about its playout timing, and all slave receivers adjust their playout processes accordingly.
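A minimal sketch of the M/S adjustment just described, under assumed names and an assumed 80 ms tolerance (neither is prescribed by the scheme):

```python
# Sketch of the master/slave (M/S) IDMS scheme: only the master reports its
# playout point; each slave computes its offset from the master and corrects
# its own playout. Names and the tolerance value are illustrative.

def slave_adjustment(master_playout, slave_playout, tol=0.080):
    """Return the playout correction (seconds) a slave should apply.
    Positive: the slave is ahead of the master and should pause/slow down;
    negative: it lags and should skip/speed up. Within `tol`, do nothing."""
    offset = slave_playout - master_playout
    return 0.0 if abs(offset) <= tol else offset
```

Note that this is where the scheme's weakness shows: if the master fails or is itself badly delayed, every slave blindly follows it.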
SMS (e.g., [8]) is based on the existence of a synchronization maestro (the source or one receiver), which gathers information on the playout processes from all the receivers and, if needed, distributes new adapted control packets to make them enforce IDMS adjustments. In DCS (e.g., [1]), all the distributed receivers multicast IDMS reports. Accordingly, each receiver locally decides the IDMS reference from among its own playout timing and those of the others.

Each of the control schemes has its strengths and weaknesses. On one hand, centralized schemes can preserve the causality and consistency between the states of the receivers more easily than distributed schemes. They also outperform the latter in terms of security, and introduce lower traffic overhead. On the other hand, centralized schemes have larger network delays (in the case of SMS), lower robustness, and poorer flexibility and scalability.

Interested readers are referred to [2], in which the authors presented an exhaustive taxonomy of IDMS solutions adopting those control schemes.

EXISTING APPLICATIONS WHERE IDMS IS EMPLOYED

Some studies have been devised in order to design appropriate synchronization techniques for keeping consistency in existing (some commercial) collaborative virtual environments (CVEs) and multiplayer online games (MOGs). In [9], a synthesis of the architectures and mechanisms used by some CVE systems is presented. Also, the works in [10, 11] provide an overview of the architectures and synchronization algorithms adopted by some commercial MOGs. In these works, the surveyed consistency maintenance algorithms are classified as either conservative (e.g., lockstep synchronization, bucket synchronization, local lag, rollback, interactivity loss avoidance) or optimistic (e.g., time warp, trailing state synchronization, input broadcasting, dead reckoning, operational transform) algorithms,¹ based on how they deal with possible conflicts and the corresponding corrections.
On one hand, conservative algorithms solve the synchronization problem by preventing misorderings outright, allowing the processing of events only when it is consistency-safe to do so (e.g., by waiting for all possible events to be received by all the users). On the other hand, optimistic algorithms employ mechanisms to detect and correct probable conflicts, processing events optimistically before knowing for sure that no earlier events can arrive, and then repairing inconsistencies. The latter are far better suited to fast-paced games, where interactivity and responsiveness are essential.

Besides, various applications for remotely watching media together employ some sort of IDMS in their service offering (e.g., Synchtube,² Nefsis,³ VLC⁴).

Synchtube enables multiple users to join a virtual room in which they can interact while watching a YouTube video together, offering bidirectional synchronization of the video player (i.e., when video playout is paused by one user, the video players of all other users in the same room are also paused). Start/pause/stop signaling is used to resynchronize player playout by resetting the start position of each player. The level of video synchronization achieved among clients is accurate on the order of 1 s, while user interaction is near instantaneous.

¹ References to those algorithms can be found in [9–11].
² Synchtube: http://www.synchtube.com/. Last accessed 2012-03-31.
³ Nefsis: http://www.nefsis.com/. Last accessed 2012-03-31.
⁴ VLC media player: http://www.videolan.org/vlc. Last accessed 2012-03-31.

Figure 2. End-to-end delay variability: need for IDMS. [Figure: for each of three users receiving stream A, the end-to-end playout delay accumulates sender delay (capturing, coding, packetizing, encapsulating), distribution delay (network), and client delay (decapsulating, decoding, buffering, rendering, presentation), yielding delay differences among users.]

Nefsis uses cloud offloading to synchronize media playout at different locations during a teleconferencing session. One client starts to play a file, which is then sent to a virtual server in the cloud, which shares it with all clients in the session. The session initiator is then able to control playout of the media file at every client, synchronizing playout at start/pause/stop events. In either case, it is unclear which technology or underlying synchronization technique is applied, since these are proprietary solutions under intellectual property protection.

A third example is VLC (an open source solution), which employs an M/S scheme to synchronize playout at multiple player instances. One VLC player is designated as a server (master), which broadcasts a synchronization clock signal with which all clients (slaves) can synchronize. Clients try to keep in sync with the server using the clock signal, while start/pause/stop events are used for explicit synchronization. However, this VLC functionality, called netsync, does not compensate for the effects of delay variability or jitter at each distributed client.⁵

Some of the previous solutions rely on synchronizing certain control events, such as play/pause/stop/seek commands (i.e., no continuous media stream synchronization is performed). In some others, the approach is to estimate the worst network delay and then enforce this delay at all the receivers to prevent inconsistencies. But apart from compensating for the network delay variability (e.g., by adjusting the jitter buffer), additional techniques should be introduced to equalize the varying playout delays (e.g., decoding, post-processing, rendering) that result from the use of heterogeneous devices by distributed consumers.⁶ Also, most of the existing IDMS solutions define new proprietary protocols, with specific control messages, that could increase the network load.

Additional commercial solutions for distributed media watching exist ([1, 6]), e.g., Yahoo!'s Zync,⁷ ClipSync,⁸ or Watchitoo,⁹ but they also still use rudimentary IDMS solutions like the above ones.

Thus, the conclusion is that further research in synchronous shared media is needed to provide users with virtual watching and living rooms, where they can naturally communicate, interact, and collaborate as they would if they were in the same location, despite the heterogeneity in delays and devices.

IDMS STANDARDIZATION

Our goal was to design an open, standard, and interoperable IDMS solution that actively and continuously synchronizes media streams at the packet level but, at the same time, tackles the IDMS problem above the transport layer, as close as possible to the "playout point" (Fig. 3). This way, an overall synchronization status within accurate bounds can be achieved, since multimedia synchronization is an "end-to-end" challenge (from media capture/retrieval at the sender side to media presentation at the receiver side).

Standardization of IDMS would fill a gap in multi-device media synchronization, thus supporting technology for hybrid media device environments and avoiding the proliferation of proprietary and incompatible solutions. It will help the uptake of implementations, ensuring a more widespread use of IDMS in practice.

This section explains the rationale for using the Real-Time Transport Protocol/RTP Control Protocol (RTP/RTCP) for IDMS and the way RTCP can be extended to achieve this goal.
Besides, the background, evolution, and technical aspects (architectures and protocols) of our RTP/RTCP-based IDMS solution as standardized by the European Telecommunications Standards Institute (ETSI) TISPAN and the Internet Engineering Task Force (IETF) are outlined.

RATIONALE FOR USING RTP/RTCP FOR IDMS

Nowadays, RTP and RTCP (RFC 3550) are extensively used in interactive streaming services such as VoIP, VoD, and IPTV. The timestamps, sequence numbers, and payload type identification provided by RTP packets are useful for reconstructing the original media timing, and for reordering and detecting packet losses at the client side (intra-stream synchronization). Moreover, the reporting features provided by RTCP are useful to obtain quality feedback about RTP data delivery. First, service providers can use the QoS metrics included in RTCP reports, such as delay and loss rate, for troubleshooting and fault tolerance management. Second, the timing information and source identification parameters provided by each RTCP Sender Report (SR) and Source Description (SDES) report, respectively, are useful to allow inter-stream synchronization.

Another often used standardized solution for intra-stream and inter-stream synchronization is based on the MPEG Transport Stream (TS). The former is achieved by playing all media data at their proper timing, indicated by the presentation timestamp, while the latter is achieved by using the same timestamps for each separate elementary stream in the media container, and aligning them at the receiver side.

Figure 3. RTP/RTCP-based IDMS solution in the TCP/IP protocol stack. [Figure: sender and receiver stacks (sub-network, IP, transport with TCP/UDP, application); RTP data and RTCP control run above the transport layer; RTCP SRs carrying IDMS information flow from sender to receiver, and RTCP RRs plus XR blocks for IDMS flow back, spanning the end-to-end delay.]

Both the RTP/RTCP and MPEG-TS standardized (intra- and inter-) stream synchronization solutions operate at the media packet level, as the problem of synchronizing media streams should be solved at the level of the stream.

Moreover, RFC 3550 allows further modifications and/or additions to RTP/RTCP (e.g., by defining new header extensions, report blocks, or packet types) to include profile-specific information required by particular applications. The guidelines are in RFC 5968. Likewise, RFC 3611 allows the definition of new RTCP eXtended Report (XR) blocks for exchanging additional QoS metrics. As IDMS involves the collection, summarizing, and distribution of RTP packet arrival and playout timings, and this information can be considered a QoS metric (it can reflect the effect of jitter, network load, packet losses, clock skews/drifts, presentation skews, CPU overload, etc.), RTCP becomes a promising alternative for carrying out IDMS.

Another advantage is that, using RTP/RTCP, the optimum transmission rate for the feedback control messages does not need to be computed (as required by most of the existing IDMS solutions [2]), since RTCP feedback reports are exchanged regularly, and the report interval is dynamically adjusted according to the number of active senders and receivers and to the allocated session bandwidth (RFC 3550). As well, the total amount of control traffic added by RTCP is limited to a maximum of 5 percent of the RTP session bandwidth (RFC 3550), so the traffic overhead added by an IDMS solution based on the RTCP capabilities will be controlled and not very high.

⁵ A more extended overview of the state of the art in IDMS applications for shared video watching can be found in "ANALYSIS: Cloud-Based Services and Service/Content Synchronization," Deliverable 4.1 of Next-Generation Hybrid Broadcast Broadband (HBB-Next Project, FP7-ICT-2011-7), March 2012, http://www.hbb-next.eu/documents/HBB-NEXT_D4.1.pdf
⁶ Even when different users employ the same kind of terminal, other time-variant factors (e.g., clock skews and drifts, CPU overload, losses, post-processing, and rendering delays) can differently disrupt the original media timing at each client.
⁷ http://sandbox.yahoo.com/Zync
⁸ http://www.clipsync.com/
⁹ http://watchitoo.com/
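The scaling behavior just described can be sketched numerically. The function below is a simplified version of the RFC 3550 interval rule (omitting the standard's randomization and its sender/receiver bandwidth split), with the 5 percent cap and the 5 s minimum taken from the text above:

```python
# Simplified sketch of the RFC 3550 RTCP report interval: RTCP traffic is
# capped at a fraction (default 5%) of the session bandwidth, so the
# interval between a member's reports grows with the group size.

def rtcp_interval(members, avg_rtcp_size_bytes, session_bw_bps,
                  rtcp_fraction=0.05, minimum=5.0):
    """Seconds between RTCP reports from one member (no randomization)."""
    rtcp_bw_bps = rtcp_fraction * session_bw_bps
    interval = members * (avg_rtcp_size_bytes * 8) / rtcp_bw_bps
    return max(interval, minimum)
```

For example, 100 members sending 120-byte reports in a 1 Mb/s session stay at the 5 s floor, while 1000 members in a 64 kb/s session stretch to a 300 s interval: the overhead stays bounded, at the cost of less frequent IDMS feedback in large groups.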
Finally, the widespread use of those protocols in actual distributed multimedia applications would facilitate the implementation and deployment of an RTP/RTCP-based IDMS solution.

During the ETSI standardization process, the appropriateness of different protocols for use in the IDMS solution was assessed. The Session Initiation Protocol (SIP), the Real Time Streaming Protocol (RTSP), Diameter/H.248, and RTCP were initially considered as candidate protocols for IDMS. SIP INFO can be used to carry IDMS messages, but it is only suitable when receivers do indeed support SIP. In the case of synchronization of streams by network entities (network-based IDMS), this is clearly not so: SIP is not supported by the network elements transporting the actual media streams. RTSP could also be extended with synchronization parameters. However, although RTSP is a typical protocol for video on demand (VoD) services, it is not often used for IPTV multicast services, so an RTSP-based solution alone would not be sufficient. Also, similar to SIP, RTSP is not supported by the network elements. Diameter and H.248 are protocols used in next generation networks (NGNs) that link the service plane to the transport plane. H.248 especially can easily be extended with synchronization information. However, using these protocols has the downside that they are only applicable to in-network synchronization, because they are not supported by end terminals. As mentioned above, RTCP offers many advantages for IDMS. The only downside is that it implies using RTP as the media transport protocol. However, RTP is already advocated in current IPTV solutions because of its use by other mechanisms such as retransmission and forward error correction (FEC). Also, RTP mechanisms are very useful for intra- and inter-stream synchronization. Thus, both the ETSI and IETF standardized solutions for IDMS are RTP/RTCP-based.

Figure 4. ETSI TISPAN functional entities and reference points in the IMS-based IPTV architecture [13]. [Figure: the user equipment (UE) connects through reference points (Gm, Ut, Xa, Xc, Xd, Dj, y2, Sh, Ss', Cx, Xp, ISC) to the core IMS, the service discovery and selection functions (SDF, SSF), the service control function (SCF), the user profile server function (UPSF), the media control and delivery functions (MCF, MDF), and the transport processing function (ECF/EFF), using SIP/SDP, RTP/RTCP, Diameter, RTSP, HTTP, IGMP/MLD, and DVBSTP or FLUTE on the respective reference points.]

BACKGROUND: PRELIMINARY RTP/RTCP-BASED IDMS SOLUTION

Our preliminary IDMS approach [8] extended the RTCP receiver reports (RRs) in order to include the playout point of each receiver (the sequence number of the MU being played out and its playout time). It also defined new RTCP application-defined (APP) packets to estimate network delays and exchange playout setting instructions. Its application in actual system implementations does, however, present some drawbacks:
• Since RTCP RR extensions would include "profile-specific" information, even when accordingly signaled, they would break the operation of, and cause inconsistencies in, most RTP end systems and middleboxes, lowering the chances of successful deployment.
• APP packets should be used for application- (i.e., vendor-) specific extensions, not for standardized solutions.
Thus, the definition of new RTCP packet types or reports (presented later) seems more appropriate to avoid the above compatibility and interoperability constraints.

ETSI TISPAN PROPOSAL

The ETSI Technical Body for Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN) is a major European-based standardization organization with significant operator involvement. It mainly considers the standardization of NGN and its associated services.
Whereas the first TISPAN IPTV standards mainly focused on regular TV services, the recent TISPAN Release 3 standards contain a series of specifications for advanced large-scale IPTV services, including personalization, Social TV, and IDMS features. The TISPAN IMS-based IPTV specification describes use cases, requirements, architectures, and protocol solutions, and it is based on both SIP (the NGN-based IPTV architecture) and the Hyper-Text Transfer Protocol, HTTP (the Integrated IPTV subsystem), both containing the IDMS functionality. In this section, we reflect on the main topics from each of these parts, but focus on the IMS-based IPTV specification with session control based on SIP (the HTTP-based solution is in many aspects quite similar).

Use Cases and Solution — Reference [12] contains the service layer requirements and includes a variety of advanced IPTV use cases (e.g., Watching Apart Together and remote game show participation). The IDMS solution in ETSI TISPAN shares some properties with other recent application-layer services for IPTV, such as solutions for retransmission or FEC. For example, similar to the recent DVB retransmission solution, IDMS can be implemented as an add-on in an existing IPTV deployment.

Architecture — ETSI describes the architecture for IMS-based IPTV services in [13] (Fig. 4). The ETSI TISPAN IDMS mechanism uses the concept of synchronization sessions, which requires the introduction of two new functional entities (one Media Synchronization Application Server, MSAS, and multiple Synchronization Clients, SCs) and one new Sync reference point between the MSAS and the SCs (Fig. 5a).

SCs report media arrival or presentation times (IDMS timing information) to the MSAS. The MSAS collects the reports from the SCs, calculates IDMS settings instructions, and sends them back to the SCs to enforce IDMS adjustments. A requirement for SCs is that they are clock-synchronized (e.g., by using the Network Time Protocol [NTP]).
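As one illustration of this SC-to-MSAS reporting, the sketch below models a report and one possible aggregation at the MSAS. The field names are invented for the sketch (not the ETSI TISPAN wire format), and the "offset from the earliest presenter" policy is only one choice, since the specification leaves the settings-instruction algorithm to implementations:

```python
# Sketch: IDMS timing reports from NTP-synchronized SCs, and a simple
# MSAS-side aggregation. Field names and the policy are illustrative.
from dataclasses import dataclass

@dataclass
class IdmsReport:
    sc_id: int               # which synchronization client sent the report
    sync_group_id: int       # synchronization group the SC belongs to
    rtp_timestamp: int       # RTP timestamp of the media unit reported on
    ntp_presentation: float  # NTP wall-clock time at which it was presented

def presentation_offsets(reports):
    """Given reports for the *same* RTP timestamp from several SCs, return
    each SC's presentation lag behind the earliest presenter, from which an
    MSAS could derive settings instructions (algorithm is vendor-specific)."""
    earliest = min(r.ntp_presentation for r in reports)
    return {r.sc_id: r.ntp_presentation - earliest for r in reports}
```

The NTP requirement is what makes the reports comparable at all: without a common wall clock, a presentation time measured at one SC says nothing about another SC's playout point.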
Algorithms to calculate the synchronization settings instructions from the collected IDMS reports are not specified but left to vendor-specific implementations.

ETSI TISPAN considers various mappings of the IDMS functional architecture onto the entities in the IPTV architecture. This is one of the large advantages of defining a functional architecture: various implementations can be facilitated by a single specification. In one mapping, aimed at small-scale deployments, the MSAS is implemented in the network and the SC is located in the user equipment (UE), as a functional entity separated from the media distribution function (MDF) (terminal-based IDMS). For synchronization using a direct communication channel between multiple UE units, the MSAS is collocated with the SC in a UE unit. This allows peer-to-peer exchange of IDMS messages. In another mapping, aimed at large-scale deployment of media synchronization, the SC is located within an edge node of the transport network (network-based IDMS), for example, a digital subscriber line access multiplexer (DSLAM) or cable modem termination system (CMTS). Furthermore, at a higher level (e.g., in the core network), an MSAS must be used to control the IDMS timing of the SCs. In such a case, by synchronizing streams in the network, but close to the UE, a rough form of IDMS is achieved. The SCs are selected such that further downstream delays are considered acceptable for IDMS (i.e., any delay differences introduced from the edge node to the UE, e.g., by buffer settings, cannot be controlled by the IDMS approach). In IPTV deployments this is manageable, since the IPTV operator controls the UE (set-top box) settings. Also, channel changing delays in a shared setting can be low, since the UE need not buffer the media stream for IDMS. In all mappings, the session-related part of the MSAS is part of the service control function (SCF), or exists as a dedicated IMS application server.

Figure 5. Functional entities and reference point for IDMS: a) ETSI TISPAN [14]; b) IETF ID [4]. [Figure: a) the MSAS and an SC communicate over the Sync reference point; b) an RTP sender hosting the MSAS exchanges RTCP SRs with IDMS information, and RTCP RRs plus XR blocks for IDMS, with an RTP receiver hosting the SC.]

ETSI TISPAN additionally specifies an IDMS solution for the modification or re-origination of streams, which may be the case when an IPTV implementation serves both HD and SD streams using transcoding. Additional measures are then required, such as placing additional media-stream-modifying SCs within the functional entities where media streams are modified. If any modifications cause changes in the (RTP) timestamps of the media, a media-stream-modifying SC can submit the correlation between incoming and outgoing RTP timestamps to the MSAS.

Protocols — The ETSI IDMS protocol is specified as a two-part solution [14]. The first part considers the setup and teardown of a synchronization session, piggy-backed on the existing media session setup. When SCs are located within edge nodes, the SCs need to be configured beforehand with regard to IDMS (since they are not involved in the media session). When SCs are located in the UEs, these sessions are set up using SIP and the Session Description Protocol (SDP), via the Gm and ISC reference points (Fig. 4), for broadcast, or using a combination of SIP and RTSP, also with SDP, for content on demand. The synchronization session information is contained in the SDP media description, which contains the following items:
• The MSAS's address, allocated by the SCF. Also, various MSASs could be hierarchically or otherwise coupled to allow SCs to use a different MSAS.
• A SyncGroupId, identifying the synchronization group, either allocated by the SCF or indicated by the UE.
• In the case of content on demand, the media stream SSRC (Synchronization Source), used to correlate various RTCP messages.

From the viewpoint of an end user, the synchronization session can be ended in two ways: by ending the entire media session, obviously including any synchronization part, or by reverting the synchronized media session to a regular media session. Reverting to a regular media session is done by using the SIP re-INVITE mechanism, with which session maintenance is carried out using SIP. In that case, a SIP re-INVITE is sent containing an exact duplicate of the session description, but omitting the synchronization parameters. Using SIP in this way allows for flexible setup of synchronization sessions, not only for content-on-demand services but also for broadcast services such as linear TV (e.g., various groups of viewers sharing a television experience can co-exist, even for the same television broadcast).

After the configuration of network elements or the synchronization session setup for UEs, IDMS messages can be exchanged between the SCs and the MSAS. Although the ETSI TISPAN specification for IDMS supports the use of MPEG TS directly on top of UDP, since RTCP has been chosen as the control protocol for IDMS, RTP is required as the media transport protocol.

A new RTCP XR block type has been specified for the purpose of synchronization (Fig.
6a).An IANA registration has been performed basedon the ETSI TISPAN specifications, making theRTCP XR block available to a wider community.This new block is composed of the defaultXR headers (RFC 3611), followed by the specif-ic parameters. It contains the SyncGroupID inthe Media Stream Correlation Identifier field, themedia source SSRC, an RTP timestamp belongingto the RTP packet to which the report refers,the packet received time, and, optionally, thepacket presented time. Packet received time ismore accessible for receivers, but may be lessaccurate for IDMS, for example, because of dif-ferent jitter buffer settings or variable renderingdelays. In spite of the more difficult implementa-tion requirements, the use of packet presenta-tion time will enable high accuracysynchronization, but requires SCs to track pack-ets to their ultimate presentation time.For synchronization status information, theuse of this block is straightforward. For synchro-nization settings instructions, an XR block forIDMS should be interpreted as a status informa-tion report of the synchronization referencepoint (e.g. the one of the most lagged SC).IDMS requires all SCs in each group to matchthis reference point. The Synchronization PacketSender Type (SPST) field of this block must beused for indicating its originator.An SDP parameter, called sync-group, hasbeen specified for signalizing the use of the XRblock for IDMS, and for agreeing on the Sync-GroupId to which the SCs will belong.IETF INTERNET DRAFT ON IDMSEven though the earlier work within ETSIextended RTCP, it seemed more suitable to con-tinue this effort within the IETF Audio VideoTransport Core Working Group (AVTCoreWG) which is responsible for the standardiza-tion of RTP/RTCP. 
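As noted above, the algorithm by which an MSAS turns collected IDMS status reports into settings instructions is left to vendor-specific implementations. A minimal sketch of one obvious policy, making every SC match the most lagged receiver, is shown below; the function name and the simplified report format (one NTP presentation time per SC, all taken for the same reference RTP timestamp) are our own assumptions, not part of either specification.

```python
def idms_settings(reports):
    """Vendor-specific MSAS logic (sketch): given the NTP wall-clock time
    (in seconds) at which each SC presented the same reference RTP
    timestamp, return the extra playout delay each SC should apply so
    that all SCs in the group match the most lagged receiver."""
    reference = max(reports.values())  # latest presentation time = most lagged SC
    return {sc: reference - t for sc, t in reports.items()}

# Three SCs report presentation times for the same RTP timestamp:
print(idms_settings({"sc1": 100.0, "sc2": 100.5, "sc3": 100.25}))
# {'sc1': 0.5, 'sc2': 0.0, 'sc3': 0.25}
```

A real MSAS would additionally normalize reports taken on different RTP timestamps using the media clock rate, and the scheme presumes the SCs share a synchronized wall clock (e.g., via NTP).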
The ETSI proposal is a dedicated solution for use in large-scale IPTV deployments, but other services, such as Internet-based video services, may also benefit from IDMS, and other IDMS use cases requiring higher levels of synchronization are not supported by that solution. Thus, within the IETF, an Internet draft (ID) on IDMS has been proposed [4], using the ETSI specification as a starting point, but solving some open issues and providing additional features.

Use Cases — Apart from social TV, the ID also includes other important use cases. Examples are multiple devices within the same home (e.g., a television in the living room and one in the kitchen), an audio system containing multiple networked speakers to be synchronized, and a video wall consisting of various networked displays. These examples require far more accurate synchronization of media playout than social TV does. Thus, the goal in the IETF is a more generally applicable and more precise IDMS solution.

Architecture — Here, the architecture has been simplified. The ETSI solution is meant to be very scalable, and thus the synchronization functions have been specified as functions separated from the MDF and the UE. In the ID, the SC function is defined as part of an RTP receiver, and the MSAS function is defined as part of the RTP sender (Fig. 5b). Optionally, the MSAS can also be part of a receiver.

Protocol — The protocol in the ID reuses the ETSI-specified XR block for IDMS. RFC 5968 states that the only valid reason to create a new RTCP packet type is if the required functionality would not be appropriate as part of one of the current packet types. Within the IETF, a policy is maintained to use XR blocks only for monitoring and reporting purposes, not for control purposes. Therefore, for sending synchronization settings instructions to SCs, a new RTCP packet type has been defined, called the RTCP IDMS Settings packet (Fig. 6b). It contains mostly the same fields as the XR block. However, to achieve higher-accuracy synchronization, a 64-bit presentation timestamp parameter has been adopted instead of the 32-bit parameter in the ETSI block. This allows a higher level of granularity for use cases such as audio beamforming or networked video walls. A new SDP parameter, called rtcp-idms, has been specified in the ID for declaring the use of this IDMS Settings packet.

In addition, another SDP parameter, called clocksource, has been specified in [15], derived from the initial versions of the ID on IDMS. It allows SCs to declare whether they support clock synchronization, which clock sources they support, and which source was most recently used for synchronization. SCs can use this parameter to negotiate the use of a common or similar wall-clock source to meet the accuracy requirements. Currently, the defined sources are local (meaning no support for clock synchronization exists), NTP,

Figure 6. RTCP packets for IDMS: a) RTCP XR block for IDMS in both ETSI TISPAN [14] and IETF ID [4]; b) RTCP packet type for IDMS (IDMS Settings) in IETF ID [4].
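The timestamp formats in Fig. 6 explain the accuracy gap between the two packets: the ETSI XR block carries the packet presented time as the 32-bit "central word" of the 64-bit NTP format (the low 16 bits of the seconds field and the high 16 bits of the fraction field), whereas the IETF IDMS Settings packet carries a full 64-bit NTP timestamp. A small sketch, with helper names of our own, illustrates the resulting difference in granularity:

```python
def ntp64(seconds):
    """Encode a duration in seconds as a 64-bit NTP-format timestamp:
    32 bits of whole seconds plus 32 bits of binary fraction."""
    return int(round(seconds * 2**32)) & 0xFFFFFFFFFFFFFFFF

def central_word(ts64):
    """Middle 32 bits of a 64-bit NTP timestamp (16 bits of seconds,
    16 bits of fraction), as carried in the ETSI XR block."""
    return (ts64 >> 16) & 0xFFFFFFFF

# Smallest representable step of each format:
print(2**-16)  # central word: one fraction step is roughly 15.3 microseconds
print(2**-32)  # full 64-bit: one fraction step is roughly 0.23 nanoseconds
```

At 48 kHz audio, one sample lasts about 20.8 µs, so the roughly 15 µs step of the central word offers barely single-sample granularity, while the 64-bit format leaves ample headroom for sample-accurate use cases such as beamforming and networked video walls.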
GPS, GAL, and PTP complete the list of currently defined clock sources. This is an extendable list, so future clock synchronization technologies can be supported as well.

Finally, interoperability between the ETSI and IETF specifications is arranged for in the ID. The XR block for IDMS is the same in both specifications. Furthermore, if all SCs and the MSAS involved in an IDMS session support the new IETF-defined IDMS report, they must use it. SCs may still support the XR block for reporting IDMS settings, but only as a backward compatibility mechanism with ETSI. This prevents a real forking of the RTP/RTCP-based IDMS solution, and will help the industry adopt a single solution.

CONCLUSIONS

In this article we have focused on one of the major challenges ahead in new emerging social media sharing applications: IDMS. We have described some use cases in which IDMS becomes essential, and presented some related work as well as existing commercial solutions for IDMS.

The need for standardization of IDMS, to provide interoperable solutions for hybrid media device environments and to ensure more widespread use of IDMS in practice, has been emphasized. Accordingly, the rationale for using and extending RTP/RTCP for IDMS has been discussed, and the standardization process regarding our RTP/RTCP-based IDMS solution has been summarized.

Further research on distributed media synchronization will address technical and perceptual challenges arising from novel media streaming technologies, advanced media encoders, and emerging patterns in media consumption, in order to provide pleasant shared experiences to groups of consumers located in cross-domain scenarios and using heterogeneous devices.

ACKNOWLEDGMENTS

This work has been financed partially by Universitat Politècnica de València (UPV), under its R&D Support Program in PAID-11-02-331 Project and PAID-01-10, and by TNO, under its Future Internet Use Research & Innovation Program.

REFERENCES

[1] I. Vaishnavi et al., "From IPTV to Synchronous Shared Experiences Challenges in Design: Distributed Media Synchronization," Signal Processing: Image Communication, vol. 26, no. 7, Aug. 2011, pp. 370–77.
[2] F. Boronat, J. Lloret, and M. García, "Multimedia Group and Inter-Stream Synchronization Techniques: A Comparative Study," Information Systems, vol. 34, Mar. 2009, pp. 108–31.
[3] M. O. van Deventer et al., "Advanced Interactive Television Services Require Synchronization," IWSSIP '08, Bratislava, June 2008.
[4] R. van Brandenburg et al., "RTCP for Inter-Destination Media Synchronization," draft-ietf-avtcore-idms-06.txt, IETF working draft, July 2012.
[5] C. Hesselman et al., "Sharing Enriched Multimedia Experiences Across Heterogeneous Network Infrastructures," IEEE Commun. Mag., vol. 48, no. 6, June 2010, pp. 54–65.
[6] R. N. Mekuria, Inter-Destination Media Synchronization Social TV Experimentation TV Broadcasts, M.Sc. thesis, Delft University of Technology, Apr. 2011.
[7] N. Laoutaris and I. Stavrakakis, "Intra-Stream Synchronization for Continuous Media Streams: A Survey of Playout Schedulers," IEEE Network, vol. 16, no. 3, 2002, pp. 30–40.
[8] F. Boronat, J. C. Guerri, and J. Lloret, "An RTP/RTCP-Based Approach for Multimedia Group and Inter-Stream Synchronization," Multimedia Tools and Applications, vol. 40, no. 2, Nov. 2008, pp. 285–319.
[9] C. Fleury et al., "Architectures and Mechanisms to Efficiently Maintain Consistency in Collaborative Virtual Environments," Proc. IEEE VR Wksp. Software Eng. and Architectures for Realtime Interactive Sys., Waltham, MA, Mar. 2010.
[10] M. Roccetti, S. Ferretti, and C. Palazzi, "The Brave New World of Multiplayer Online Games: Synchronization Issues with Smart Solution," 11th IEEE Symp. Object-Oriented Real-Time Distributed Computing, May 2008, pp. 587–92.
[11] R. D. S. Fletcher, "Consistency Maintenance for Multi-Player Video Games," M.Sc. thesis, Queen's Univ., Canada, Jan. 2008.
[12] ETSI TS 181 016 V3.3.1 (2009-07), "Telecommunications and Internet Converged Services and Protocols for Advanced Networking (TISPAN); Service Layer Requirements to Integrate NGN Services and IPTV."
[13] ETSI TS 182 027 V3.5.1 (2011-03), "Telecommunications and Internet Converged Services and Protocols for Advanced Networking (TISPAN); IPTV Architecture; IPTV Functions Supported by the IMS Subsystem."
[14] ETSI TS 183 063 V3.5.2 (2011-03), "Telecommunications and Internet Converged Services and Protocols for Advanced Networking (TISPAN); IMS-based IPTV Stage 3 Specification."
[15] A. Williams, R. van Brandenburg, and K. Gross, "RTP Clock Source Signaling," draft-williams-avtcore-clksrc-00, IETF working draft, Feb. 2012.

BIOGRAPHIES

FERNANDO BORONAT [M'93] (fboronat@dcom.upv.es) is an assistant professor in the Communications Department at the Gandia Campus of the Polytechnic University of Valencia (UPV), Spain, where he studied telecommunications engineering. He obtained his Ph.D. degree in 2004, and his topics of interest are communication networks, multimedia systems, and multimedia synchronization protocols. He is involved in several TPCs and editorial boards of national and international conferences and journals, respectively.

MARIO MONTAGUD (mamontor@posgrado.upv.es) studied telecommunications engineering at UPV. He also has a Master's degree in communications technology, networks, and systems from UPV. Currently, he is researching computer networks, multimedia systems, multimedia synchronization protocols, and simulation techniques within a local research project from UPV to obtain his Ph.D. degree.

HANS STOKKING (hans.stokking@tno.nl) started his career over 12 years ago at KPN Research after graduating in systems engineering, policy analysis, and management from Delft University of Technology, where he developed architectures for modern telephone networks, with a focus on integration with the Internet.
He has continued this work at TNO and adjusted his focus to triple-play infrastructure and services, having a leading role in the development of combinational services integrating telephone, TV, and Internet services.

OMAR NIAMUT (omar.niamut@tno.nl) has been working as a research scientist and project manager at TNO since 2006. His expertise lies in IPTV networks, architectures, and services, with a focus on interactive services over next-generation/hybrid networks and immersive media. He has extensive international experience through standardization, conference lectures, and European projects. He has Master's and Ph.D. degrees in electrical engineering from Delft University of Technology, with a background in algorithms for universal audio coding.
