PHOENIX Deliverable D3.2b
PHOENIX
IST-2003-001812
DELIVERABLE D3.2B
REFINEMENT OF SPECIFICATION, INTERMEDIATE DESIGN AND
ANALYSIS OF TRANSPORT AND NETWORK LAYER PROTOCOLS AND
MECHANISMS
Contractual Date of Delivery to the CEC: 30.04.2005
Actual Date of Delivery to the CEC: 30.04.2005
Author(s): BUTE (editor)
Participant(s): BUTE, VTT, CEFRIEL, THALES, SIEMENS
Workpackage: WP3
Est. person months: 16.5
Security: PP
Nature: R
Version: 1.0
Total number of pages: 119
Abstract:
The purpose of this document is to provide the essential information for the specification and design of the transport and network
layer protocols and mechanisms deployed in the Phoenix project. The document describes the transport and network layer
functionalities in detail, including multicasting and Quality of Service issues, together with a brief survey of testing and simulation
experiments. It focuses on studying and optimizing existing network and transport layer protocols and methods,
as well as developing new mechanisms, in order to improve the QoS guarantees of the multimedia streams in the Phoenix
system. In Chapter 2 the various network protocols designed for layered video streams and multimedia are evaluated and
optimized so as to provide the information required and delivered by the JSCC controllers through the network and the network protocol
stack. Multicast protocols are also taken into consideration, together with IPv6 and its mobility support. Chapter
3 concentrates on the questions raised by providing Quality of Service in IP networks; it also introduces a proposal
for a light and consistent traffic measurement process. Chapter 4 presents the initial transport layer design, and
Chapter 5 covers testing and simulation issues regarding the transport protocols, mobility and multicast data
transmission.
Keyword list:
Transport layer protocols, Network layer protocols, IPv6, Mobility, Quality of Service, Multicasting, Simulation,
Testing
Page 1/120
EXECUTIVE SUMMARY....................................................................................................................................3
1 INTRODUCTION................................................................................................................................................4
2 NETWORK LAYER PROTOCOLS AND MECHANISMS...........................................................................5
3 QUALITY OF SERVICE..................................................................................................................................31
4 INITIAL TRANSPORT LAYER DESIGN.....................................................................................................84
5 SIMULATIONS AND TESTING.....................................................................................................................88
6 CONCLUSIONS...............................................................................................................................................113
INDEX OF FIGURES AND TABLES.............................................................................................................114
REFERENCES...................................................................................................................................................118
LIST OF ACRONYMS......................................................................................................................................120
Executive Summary
The main purpose of this document is to provide the essential information regarding the specification and design
issues of the transport and network layer protocols and mechanisms in the Phoenix project. The document describes the
transport and network layer functionalities, including multicasting and Quality of Service, and a survey of
testing and simulation methods, experiments and results. It focuses on studying and optimizing
existing network and transport layer protocols and methods as well as developing new mechanisms in order to
improve the QoS guarantees of the multimedia streams in the Phoenix system.
The first chapter gives a brief introduction to the parts and main elements of the document.
In the second chapter the various network protocols designed for layered video streams and multimedia are
evaluated and optimized in order to provide the information required and delivered by the JSCC controllers through
the network and the network protocol stack. This chapter gives a short review of the UDP, UDP-Lite, DCCP, DCCP-
Lite, SCTP, RTP and RTCP protocols, extended with new elements and results. Multicast protocols
are also taken into consideration, together with IPv6 and its mobility support. The basic concepts behind
the advantages of multicast distribution are briefly presented, as are the main principles of the multicast
group management and routing protocols chosen for use in the Phoenix project. A description of
the signaling information traveling back and forth between the source and the receivers then follows. This part also
considers the different adaptive video distribution schemes for the multicast scenario. Finally, the basics of
IPv6 (the main network layer protocol of the Phoenix system) are detailed, as well as its integrated mobility
extension, Mobile IPv6.
Chapter 3 concentrates on the questions raised by providing Quality of Service in IP networks. After a very
brief description of the QoS concept and of the analysis and results already achieved, Chapter 3 first provides
more simulation results and studies on the basic proposal of a dynamic WFQ, with a wider set of
working scenarios, design choices and configuration settings. A light and consistent measurement process is
then proposed and evaluated for different types of traffic aggregates, with the relevant figures and setup options. In particular,
the application of such a mechanism to the basic dynamic WFQ is studied and assessed against the already
deployed measurement processes, also with traffic aggregates variable in average rate and in the real-time
characteristics of their component flows. Finally, a more sophisticated and better performing dynamic WFQ scheduling
scheme is conceived and analyzed in detail as the final solution for the resulting Phoenix system.
Chapter 4 presents the initial transport layer design. The chapter starts with a short description of the transport layer
requirements of the Phoenix system. The most important facts about the UDP-Lite modules of the
Basic Chain simulator then follow, together with the details of the transport layer mechanism implementations,
namely the partial checksum, congestion control and PMTU discovery mechanisms.
Chapter 5 covers testing and simulation issues regarding the transport protocols, mobility and multicast
data transmission. The first part of the chapter describes the simulation models of UDP-Lite and DCCP.
The simulation results for the transport protocols are not presented here, since the performed simulations are highly
dependent on the link layer solution for the wireless radio interface presented in Deliverable 3.4b. The
second part of the chapter deals with the transport protocol testbed, which provides a platform for performing
protocol tests and measurements in an identical environment. The packet loss rate, throughput and packet jitter (i.e.
packet arrival time variation) were the main attributes, and they were compared using different BER values.
Theoretical packet loss probabilities were also used in the comparison. The results presented in this chapter were
used in the indicative theoretical analysis performed for UDP-Lite. A description of the mobility
testbed, created to examine and analyze the properties of different mobility methods in a real physical environment,
then follows, together with the measurement results of unicast and multicast multimedia streaming over Mobile
IPv6. Finally, the multicast testbed is detailed, with the results of the tests comparing
adaptive simulcast to other video distribution methods.
The last chapter (Chapter 6) draws the conclusions and outlines future work regarding the transport and network layer
issues of the overall system.
1 Introduction
The work in Task 3.2, IP Networking, concentrates on studying and optimizing existing network and transport
layer protocols as well as developing new mechanisms in order to improve the QoS guarantees of the multimedia
streams in the Phoenix system. The transmission protocols for layered video streams and multimedia are evaluated
and optimized in order to maintain and monitor the required end-to-end QoS level, and to provide the information
required and delivered by the JSCC controllers through the network and the network protocol stack.
At the transport layer, several different protocols, e.g. UDP, UDP-Lite, DCCP, DCCP-Lite, SCTP and RTP/RTCP,
will be studied in order to form a basis for the optimization of end-to-end transport protocols and for possible new
protocol extensions (such as an RTP payload format for SVC).
At the network layer, some multicast distribution models and different multicast group management schemes
will be considered, in order to bundle together receivers with similar QoS requirements. The effect of mobility in
the wireless link is also considered at the IP layer.
Furthermore, the simulation issues regarding transport layer modeling will be described, as well as the
examinations carried out in different transport and network protocol testbeds.
2 Network layer protocols and mechanisms
The main network layer protocol considered for the PHOENIX system in the first phase is IPv6. The
transport layer protocols include UDP and UDP-Lite; these protocols also form the basis of the
Basic Chain system. RTP and RTCP are to be used for streaming support on top of UDP and UDP-Lite, complemented
with multicasting features. Other promising candidates include DCCP, DCCP-Lite and SCTP. A basic
introduction to all of these protocols was already given in D3.2a. A short review of them follows here,
extended with some new elements and results.
2.1 Transport protocols: UDP, UDP-Lite, DCCP, DCCP-lite, SCTP, RTP & RTCP
2.1.1 UDP & UDP-Lite
UDP [6] provides a connectionless, uncontrolled-loss (i.e. best-effort), possibly-duplicating, unordered service. It adds
demultiplexing (port numbers) and an optional data integrity service (checksum) to IP. The checksum is optional with
IPv4 but mandatory with IPv6, because IPv6 does not have a checksum of its own. The UDP checksum always
includes the network layer pseudoheader, which differs slightly between IPv4 and IPv6. Disabling the UDP checksum
with IPv6 is not really an option, because damaged headers might then pass unverified and packets might be
misdelivered. UDP has neither error reporting nor error recovery mechanisms: damaged packets are simply
discarded. The data rate is set by the sending application; UDP has no congestion control.
UDP-Lite [7] introduces a partial checksum to UDP: if errors are detected in the sensitive part of the packet (the network layer
pseudoheader, the UDP-Lite header or the beginning of the packet payload), the packet is
discarded. If errors fall only in the insensitive part of the packet (the rest of the payload), the packet is not discarded. This enables e.g.
RTP to checksum its header but not its payload.
The partial checksum is implemented by changing the UDP Length field into a Coverage field, which specifies the
coverage of the checksum and is defined by the sending application on a per-packet basis. This change is
possible because there is some redundancy (UDP length field = IP length field - size of IP header). Setting
Coverage equal to or greater than the packet length turns UDP-Lite into traditional UDP.
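The coverage semantics just described can be sketched as follows. This is an illustrative sketch, not Phoenix code: the function name and the single-bit-error model are assumptions, and the handling of coverage values follows RFC 3828 (0 means "cover the whole packet", values below the 8-byte header are illegal).

```python
from typing import Optional

def udplite_accept(packet_len: int, coverage: int, error_offset: Optional[int]) -> bool:
    """Return True if a UDP-Lite packet is delivered to the application.

    coverage: checksum coverage in bytes (8 = header only; 0 = whole packet).
    error_offset: byte offset of a single bit error, or None if the packet is clean.
    """
    if coverage == 0 or coverage >= packet_len:
        coverage = packet_len          # behaves like traditional UDP
    elif coverage < 8:
        return False                   # illegal: coverage must at least span the header
    if error_offset is None:
        return True
    # Errors inside the covered (sensitive) part are detected -> discard.
    return error_offset >= coverage

# A damaged byte beyond the covered prefix is tolerated; one inside it is not:
print(udplite_accept(packet_len=100, coverage=20, error_offset=50))   # True
print(udplite_accept(packet_len=100, coverage=20, error_offset=10))   # False
```

With coverage 0 (or any value at least the packet length), every error is detected and the packet is dropped, exactly the traditional-UDP behavior the text describes.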
2.1.2 DCCP & DCCP-Lite
DCCP is designed for use with streaming media (a packet stream; the application is responsible for framing). It
provides an unreliable flow of datagrams with acknowledgements and with a reliable three-way handshake for
connection setup and teardown. However, there are no retransmission methods for datagrams; only options are
retransmitted, as required to make feature negotiation and acknowledgement information reliable. Feature
negotiation means that the endpoints can agree on the values of features or properties of the connection, for example
the congestion control mechanism to be used.
Two TCP-friendly congestion control mechanisms are currently available:
• TCP-like [9] (CCID 2) for flows that want to quickly take advantage of available bandwidth, and can
cope with quickly changing send rates, and
• TCP-Friendly Rate Control [10] (CCID 3) for flows that require a steadier send rate, such as streaming
applications.
DCCP has 10 different packet types. The DCCP-Request, Response and Ack packet sequence is used in
connection setup. After that, the data transmission phase is implemented with Data, Ack and possibly DataAck
packets. CloseReq, Close and Reset packets are used for connection teardown; a Reset packet can
also be used to close the connection abnormally. Sync and SyncAck packets are used to re-synchronize the sender
and the receiver, for example after a burst of losses. All DCCP packets begin with a generic DCCP packet header.
All DCCP packets may also contain options, which occupy space at the end of the header and are a multiple of 8
bits in length.
The partial checksum mechanism in DCCP always covers the whole header, including the options and the
network layer pseudoheader. The data payload can be protected as a whole or only in its first n*4 bytes (0≤n<15).
DCCP-Lite [14] is a simplified version of DCCP. Unfortunately, it does not support the partial checksum.
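The header-plus-first-n*4-bytes coverage rule above can be sketched as a small helper. The mapping from the 4-bit Checksum Coverage (CsCov) field follows RFC 4340; the function name is an assumption for illustration.

```python
def dccp_covered_payload_bytes(cscov: int, payload_len: int) -> int:
    """Bytes of payload included in the DCCP checksum for a given CsCov value.

    The header (with options and pseudoheader) is always covered; CsCov only
    controls the payload: 0 = whole packet, otherwise first (CsCov-1)*4 bytes.
    """
    if not 0 <= cscov <= 15:
        raise ValueError("CsCov is a 4-bit field")
    if cscov == 0:
        return payload_len                 # whole packet covered
    n = cscov - 1                          # CsCov 1..15 -> first n*4 bytes, 0 <= n < 15
    return min(n * 4, payload_len)

print(dccp_covered_payload_bytes(0, 1000))   # 1000 (full coverage)
print(dccp_covered_payload_bytes(1, 1000))   # 0    (header only)
print(dccp_covered_payload_bytes(5, 1000))   # 16   (first 4*4 bytes)
```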
2.1.3 SCTP
SCTP’s [12] design has many of the strengths of TCP, such as rate-adaptive window-based congestion control,
error detection, and a fast retransmission method similar to TCP SACK. It provides acknowledged, error-free, non-
duplicated transfer of user data. A partial reliability extension is also specified for real-time multimedia traffic.
SCTP also offers several new features, such as multihoming (two endpoints can set up an association with
multiple IP addresses per endpoint) and multistreaming (each stream is a subflow within the overall data
flow, and the delivery of each subflow is independent of the others). SCTP also has security features: the service
availability of reliable and timely data transport (eliminating DoS attacks by means of a four-way handshake
sequence and a cookie mechanism) and the integrity of the user-to-user information carried by SCTP (by using
IPsec [18], [19] or Transport Layer Security (TLS) [17]).
Each SCTP packet is composed of a common header followed by one or more chunks. Chunks contain either
control information or user data.
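The common-header-plus-chunks layout can be illustrated with a minimal parser. This is a sketch under RFC 4960 framing assumptions (12-byte common header, chunks padded to 4-byte boundaries), not part of the Phoenix implementation, and the function name is invented for illustration.

```python
import struct

def parse_sctp(packet: bytes):
    """Split an SCTP packet into its common header fields and its chunks."""
    # Common header: source port, destination port, verification tag, checksum.
    src, dst, vtag, checksum = struct.unpack_from("!HHII", packet, 0)
    chunks, offset = [], 12
    while offset + 4 <= len(packet):
        # Each chunk starts with type, flags, and total length (header included).
        ctype, cflags, clen = struct.unpack_from("!BBH", packet, offset)
        if clen < 4:
            break                                  # malformed chunk, stop parsing
        chunks.append((ctype, packet[offset + 4:offset + clen]))
        offset += (clen + 3) & ~3                  # chunks are 4-byte aligned
    return (src, dst, vtag), chunks

# Common header plus one DATA-like chunk (type 0) with 5 payload bytes:
pkt = struct.pack("!HHII", 5000, 80, 1, 0) \
    + struct.pack("!BBH", 0, 0, 9) + b"hello\x00\x00\x00"
hdr, chunks = parse_sctp(pkt)
print(chunks)    # [(0, b'hello')]
```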
2.1.4 RTP & RTCP
TCP is considered too slow a protocol for real-time multimedia data such as audio and video, because of its
three-way handshake. That is why UDP is usually used instead of TCP over IP. But this also has problems:
because UDP is unreliable, there are no retransmissions upon packet losses. RTP [13] was designed by the IETF
as a transport protocol for real-time multimedia applications.
Strictly speaking, RTP is not a transport protocol, since it does not provide a complete transport service. Instead,
RTP PDUs must be encapsulated within another transport protocol (e.g. UDP) that provides framing,
checksums and end-to-end delivery. RTP only provides timestamps and sequence numbers, which may be used
by an application written on top of RTP to provide error detection, re-sequencing of out-of-order data, and/or
error recovery. Note that RTP itself does not provide any error detection/recovery; it is the application on top of RTP
that may provide these. RTP also incorporates some presentation layer functions: RTP profiles make it possible
for the application to identify the format of the data, i.e. whether it is audio or video, which compression method is used, etc. RTP sequence
numbers can also be used e.g. in video decoding; packets do not necessarily have to be decoded in sequence.
The overhead of the RTP header is quite large, and header compression has been proposed.
RTCP takes care of QoS monitoring, inter-media synchronization, identification, and session size
estimation/scaling. The control traffic load is scaled to be at most 5% of the data traffic load.
RTP receivers provide reception quality feedback using RTCP report packets, which can be of two types: sender
or receiver reports. A participant sends sender reports (SR) if it is actively sending data in the session; otherwise it
sends receiver reports (RR). In addition to these, RTCP has the SDES (Source Description), BYE (sent at the
end of an RTP session) and APP (application-defined, intended for experimental use as new applications and new
features are developed) packet types.
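As noted above, re-sequencing and loss detection are left to the application on top of RTP. A minimal sketch of how the 16-bit RTP sequence number can be used for this, with wraparound handled via signed modular arithmetic (helper names are illustrative assumptions):

```python
def seq_delta(prev: int, cur: int) -> int:
    """Signed distance cur - prev modulo 2**16, in [-32768, 32767]."""
    return ((cur - prev + 0x8000) & 0xFFFF) - 0x8000

def classify(prev: int, cur: int) -> str:
    """Classify an arriving RTP packet relative to the last one seen."""
    d = seq_delta(prev, cur)
    if d == 1:
        return "in-order"
    if d > 1:
        return f"{d - 1} packet(s) lost"
    return "reordered or duplicate"

print(classify(10, 11))        # in-order
print(classify(10, 14))        # 3 packet(s) lost
print(classify(65535, 0))      # in-order (wraparound)
```

The modular delta is what makes the 65535 → 0 wraparound look like an ordinary in-order step rather than a huge backwards jump.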
2.1.4.1 Scalable Video Coding (SVC)
Scalable Video Coding (SVC) is being designed as the scalable extension of MPEG-4 AVC. For example, the SVC base
layer is compatible with the MPEG-4 AVC main profile.
The SVC draft distinguishes between a video coding layer (VCL) and a network abstraction layer (NAL). The
VCL contains all the signal processing functionality of the codec. The NAL encapsulates the output of the VCL
encoder into network abstraction layer units (NAL units).
As before, we call all data containing update information from one particular quality to the next quality "belonging
to one scalability level". Alternatively, levels can be combined to form scalability layers, which update the
video from one quality to the next.
Since SVC uses an AVC-compatible base layer, the RTP payload format incorporates almost the same features
as the RTP payload format for AVC. Modifications and extensions are made to support scalability features.
In order to deliver a scalable bit stream to a wide audience, media-aware network elements (MANEs) should be
used instead of normal routers and gateways. These network elements, application layer gateways or RTP proxies,
are capable of parsing the RTP payload header and reacting to its contents. MANEs can drop packets due to
network congestion or at the user's request; on a scalable bit stream, we call this operation the scaling
operation. MANEs can also adjust error protection or channel coding properties on a per-packet basis for use in
unequal error protection schemes. There are different strategies to obtain the maximum quality for given
conditions.
The RTP payload format supports the two scalability modes: fast scalability and full scalability (see also
deliverables D2.1b and D2.4b).
2.1.4.2 Packetization Rules
The RTP payload format for SVC transmits the network abstraction layer units (NAL units) as-is, encapsulated in
RTP packets; no additional RTP payload header is added. The NAL unit header co-serves as the RTP payload
header.
To maintain backward compatibility with AVC, the RTP payload format for SVC is based on the RTP payload
format for AVC and can be seen as an extension of it. To enable simple adaptation operations on network nodes,
NAL units in the scalable extension (called here "SVC NAL units") are encapsulated in a virtual NAL unit (type
SCAL_EX) providing scalability level or layering information, depending on the scalability mode used. This
ensures scalability capabilities by exploiting the RTP payload header.
The RTP payload format defines the encapsulation of one NAL unit per RTP packet as well as aggregation packets
and fragmentation units.
Table 2-1 NAL Unit Types in SVC Elementary Streams (differences to AVC are highlighted)
nal_unit_type  Description
0       Unspecified
1       Coded slice of a non-IDR picture: slice_layer_without_partitioning_rbsp( )
2       Coded slice data partition A: slice_data_partition_a_layer_rbsp( )
3       Coded slice data partition B: slice_data_partition_b_layer_rbsp( )
4       Coded slice data partition C: slice_data_partition_c_layer_rbsp( )
5       Coded slice of an IDR picture: slice_layer_without_partitioning_rbsp( )
6       Supplemental enhancement information (SEI): sei_rbsp( )
7       Sequence parameter set: seq_parameter_set_rbsp( )
8       Picture parameter set: pic_parameter_set_rbsp( )
9       Access unit delimiter: access_unit_delimiter_rbsp( )
10      End of sequence: end_of_seq_rbsp( )
11      End of stream: end_of_stream_rbsp( )
12      Filler data: filler_data_rbsp( )
13      Sequence parameter set extension: seq_parameter_set_extension_rbsp( )
14      Sequence parameter set in scalable extension: seq_parameter_set_rbsp( )
15      Picture parameter set in scalable extension: pic_parameter_set_rbsp( )
16..18  Reserved
19      Coded slice of an auxiliary coded picture without partitioning: slice_layer_without_partitioning_rbsp( )
20      Coded slice of a non-IDR picture in scalable extension: slice_layer_in_scalable_extension_rbsp( )
21      Coded slice of an IDR picture in scalable extension: slice_layer_in_scalable_extension_rbsp( )
22..23  Reserved
24..29  Unspecified by MPEG but used for RTP transport
30      Scalability extension in File Format and for RTP transport (SCAL_EX)
31      Unspecified
2.1.4.2.1 AVC NAL Units
The NAL units of the AVC base layer are transmitted as defined in RFC 3984 (RTP Payload Format for H.264
Video). To add temporal scalability features to the AVC base layer, the NRI field of the NAL unit header is
used to provide the picture priority. Using the NRI field (nal_ref_idc in the AVC specification), 4 levels of
temporal scalability are supported. Frames of different levels of the temporal decomposition may belong to the
same temporal scalability level.
2.1.4.2.2 SVC NAL Units
To enable simple scalability operations, all SVC NAL units belonging to one particular scalability level are
transmitted in a virtual aggregation NAL unit of type SCAL_EX. Alternatively, if the layered scalability mode is
used, all SVC NAL units belonging to one particular layer are transmitted in a virtual aggregation NAL unit of
type SCAL_EX.
The structure of such a virtual aggregation NAL unit is depicted in Figure 2-1. The use of this virtual aggregation
NAL unit allows simple and fast adaptation operations by exploiting the first two (or three) bytes of the NAL
unit. The temporal scalability level of AVC base layer NAL units does not differ from the temporal scalability
level of the corresponding SCAL_EX NAL units.
(Figure: a virtual aggregation NAL unit consisting of an extension header with the level ID, followed by
length-prefixed SVC NAL units, all belonging to scalability level A.)
Figure 2-1 Aggregated SVC NAL units
2.1.4.2.3 Fragmentation of Progressive Refinement NAL Units
This is also related to D2.4b; some information is therefore repeated here for convenience. The quality base layer
is encoded using AVC entropy coding, including the block transform, quantization and CABAC, as specified in
H.264/AVC (ISO/IEC 14496-10). To provide SNR scalability, the texture of higher quality levels is coded by
repeatedly decreasing the quantization step size and applying modified CABAC entropy coding. This mode is
referred to as progressive refinement in the SVC definition. The use of progressive refinements enables fine
grain scalability (FGS): a progressive refinement NAL unit can be truncated at any arbitrary point. One or more
FGS refinement levels can be supplied with the bit stream.
To enable simple adaptation operations, each of the FGS refinement layers can be stored divided at pre-defined
rate points. These points can be provided by the encoder or by a bit stream extraction tool with special rate
control features. An adaptation operation can be performed by exploiting the SNRLevel field. The fragments
of such a divided progressive refinement NAL unit are encapsulated in SCAL_EX NAL units with ascending
SNRLevel IDs.
(Figure: a progressive refinement NAL unit divided into fragments 0, 1 and 2; each fragment is carried as a
length-prefixed SCAL_EX NAL unit whose extension header carries SNR level ID a, a+1 and a+2, respectively.)
Figure 2-2 Fragmentation of a progressive refinement NAL unit
2.1.4.3 RTP Payload Format
The payload format defines three different basic payload structures. A receiver can identify the payload structure
from the first byte of the RTP payload, which co-serves as part of the RTP payload header. Depending on the type, up
to three more bytes co-serve as the RTP payload header. These bytes are always structured as a NAL unit header.
• Single NAL unit packet: contains only a single NAL unit in the payload.
• Aggregation packet: a packet type used to aggregate multiple NAL units into a single RTP payload. This
packet exists in several versions: the single-time aggregation packet type A (STAP-A), the single-time
aggregation packet type B (STAP-B), and the multi-time aggregation packet types with 16-bit and
24-bit offsets (MTAP16 and MTAP24). To obtain scalability features, only NAL units of the same
scalability level or of the same layer (depending on the scalability mode used) should be combined into an
aggregation packet. This can lead to an increased delay. Aggregation packets with NAL units of
consecutive scalability levels or layers might be built as well.
• Fragmentation unit: used to fragment a single NAL unit over multiple RTP packets, with two versions,
FU-A and FU-B.
The RTP payload format for SVC incorporates the same mechanisms as the SVC file format, so the packetization
for RTP transport can be done easily.
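The choice among the three payload structures can be sketched as a simple packetization loop: aggregate small NAL units of the same level into a STAP, fragment any unit exceeding the MTU into FUs. All names, the 2-byte per-fragment header estimate, and the greedy strategy are illustrative assumptions, not part of the payload format specification.

```python
def packetize(nal_sizes: list[int], mtu: int) -> list[str]:
    """Greedy sketch: batch small NAL units, fragment oversized ones."""
    packets, batch = [], []

    def flush():
        if len(batch) == 1:
            packets.append("single NAL unit packet")
        elif batch:
            packets.append(f"STAP with {len(batch)} units")
        batch.clear()

    for size in nal_sizes:
        if size > mtu:                      # too big: use fragmentation units
            flush()
            nfrags = -(-size // (mtu - 2))  # ceil; ~2 bytes of FU header assumed
            packets.append(f"{nfrags} FU fragments")
        elif sum(batch) + size > mtu:       # batch would overflow: emit it first
            flush()
            batch.append(size)
        else:
            batch.append(size)
    flush()
    return packets

print(packetize([100, 200, 300, 4000], mtu=1500))
```

Three small units are aggregated into one STAP, while the 4000-byte unit is split into three fragmentation units.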
Table 2-2 Summary of allowed NAL unit types for RTP transport
Type Packet Single NAL Non-interleaved Interleaved
Unit mode mode mode
-------------------------------------------------------------------
0 undefined ignore ignore ignore
1-23 NAL unit yes yes no
24 STAP-A no yes no
25 STAP-B no no yes
26 MTAP16 no no yes
27 MTAP24 no no yes
28 FU-A no yes yes
29 FU-B no no yes
30 SCAL_EX yes yes no
31 undefined ignore ignore ignore
All RTP packets start with one octet with the following format:
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|F|NRI| Type |
+-+-+-+-+-+-+-+-+
F: Forbidden Zero Bit: Must be set to 0. Network nodes may set
this bit to 1 if some transmission error occurred. Then the decoder
shall not rely on the NAL unit content.
NRI: nal_ref_idc: 2 bit
Provides temporal scalability information (4 level). This is an
extension to the AVC specification that also applies for AVC base
layer NAL units
Type: 5 bit
NAL unit type ID
Figure 2-3 First octet of the NAL unit header / of the RTP payload header
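The octet layout above can be decoded with plain bit operations; a minimal sketch (the helper name is an assumption, not from the draft):

```python
def parse_nal_octet(b: int):
    """Split the first NAL unit header octet into its F, NRI and Type fields."""
    f   = (b >> 7) & 0x1      # forbidden zero bit
    nri = (b >> 5) & 0x3      # nal_ref_idc / temporal scalability information
    typ =  b       & 0x1F     # NAL unit type ID (0..31)
    return f, nri, typ

# Type 30 (SCAL_EX) with NRI = 2 and F = 0 -> octet 0b010_11110 = 0x5E:
print(parse_nal_octet(0x5E))   # (0, 2, 30)
```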
For NAL units of type SCAL_EX (see sub clause 2.1.4.2.2) the RTP payload header consists of the first two (or
three) bytes of the NAL unit header depending on the scalability mode used:
1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
|F|NRI| type |L=0| level_ID |
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
F and NRI as defined above
type: 5 bit
Scalability extension NAL unit type ID: always set to the value 30
L: 1 bit
Layer-ID provided: a value of 0 indicates that a struct providing
the scalability level follows; a value of 1 indicates that the
layer-ID follows.
level_ID: 7 bit or 15 bit
Structure indicating the scalability level for each direction
(temporal, spatial and SNR)
The size of the level_ID field and its structure is defined
during initialization.
Otherwise a default size of the level_ID field and a default
structure apply.
Figure 2-4 RTP extension header for full scalability mode
1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
|F|NRI| type |L=1| layer_ID |
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
F and NRI as defined above
type: 5 bit
Scalability extension NAL unit type ID: always set to the value 30
L: 1 bit
Layer-ID provided: a value of 0 indicates that a struct providing
the scalability level follows; a value of 1 indicates that the
layer-ID follows.
layer_ID: 7 bit
LayerID
Figure 2-5 RTP extension header for layered mode
2.1.4.3.1 Single NAL unit mode
The single NAL unit packet must contain one and only one NAL unit. This means that neither an aggregation
packet nor a fragmentation unit can be used within a single NAL unit packet. A NAL unit stream composed by
decapsulating single NAL unit packets in RTP sequence number order must conform to the NAL unit decoding
order.
The first byte(s) of a NAL unit co-serve(s) as the RTP payload header. The number of bytes co-serving as the RTP
payload header depends on the type.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| type : additional header depending on type |
+-+-+-+-+-+-+-+-+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
| |
| Single NAL unit |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :...OPTIONAL RTP padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
F and NRI as defined above
type: 5 bit
NAL unit type ID
Figure 2-6 RTP payload format for single NAL unit packet
2.1.4.3.2 Aggregation packets
To reflect the dramatically different MTU sizes of two key target networks, an aggregation packet scheme is
defined: wireline IP networks have an MTU size that is often limited by the Ethernet MTU of roughly 1500 bytes,
while IP or non-IP (e.g. ITU-T H.324/M) based wireless communication systems prefer transmission unit sizes of
254 bytes or less. This scheme is used to prevent media transcoding between the two worlds and to avoid
undesirable packetization overhead.
Two types of aggregation packets are defined:
• The single-time aggregation packet (STAP) aggregates NAL units with identical NALU-time. Two types of
STAP are defined: one without a DON (STAP-A) and one including a DON (STAP-B), where DON stands
for decoding order number.
• The multi-time aggregation packet (MTAP) aggregates NAL units with potentially differing NALU-times.
Two different MTAPs are defined, which differ in the length of the NAL unit timestamp offset.
The term NALU-time is defined as the value that the RTP timestamp would have if that NAL unit were
transported in its own RTP packet.
Each NAL unit to be carried in an aggregation packet is encapsulated in an aggregation unit.
NAL units of the AVC-compatible base layer and SVC NAL units may exist in the same
aggregation packet. The NRI field shall carry the greatest NRI value of all NAL units within the packet.
The RTP payload format is illustrated here only for single-time aggregation packets of type A (STAP-A).
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| type : |
+-+-+-+-+-+-+-+-+ |
| |
| one or more aggregation units |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :...OPTIONAL RTP padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
F and NRI as defined above
type: 5 bit
RTP packet type ID (type 24..27)
Figure 2-7 RTP payload format for AVC base layer aggregation packets
If the aggregation packet contains SCAL_EX NAL units, then the first 2 or 3 bytes of the SCAL_EX NAL unit
with the lowest scalability level ID shall be copied to the start of the aggregation packet. This allows
fast and simple adaptation operations by examining the first bytes of each packet. The STAP header follows, as
specified in Figure 2-8.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| 30 |1| layer_ID |STAP-A NAL HDR | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| |
| one or more aggregation units |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :...OPTIONAL RTP padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2-8 Single Time Extended Aggregation Packet Type A for SCAL_EX NAL units in layered scalability
mode
Single-time aggregation packets (STAP) should be used whenever aggregating NAL units that all share the same
NALU-time. The payload of an STAP-A does not include DON (decoding order number) and consists of at least
one single-time aggregation unit as presented in Figure 2-9. The payload of an STAP-B consists of a 16-bit
unsigned decoding order number (DON) (in network byte order) followed by at least one single-time aggregation
unit as presented in Figure 2-10.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
: |
+-+-+-+-+-+-+-+-+ |
| |
| single-time aggregation units |
: |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2-9 Payload format for STAP-A
The DON field specifies the value of DON for the first NAL unit in an STAP-B in transmission order. The
value of DON for each successive NAL unit in appearance order in an STAP-B is equal to (the value of DON of
the previous NAL unit in the STAP-B + 1) % 65536, in which '%' stands for the modulo operation.
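The DON succession rule can be written out as a short helper (a sketch; the function name is ours, not from the payload specification):

```python
def stapb_don_values(first_don: int, n_units: int) -> list[int]:
    """DON of each NAL unit in an STAP-B, in appearance order.

    The first NAL unit carries the DON field from the packet; every
    successive NAL unit gets (previous DON + 1) % 65536.
    """
    return [(first_don + i) % 65536 for i in range(n_units)]

# Wrap-around at the 16-bit boundary:
print(stapb_don_values(65534, 4))  # [65534, 65535, 0, 1]
```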
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
: decoding order number (DON) | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| |
| single-time aggregation units |
: |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2-10 Payload format for STAP-B
A single-time aggregation unit consists of 16-bit unsigned size information (in network byte order) that indicates
the size of the following NAL unit in bytes (excluding these two octets, but including the NAL unit type octet of
the NAL unit), followed by the NAL unit itself including its NAL unit type byte(s). A single-time aggregation
unit is byte-aligned within the RTP payload but it may not be aligned on a 32-bit word boundary. Figure 2-11
presents the structure of the single-time aggregation unit payload.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
: NAL unit size | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| |
| NAL unit |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2-11 Structure for single-time aggregation unit STAP
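Because each aggregation unit is just a 16-bit size prefix followed by the NAL unit, an STAP-A payload can be split with a few lines of code. The sketch below assumes the STAP-A NAL header octet has already been consumed; the function name is ours:

```python
def parse_stap_a_payload(payload: bytes) -> list[bytes]:
    """Split an STAP-A payload into its NAL units.

    Each aggregation unit is a 16-bit big-endian size (excluding the two
    size octets) followed by that many bytes of NAL unit, including its
    NAL unit type octet.
    """
    units, offset = [], 0
    while offset + 2 <= len(payload):
        size = int.from_bytes(payload[offset:offset + 2], "big")
        offset += 2
        if offset + size > len(payload):
            raise ValueError("truncated aggregation unit")
        units.append(payload[offset:offset + size])
        offset += size
    return units

# Two aggregation units carrying 3-byte and 2-byte NAL units:
payload = b"\x00\x03ABC" + b"\x00\x02DE"
print(parse_stap_a_payload(payload))  # [b'ABC', b'DE']
```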
As an example Figure 2-12 shows the resulting structure of a Single Time Aggregation Packet Type A
containing SCAL_EX NAL units in layered scalability mode. Similar definitions apply for Multi Time
Aggregation Packets and Aggregation Units.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| 30 |1| layer_ID |STAP-A NAL HDR | NALU 1 Size
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
NALU 1 Size | |
+-+-+-+-+-+-+-+-+ |
| |
| NAL unit 1 |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| | NALU 2 Size | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| |
| NAL unit 2 |
| |
| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :...OPTIONAL RTP padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2-12 Single Time Extended Aggregation Packet Type A for SCAL_EX NAL units (layered
representation)
2.2 Multicasting
The scope of this part of the document is to describe how multicast video data and the related signaling information
should be handled in the framework of the Phoenix project. First we present briefly the basic concepts related to
the advantages of multicast distribution, and the main principles of the multicast group management and routing
protocols that we have chosen to use in the project. Then we describe the signaling information traveling back
and forth between the source and the receivers. We also consider the different adaptive video distribution schemes for the
multicast scenario.
2.2.1 Multicast distribution models
In IP multicast the sender only sends a packet once. The multicast routers along the path of the multicast flow
duplicate a packet if necessary. That is how multicast decreases the bandwidth usage of the underlying network.
While in IPv4 networks multicast is somewhere between unicast and broadcast, in IPv6 it completely replaces
broadcast. Multicast addresses are dynamically allocated. Multicasting consists of the following logical units:
• Group management protocols: implemented at the edges of the network, in the access infrastructure;
they handle group creation and group dynamics (members entering or leaving a group)
• Multicast routing protocols: implemented at the core of the network; they are responsible for creating
and maintaining the multicast tree, and for the transmission of the multicast flow along this tree.
Multicasting builds on the multicast group concept. A multicast group is a set of hosts that are interested in the
reception of the multicast flow sent to this group. The group is identified by a multicast address. The hosts use
group management protocols to enter or leave a given group. Multicasting uses multicast routing protocols for
delivering the multicast packets to the hosts.
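On an end host, entering and leaving a group is a socket-level operation; the kernel then emits the corresponding IGMP/MLD messages. A minimal IPv4 sketch (group address, port and helper names are illustrative, not taken from this deliverable):

```python
import socket
import struct

def join_group(sock: socket.socket, group: str, ifaddr: str = "0.0.0.0") -> bytes:
    """Join an IPv4 multicast group; the kernel sends the IGMP report."""
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(ifaddr))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return mreq

def leave_group(sock: socket.socket, mreq: bytes) -> None:
    """Leave the group; the kernel signals the departure to the router."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)

# Typical usage (requires a multicast-capable interface):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", 5004))
#   mreq = join_group(sock, "239.1.2.3")
#   data, addr = sock.recvfrom(2048)
#   leave_group(sock, mreq)
```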
2.2.1.1 Multicast models
Currently, we differentiate the following three different multicast distribution models:
• Any-Source Multicast – ASM
• Source-Filtered Multicast – SFM
• Source Specific Multicast – SSM
2.2.1.1.1 ASM
ASM designates today the first multicast distribution model, as it was imagined by Steve Deering back in early
90’s. The ASM model is an open one, where anyone is allowed to send data to a multicast group address, even
without being a member of the group, and everyone listening to that address will receive it. This is the most
common model; however, it has several disadvantages, among them unauthorized senders and a rather complex
deployment. Because address allocation is completely unregulated in this model, another problem is handling
address conflicts, which is more severe in IPv4 than in IPv6. Furthermore, the
model raises inter-domain security and scalability concerns. On the other hand, one of the advantages is that only
IGMPv2 or MLDv1 support is needed in the network.
2.2.1.1.2 SFM
SFM is an extension of the ASM model that gives receivers the ability to specify which sources they are
interested in. With this, unauthorized and unwanted senders can be filtered out, even if the network only supports the
ASM service. For the usage of the SFM service IGMPv3 or MLDv2 group management support is necessary.
2.2.1.1.3 SSM
The SSM model is a newer and more advanced model compared to the SFM service. As in the SFM model, the
ability to filter out sources is present in this model too. That is why the support of IGMPv3 or MLDv2 is
necessary. On the other hand, the SSM service is less complex and more error resilient than the ASM approach.
In SSM multicast groups are replaced by multicast channels, identified by an (S, G) address pair, where S is the
unicast address of the source and G is the multicast address of the group. This solves the allocation problem, as
the multicast addresses G do not have to be globally unique anymore; two SSM channels that use the same
multicast address G will be differentiated by the unicast addresses of their respective sources S1 and S2.
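The effect of (S, G) channel identification can be illustrated with a toy lookup table (the addresses below are documentation-prefix examples, not values from the project):

```python
# In SSM a channel is the pair (S, G); two sources may therefore reuse
# the same group address G without any conflict.
channels = {
    ("2001:db8::1", "ff3e::8000:1"): "stream from source S1",
    ("2001:db8::2", "ff3e::8000:1"): "stream from source S2",
}

# Same G, different S: two distinct channels.
print(len(channels))  # 2
print(channels[("2001:db8::2", "ff3e::8000:1")])  # stream from source S2
```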
2.2.1.2 Group Management
Currently IGMP (Internet Group Management Protocol) (IPv4) and MLD (Multicast Listener Discovery) (IPv6)
[20] are used for group management in IP networks. The most important task of these protocols is servicing the
group membership information. Multicast routers use the group membership information to create a multicast
distribution tree.
2.2.1.2.1 The IGMP protocol
Hosts use the IGMP protocol to communicate with their local multicast routers about their group membership
needs.
Currently there are three versions of the IGMP protocol. IGMPv1 was the first group management protocol in
the IP environment. By using it, joining a group was a fast procedure, but leaving it needed a longer timeframe.
IGMPv2 was created to solve this problem, by introducing a fast leave mechanism. Both of these versions only
support the ASM model; to support the SSM model, the protocol needed to be extended. Therefore, the latest
version of the protocol, IGMPv3, introduces the notion of source lists; receivers can explicitly specify the
sources they want to listen to, or the sources they want to filter out, for a given multicast address. The format of the
IGMP packets had to be thoroughly modified to support these features.
Figure 2-13 shows the message format of the IGMPv2 protocol, while Figure 2-14 shows the IGMPv3 message
format.
Figure 2-13 IGMPv2 message format
Figure 2-14 IGMPv3 message format
2.2.1.2.2 The MLD protocol
While the MLDv1 [21] protocol is the IPv6 version of IGMPv2, MLDv2 [22] is the IPv6 version of the IGMPv3
protocol. In the IPv6 header the Next Header field is set to 58 (ICMPv6), as MLD messages are a subset of the ICMPv6 messages. MLD is
used to inform the routers if there are any hosts interested in a multicast flow. When the last group member
leaves, the router itself can also leave the group, and rejoin the multicast tree any time later.
In MLDv2, the latest version of the protocol, there are two types of messages defined:
• Multicast Listener Queries
• Multicast Listener Report
To ensure interoperability with earlier versions of the protocol, Multicast Listener Done messages are also
supported.
The query messages are used by the routers to query the state of the multicast listeners on their interfaces. There
are three types of query messages:
• General Query: used to learn which multicast addresses have listeners on an attached link;
• Multicast Address Specific Query: used to learn if a particular multicast address has any listeners on an
attached link;
• Multicast Address and Source Specific Query: used to learn if a particular multicast address and a given
source have any listeners on an attached link.
The first two query messages are part of both the first and second version of the protocol, while the third
message is only part of MLDv2.
Report messages are sent by the hosts to the neighboring multicast routers to report their current multicast
listener states or the changes in it. Every report message is valid for only one multicast address; thus, if a host
has more than one membership to report, it has to send a message for every one of them separately.
2.2.1.3 Multicast Routing
The group membership information collected by the group management protocols is used by the multicast
routing protocols to forward the multicast packets to the group members. There are two types of multicast trees:
• Source-Based trees: Using source-based trees the root of the multicast tree will be the source of the
multicast flow. The building of the tree can be source-driven, based on a flood-and-prune technique
(e.g., DVMRP), or receiver driven, based on explicit join messages of the receivers (e.g., PIM-SSM).
• Shared trees: A shared tree algorithm builds only one shared tree for a group. This tree has some core
multicast routers through which every multicast packet must travel. The building of this tree is receiver
driven. When a receiver wants to join a shared tree it sends a join message which travels back to the
core routers. This makes it more scalable than building a source-based tree using a flood and prune
technique (e.g., in DVMRP). However, this tree won’t be an optimal one, and the load concentration on
the core routers can be very high.
The distribution of the group members is relevant when selecting the tree type:
• Dense mode: the pre-assumption of this mode is that the subnets containing the hosts have many group
members. Another assumption is that there is plenty of bandwidth available. This mode is useful when
there are only a few sources and many destination hosts, or the multicast flow’s needed bandwidth is
high and constant. Most dense mode protocols use source-based trees. One of the application areas of
the dense mode protocols is video broadcast (like LAN or ADSL TV).
• Sparse mode: in this mode the hosts are distributed widely in the whole network. This means that there
can be as many members as in the dense mode, but they are much more dispersed. This mode does not
need high bandwidth. This mode is useful when we have sources that only need low bandwidth or the
multicast flow is not constant (like in a video conferencing application). Most sparse mode protocols
use shared trees.
2.2.1.3.1 Intra-domain routing
The Internet Engineering Task Force (IETF) developed many protocols for both modes:
• Distance-Vector Multicast Routing Protocol – DVMRP [23]
• Multicast Extensions to Open Shortest Path First – MOSPF [24]
• Protocol-Independent Multicast – PIM
• Protocol-Independent Multicast – Dense Mode – PIM-DM [25]
• Protocol-Independent Multicast – Sparse Mode – PIM-SM [26]
• Core-Based Tree Protocol – CBT [27]
2.2.1.3.1.1 The DVMRP protocol
The DVMRP protocol was mainly used in the Multicast Backbone (MBone). It works well in one domain, but
with sparse mode applications it creates unwanted overload on the network, because it uses flooding for building
the multicast tree. That is why it is mainly used with dense mode applications.
2.2.1.3.1.2 The MOSPF protocol
The MOSPF protocol is the modification of the OSPF unicast routing protocol to support multicast routing. It is
also created for intra-domain routing. The most important difference compared to other protocols is that it is
capable of handling different application needs such as QoS or balancing the load on different links. Another
advantage of the protocol is that it creates very effective multicast distribution trees. It is mostly used in dense
mode applications.
2.2.1.3.1.3 The CBT protocol
The CBT protocol creates only one, but bi-directional multicast tree. Therefore it is more scalable than the
DVMRP protocol and more suitable for wide area networks. However, the load concentration on the central
routers is much higher, and they can become the bottleneck of the network.
2.2.1.3.1.4 The PIM protocols
The PIM protocols do not depend on any unicast routing protocols. That means that they can use the unicast
routing table of any unicast routing protocol (like OSPF or RIP). The PIM protocol supports both the sparse
mode (PIM-SM) and the dense mode (PIM-DM) multicasting. To support multicasting, however, it introduces two
new entities:
• Rendezvous Point (RP): Every multicast group has a shared multicast distribution tree. The root of this
tree is the RP (RP-tree).
• Designated Router (DR): A subnet can join the Internet through several routers. This means that a
subnet can have several multicast routers. If these routers worked independently of each other, the hosts
would receive every multicast packet in duplicate, which would waste bandwidth. That is why the routers
elect a designated router among themselves, which then acts as the only multicast router of the given
subnet. Over time the function of the DR can be taken over by another multicast router which is
closer to the source or the RP of the tree.
2.2.1.3.1.4.1 The PIM-DM protocol
The PIM-DM protocol is similar to DVMRP in that it uses flooding for the building of the multicast tree. It uses
the unicast routing information for the flooding process. It floods the network with multicast packets and then
uses prune messages to cut off those routers that don’t have any members in their subnets.
2.2.1.3.1.4.2 The PIM-SM protocol
The PIM-SM protocol can either use the routing information gathered by any unicast routing protocol or build on
the information gathered by other multicast routing protocols. It builds one-way shared trees for every multicast
group. The root of this tree is the Rendezvous point (RP-tree). One great advantage of this protocol is that it can
change from the RP-tree to a shortest path tree (which is mainly a dense mode structure). The shortest path tree’s
root is the source itself. One difference compared to the PIM-DM protocol is that the group members should
explicitly join a multicast group in order to receive the multicast flow. Another advantage of this protocol is that
it does not use flooding for building the tree. After joining a multicast group the DR can change when needed
from the RP-tree to a shortest path tree. This further reduces the bandwidth utilization. These are some of
the features that make this protocol so popular.
2.2.1.3.1.4.3 The PIM-SSM protocol
The PIM-SM protocol has another version which supports the SSM model and it is called PIM-SSM (PIM-
Source Specific Multicast). Because the multicast trees always have the source as root, the PIM–SSM protocol
doesn’t have shared trees, only source-based trees. Therefore it doesn’t need an RP and the MSDP protocol
which is used for inter-domain communication between the different RPs. Only one source can send to a given
channel, which is defined by the multicast group and the source. Each source is responsible for avoiding address
conflicts between the channels that use its source address. It is currently the base routing protocol of the SSM
model.
2.2.1.3.2 Inter-domain routing
The protocols mentioned in the chapter above are mainly used for intra-domain routing. That is why the IETF
developed the following protocols for the support of inter-domain multicast routing:
• Multiprotocol Border Gateway Protocol – MBGP [28]
• Border Gateway Multicast Protocol – BGMP [29]
• Multicast Source Discovery Protocol – MSDP [30]
2.2.1.3.2.1 The MBGP (BGP-4)/ MSDP/PIM-SM protocol stack
The BGP-based inter-domain routing comes from the idea that multicast routing should be hierarchical, just like
unicast routing. Accordingly, MBGP is a scalable inter-domain routing protocol, which supports hierarchical routing
and is capable of handling different routing policies. One of the disadvantages of the MBGP protocol is that while
it is capable of calculating the next hop, it cannot construct the whole multicast tree. This is why an intra-domain
multicast routing protocol is needed to construct the tree. Mostly the PIM-SM protocol is used for this purpose
because of its effectiveness and flexibility. But the PIM-SM protocol needs knowledge about the multicast
sources and listeners in other domains. This task is done by the MSDP protocol, which communicates with the
PIM-SM routers and relays that information to them. This protocol stack supports only the ASM model, thus MLDv1
support is enough in the network. Its advantage is that no multicast tree spans several
domains, so every Internet provider can have its own PIM-SM domain. Its disadvantage is that the MSDP
protocol, and with it the whole protocol stack, does not scale to a high number of sources.
2.2.1.3.2.2 BGMP protocol
The BGMP protocol was developed to solve the MSDP’s scalability problem. The difference between the
BGMP protocol and the protocol stack mentioned before is that it creates bi-directional multicast trees from the
domains. It supports both the ASM and SSM model and inside one domain it can work together with any
multicast routing protocol.
2.2.1.4 Overview
Currently, we use the ASM model with the PIM-SM routing protocol. This is because of the advantages
mentioned in chapter 2.2.1.3.1.4. Another reason is that we currently don’t have a stable implementation of the
MLDv2 group management protocol. This is needed for the usability of the SSM model and the PIM-SSM
routing protocol. That method would be better because of security issues and for the reasons mentioned in chapter
2.2.1.1.3.
2.2.2 Multicast feedback
In this chapter we will introduce the used signaling information packets and give a short briefing on the used
multicasting protocols.
2.2.2.1 Signaling
Table 2-3 Overview of the signaling packets
ID – Name | Initiator | Destination | Synchronized
SSI – Source Significant Information | Source | Channel coders, Receivers | Yes; proportional to the data stream
SRI – Source A-priori Information | Source | Receivers; can also be used by the intermediate stations (channel coders) | Yes; the more it is used, the better the quality
CSI – Channel State Information | Wireless receiver (end terminal) | Wireless transmitter, Source | No
NSI – Network State Information | Receivers (channel encoder, wireless transmitters) | Source | No
SAI – Source A-posteriori Information | Receivers | Wireless transmitters, Source | Not strictly
DRI – Decision Reliability Information | Wireless receiver (channel decoder) | Receivers (source decoder) | Yes, but not strictly
2.2.2.1.1 Handling the downstream signaling information
Three types of signaling information travel toward the end terminals; these should be synchronized with the
actual data stream. Two of them are generated by the source: SSI and SRI. For this information the easiest way to
achieve synchronization is to put the signaling and the corresponding data stream into the same packet. That way
when a user receives a video packet it also receives all the information it needs to decode the stream. This can be
done because all the information is generated by the source. However, this is not the situation with the DRI
information, because it is created by the wireless receivers and not by the source. For DRI this method could only
be used by reconstructing the whole packet, which would lead to a higher utilization of the channel decoder and
maybe higher latency. Of course, when the stream is transcoded at the channel coder part, the DRI can be easily
put in the packet without much extra utilization.
2.2.2.1.2 Handling the upstream signaling information
In our work we focused mainly on the signaling information traveling back to the source, enabling the source to
adapt itself to the network conditions. This information is the NSI, CSI and the SAI; they don’t need to be
synchronized with the multicast data flow. This information is important for fine tuning the coding process and
getting feedback about the quality of the received stream. While CSI reflects the radio channel’s quality
and is more related to physical level parameters (BER, signal quality), NSI deals with the end-to-end quality
of the whole path and is mostly related to network level parameters (like packet loss, delay, jitter). SAI delivers
information about the decoding process. SAI and CSI are mainly used by the channel encoder and only a reduced
set of information travels back to the source. NSI is also used by the channel coders, but it’s more important for
the source encoder. In our work we created a program that collects NSI and CSI information. This was needed
for verifying the testbed’s capability of adaptation to the changing network conditions. The NSI packets we
create contain three separate pieces of information (work on further information types is ongoing):
• Packet loss ratio
• Packet delay
• Packet delay variation (jitter)
The CSI packets contain the following data:
• Bit error rate (BER)
• Signal quality
• Signal strength
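These NSI metrics can be derived at the receiver from RTP sequence numbers and timestamps; the sketch below uses the interarrival-jitter estimator of RFC 3550 (J += (|D| - J)/16). Class and variable names are ours, and sequence-number wrap-around is ignored for brevity:

```python
class NsiMonitor:
    """Derives packet loss and jitter from per-packet sequence numbers,
    RTP timestamps and arrival timestamps (all in the same time unit)."""

    def __init__(self):
        self.expected = None      # next expected sequence number
        self.received = 0
        self.lost = 0
        self.jitter = 0.0
        self.last_transit = None

    def on_packet(self, seq, rtp_ts, arrival_ts):
        if self.expected is not None and seq > self.expected:
            self.lost += seq - self.expected          # gap => lost packets
        self.expected = seq + 1
        self.received += 1
        transit = arrival_ts - rtp_ts                 # delay + clock offset
        if self.last_transit is not None:
            d = abs(transit - self.last_transit)
            self.jitter += (d - self.jitter) / 16.0   # RFC 3550 estimator
        self.last_transit = transit

    def loss_ratio(self):
        total = self.received + self.lost
        return self.lost / total if total else 0.0

m = NsiMonitor()
m.on_packet(1, 0, 10)
m.on_packet(2, 100, 110)   # same transit time, jitter stays 0
m.on_packet(4, 300, 326)   # seq 3 lost; transit grew by 16 units
print(m.lost, m.jitter, m.loss_ratio())  # 1 1.0 0.25
```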
Table 2-4 Overview of the used scenario groups by the Phoenix project
Single User & Multi-User | Groups requirements relating to multimedia scenarios with one or more users.
Unicast and Multicast | Groups requirements relating to communication between a single sender and a single receiver, or between a single sender and multiple receivers on a network.
Streaming and Communication | Groups requirements relating to streaming applications such as video streaming, video telephony or video conferencing.
Single-Hop and Multi-Hop | Groups requirements relating to the mobility and roaming of the user between different types of networks.
Confidentiality and Security | Groups requirements relating to the protection of the application from damaging attacks and against unauthorized acquisition of the user’s information.
For the download part (streaming the appropriate data and the relevant signaling information) we propose normal
multicast streaming (regardless of how many receivers are in the network). For the upload part we propose
unicast methods (this part is mainly for signaling – CSI, NSI, SAI).
Figure 2-15 Single-hop scenario with no network after the first wireless hop
We are mainly working with the scenario shown in Figure 2-15. In this case, the channel and source
decoders are on the same node, namely the destination terminal. Because of this we don’t have any network part
in between the channel and the source decoder. The SSI and SRI information is put into the same packet as the
streamed data; it will reach the destination node when the data packet reaches it. The DRI information is
generated by the source decoder, i.e., it is generated by the same node that will use it. Thus, the two coders will
communicate with each other directly.
The other set of signaling messages is more difficult to handle. While SAI is mainly used between the
source decoder and the channel decoder, a reduced or aggregated set should be sent towards the source as well.
2.2.2.2 Problems with the current propagation method
Currently, we use unicast for the feedback information traveling from the receivers and channel coders back to
the source (NSI, CSI). This means that every receiver and access point (channel coders) sends its NSI and CSI
information separately to the source. There are two main problems with this solution. First of all, it creates
tremendous overhead in the network. Secondly, the source is already busy with video streaming, transcoding, and
other applications that use most of its CPU power. There are several methods to solve these problems.
2.2.2.3 Possible solutions
2.2.2.3.1 Multicast propagation
One solution can be to use multicast for the feedback propagation as well. There are several possibilities for the
multicast scenario as well. We can send the feedback packets using the same multicast tree that we use for the
video streams. The advantage of this method is that if a receiver joins a multicast group, which it must do if it
wants to receive the video stream, then it will also receive the feedback information from the other receivers.
The problem is that this puts an extra load on that multicast tree, and it is not always necessary
for the receivers to get the others’ NSI, CSI packets. Another possibility is that we use a separate multicast tree
for the feedback information. In this case, every receiver has to join at least two multicast groups: one for the
video stream and one for the feedback channel. It is important that in both cases we can lower the bandwidth
utilization and the source’s computational load by using feedback suppression. This means that when a receiver
gets a feedback packet which is similar or nearly identical (within given tolerances) to what it would have
sent, then it won’t send any feedback information, or sends only a smaller packet with those parts that differ.
There are clearly some disadvantages of the multicast propagation of the feedback information. The most
important one is the question of what kind of multicast should be used. Depending on the routing algorithm used
in the given protocol, the time at which each receiver gets the feedback packets from the other receivers can vary
widely. With core-based algorithms the placement of the core is a very important factor in this respect.
This is clearly a drawback because in a real life scenario we don’t know where the receivers will be.
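The suppression decision itself reduces to a per-metric threshold test: a receiver stays silent when an already heard report is close enough to its own measurement. A sketch with illustrative metric names and thresholds:

```python
def should_send_feedback(own, heard, thresholds):
    """Feedback suppression: send our NSI report only if no heard report
    is within the per-metric threshold of our own measurements."""
    if heard is None:                # nothing heard yet, so we must report
        return True
    return any(abs(own[k] - heard[k]) > thresholds[k] for k in thresholds)

own = {"loss": 0.05, "delay_ms": 120.0, "jitter_ms": 8.0}
heard = {"loss": 0.04, "delay_ms": 118.0, "jitter_ms": 7.5}
thr = {"loss": 0.02, "delay_ms": 10.0, "jitter_ms": 2.0}

print(should_send_feedback(own, heard, thr))  # False: close enough, suppress
print(should_send_feedback(own, None, thr))   # True: nothing heard yet
```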
2.2.2.3.2 Feedback aggregators
Another method to solve the feedback implosion problem is to put feedback aggregators into the network. In this
case, we are still using unicast propagation, but the responsibility and the task done by the source can be divided
between the feedback aggregators and the source. With the feedback aggregators, a similar functionality can be
introduced into the network as is already present in the channel coders. Namely, the channel coders only
send an aggregated CSI packet towards the source. This packet only contains the information relevant to the
source for its core functionality (adaptation). On the other hand, it aggregates the CSI information for all
receivers that are connected to the given access point, and sends only one CSI packet per access point rather than
per receiver. With this aggregation, we can significantly lower the network utilization, and the computational
problem at the source can be solved. Also, the aggregators could filter all outliers or notorious receivers who
tend to send very bad or very good feedback about the network or wireless channel. This can be beneficial
because of several security issues. First of all, a receiver which wants to decrease the quality of a given group
can try to send a lot of feedback with very bad parameters. This would cause the source to decrease the stream
quality, so everyone in the group would get a worse stream than it could normally receive. An even
bigger problem arises when an attacker sends feedback reporting very good conditions: when the
source switches to better qualities, the network becomes congested and the quality of the service decreases
significantly. If a receiver is really having trouble, then it should first change its main quality to a lower one
(until it reaches the worst quality or one which is suitable for it). If a feedback aggregator can filter these
receivers, then they won’t be able to degrade the quality of the whole service. Having these aggregators put
inside the network means that we put extra intelligence inside the network, like we put on the access points
(channel coders). As with the channel coders, if we already have these nodes in the network, we could use them
not only to aggregate the feedback information, but to help the system to adapt more quickly and more
efficiently. Like the channel coders, they could contain another coder, able to slightly change the coding
scheme to better fit the network conditions on that branch of the
multicast tree. Some of the adaptive multicast distribution schemes can benefit from these intelligent nodes. Of
course, this method has its disadvantages as well. One of them is that to enable this feature, we have to get inside
the network which is not always possible. Even if we can put our own software on the inner network nodes,
these nodes may already be overloaded, and it is not certain that they can handle the extra computational
needs of a transcoder, or even of an aggregator. Of course, if we are able to put our own feedback or transcoder
server machine into the network, then the only thing we must do is get an address from the network operator and
we can use our servers.
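The per-access-point CSI aggregation and outlier filtering described above could look roughly like this (a sketch; the trimmed-mean approach and the trimming fraction are our assumptions, not a design decision of the project):

```python
from statistics import mean

def aggregate_csi(reports, trim=0.25):
    """Aggregate per-receiver CSI values (e.g. BER) into one per-AP value.

    The extreme fraction at each end is discarded (trimmed mean), which
    filters out notorious receivers reporting implausibly bad or good
    channel conditions before one CSI packet is sent toward the source.
    """
    values = sorted(reports)
    k = int(len(values) * trim)
    trimmed = values[k:len(values) - k] or values
    return mean(trimmed)

# Four plausible BER reports plus one attacker claiming a terrible channel:
bers = [1e-5, 2e-5, 1.5e-5, 1.2e-5, 0.4]
print(aggregate_csi(bers) < 1e-4)  # True: the outlier is discarded
```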
2.2.2.4 Feedback aggregator placement
Another question is about network engineering. Namely, where should we put these aggregators? One solution is
to put the aggregator nodes into the branches of the multicast tree. But the multicast tree can dynamically change
when users join or leave the group, so we don’t know where the branches will be. If we know the actual topology,
then we can distribute the inner coders so that every third or fourth level in the multicast tree has one or more
aggregators. Another method is to put the aggregators into the network anywhere we want, independently from
Page 22/120
PHOENIX Deliverable D3.2b
the video distribution topology. In this case, we simply place some aggregators in the given networks: some in
the core network for general reachability. One aggregator should be placed in the home network (where the
source is) to avoid the problem of many receivers being close to the source. Also, when a network consistently
hosts many receivers, it can be beneficial to place an aggregator there. The answer also depends on whether the
aggregators act as transcoders as well. The aggregators are mainly responsible for handling feedback
information, so even an aggregator that receives a lot of feedback can still cope, because it has no other tasks
(and the service operator can easily upgrade the hardware if the quality of service degrades significantly).
Deploying new feedback aggregators is mainly useful for load balancing between the aggregator units.
2.2.2.4.1 Aggregator communication
The placement mainly depends on the form of communication (e.g. unicast, multicast, anycast) that the
receivers use to send data to the aggregators.
2.2.2.4.1.1 Unicast
In the unicast case, the receiver must know which aggregator to send its feedback to. But it is not clear how the
receiver could determine which feedback aggregator is the nearest one (by any metric). Although it is possible
to autoconfigure the receiver when it joins the service, it still needs reconfiguration when changing networks
(moving into the range of another feedback aggregator). This also requires extra communication between the
source or the service and the receiver to inform it about the unicast address of the feedback aggregator or
transcoder (perhaps with some kind of TTL-limited router advertisement).
2.2.2.4.1.2 Multicast
Another solution is to propagate the feedback information to the aggregators via multicast. From the
aggregators back to the source the transport remains unicast, because other methods would only add complexity
(and reliability would have to be ensured). Losing some of the receivers' feedback is not a problem (and reliable
multicast methods exist anyway), but the aggregator-to-source delivery must be reliable, because otherwise the
source would not function correctly, or the adaptation would become slower or even incorrect. This kind of
distribution would probably need sparse mode multicasting (many sources and only a few receivers). The
disadvantage is that the transcoders would also have to take care of group membership registration. If ASM
(Any Source Multicast) were used, security issues could arise, because attackers could easily send messages to
the given multicast group. Moreover, every feedback aggregator would receive every user's feedback, so the
source would get almost identical aggregated messages from every aggregator, which is not a good solution.
Using SSM would solve the security problem, but then each aggregator, in order to subscribe to all of its users,
would have to know about them before they start sending their feedback. The duplication problem could also be
solved here, because each aggregator would subscribe only to a given subset of users. Altogether, multicast may
not be the best method for the communication with the aggregators.
2.2.2.4.1.3 Anycast
There is also the possibility of using anycast for reaching or discovering the aggregators in the network. As
mentioned for the unicast solution, it would be hard for a receiver to find the nearest aggregator node. Anycast
was designed for exactly this kind of service: a message transmitted to an anycast group is received by the
nearest member (by whatever metric the anycast routing protocol or scheme supports, usually hop count, but it
can be anything else as well). Currently there are three main usage areas of anycast: service discovery,
query/reply services (this is how the root DNS service is currently implemented) and routing services. With
anycast, the user application only needs to know the anycast address used by the aggregator service, and the
network takes care of routing the packets to the nearest node. Putting a new aggregator into the network would
therefore not require changing any software: the anycast address simply has to be assigned to the new node as
well, and it can start functioning as an aggregator. The biggest problem with anycast is that there is very little
anycast deployment, although it was introduced in 1993. As mentioned above, anycast can be used either for
service discovery or for the service itself. In the first case, the receiver application uses it to find the nearest
aggregator and then sends its feedback via normal unicast. In the second case, the receiver always sends its
feedback via anycast, and no service discovery is needed. The latter may be the better solution, because when
an aggregator is switched off (for any reason), routing automatically diverts the packets to another aggregator
node, whereas in the first case the packets are lost until the next service discovery yields another aggregator
address.
2.2.2.5 Proposed solution
The most suitable solution is to use anycast for the transmission from the receivers to the aggregator nodes. It
can significantly decrease the complexity of the system (the task of finding the nearest feedback aggregator is
handled by the routing system), while from the aggregators back to the source normal unicast is sufficient. This
part of the communication can also be replaced with anycast, creating a multi-level aggregation method. This
can become useful when the number of aggregators grows so large that the source would again experience
feedback implosion. With only a few aggregators (10-30), however, a one-level aggregation method is enough.
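From the application's point of view, sending to an anycast address is ordinary unicast UDP; the routing system selects the nearest aggregator. The following Python sketch illustrates this, where the wire format, the port number and the documentation-prefix anycast address are all hypothetical choices made for illustration:

```python
import struct

# Hypothetical on-the-wire format for one receiver feedback report:
# packet loss ratio, delay in ms, jitter in ms (all 32-bit floats).
FEEDBACK_FMT = "!fff"

def encode_feedback(loss: float, delay_ms: float, jitter_ms: float) -> bytes:
    """Pack one feedback report into a compact binary datagram."""
    return struct.pack(FEEDBACK_FMT, loss, delay_ms, jitter_ms)

def decode_feedback(data: bytes):
    """Unpack a feedback datagram back into (loss, delay_ms, jitter_ms)."""
    return struct.unpack(FEEDBACK_FMT, data)

# Hypothetical anycast address of the aggregator service (documentation prefix).
AGGREGATOR_ANYCAST = "2001:db8::feed"
AGGREGATOR_PORT = 5005

def send_feedback(sock, loss, delay_ms, jitter_ms):
    """Identical to plain unicast sending: the network routes the datagram
    to the nearest node holding the anycast address."""
    sock.sendto(encode_feedback(loss, delay_ms, jitter_ms),
                (AGGREGATOR_ANYCAST, AGGREGATOR_PORT))
```

Note that no aggregator discovery appears anywhere in the receiver code; that is exactly the complexity reduction argued for above.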
One aggregator should be placed in the network where the source is (in a video distribution scenario). In a
video conference scenario this is not possible, because the sources can be in any network; there, the aggregators
should be placed in the other networks. When considering a scenario that is not single-hop (Figure 2-15), it is
advisable to aggregate the feedback information before sending it back to the source over the wireless channel.
2.2.2.5.1 Tasks of the aggregators
The main task of the aggregator nodes is to reduce the network utilization in the upstream part of the
communication by aggregating the feedback information of several receivers and sending back only one
feedback packet per aggregator. Thus, the aggregators should have some of the functionality of the source: they
must be able to calculate some kind of average of the received parameters, and they should also be able to
handle outliers (mentioned earlier in chapter 2.2.2.3.2). There is also the possibility of using more intelligent
nodes, which not only aggregate feedback information but also transcode or change the quality of the
multimedia stream passing through them (active agents). With these kinds of nodes, a multimedia overlay
network and a more efficient adaptation method can be created.
Tasks overview:
• Aggregate feedback from the receivers
o Calculate averaged values for packet loss, delay, jitter, etc.
o Handle outlier feedbacks
• In case of active agent nodes
o Transcode multimedia stream to given quality based on the feedback
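The averaging and outlier handling listed above can be sketched as follows; the median/MAD filter and the 5-MAD threshold are illustrative choices, not policies mandated by the Phoenix specification:

```python
from statistics import mean, median

def aggregate_feedback(reports):
    """Collapse per-receiver feedback into a single report.

    `reports` is a list of dicts with 'loss', 'delay' and 'jitter' keys.
    Outliers -- e.g. one receiver on a hopeless channel that would drag
    the whole group down -- are discarded with a median/MAD filter
    (a hypothetical policy chosen for illustration).
    """
    def robust_mean(values):
        if len(values) < 3:
            return mean(values)
        med = median(values)
        mad = median(abs(v - med) for v in values)
        if mad == 0:                        # all values (nearly) identical
            return med
        kept = [v for v in values if abs(v - med) <= 5 * mad]
        return mean(kept)

    return {key: robust_mean([r[key] for r in reports])
            for key in ("loss", "delay", "jitter")}
```

The aggregator then forwards the single resulting report towards the source, replacing the individual reports of all its receivers.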
2.2.3 Adaptive video distribution
One of the most important factors in the project is how the server can adapt itself to the needs of the various
receivers, which depends on the way the information is delivered to them. In the following sections, we
describe different adaptive multicast distribution methods.
2.2.3.1 Video distribution schemes
The following schemes give different solutions to the adaptation problem. There are currently 5 different
approaches [31]:
• Single-rate non-adaptive
• Single-rate adaptive
• Simulcast
• Layered multicast
• Agent-based multicast
Table 2-5 shows a small comparison of the mentioned video distribution schemes.
Table 2-5 Comparison of different adaptive video distribution schemes
2.2.3.1.1 Single-rate non-adaptive
This is the simplest multicast distribution method: the server offers only one quality of the given video stream
and no adaptation is available from the server. Because of this, the method is outside the scope of the project.
2.2.3.1.2 Single-rate adaptive
Distribution scheme       | Adaptation mechanism      | Network requirement     | Coding requirement
Single-rate non-adaptive  | None                      | -                       | None
Single-rate adaptive      | Scalable feedback control | -                       | Rate control
Simulcast                 | Scalable feedback control | -                       | Rate control, Transcoding
Layered: Network driven   | Priority dropping         | Priority identification | Scalable coding
Layered: Receiver driven  | Joining/leaving groups    | -                       | Scalable coding
Agent-based multicast     | Transcoding in agents     | Active service node     | Transcoding
This scheme is a modified version of the one mentioned above, in which the source has several versions of the
video stream in different qualities. This enables it to adapt itself to the underlying network parameters. The
source still sends only one stream, but the quality of the stream can change over time. The quality is determined
by the feedback the receivers send back to the source. For fast adaptation, the receivers need to send their
feedback very often: the more often they send it, the more accurately the source can set the quality parameter.
However, this also increases bandwidth consumption, which means that high bandwidth is needed in the
upstream direction as well.
It can also lead to flooding of the source. One solution to this problem is for the source to generate a 16-bit
random key and send it together with a number indicating how many bits of the key are significant. Each
receiver then generates a key as well, but only responds to the solicitation if the significant bits of the two keys
match. In small networks, fewer matching bits are required, while in large networks the source should use a
higher number of significant bits; since in larger networks the probability of collecting enough appropriate
feedback is higher, the scheme scales well.
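The key-matching suppression scheme can be sketched as follows (the receiver-side check only; the field sizes follow the 16-bit key described above, and matching on the leading bits is an assumption about the otherwise unspecified comparison rule):

```python
import random

def should_respond(source_key: int, significant_bits: int,
                   rng: random.Random) -> bool:
    """Receiver-side check for key-matching feedback suppression.

    The source advertises a 16-bit random key plus the number of leading
    bits that must match.  Each receiver draws its own 16-bit key and
    answers the solicitation only if those bits agree, so on average
    roughly N / 2**significant_bits of N receivers respond.
    """
    receiver_key = rng.getrandbits(16)
    shift = 16 - significant_bits
    return (receiver_key >> shift) == (source_key >> shift)
```

With 4 significant bits, for example, about one receiver in 16 responds, so the source can tune its feedback volume by adjusting a single advertised number.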
Although this distribution scheme is very simple and the source implosion problem can easily be eliminated, it
has one major disadvantage: the source has to adapt itself to the worst receiver. This means that a single bad
receiver forces all the other receivers to get the same low-quality stream, even though they would be able to
receive better quality. This is not acceptable, especially since in wireless networks there is a high probability of
hosts with low-quality channels and low bandwidth.
2.2.3.1.3 Simulcast
The basic concept of this distribution scheme is that the source sends out several streams of the same video at
different qualities, and each receiver chooses the quality best suited to its network conditions. A working
implementation of this scheme is the DSG (Destination Group Setting) protocol, which uses three different
video qualities. All of them are forwarded towards the receivers, who decide which one to actually receive. If
the network conditions change, the receiver switches to a better or worse quality. Changing between the
different qualities means joining and leaving different multicast groups. Each multicast group carries only the
quality assigned to that group, so the members of a group receive only one stream and not all three of them.
The only parts of the network where several streams have to be forwarded are those where the multicast trees
overlap. No special support is needed from the network, because normal IP multicast is enough to handle the
changes in group membership.
The flooding problem does not appear here, because no feedback needs to travel back to the source: the
receivers adapt themselves to the network conditions. There is, of course, a problem with very good and very
bad receivers. When a receiver's conditions are worse than the worst quality or better than the best quality, there
is no way to adapt to them. This is more severe for the bad receivers, which will get the worst-quality stream
with high packet loss, while the very good receivers simply get the best available quality (even though they
could receive better) without any problem.
2.2.3.1.4 Layered adaptation
Layered multicasting is a distribution method with which several qualities of a video stream can be sent at once.
The essence of the method is to separate the video stream into layers: a base layer and one or more
enhancement layers. The base layer must be decodable by itself, and the enhancement layers only add extra
information to it, raising the quality of the video stream. Figure 2-16 shows an example of a simple layered
coding scheme for a video stream. There are two approaches to deciding how the layers are handled:
• Network-driven approach
• Receiver-driven approach
Figure 2-16 Example of a layered-coded video stream
2.2.3.1.4.1 Network-driven adaptation
When there is not enough bandwidth, the routers can drop as many packets from the enhancement layers as
they need to. This method guarantees that the receivers will get the best quality they are able to receive. The
scheme can be further improved by assigning priorities to the enhancement layers. The base layer must always
reach the receivers, and with the priority information we can tell the routers which enhancement layers add the
most extra information to the base layer, and thus which layers should be forwarded with higher priority. This
means that the routers are given the task of deciding whether they can forward the given packets, so the method
requires routers that can handle the priority information.
2.2.3.1.4.2 Receiver-driven adaptation
When the receiver can choose which layers it wants to receive, the scheme is called receiver-driven layered
multicasting. In this case, no support is needed from the network. A receiver periodically joins a higher layer to
explore the available bandwidth. If packet loss exceeds some threshold after this join experiment, the receiver
leaves the group; otherwise it stays at the new subscription level. Receiver-driven congestion control is also
possible: the sender temporarily increases the sending rate on a layer, and a receiver joins a higher layer only if
there is no packet loss during this experiment.
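The join-experiment logic can be sketched as follows; the loss threshold value and the single-step probing are illustrative assumptions:

```python
class LayerSubscription:
    """Receiver-driven layered multicast: probe one layer up, back off
    on loss.  A minimal sketch; real implementations also use join timers
    and shared learning between receivers of the same session.
    """
    def __init__(self, max_layers: int, loss_threshold: float = 0.05):
        self.level = 1                      # base layer is always received
        self.max_layers = max_layers
        self.loss_threshold = loss_threshold

    def join_experiment(self, observed_loss: float) -> int:
        """Try the next layer, then keep or drop it based on measured loss."""
        if self.level < self.max_layers:
            self.level += 1                 # join the next multicast group
            if observed_loss > self.loss_threshold:
                self.level -= 1             # leave again: path is congested
        return self.level
```

Each subscription level corresponds to membership in one more multicast group, so "joining a layer" maps directly onto an MLD group join.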
2.2.3.1.5 Agent based adaptation
The significance of this method is that the video stream is controlled all the way along its path from the source
to the receivers. For this, some intelligence has to be placed into the network, called an agent. The tasks of the
agents are to send feedback to the source, to the receivers and to the other agents, and also to handle the
multicast stream. There are two types of agents:
• Passive: it only collects information and sends out its feedback.
• Active: it can actively change the video stream. This should not be done too often, because it takes a
lot of computational resources and, in addition, changing the coding of the stream will affect the
remaining part of the network.
With this method we can optimize the coding of the video stream at every part of the network. In the RTP
protocol some of these tasks are already built in. These services are called:
• RTP-level mixer
• Translator
2.2.3.2 Proposed solution: Adaptive simulcast
While the methods described above were mainly developed for the wired Internet, the mobile environment
creates additional requirements that must be met as well. The methods using feedback information must face
the fact that the uplink of a wireless network has much less bandwidth than a wired network. Another important
fact is that the parameters of the wireless channel change more often, so the adaptation must be considerably
faster.
The basis of our work is the joint source and channel coding method. The concept of this technique is to control
and adapt both the source and the channel coding parameters according to the network parameters. In our case,
however, to allow faster adaptation and more bandwidth saving, the coders are separated: the source coder
resides at the server node, while the channel coder is placed right before the wireless part of the network, at the
access points. This helps the whole process in several respects. First of all, the channel coders obtain the
parameters of the wireless channels more quickly, so they can adapt rapidly. Also, they send back to the source
only as much information about the wireless channel as the source coder needs to fulfill its task [32][33].
Another task of the channel coder is to gather the parameters of the wired network part needed by the source
coder and send them back.
In this way, the channel coders act like active agents. Although they cannot modify the source coding
parameters of the media flow, they can modify the channel parameters in real time. This mainly means
choosing the channel coding method best suited to the given receivers and setting the redundancy used in that
method.
As the basis of the video distribution method, we chose simulcast. The two most popular techniques are
simulcast and layered multicast, and we decided in favor of simulcast for two reasons. Firstly, layered multicast
needs redundant coding, while simulcast only needs the media file transcoded into the required qualities, which
is much simpler. Secondly, the layered coding technique creates very large overhead: it can reach a hundred
percent, which means that its bandwidth consumption can equal or exceed the bandwidth used by the
simultaneously sent simulcast flows. It is also easier to create end-user software for simulcast, because there is
no need to synchronize the data coming from the several layers of the layered approach [34][35].
Our main extension to simulcast is to add adaptation features on the server side as well. So while the user
application still chooses the quality it wishes to receive, the server side fine-tunes the delivered media feed
based on the feedback information it receives from the channel coders (network-level parameters such as
packet loss, jitter, etc.) and from the receivers (wireless channel parameters such as bit error rate, etc.). This
fine-tuning is needed because of the coarse spacing of the different simulcast qualities: with large user numbers
we cannot afford a different quality for every second or third user, as that would effectively amount to falling
back to unicast. The connection between these methods can be seen in Figure 2-17.
The adaptation capabilities can be extended further by letting, for example, the users who have the best channel
parameters receive a slightly better stream if they are able to. This is done by extending simulcast so that the
different feeds are sent not at fixed qualities but within quality ranges. For example, instead of a 500 kbit/s
quality feed, the server offers a 450-550 kbit/s quality range; it changes the coding parameters in real time
based on the feedback information and tries to adapt to the users.
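The fine-tuning inside a quality range can be sketched with a simple rate controller; the AIMD-style constants and the loss threshold below are hypothetical, and only the clamping to the 450-550 kbit/s range comes from the example above:

```python
def adapt_rate(current_kbps: float, feedback_loss: float,
               q_min: float = 450.0, q_max: float = 550.0) -> float:
    """Fine-tune one simulcast feed inside its quality range.

    AIMD-style controller with hypothetical parameters: back off
    multiplicatively when the aggregated feedback reports loss, probe
    additively otherwise, but never leave the [q_min, q_max] range
    assigned to this simulcast quality.
    """
    if feedback_loss > 0.02:
        candidate = current_kbps * 0.9     # multiplicative decrease
    else:
        candidate = current_kbps + 10.0    # additive increase
    return min(q_max, max(q_min, candidate))
```

Because the rate is clamped to its range, users of neighbouring qualities are unaffected and the scheme never degenerates into per-user unicast.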
Figure 2-17 The comparison between unicast, simulcast and normal multicast (Source: [35])
2.2.4 Multicast distribution architecture
In this chapter, we give a brief summary of the architectural elements we consider best suited to the Phoenix
project.
2.2.4.1 Routing and multicast model
There are two aspects to consider when selecting a multicast protocol: which multicast model we want to use,
and which mode is better suited to the applications in the project.
2.2.4.1.1 Dense-Sparse mode
One of the main preconditions of the dense mode protocols is that there is enough bandwidth, that there are
many users, and that their distribution is dense. Because of the wireless parts, we cannot count on high
bandwidth, so we must use sparse mode protocols. Among these, PIM-SM is the best choice, because of the
advantages already mentioned in chapter 2.2.1.
2.2.4.1.2 Multicast model
The SSM model, and with it the PIM-SSM and MLDv2 protocols, would suit us better, but there are currently
no working implementations of these protocols; in fact, the architectural description of PIM-SSM is still only
an Internet Draft. That is why we use the PIM-SM and MLDv1 protocols for multicast routing and group
management.
2.2.4.2 Feedback propagation
For the feedback propagation, we suggest using a feedback aggregation mechanism. The feedback aggregators
should be reached either directly by anycast, or by anycast-based service discovery followed by normal unicast.
2.2.4.3 Video adaptation
The video distribution method we use is the so-called adaptive simulcast method. Its basis is the simulcast
approach, extended with adaptation capabilities. The other extension to simulcast is agent-based: an example is
the channel coder located at the access points, which can change the channel coding parameters in real time.
Another use of agents can be the placement of active transcoders (combined with the feedback aggregators) in
the network.
2.3 IPv6
IPv6 (initially called IPng, Internet Protocol Next Generation) is the new version of the IP protocol, developed
by the IETF (Internet Engineering Task Force, the main organization concerned with Internet data transmission
standards) at the beginning of the 1990s. The previous version, IPv4, is the protocol implemented in most
computers and networks today.
The future Internet needs security, authentication and privacy mechanisms, and users need enough capacity to
support new multimedia applications (which use more and more bandwidth) with guaranteed video and audio
delays.
IPv6 was developed for several reasons. The most important one is the growing number of machines connected
to the Internet every day. In the near future, the 32 bits used in IPv4 (which allow 2^32 different IPv4
addresses) will not be enough to address them all. In IPv6, the address space has been dramatically increased to
2^128 addresses (128 bits per address). This shortage of addresses in IPv4 stems from the original development
of the IP protocol in the 1970s, when its creators did not foresee the huge success the protocol would have in
many fields, not only in science and education but also in daily life. The problem can be partially solved by
address reassignment or NAT (Network Address Translation); the latter uses one public IP address for the
whole network and private IP addresses inside it. With this solution, however, many applications are confined
to intranets, since many protocols, such as real-time protocols or IPsec, cannot traverse NAT devices. In
addition, another problem would remain: the great size of the routing tables in the Internet backbone, which
adversely affects response times.
After IPv4 was created, the many new applications built on it made it necessary to create "patches". The best
known are those related to Quality of Service (QoS), security (IPsec) and mobility (Mobile IP). The
inconvenience of these extensions arises when they are used simultaneously, because they were designed
independently and after the fact.
The main new characteristics of IPv6 are:
o Expanded addressing capabilities. IPv6 increases the size of the IP address from 32 bits to 128 bits.
Solutions designed to solve the lack of addresses, like NAT, are not necessary now with IPv6. The
scalability of multicast routing is improved by adding a "scope" field to multicast addresses. In addition, a
new type of address called an "anycast address" is defined, used to send a packet to any one of a group of
nodes.
o Flexible and simplified header format. The fixed size of the IPv6 header (40 bytes) and the optional
extension headers placed after it allow routers to save processing time and make packet routing more
efficient.
o Enhanced support for extensions and options. The changes in the encoding of header options make the
limits less stringent and leave more flexibility for introducing new options in the future. Moreover, IP
packets are delivered more efficiently because there is no fragmentation in the routers; hence routing in
the network backbone is more efficient and faster.
o Autoconfiguration. Devices can be configured without the need for servers: they can configure their
own IPv6 addresses using the information they obtain from the network router. Moreover,
reconfiguration facilities are available.
o Security. In IPv4, the only ways to solve security problems were SSL for transport-level security, SSH
or HTTPS for application-level security, or IPsec for network-level security, which is not commonly
used. In IPv6, however, support for IPsec is mandatory.
o Quality of Service (QoS) and Class of Service (CoS). Packets from particular traffic flows can be
labeled, and senders can request special treatment for them, such as quality of service or real-time
service. It is possible to reserve network resources for multimedia applications with guaranteed
bandwidth and delay.
o Mobility. Increasingly, the tendency is to have network connectivity everywhere and the same
functionality regardless of where one connects. With protocols such as MIP (Mobile IP) or HMIP
(Hierarchical MIP), multimedia services like voice over IP or video on demand can be used without
interrupting the active connections, even when changing networks, as will be explained later. Again,
these are patches for the IPv4 protocol, but in IPv6 mobility is a mandatory functionality that every
IPv6 system must implement, because it has been included in the protocol from the start. This feature
will be very important when UMTS mobile telephone networks begin to operate.
o Authentication and privacy capabilities: IPv6 specifies extensions that use authentication, data integrity
and confidentiality.
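As an illustration of the autoconfiguration feature, the following sketch derives a stateless (SLAAC) address from a router-advertised /64 prefix and a MAC-derived modified EUI-64 interface identifier; note that modern hosts may instead use privacy extensions, so this is one illustrative mechanism rather than the only one:

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Modified EUI-64 interface identifier from a 48-bit MAC address:
    insert 0xFFFE in the middle and flip the universal/local bit."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02, b[1], b[2], 0xFF, 0xFE, b[3], b[4], b[5]])
    return int.from_bytes(eui, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Stateless autoconfiguration: combine the router-advertised /64
    prefix with a locally derived interface identifier -- no server
    (e.g. DHCP) involved."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "SLAAC expects a /64 prefix"
    return net[eui64_interface_id(mac)]
```

The host learns only the prefix from the router advertisement and computes the rest itself, which is what makes serverless configuration possible.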
It must be noted that these are only the basic characteristics. The structure of the protocol itself allows it to
grow and to scale according to the requirements of new applications or services; scalability is the most
important feature of IPv6 compared with IPv4.
IPv6 is a fundamental ingredient of the vision of a mobile information society. Nowadays the number of
wireless phones already far surpasses the number of fixed Internet terminals, and IPv6 stands out as the only
viable architecture that can accommodate the new wave of Internet-capable cellular devices. In addition, IPv6
enables the supply of the services and benefits demanded by mobile infrastructures (GPRS, General Packet
Radio Service, or UMTS), broadband networks, consumer electronics and terminals, and the corresponding
interoperability and management.
2.4 Mobile IPv6
Mobile IPv6 provides transparent layer-3 mobility to the upper layers (e.g. UDP), so a mobile node remains
reachable at its home address no matter whether it is connected to its home network or to another one [36]. The
transition, or handover, between networks is transparent to the upper layers, and the connectivity loss during
the handover is due to the exchange of the corresponding signaling messages.
Every mobile node (MN) has a home address, which is its original network address. This address is retained
even when the mobile node moves to another network. Packets sent to the mobile node while it stays in its
home network are routed normally, as if the node were not mobile. The prefix of this address is the prefix of the
network where the node is originally connected.
When a mobile node moves to a network different from its home network, it obtains a new "guest" address
(care-of address, CoA), belonging to the address space of the visited network. The mobile node can acquire its
care-of address through conventional IPv6 mechanisms, such as stateless or stateful auto-configuration. From
then on, it can also be reached at this new address (in addition to the home address). After obtaining the new
address, the mobile node contacts a router in its home network (the home agent, HA) and, with a registration,
communicates its current CoA to it. Afterwards, when a packet is sent to the mobile node's home address, the
home agent intercepts it and tunnels it to the mobile node's CoA. This correspondence between the mobile
node's home address and its current CoA (while it stays in the new network) is called a binding.
With this mechanism, the packets reach the mobile node's current location: because the CoA belongs to the
address space of the subnet where the node is attached, IP routing delivers the packet sent by the home agent
(which carries the original IP packet inside the encapsulation) to the mobile node. The mobile node can have
more than one CoA. This situation is typical of wireless networks (for example cellular telephone systems),
where one mobile node can be connected to several networks simultaneously (for example several overlapping
cells) and must be reachable through any of them.
Normally, a mobile node obtains its CoA via stateless autoconfiguration, although it can also use stateful
methods (like DHCPv6) or static preassignment.
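The binding concept can be sketched as a simple cache mapping home addresses to care-of addresses; this is a toy model of the home agent's binding cache (field names and lifetimes chosen for illustration), not a complete Mobile IPv6 implementation:

```python
class BindingCache:
    """Home-agent view of Mobile IPv6 bindings: home address -> current
    care-of address, each with a lifetime after which the binding expires.
    A deliberately simplified sketch of the home agent's role.
    """
    def __init__(self):
        self._bindings = {}

    def register(self, home_addr: str, coa: str, lifetime_s: int, now: float):
        """Process a Binding Update sent by the mobile node."""
        self._bindings[home_addr] = (coa, now + lifetime_s)

    def lookup(self, home_addr: str, now: float):
        """Return the CoA to tunnel a packet to, or None if the node has
        no current (unexpired) binding, i.e. is assumed to be at home."""
        entry = self._bindings.get(home_addr)
        if entry is None or now > entry[1]:
            return None
        return entry[0]
```

On an expired or missing binding the home agent delivers packets on the home link as usual; otherwise it intercepts them and tunnels them to the returned CoA.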
A typical packet exchange between a mobile node and a correspondent node is shown in Figure 2-18. A packet
sent by the correspondent node to the mobile node while the latter is visiting a foreign network arrives at the
home network, where the home agent intercepts it and forwards it to the mobile node's current location. The
figure also shows that packets sent by the mobile node are delivered directly. This has been changed in the
latest specifications of Mobile IPv6 (due to security problems): in the most basic case, packets sent by the
mobile node go through a reverse tunnel.
MIPv6 also provides mechanisms that allow a direct connection between the mobile and the correspondent
node, without the intervention of the home agent. The communication pattern in which the home agent relays
the traffic is called "triangle routing" and is less efficient. This pattern also appears in IPv4, but IPv6 includes a
route optimization mechanism that avoids it.
This route optimization procedure allows the mobile node to put its CoA (the temporary address used while
visiting a network) as the source address in the IPv6 packets it sends to correspondent nodes. The mobile node
includes an IPv6 destination option called "home address", which contains its home address, so the
correspondent nodes can identify which mobile node the packets come from. In the other direction, the packets
sent by a correspondent node to the mobile node carry the CoA of the mobile node as the destination address.
Again, to achieve transparent mobility, these packets carry a special IPv6 routing header with a single hop, set
to the mobile node's home address.
Figure 2-18 Route optimization in Mobile IPv6
3 Quality of Service
In this section, a brief description of the Quality of Service (QoS) concept in the IP world is first provided in
terms of general definitions and specifications. Second, the descriptions of the traffic and of the reference
working scenarios are introduced. Then a summary of the main principles and of the analysis and simulation
results is reported and discussed, together with some considerations about the design choice of deploying and
examining a specific scheduling discipline (WFQ). Finally, the application of the measurement process to the
specific WFQ proposal is described.
3.1 Providing Quality of Service in IP networks
In the previous release of this deliverable (D3.2a), the issue of providing Quality of Service (QoS) on an IP network was tackled by focusing on a single interface and considering a specific family of scheduling disciplines, namely GPS (Generalized Processor Sharing). The concerns related to resource management for achieving target QoS guarantees (more specifically, delay and loss) were analyzed with reference to a well-known scheduling scheme, WFQ (Weighted Fair Queuing).
A dynamic version of this algorithm was proposed to better exploit the available transmission resources. The collected simulation results have shown the benefits of such a dynamic WFQ: it is possible to obtain relative delay differentiation among the various traffic aggregates at a given interface, according to the numerical factors assigned to each associated queue (termed Pi for the i-th queue). With proper planning and resource provisioning of the network, it is even possible to achieve absolute QoS guarantees.
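To give a concrete sense of how per-queue weights drive the service order in a WFQ-like scheduler, the following sketch computes virtual finish times in the textbook WFQ approximation. It is an illustrative toy, not the dynamic scheduler proposed in this deliverable; the class and variable names are ours.

```python
import heapq

class ToyWFQ:
    """Minimal weighted fair queuing approximation: each arriving packet
    gets a virtual finish time F = max(V, F_prev) + size / weight, and
    packets are served in increasing F order."""
    def __init__(self, weights):
        self.weights = weights                    # weight P_i per queue i
        self.last_finish = {q: 0.0 for q in weights}
        self.virtual_time = 0.0
        self.heap = []                            # (finish, seq, queue, size)
        self.seq = 0

    def enqueue(self, queue, size):
        start = max(self.virtual_time, self.last_finish[queue])
        finish = start + size / self.weights[queue]
        self.last_finish[queue] = finish
        heapq.heappush(self.heap, (finish, self.seq, queue, size))
        self.seq += 1

    def dequeue(self):
        finish, _, queue, size = heapq.heappop(self.heap)
        self.virtual_time = finish                # crude virtual-time advance
        return queue, size
```

With weights {a: 3, b: 1} and equal-size packets, queue a is served roughly three times as often as queue b, which is the relative differentiation effect described above.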
However, the space of possible network scenarios, design choices and configuration parameters is enormous, and further investigations into the behavior of a dynamic version of a GPS-like scheduler are needed to effectively provide the desired QoS under arbitrary working conditions.
It will be highlighted that the measurement process plays a fundamental role in obtaining a system that is both effective and consistent. Moreover, the related computational overhead must be properly taken into account, since it is the most expensive drawback of the proposed novel scheduling module.
The designed scheme was essentially devised to investigate the performance that can be achieved with a dynamic version of WFQ, but it is foreseeable that better algorithms exist, both in terms of resource exploitation and of stricter control of the target QoS guarantees.
After a very brief recap of the analysis already conducted and the results achieved, the following sections first provide further simulation results and studies of the basic dynamic WFQ proposal, covering a wider set of working scenarios, design choices and configuration settings. In this way, conclusions helpful to a network designer or administrator can be drawn.
A light and consistent measurement process is then proposed and evaluated for different types of traffic aggregates, relevant figures (e.g. buffer dimensions, peak/mean rates) and setup options. In particular, the application of such a mechanism to the basic dynamic WFQ is studied and assessed against the already deployed measurement processes, also with traffic aggregates whose component flows vary in average rate and real-time characteristics.
Finally, a more sophisticated and better-performing dynamic WFQ scheduling scheme is conceived and analyzed in detail, as the final solution for the resulting PHOENIX system.
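To convey what a "light" measurement process can look like, the sketch below estimates the average rate of a traffic aggregate with a time-aware exponentially weighted moving average. This is a generic estimator of the kind commonly used for such purposes, not the specific process evaluated in the following sections; the time constant is illustrative.

```python
import math

class EwmaRateEstimator:
    """Time-aware EWMA rate estimator: on each packet arrival the old
    estimate is decayed according to the inter-arrival gap, so idle
    periods pull the measured rate down without any periodic timer."""
    def __init__(self, tau):
        self.tau = tau        # averaging time constant (seconds)
        self.rate = 0.0       # estimated rate (bytes/s)
        self.last_t = None

    def on_packet(self, t, size):
        if self.last_t is None:           # first packet: no gap yet
            self.last_t = t
            return self.rate
        dt = t - self.last_t
        self.last_t = t
        w = math.exp(-dt / self.tau)
        # blend the instantaneous rate of this gap with the old estimate
        self.rate = w * self.rate + (1.0 - w) * (size / dt)
        return self.rate
```

Per-packet cost is a few arithmetic operations and one exponential, which is what makes this family of estimators attractive when the computational overhead of the scheduler is a concern.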
3.2 Traffic and reference working scenarios descriptions
First of all, for the sake of completeness, we briefly recall the considered network traffic and the reference scenario, including the scheduler's main configuration parameters and the link interface characteristics.
As already stated in D3.2a, the generated traffic must represent a typical aggregate in a Differentiated Services network.
For the first set of simulations, we employed H.263 video flows at different mean bit rates, ranging from 64 to 256 kbit/s, created from real traces of video streaming and conferencing applications. Figure 3-19 shows the traffic generated by such a source. Given the nature of a typical compressed video flow, the bit rate can be highly variable, with a burstiness factor (peak-to-mean rate ratio) as high as 10.
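The burstiness factor mentioned above can be computed from a packet trace by comparing the peak rate, taken over short fixed windows, with the long-term mean rate. A minimal sketch (the window length and the trace format are illustrative choices of ours):

```python
def burstiness(trace, window, duration):
    """trace: list of (timestamp_s, size_bytes); duration: trace length in
    seconds. Returns the peak/mean rate ratio, with the peak computed over
    fixed windows of the given length."""
    total = sum(size for _, size in trace)
    mean_rate = total / duration
    buckets = {}
    for t, size in trace:
        idx = int(t // window)                       # which window this packet falls in
        buckets[idx] = buckets.get(idx, 0) + size
    peak_rate = max(buckets.values()) / window
    return peak_rate / mean_rate
```

For a compressed video trace like the one in Figure 3-19, shrinking the window makes the measured peak (and thus the factor) grow, so the window length must be stated alongside any quoted burstiness value.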
EXECUTIVE SUMMARY ........................................... 3
1 INTRODUCTION .............................................. 4
2 NETWORK LAYER PROTOCOLS AND MECHANISMS .................... 5
3 QUALITY OF SERVICE ........................................ 31
4 INITIAL TRANSPORT LAYER DESIGN ............................ 84
5 SIMULATIONS AND TESTING ................................... 88
6 CONCLUSIONS ............................................... 113
INDEX OF FIGURES AND TABLES ................................. 114
REFERENCES .................................................. 118
LIST OF ACRONYMS ............................................ 120
Executive Summary
The main purpose of this document is to provide the vital information regarding the specification and design of transport and network layer protocols and mechanisms in the Phoenix project. The document describes the transport and network layer functionalities, including multicasting and Quality of Service, and surveys testing and simulation methods, experiments and results. It focuses on studying and optimizing existing network and transport layer protocols and methods, as well as on developing new mechanisms to improve the QoS guarantees of the multimedia streams in the Phoenix system.
The first chapter gives a brief introduction to the parts and main elements of the document.
In the second chapter, the network protocols designed for layered video streams and multimedia are evaluated and optimized in order to provide the information required and delivered by JSCC controllers through the network and the network protocol stack. This chapter gives a short review of the UDP, UDP-Lite, DCCP, DCCP-Lite, SCTP, RTP and RTCP protocols, extended with new elements and results. Multicast protocols are also taken into consideration, together with IPv6 and its mobility support. The basic concepts related to the advantages of multicast distribution are briefly presented, as well as the main principles of the multicast group management and routing protocols chosen for the Phoenix project. The signaling information traveling back and forth between the source and the receivers is then described, and the different adaptive video distribution schemes for the multicast scenario are considered. Finally, the basics of IPv6 (the main network layer protocol of the Phoenix system) are detailed, as well as its integrated mobility extension, Mobile IPv6.
Chapter 3 concentrates on the questions raised by providing Quality of Service in IP networks. After a very brief description of the QoS concept, the analysis already conducted and the results achieved, the chapter first provides further simulation results and studies of the basic dynamic WFQ proposal, over a wider set of working scenarios, design choices and configuration settings. A light and consistent measurement process is proposed and evaluated for different types of traffic aggregates, relevant figures and setup options. In particular, the application of such a mechanism to the basic dynamic WFQ is studied and assessed against the already deployed measurement processes, also with traffic aggregates whose component flows vary in average rate and real-time characteristics. Finally, a more sophisticated and better-performing dynamic WFQ scheduling scheme is conceived and analyzed in detail, as the final solution for the resulting Phoenix system.
Chapter 4 deals with the initial transport layer design. It starts with a short description of the transport layer requirements of the Phoenix system, followed by the most important facts about the UDP-Lite modules of the Basic Chain simulator and by the details of the transport layer mechanism implementations, namely the partial checksum, congestion control and PMTU discovery mechanisms.
Chapter 5 contains testing and simulation issues regarding the transport protocols, mobility and multicast data transmission. The first part of the chapter describes the simulation models of UDP-Lite and DCCP. The simulation results for the transport protocols are not presented here, since the performed simulations are highly dependent on the link layer solution presented in Deliverable 3.4b for the wireless radio interface. The second part deals with the Transport protocol testbed, which provides a platform for performing protocol tests and measurements in an identical environment. The packet loss rate, throughput and packet jitter (i.e. packet arrival time variation) were the main attributes, and they were compared using different BER values. Theoretical packet loss probabilities were also used in the comparison. The results presented in this chapter were used in the indicative theoretical analysis performed for UDP-Lite. The Mobility testbed, created to examine and analyze the properties of different mobility methods in a real physical environment, is then described, followed by the measurement results of unicast and multicast multimedia streaming over Mobile IPv6. Finally, the Multicast testbed is detailed, with the results of the tests comparing adaptive simulcast to other video distribution methods.
The last chapter (Chapter 6) draws the conclusions and outlines future work on the transport and network layer issues of the overall system.
1 Introduction
The work in task 3.2, IP Networking, concentrates on studying and optimizing existing network and transport layer protocols, as well as on developing new mechanisms to improve the QoS guarantees of the multimedia streams in the Phoenix system. The transmission protocols for layered video streams and multimedia are evaluated and optimized in order to maintain and monitor the required end-to-end QoS level, and to provide the information required and delivered by JSCC controllers through the network and the network protocol stack. At the transport layer, several different protocols (e.g. UDP, UDP-Lite, DCCP, DCCP-Lite, SCTP and RTP/RTCP) will be studied in order to form a basis for the optimization of end-to-end transport protocols and for possible new protocol extensions (such as an RTP payload format for SVC). At the network layer, some multicast distribution models and different multicast group management schemes will be considered, in order to bundle together receivers with similar QoS requirements. The effect of mobility on the wireless link is also considered at the IP layer. Furthermore, the simulation issues regarding transport layer modeling will be described, as well as the examinations performed in the different transport and network protocol testbeds.
2 Network layer protocols and mechanisms

The main network layer protocol considered for the PHOENIX system in the first phase is the IPv6 protocol. The transport layer protocols include the UDP and UDP-Lite protocols. These protocols also form the basis of the Basic Chain system. RTP & RTCP have to be used for streaming support on top of UDP and UDP-Lite, complemented with multicasting features. Other promising candidates include DCCP, DCCP-Lite and SCTP. A basic introduction to all of these protocols was already given in D3.2a. A short review of them follows here, extended with some new elements and results.

2.1 Transport protocols: UDP, UDP-Lite, DCCP, DCCP-Lite, SCTP, RTP & RTCP

2.1.1 UDP & UDP-Lite

UDP [6] provides a connectionless, uncontrolled-loss (i.e. best-effort), maybe-duplicates, unordered service. It adds demultiplexing (port numbers) and an optional data integrity service (checksum) to IP. The checksum is optional with IPv4 but mandatory with IPv6, because IPv6 does not have a checksum of its own. The UDP checksum always includes the network layer pseudoheader, which differs slightly between IPv4 and IPv6. Disabling the UDP checksum with IPv6 is not really an option, because then damaged headers might pass unverified and packets might be misdelivered. UDP has neither an error reporting nor an error recovery mechanism: damaged packets are discarded. The data rate is set by the sending application; UDP has no congestion control. UDP-Lite [7] introduces a partial checksum to UDP: if errors are detected in the sensitive part of the packet (network layer pseudoheader, UDP-Lite header or part of the beginning of the packet payload), the packet is discarded. If errors are in the insensitive part of the packet (the payload), the packet is not discarded. This enables e.g. RTP to checksum its header but not its payload.
The partial checksum is implemented by changing the UDP Length field into a Coverage field: it specifies the coverage of the checksum and is defined by the sending application on a per-packet basis. This change is possible because there is some redundancy (UDP length field = IP length field – size of IP header). Setting Coverage equal to or bigger than the packet length turns UDP-Lite into traditional UDP.

2.1.2 DCCP & DCCP-Lite

DCCP is designed for use with streaming media (packet stream; the application is responsible for framing). It provides an unreliable flow of datagrams with acknowledgements and a reliable three-way handshake for connection setup and teardown. However, there are no retransmission methods for datagrams. Only options are retransmitted, as required to make feature negotiation and acknowledgement information reliable. Feature negotiation means that endpoints can agree on the values of features or properties of the connection, for example the congestion control mechanism to be used. Two TCP-friendly congestion control mechanisms are currently available:
• TCP-like [9] (CCID 2) for flows that want to quickly take advantage of available bandwidth and can cope with quickly changing send rates, and
• TCP-Friendly Rate Control [10] (CCID 3) for flows that require a steadier send rate, such as streaming applications.
DCCP has 10 different packet types. The DCCP-Request, Response and Ack packet sequence is used in the connection setup. After that, the data transmission phase is implemented with Data, Ack and possibly DataAck packets. CloseReq, Close and Reset packets are used in tearing down the connection; the Reset packet can also be used to close the connection abnormally. Sync and SyncAck packets are used to re-synchronize the sender and receiver, for example after a burst of losses. All DCCP packets begin with a generic DCCP packet header.
All DCCP packets may also contain options, which occupy space at the end of the header and are a multiple of 8 bits in length. The partial checksum mechanism in DCCP always includes the whole header, including the options and the network layer pseudoheader. The data payload can be protected as a whole or only for the first n*4 bytes (0≤n<15). DCCP-Lite [14] is a simplified version of DCCP. Unfortunately, it does not support the partial checksum. Page 5/120
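The partial-checksum idea shared by UDP-Lite and DCCP can be sketched in a few lines of Python. This is an illustrative model, not an implementation of either protocol: the function names and the placeholder pseudoheader are our own assumptions, and only the coverage semantics described above are reproduced (a 16-bit one's complement Internet checksum over the pseudoheader plus the covered prefix of the packet, with a coverage of 0 covering the whole packet, as in UDP-Lite).

```python
def ones_complement_sum(data: bytes) -> int:
    """16-bit one's complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udplite_checksum(pseudo_header: bytes, packet: bytes, coverage: int) -> int:
    """Checksum over the pseudoheader plus the covered prefix of the packet.

    coverage == 0 means "cover the whole packet"; otherwise only the first
    `coverage` bytes (which must include the 8-byte header) are covered.
    """
    covered = packet if coverage == 0 else packet[:coverage]
    csum = (~ones_complement_sum(pseudo_header + covered)) & 0xFFFF
    return csum if csum != 0 else 0xFFFF   # a zero checksum is sent as 0xFFFF

# Errors beyond the coverage do not invalidate the checksum:
pseudo = bytes(12)                      # placeholder pseudoheader
pkt = bytes(8) + b"video payload"       # 8-byte header + insensitive payload
ok = udplite_checksum(pseudo, pkt, 8)
damaged = pkt[:10] + b"X" + pkt[11:]    # bit errors in the payload only
assert udplite_checksum(pseudo, damaged, 8) == ok
```

The same sketch covers the DCCP case if the coverage boundary is restricted to the header plus the first n*4 payload bytes.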
2.1.3 SCTP

SCTP's [12] design has many of the strengths of TCP, such as rate-adaptive window-based congestion control, error detection, and a fast retransmission method similar to TCP SACK. It provides acknowledged, error-free, non-duplicated transfer of user data. A partial reliability extension is also specified for real-time multimedia traffic. SCTP also has several new features, such as multihoming (allows two endpoints to set up an association with multiple IP addresses for each endpoint) and multistreaming (each stream is a subflow within the overall data flow, and the delivery of each subflow is independent of the others). SCTP also has security features: the service availability of reliable and timely data transport (elimination of DoS attacks by utilizing a four-way handshake sequence and a cookie mechanism) and the integrity of the user-to-user information carried by SCTP (by using IPSec [18], [19] or Transport Layer Security (TLS) [17]). Each SCTP packet is composed of a common header followed by one or more chunks. Chunks contain either control information or user data.

2.1.4 RTP & RTCP

TCP is considered too slow a protocol for real-time multimedia data such as audio and video because of its three-way handshake. That is why UDP is usually used instead of TCP over IP. But this also has problems: because UDP is unreliable, there are no retransmissions upon packet losses. RTP [13] was designed by the IETF as a transport protocol for real-time multimedia applications. Strictly speaking, RTP is not a transport protocol, since it does not provide a complete transport service. Instead, the RTP PDUs must be encapsulated within another transport protocol (e.g. UDP) that provides framing, checksums and end-to-end delivery. RTP only provides timestamps and sequence numbers, which may be used by an application written on top of RTP to provide error detection, re-sequencing of out-of-order data, and/or error recovery.
Note that RTP itself does not provide any error detection/recovery; it is the application on top of RTP that may provide these. RTP also incorporates some presentation layer functions: RTP profiles make it possible for the application to identify the format of the data, i.e. audio/video, which compression method, etc. RTP sequence numbers can also be used e.g. in video decoding; packets do not necessarily have to be decoded in sequence. The overhead of the RTP header is quite large, and header compression has been proposed. RTCP takes care of QoS monitoring, inter-media synchronization, identification, and session size estimation/scaling. The control traffic load is scaled to be at most 5% of the data traffic load. RTP participants provide reception quality feedback using RTCP report packets, which can be of two types: a participant sends sender reports (SR) if it is actively sending in the session; otherwise it sends receiver reports (RR). In addition to these, RTCP has SDES (Source Description), BYE (sent at the end of an RTP session) and APP (application-defined, intended for experimental use as new applications and new features are developed) packet types.

2.1.4.1 Scalable Video Coding (SVC)

Scalable Video Coding (SVC) is being designed as the scalable extension of MPEG-4 AVC; e.g., the SVC base layer is compatible with the MPEG-4 AVC main profile. The SVC draft distinguishes between a video coding layer (VCL) and a network abstraction layer (NAL). The VCL contains all signal processing functionality of the codec. The NAL encapsulates the output of the VCL encoder into network abstraction layer units (NAL units). Again, we call all data containing update information from one particular quality to the next quality "belonging to one scalability level". Alternatively, levels could be combined to form scalability layers which update the video from one quality to the next.
Since SVC uses an AVC-compatible base layer, the RTP payload format incorporates almost the same features as the RTP payload format for AVC. Modifications and extensions are made to support scalability features. In order to deliver a scalable bit stream to a wide audience, media-aware network elements (MANEs) should be used instead of normal routers and gateways. These network elements, application layer gateways or RTP proxies, are capable of parsing the RTP payload header and reacting to its contents. MANEs could drop packets due to Page 6/120
network congestion or on the user's request. On a scalable bit stream, we call this operation the scaling operation. MANEs could also adjust error protection or channel coding properties on a per-packet basis for use in unequal error protection schemes. There are different strategies to obtain the maximum quality for given conditions. The RTP payload format supports the two scalability modes: fast scalability and full scalability (see also deliverables D2.1b and D2.4b).

2.1.4.2 Packetization Rules

The RTP payload format for SVC transmits the network abstraction layer units (NAL units) as-is, encapsulated in RTP packets; no additional RTP payload header is added. The NAL unit header co-serves as the RTP payload header. To maintain backward compatibility with AVC, the RTP payload format for SVC is based on the RTP payload format for AVC and can be seen as an extension of it. To enable simple adaptation operations on network nodes, NAL units in the scalable extension (called here "SVC NAL units") are encapsulated in a virtual NAL unit (type SCAL_EX) providing scalability level or layering information, depending on the scalability mode used. This ensures scalability capabilities by exploiting the RTP payload header. The RTP payload format defines the encapsulation of one NAL unit per RTP packet, as well as aggregation packets and fragmentation units.
Table 2-1 NAL Unit Types in SVC Elementary Streams (differences to AVC are highlighted)

nal_unit_type  Description
0       Unspecified
1       Coded slice of a non-IDR picture: slice_layer_without_partitioning_rbsp( )
2       Coded slice data partition A: slice_data_partition_a_layer_rbsp( )
3       Coded slice data partition B: slice_data_partition_b_layer_rbsp( )
4       Coded slice data partition C: slice_data_partition_c_layer_rbsp( )
5       Coded slice of an IDR picture: slice_layer_without_partitioning_rbsp( )
6       Supplemental enhancement information (SEI): sei_rbsp( )
7       Sequence parameter set: seq_parameter_set_rbsp( )
8       Picture parameter set: pic_parameter_set_rbsp( )
9       Access unit delimiter: access_unit_delimiter_rbsp( )
10      End of sequence: end_of_seq_rbsp( )
11      End of stream: end_of_stream_rbsp( )
12      Filler data: filler_data_rbsp( )
13      Sequence parameter set extension: seq_parameter_set_extension_rbsp( )
14      Sequence parameter set in scalable extension: seq_parameter_set_rbsp( )
15      Picture parameter set in scalable extension: pic_parameter_set_rbsp( )
16..18  Reserved
19      Coded slice of an auxiliary coded picture without partitioning: slice_layer_without_partitioning_rbsp( )
20      Coded slice of a non-IDR picture in scalable extension Page 7/120
slice_layer_in_scalable_extension_rbsp( )
21      Coded slice of an IDR picture in scalable extension: slice_layer_in_scalable_extension_rbsp( )
22..23  Reserved
24..29  Unspecified by MPEG but used for RTP transport
30      Scalability extension in File Format and for RTP transport (SCAL_EX)
31      Unspecified

2.1.4.2.1 AVC NAL Units

The NAL units of the AVC base layer are transmitted as defined in RFC 3984 (RTP Payload Format for H.264 Video). To enable temporal scalability features for the AVC base layer, the NRI field of the NAL unit header is used to provide the picture priority. Using the NRI field (nal_ref_idc in the AVC specification), 4 levels of temporal scalability are supported. Frames of different levels of the temporal decomposition may belong to the same temporal scalability level.

2.1.4.2.2 SVC NAL Units

To enable simple scalability operations, all SVC NAL units belonging to one particular scalability level are transmitted in a virtual aggregation NAL unit of type SCAL_EX. Alternatively, if the layered scalability mode is used, all SVC NAL units belonging to one particular layer are transmitted in a virtual aggregation NAL unit of type SCAL_EX. The structure of such a virtual aggregation NAL unit is depicted in Figure 2-1. The use of this virtual aggregation NAL unit allows simple and fast adaptation operations by exploiting the first two (or three) bytes of this NAL unit. The temporal scalability level of AVC base layer NAL units does not differ from the temporal scalability level of the corresponding SCAL_EX NAL units.

Figure 2-1 Aggregated SVC NAL units (an extension header with level ID, followed by length-prefixed SVC NAL units of one scalability level A)

2.1.4.2.3 Fragmentation of Progressive Refinement NAL Units

This is also related to D2.4b; therefore, some information is repeated here for convenience.
The quality base layer is encoded using AVC entropy coding, including the block transform, quantization and CABAC as specified in H.264/AVC (ISO/IEC 14496-10). To provide SNR scalability, the texture of the higher quality levels is coded by repeatedly decreasing the quantization step size and applying modified CABAC entropy coding. This mode is referred to as progressive refinement in the SVC definition. The use of progressive refinements enables fine grain scalability (FGS): a progressive refinement NAL unit can be truncated at any arbitrary point. One or more FGS refinement levels could be supplied with the bit stream. To enable simple adaptation operations, each of the FGS refinement layers could be stored divided at pre-defined rate points. These points could be provided by the encoder or by a bit stream extraction tool with special rate control features. An adaptation operation can be performed by exploiting the SNRLevel field. The fragments Page 8/120
of such a divided progressive refinement NAL unit are encapsulated in SCAL_EX NAL units with ascending SNRLevel IDs.

Figure 2-2 Fragmentation of a progressive refinement NAL unit (fragments 0, 1, 2, each carried behind an SVC NAL unit header and an extension header with SNR level ID a, a+1, a+2, together with a length field)

2.1.4.3 RTP Payload Format

The payload format defines three different basic payload structures. A receiver can identify the payload structure by the first bytes of the RTP payload, which co-serve as part of the RTP payload header. Depending on the type, up to three more bytes co-serve as the RTP payload header. These byte(s) are always structured as a NAL unit header.
• Single NAL unit packet: contains only a single NAL unit in the payload.
• Aggregation packet: packet type used to aggregate multiple NAL units into a single RTP payload. This packet exists in four versions: the single-time aggregation packet type A (STAP-A), the single-time aggregation packet type B (STAP-B), and the multi-time aggregation packet types with a 16-bit and a 24-bit offset (MTAP16 and MTAP24). To obtain scalability features, only NAL units of the same scalability level or of the same layer (depending on the scalability mode used) should be combined into an aggregation packet. This could lead to an increasing delay. Aggregation packets with NAL units of consecutive scalability levels or layers might be built as well.
• Fragmentation unit: used to fragment a single NAL unit over multiple RTP packets, with two versions, FU-A and FU-B.
The RTP payload format for SVC incorporates the same mechanisms as the SVC file format, so the packetization for RTP transport can be done easily.
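The rate-point splitting of progressive refinement NAL units from sub-clause 2.1.4.2.3 can be sketched in a few lines of Python. The function name, its arguments, and the interpretation of rate points as byte offsets are our own illustrative assumptions; the sketch only shows how fragments acquire ascending SNRLevel IDs as in Figure 2-2.

```python
def fragment_refinement_nalu(nalu: bytes, rate_points, first_snr_level: int):
    """Split an FGS refinement NAL unit at pre-defined rate points.

    rate_points are byte offsets inside the NAL unit; fragment i is
    assigned SNRLevel ID first_snr_level + i (cf. Figure 2-2: a, a+1, a+2).
    """
    bounds = [0] + sorted(rate_points) + [len(nalu)]
    return [
        (first_snr_level + i, nalu[bounds[i]:bounds[i + 1]])
        for i in range(len(bounds) - 1)
    ]

# A 10-byte refinement NAL unit cut at two rate points:
frags = fragment_refinement_nalu(b"0123456789", [3, 7], 2)
assert frags == [(2, b"012"), (3, b"3456"), (4, b"789")]
# An adaptation operation is then just dropping trailing fragments:
assert b"".join(f for _, f in frags[:2]) == b"0123456"
```

Because the fragments carry ascending SNRLevel IDs, a MANE can truncate to a target quality by discarding every fragment above a chosen level, without parsing the video data itself.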
Table 2-2 Summary of allowed NAL unit types for RTP transport Page 9/120
Type  Packet     Single NAL unit mode  Non-interleaved mode  Interleaved mode
-----------------------------------------------------------------------------
0     undefined  ignore                ignore                ignore
1-23  NAL unit   yes                   yes                   no
24    STAP-A     no                    yes                   no
25    STAP-B     no                    no                    yes
26    MTAP16     no                    no                    yes
27    MTAP24     no                    no                    yes
28    FU-A       no                    yes                   yes
29    FU-B       no                    no                    yes
30    SCAL_EX    yes                   yes                   no
31    undefined  ignore                ignore                ignore

All RTP packets start with one octet with the following format:

 0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|F|NRI|  Type   |
+-+-+-+-+-+-+-+-+

F: forbidden zero bit. Must be set to 0. Network nodes may set this bit to 1 if some transmission error occurred; the decoder shall then not rely on the NAL unit content.
NRI: nal_ref_idc, 2 bits. Provides temporal scalability information (4 levels). This is an extension to the AVC specification that also applies to AVC base layer NAL units.
Type: 5 bits. NAL unit type ID.

Figure 2-3 1st octet of the NAL unit header / of the RTP payload header

For NAL units of type SCAL_EX (see sub-clause 2.1.4.2.2) the RTP payload header consists of the first two (or three) bytes of the NAL unit header, depending on the scalability mode used: Page 10/120
                    1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
|F|NRI| type   |L=0|  level_ID     |
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+

F and NRI as defined above
type: 5 bits. Scalability extension NAL unit type ID; set constantly to a value of 30.
L: 1 bit. Layer-ID provided: a value of 0 indicates that a struct providing the scalability level follows; a value of 1 indicates that the layer-ID follows.
level_ID: 7 or 15 bits. Structure indicating the scalability level for each direction (temporal, spatial and SNR). The size of the level_ID field and its structure are defined during initialization; otherwise a default size and a default structure apply.

Figure 2-4 RTP extension header for full scalability mode

                    1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+
|F|NRI| type   |L=1|  layer_ID     |
+-+-+-+-+-+-+-+-+---+-+-+-+-+-+-+-+

F and NRI as defined above
type: 5 bits. Scalability extension NAL unit type ID; set constantly to a value of 30.
L: 1 bit. Layer-ID provided: a value of 0 indicates that a struct providing the scalability level follows; a value of 1 indicates that the layer-ID follows.
layer_ID: 7 bits. Layer ID.

Figure 2-5 RTP extension header for layered mode Page 11/120
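The header layouts of Figures 2-3 to 2-5 can be decoded with straightforward bit operations. The sketch below is illustrative (the function and dictionary key names are ours, not from the payload format draft) and handles only the 7-bit level_ID/layer_ID variant shown above:

```python
def parse_payload_header(octets: bytes) -> dict:
    """Decode the first byte(s) of the RTP payload / NAL unit header."""
    b0 = octets[0]
    hdr = {
        "F": b0 >> 7,            # forbidden zero bit
        "NRI": (b0 >> 5) & 0x3,  # temporal scalability information, 4 levels
        "type": b0 & 0x1F,       # NAL unit type ID
    }
    if hdr["type"] == 30:        # SCAL_EX: an extension byte follows
        b1 = octets[1]
        hdr["L"] = b1 >> 7       # 0 = level_ID follows, 1 = layer_ID follows
        key = "layer_ID" if hdr["L"] == 1 else "level_ID"
        hdr[key] = b1 & 0x7F     # 7-bit ID variant only
    return hdr

# AVC base layer NAL unit: type 1 (non-IDR slice), NRI = 2:
assert parse_payload_header(b"\x41") == {"F": 0, "NRI": 2, "type": 1}
# SCAL_EX NAL unit (type 30) in layered mode (L = 1) with layer_ID = 5:
h = parse_payload_header(b"\x5e\x85")
assert h["type"] == 30 and h["L"] == 1 and h["layer_ID"] == 5
```

A MANE performing a scaling operation needs only this much parsing to decide whether a packet may be dropped, which is exactly the "first two (or three) bytes" property the payload format is designed around.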
2.1.4.3.1 Single NAL unit mode

The single NAL unit packet must contain one and only one NAL unit. This means that neither an aggregation packet nor a fragmentation unit can be used within a single NAL unit packet. A NAL unit stream composed by decapsulating single NAL unit packets in RTP sequence number order must conform to the NAL unit decoding order. The first byte(s) of a NAL unit co-serve(s) as the RTP payload header; the number of bytes co-serving as RTP payload header depends on the type.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| type : additional header depending on type              |
+-+-+-+-+-+-+-+-+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
|                                                               |
|                     Single NAL unit                           |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

F and NRI as defined above
type: 5 bits. NAL unit type ID.

Figure 2-6 RTP payload format for single NAL unit packet

2.1.4.3.2 Aggregation packets

To reflect the dramatically different MTU sizes of the two key target networks, wireline IP networks (with an MTU size that is often limited by the Ethernet MTU size, roughly 1500 bytes) and IP or non-IP (e.g. ITU-T H.324/M) based wireless communication systems with preferred transmission unit sizes of 254 bytes or less, an aggregation packet scheme is defined. This scheme is used in order to prevent media transcoding between the two worlds, and to avoid undesirable packetization overhead. Two types of aggregation packets are defined:
• Single-time aggregation packet (STAP): aggregates NAL units with identical NALU-time. Two types of STAPs are defined, one without DON (STAP-A) and one including DON (STAP-B) (DON: decoding order number).
• Multi-time aggregation packet (MTAP): aggregates NAL units with potentially differing NALU-time.
Two different MTAPs are defined that differ in the length of the NAL unit timestamp offset. The term NALU-time is defined as the value that the RTP timestamp would have if that NAL unit were transported in its own RTP packet. Each NAL unit to be carried in an aggregation packet is encapsulated in an aggregation unit. NAL units of the AVC-compatible base layer and SVC NAL units may exist in the same aggregation packet. The NRI value shall be the greatest NRI value of all NAL units within the packet. The RTP payload format will be illustrated here just for Single Time Aggregation Packets Type A. Page 12/120
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI| type  :                                                 |
+-+-+-+-+-+-+-+-+                                               |
|                                                               |
|                one or more aggregation units                  |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

F and NRI as defined above
type: 5 bits. RTP packet type ID (type 24..27).

Figure 2-7 RTP payload format for AVC base layer aggregation packets

If the aggregation packet contains SCAL_EX NAL units, then the first 2 or 3 bytes of the SCAL_EX NAL unit with the lowest scalability level ID shall be copied to the start position of the aggregation packet. This allows fast and simple adaptation operations by examining the first bytes of each packet. The STAP header follows, as specified in Figure 2-8.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI|  30   |1|   layer_ID  |STAP-A NAL HDR |                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                 |
|                                                               |
|                one or more aggregation units                  |
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               :...OPTIONAL RTP padding        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2-8 Single Time Extended Aggregation Packet Type A for SCAL_EX NAL units in layered scalability mode

Single-time aggregation packets (STAP) should be used whenever aggregating NAL units that all share the same NALU-time. The payload of an STAP-A does not include a DON (decoding order number) and consists of at least one single-time aggregation unit, as presented in Figure 2-9. The payload of an STAP-B consists of a 16-bit unsigned decoding order number (DON) (in network byte order) followed by at least one single-time aggregation unit, as presented in Figure 2-10. Page 13/120
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:                                                               |
+-+-+-+-+-+-+-+-+                                               |
|                                                               |
|              single-time aggregation units                    |
:                                                               :
|               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
:                                                               :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2-9 Payload format for STAP-A

The DON field specifies the value of DON for the first NAL unit in an STAP-B in transmission order. The value of DON for each successive NAL unit in appearance order in an STAP-B is equal to (the value of DON of the previous NAL unit in the STAP-B + 1) % 65536, in which '%' stands for the modulo operation.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:        decoding order number (DON)            |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|                                                               |
|              single-time aggregation units                    |
:                                                               :
|               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
:                                                               :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2-10 Payload format for STAP-B

A single-time aggregation unit consists of 16-bit unsigned size information (in network byte order) that indicates the size of the following NAL unit in bytes (excluding these two octets, but including the NAL unit type octet of the NAL unit), followed by the NAL unit itself, including its NAL unit type byte(s). A single-time aggregation unit is byte-aligned within the RTP payload, but it may not be aligned on a 32-bit word boundary. Figure 2-11 presents the structure of the single-time aggregation unit payload. Page 14/120
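The 16-bit size prefix makes single-time aggregation units trivial to pack and unpack in code. The short sketch below uses our own helper names and models STAP-A payload handling only (no DON, no RTP header), purely as an illustration of the size-prefix mechanism just described:

```python
import struct

def pack_stap_a(nal_units) -> bytes:
    """Concatenate aggregation units: 16-bit NALU size (network byte order)
    followed by the NAL unit itself, including its type byte(s)."""
    payload = b""
    for nalu in nal_units:
        payload += struct.pack("!H", len(nalu)) + nalu
    return payload

def unpack_stap_a(payload: bytes):
    """Recover the NAL units from a STAP-A payload."""
    units, off = [], 0
    while off < len(payload):
        (size,) = struct.unpack_from("!H", payload, off)
        off += 2
        units.append(payload[off:off + size])
        off += size
    return units

# Round trip: two hypothetical NAL units aggregated and recovered.
nalus = [b"\x41slice-A", b"\x41slice-B"]
assert unpack_stap_a(pack_stap_a(nalus)) == nalus
```

For STAP-B the only difference would be a 16-bit DON field preceding the first aggregation unit, with successive NAL units implicitly numbered (DON + 1) % 65536, as stated above.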
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
:        NAL unit size                          |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|                                                               |
|                      NAL unit                                 |
|                                                               |
|               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
:                                                               :
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2-11 Structure for single-time aggregation unit STAP

As an example, Figure 2-12 shows the resulting structure of a Single Time Aggregation Packet Type A containing SCAL_EX NAL units in layered scalability mode. Similar definitions apply for Multi Time Aggregation Packets and Aggregation Units.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|NRI|  30   |1|   layer_ID  |STAP-A NAL HDR |  NALU 1 Size    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  NALU 1 Size  |                                               |
+-+-+-+-+-+-+-+-+                                               |
|                      NAL unit 1                               |
|               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               |          NALU 2 Size          |               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|                      NAL unit 2                               |
|               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
|               :...OPTIONAL RTP padding                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2-12 Single Time Extended Aggregation Packet Type A for SCAL_EX NAL units (layered representation)

2.2 Multicasting

The scope of this part of the document is to describe how multicast video data and the related signaling information should be handled in the framework of the Phoenix project. First we briefly present the basic concepts related to the advantages of multicast distribution, and the main principles of the multicast group management and routing protocols that we have chosen to use in the project. Then we describe the signaling information traveling back and forth between the source and the receivers. We also consider the different adaptive video distribution schemes for the multicast scenario. Page 15/120
2.2.1 Multicast distribution models

In IP multicast the sender sends a packet only once. The multicast routers along the path of the multicast flow duplicate a packet when necessary. That is how multicast decreases the bandwidth usage of the underlying network. While in IPv4 networks multicast is somewhere between unicast and broadcast, in IPv6 it completely replaces broadcast. Multicast addresses are dynamically allocated. Multicasting consists of the following logical units:
• Group management protocols: implemented at the edges of the network, in the access infrastructure; they handle group creation and group dynamics (members entering or leaving a group).
• Multicast routing protocols: implemented in the core of the network; they are responsible for creating and maintaining the multicast tree, and for the transmission of the multicast flow along this tree.
Multicasting builds on the multicast group concept. A multicast group is a set of hosts that are interested in the reception of the multicast flow sent to this group. The group is identified by a multicast address. The hosts use group management protocols to enter or leave a given group. Multicasting uses multicast routing protocols for delivering the multicast packets to the hosts.

2.2.1.1 Multicast models

Currently, we differentiate the following three multicast distribution models:
• Any-Source Multicast (ASM)
• Source-Filtered Multicast (SFM)
• Source-Specific Multicast (SSM)

2.2.1.1.1 ASM

ASM designates the original multicast distribution model, as it was imagined by Steve Deering back in the early 90's. The ASM model is an open one, where anyone is allowed to send data to a multicast group address, even without being a member of the group, and everyone listening to that address will receive it. This is the most common model; however, it has many disadvantages. One of them is the possibility of unauthorized senders; another is that deployment is very complex.
Because the address allocation is completely free in this model, another problem is handling address conflicts, which is an important problem especially in IPv4, less so in IPv6. Furthermore, the model raises inter-domain security and scalability concerns. On the other hand, one of its advantages is that only IGMPv2 or MLDv1 support is needed in the network.

2.2.1.1.2 SFM

SFM is an extension of the ASM model that gives the receivers the ability to tell which sources they are interested in. With this, unauthorized and unwanted senders can be filtered out, even if the network can only support the ASM service. Using the SFM service requires IGMPv3 or MLDv2 group management support.

2.2.1.1.3 SSM

The SSM model is a newer and more advanced model than the SFM service. As in the SFM model, the ability to filter out sources is present in this model too; that is why the support of IGMPv3 or MLDv2 is necessary. On the other hand, the SSM service is less complex and more error resilient than the ASM approach. In SSM, multicast groups are replaced by multicast channels, identified by an (S, G) address pair, where S is the unicast address of the source and G is the multicast address of the group. This solves the allocation problem, as the multicast addresses G no longer have to be globally unique; two SSM channels that use the same multicast address G will be differentiated by the unicast addresses of their respective sources S1 and S2.

2.2.1.2 Group Management

Currently IGMP (Internet Group Management Protocol) (IPv4) and MLD (Multicast Listener Discovery) (IPv6) [20] are used for group management in IP networks. The most important task of these protocols is maintaining the group membership information. Multicast routers use the group membership information to create a multicast distribution tree.

2.2.1.2.1 The IGMP protocol

Hosts use the IGMP protocol to communicate with their local multicast routers about their group membership needs.
Currently there are three versions of the IGMP protocol. IGMPv1 was the first group management protocol in the IP environment. With it, joining a group was a fast procedure, but leaving it needed a longer timeframe. IGMPv2 was created to solve this problem by introducing a fast leave mechanism. Both of these versions only Page 16/120
support the ASM model; to support the SSM model, the protocol needed to be extended. Therefore, the latest version of the protocol, IGMPv3, introduces the notion of source lists; receivers can explicitly specify the sources they want to listen to, or the sources they want to filter out, for a given multicast address. The format of the IGMP packets had to be thoroughly modified to support these features. Figure 2-13 shows the message format of the IGMPv2 protocol, while Figure 2-14 shows the IGMPv3 message format.

Figure 2-13 IGMPv2 message format

Figure 2-14 IGMPv3 message format

2.2.1.2.2 The MLD protocol

While the MLDv1 [21] protocol is the IPv6 version of IGMPv2, MLDv2 [22] is the IPv6 version of the IGMPv3 protocol. In the IPv6 header the Next Header field is set to 58, identifying the message as an ICMPv6 message, which carries the MLD payload. MLD is used to inform the routers whether there are any hosts interested in a multicast flow. When the last group member leaves, the router itself can also leave the group, and rejoin the multicast tree at any time later. In MLDv2, the latest version of the protocol, two types of messages are defined:
• Multicast Listener Query
• Multicast Listener Report
To ensure interoperability with earlier versions of the protocol, Multicast Listener Done messages are also supported. The query messages are used by the routers to query the state of the multicast listeners on their interfaces. There are three types of query messages:
• General Query: used to learn which multicast addresses have listeners on an attached link;
• Multicast Address Specific Query: used to learn whether a particular multicast address has any listeners on an attached link;
• Multicast Address and Source Specific Query: used to learn whether a particular multicast address and a given source have any listeners on an attached link.
The first two query messages are part of both the first and the second version of the protocol, while the third is only part of MLDv2.
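The three query types listed above differ only in whether a specific multicast address and a source list are present, which can be expressed compactly. The function below is an illustrative sketch (its name and argument shapes are ours; it models the classification logic only, not MLDv2 packet parsing):

```python
import ipaddress

def classify_mld_query(multicast_address: str, sources) -> str:
    """Classify an MLD query by its multicast address and source list."""
    addr = ipaddress.IPv6Address(multicast_address)
    if addr == ipaddress.IPv6Address("::"):
        # Unspecified address: asks about all listeners on the attached link.
        return "General Query"
    if not sources:
        return "Multicast Address Specific Query"
    # Address plus an explicit source list: MLDv2 only.
    return "Multicast Address and Source Specific Query"

assert classify_mld_query("::", []) == "General Query"
assert classify_mld_query("ff3e::8000:1", []) == \
    "Multicast Address Specific Query"
assert classify_mld_query("ff3e::8000:1", ["2001:db8::1"]) == \
    "Multicast Address and Source Specific Query"
```

The example addresses are arbitrary: ff3e::8000:1 is simply a syntactically valid IPv6 multicast address and 2001:db8::1 a documentation-range source address.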
Report messages are sent by the hosts to the neighboring multicast routers to report their current multicast listener state, or changes to it. Every report message is valid for only one multicast address; thus, if a host has more than one membership to report, it must send a separate message for each of them.
2.2.1.3 Multicast Routing
The group membership information collected by the group management protocols is used by the multicast routing protocols to forward the multicast packets to the group members. There are two types of multicast trees:
• Source-based trees: With source-based trees the root of the multicast tree is the source of the multicast flow. The tree can be built source-driven, based on a flood-and-prune technique (e.g., DVMRP), or receiver-driven, based on explicit join messages from the receivers (e.g., PIM-SSM).
• Shared trees: A shared-tree algorithm builds only one shared tree per group. This tree has some core multicast routers through which every multicast packet must travel. The building of this tree is receiver-driven: when a receiver wants to join a shared tree, it sends a join message which travels back to the core routers. This makes it more scalable than building a source-based tree with a flood-and-prune technique (e.g., in DVMRP). However, the tree will not be optimal, and the load concentration on the core routers can be very high.
The distribution of the group members is relevant when selecting the tree type:
• Dense mode: this mode assumes that the subnets containing the hosts have many group members, and that plenty of bandwidth is available. It is useful when there are only a few sources and many destination hosts, or when the multicast flow needs high and constant bandwidth. Most dense mode protocols use source-based trees. One application area of dense mode protocols is video broadcast (such as LAN or ADSL TV).
• Sparse mode: in this mode the hosts are distributed widely across the whole network. There can be as many members as in dense mode, but they are much more dispersed. This mode does not need high bandwidth. It is useful when the sources need only low bandwidth or when the multicast flow is not constant (as in a video conferencing application). Most sparse mode protocols use shared trees.

2.2.1.3.1 Intra-domain routing
The Internet Engineering Task Force (IETF) has developed protocols for both modes:
• Distance-Vector Multicast Routing Protocol – DVMRP [23]
• Multicast Extensions to Open Shortest Path First – MOSPF [24]
• Protocol-Independent Multicast – PIM
• Protocol-Independent Multicast – Dense Mode – PIM-DM [25]
• Protocol-Independent Multicast – Sparse Mode – PIM-SM [26]
• Core-Based Tree Protocol – CBT [27]

2.2.1.3.1.1 The DVMRP protocol
The DVMRP protocol was mainly used in the Multicast Backbone (MBone). It works well within one domain, but with sparse mode applications it creates unwanted load on the network, because it uses flooding to build the multicast tree. That is why it is mainly used with dense mode applications.

2.2.1.3.1.2 The MOSPF protocol
The MOSPF protocol is a modification of the OSPF unicast routing protocol to support multicast routing. It too was created for intra-domain routing. Its most important difference compared to other protocols is that it can handle different application needs, such as QoS or balancing the load across different links. Another advantage of the protocol is that it creates very efficient multicast distribution trees. It is mostly used in dense mode applications.

2.2.1.3.1.3 The CBT protocol
The CBT protocol creates only one, but bi-directional, multicast tree. It is therefore more scalable than the DVMRP protocol and more suitable for wide area networks. However, the load concentration on the central routers is much higher, and they can become the bottleneck of the network.

2.2.1.3.1.4 The PIM protocols
The PIM protocols do not depend on any particular unicast routing protocol: they can use the unicast routing table of any unicast routing protocol (such as OSPF or RIP).
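One common way a protocol-independent multicast router reuses the unicast routing table is a reverse-path forwarding (RPF) check: a multicast packet is accepted and forwarded downstream only if it arrived on the interface the unicast table would use to reach the packet's source. A schematic sketch (the table contents and interface names are invented for illustration):

```python
# Schematic RPF check: accept a multicast packet only if it arrived on the
# interface the unicast routing table uses to reach the source.

# Hypothetical unicast routing table: source prefix -> outgoing interface.
unicast_routes = {
    "10.0.1.0/24": "eth0",
    "10.0.2.0/24": "eth1",
}

def rpf_check(source_prefix, ingress_iface):
    """Return True if the packet passed the reverse-path test."""
    return unicast_routes.get(source_prefix) == ingress_iface

print(rpf_check("10.0.1.0/24", "eth0"))  # True  -> forward downstream
print(rpf_check("10.0.1.0/24", "eth1"))  # False -> drop (possible loop)
```

Because the check only consults the unicast table, it works unchanged whether that table was populated by OSPF, RIP, or any other unicast protocol.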
The PIM protocol supports both sparse mode (PIM-SM) and dense mode (PIM-DM) multicasting. To support multicasting, however, it introduces two new entities:
• Rendezvous Point (RP): Every multicast group has a shared multicast distribution tree. The root of this tree is the RP (RP-tree).
• Designated Router (DR): A subnet can connect to the Internet through several routers, which means a subnet can have more than one multicast router. If these routers worked independently of each other, the hosts would receive every multicast packet in duplicate, wasting bandwidth. The routers therefore elect a designated router among themselves, which acts as the only multicast router of the given subnet. Over time the role of the DR can be taken over by another multicast router that is closer to the source or to the RP of the tree.

2.2.1.3.1.4.1 The PIM-DM protocol
The PIM-DM protocol is similar to DVMRP in that it uses flooding to build the multicast tree, relying on the unicast routing information for the flooding process. It floods the network with multicast packets and then uses prune messages to cut off those routers that have no members in their subnets.

2.2.1.3.1.4.2 The PIM-SM protocol
The PIM-SM protocol can either use the routing information gathered by any unicast routing protocol or build on the information gathered by other multicast routing protocols. It builds one-way shared trees for every multicast group; the root of this tree is the Rendezvous Point (RP-tree). One great advantage of this protocol is that it can switch from the RP-tree to a shortest-path tree (which is mainly a dense mode structure), whose root is the source itself. One difference compared to the PIM-DM protocol is that group members must explicitly join a multicast group in order to receive the multicast flow. Another advantage of this protocol is that it does not use flooding to build the tree.
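The RP-tree to shortest-path-tree switch mentioned above can be sketched as a simple per-group decision: once the observed data rate for a group crosses a threshold, the router abandons the shared tree and joins the tree rooted at the source. The rate threshold and the class below are our own illustration; real implementations make this policy configurable:

```python
# Sketch of a PIM-SM-style switchover decision from the shared RP-tree to the
# source's shortest-path tree (SPT). The threshold value is hypothetical.

SPT_THRESHOLD_BPS = 128_000  # assumed switchover rate, bits per second

class GroupState:
    def __init__(self):
        self.tree = "RP"      # start on the shared tree rooted at the RP
        self.rate_bps = 0

    def observe_rate(self, rate_bps):
        self.rate_bps = rate_bps
        if self.tree == "RP" and rate_bps > SPT_THRESHOLD_BPS:
            self.tree = "SPT"  # would send an (S,G) join toward the source

g = GroupState()
g.observe_rate(64_000)
print(g.tree)   # RP
g.observe_rate(512_000)
print(g.tree)   # SPT
```

Note that once switched, the state stays on the SPT; switching back and forth on every rate fluctuation would churn the tree.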
After joining a multicast group, the DR can switch when needed from the RP-tree to a shortest-path tree, which further reduces bandwidth utilization. These are some of the features that make this protocol so popular.

2.2.1.3.1.4.3 The PIM-SSM protocol
The PIM-SM protocol has another version which supports the SSM model, called PIM-SSM (PIM Source-Specific Multicast). Because the multicast trees always have the source as root, the PIM-SSM protocol has no shared trees, only source-based trees. It therefore needs neither an RP nor the MSDP protocol, which is used for inter-domain communication between the different RPs. Only one source can send to a given channel, which is defined by the multicast group and the source. The sources are responsible for resolving address conflicts between the channels that use their source ID. PIM-SSM is currently the base routing protocol of the SSM model.

2.2.1.3.2 Inter-domain routing
The protocols mentioned above are mainly used for intra-domain routing. The IETF therefore developed the following protocols to support inter-domain multicast routing:
• Multiprotocol Border Gateway Protocol – MBGP [28]
• Border Gateway Multicast Protocol – BGMP [29]
• Multicast Source Discovery Protocol – MSDP [30]

2.2.1.3.2.1 The MBGP (BGP-4)/MSDP/PIM-SM protocol stack
BGP-based inter-domain routing comes from the idea that multicast routing should be hierarchical, just like unicast routing. MBGP is thus a scalable inter-domain routing protocol which supports hierarchical routing and can handle different routing policies. One disadvantage of the MBGP protocol is that while it can calculate the next hop, it cannot construct the whole multicast tree; an intra-domain multicast routing protocol is needed for that. Mostly the PIM-SM protocol is used for this purpose because of its effectiveness and flexibility. But the PIM-SM protocol needs knowledge about the multicast sources and listeners in other domains. This task is performed by the MSDP protocol, which communicates with the PIM-SM routers and provides them with that information. This protocol stack supports only the ASM model, so MLDv1 support is sufficient in the network. Its advantage is that no multicast tree spans several domains, so every Internet provider can run its own PIM-SM domain. Its disadvantage is that the MSDP protocol, and with it the whole protocol stack, does not scale to a high number of sources.

2.2.1.3.2.2 The BGMP protocol
The BGMP protocol was developed to solve MSDP's scalability problem. The difference between the BGMP protocol and the protocol stack mentioned above is that it creates bi-directional multicast trees of domains. It supports both the ASM and the SSM model, and inside a domain it can work together with any multicast routing protocol.

2.2.1.4 Overview
Currently, we use the ASM model with the PIM-SM routing protocol, because of the advantages mentioned in chapter 2.2.1.3.1.4 and because we currently do not have a stable implementation of the MLDv2 group management protocol, which is needed to use the SSM model and the PIM-SSM routing protocol. That approach would be preferable because of security issues and for the reasons mentioned in chapter 2.2.1.1.3.

2.2.2 Multicast feedback
In this chapter we introduce the signaling information packets used and give a short overview of the multicasting protocols used.

2.2.2.1 Signaling
Table 2-3 Overview of the signaling packets

ID – Name | Initiator | Destination | Synchronized
SSI – Source Significant Information | Source | Channel coders, Receivers | Yes, proportional to the data stream
SRI – Source A-priori Information | Source | Receivers; can be used by the intermediate stations (channel coders) | Yes, the more we use the better the quality
CSI – Channel State Information | Wireless receiver (end terminal) | Wireless transmitter, Source | No
NSI – Network State Information | Receivers (channel encoder, wireless transmitters) | Source | No
SAI – Source A-posteriori Information | Receivers | Wireless transmitters, Source | Not strictly
DRI – Decision Reliability Information | Wireless receiver (channel decoder) | Receivers (source decoder) | Yes, but not strictly

2.2.2.1.1 Handling the downstream signaling information
Three kinds of signaling information travel toward the end terminals and should be synchronized with the actual data stream. Two of them are generated by the source: SSI and SRI.
For this information, the easiest way to achieve synchronization is to put the signaling and the corresponding data stream into the same packet. That way, when a user receives a video packet it also receives all the information it needs to decode the stream. This is possible because all of this information is generated by the source. However, this is not the case with the DRI information, because it is created by the wireless receivers and not by the source. This method could only be used by reconstructing the whole packet, which would lead to higher utilization of the channel decoder and possibly higher latency. Of course, when the stream is transcoded at the channel coder, the DRI can easily be put into the packet without much extra cost.

2.2.2.1.2 Handling the upstream signaling information
In our work we focused mainly on the signaling information traveling back to the source, enabling it to adapt itself to the network conditions. This information comprises the NSI, CSI and SAI; they do not need to be synchronized with the multicast data flow. This information is important for fine-tuning the coding process and obtaining feedback about the quality of the received stream. While CSI concerns the radio channel's quality and is more related to physical-level parameters (BER, signal quality), NSI deals with the end-to-end quality of the whole path and is mostly related to network-level parameters (such as packet loss, delay and jitter). SAI delivers information about the decoding process. SAI and CSI are mainly used by the channel encoder, and only a reduced set of information travels back to the source. NSI is also used by the channel coders, but it is more important for the source encoder. In our work we created a program that collects NSI and CSI information. This was needed to verify the testbed's capability to adapt to changing network conditions.
The NSI packets we create contain three separate pieces of information (further information types are under development):
• Packet loss ratio
• Packet delay
• Packet delay variation (jitter)
The CSI packets contain the following data:
• Bit error rate (BER)
• Signal quality
• Signal strength

Table 2-4 Overview of the scenario groups used by the Phoenix project

Single User & Multi-user | Groups together requirements relating to multimedia scenarios with a single user or several users.
Unicast and Multicast | Groups together requirements relating to communication between a single sender and one receiver, or between a single sender and multiple receivers on a network.
Streaming and Communication | Groups together requirements relating to streaming applications such as video streaming, video telephony or video conferencing.
Single-Hop and Multi-Hop | Groups together requirements relating to the mobility and roaming of the user between different types of networks.
Confidentiality and Security | Groups together requirements relating to the protection of the application from damaging attacks and against unauthorized acquisition of the user's information.

For the download part (streaming the appropriate data and the relevant signaling information) we propose normal multicast streaming (regardless of whether one or more receivers are in the network). For the upload part we propose unicast methods (this part is mainly for signaling – CSI, NSI, SAI).

Figure 2-15 Single-hop scenario with no network after the first wireless hop

We are mainly working with the scenario shown in Figure 2-15. In this case, the channel and source decoders are on the same node, namely the destination terminal. Because of this there is no network part between the channel and the source decoder. The SSI and SRI information is put into the same packet as the streamed data, so it reaches the destination node together with the data packet. The DRI information is generated by the channel decoder, i.e., by the same node that will use it (the source decoder). Thus, the two coders communicate with each other directly.
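The NSI parameters listed above (packet loss ratio, packet delay, jitter) can be derived at a receiver from per-packet sequence numbers and one-way delay samples. A minimal sketch of this derivation; the record layout and function name are ours, and the jitter estimator borrows the smoothing idea of RFC 3550:

```python
# Sketch: deriving the three NSI metrics from received packet records.
# Each record carries a sequence number and a measured one-way delay (ms).

def nsi_report(records):
    """records: list of (seq, delay_ms) for packets that actually arrived."""
    seqs = [seq for seq, _ in records]
    delays = [d for _, d in records]
    expected = max(seqs) - min(seqs) + 1          # by sequence-number span
    loss_ratio = 1.0 - len(records) / expected
    mean_delay = sum(delays) / len(delays)
    # RFC 3550-style smoothed jitter estimate over successive delay samples.
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return {"loss": loss_ratio, "delay_ms": mean_delay, "jitter_ms": jitter}

# Packet 3 was lost; delays vary slightly around 42 ms.
report = nsi_report([(1, 40.0), (2, 42.0), (4, 41.0), (5, 45.0)])
print(report["loss"])  # 0.2 (1 of 5 packets missing)
```

A real collector would also have to handle sequence-number wraparound and clock skew between sender and receiver, which this sketch ignores.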
The other set of signaling messages is more difficult to handle. While SAI is mainly used between the source decoder and the channel decoder, a reduced or aggregated set should be sent toward the source as well.

2.2.2.2 Problems with the current propagation method
Currently, we use unicast for the feedback information traveling from the receivers and channel coders back to the source (NSI, CSI). This means that every receiver and access point (channel coder) sends its NSI and CSI information separately to the source. There are two main problems with this solution. First, it creates tremendous overhead in the network. Second, the source is already busy with video streaming, transcoding, and other applications that use most of its CPU power. There are several methods to solve these problems.

2.2.2.3 Possible solutions
2.2.2.3.1 Multicast propagation
One solution is to use multicast for the feedback propagation as well. There are several possibilities within the multicast scenario. We can send the feedback packets using the same multicast tree that we use for the video streams. The advantage of this method is that if a receiver joins a multicast group, which it must do to receive the video stream, it will also receive the feedback information from the other receivers. The problem is that this puts an extra load on that multicast tree, and it is not always necessary for the receivers to get the others' NSI and CSI packets. Another possibility is to use a separate multicast tree for the feedback information. In this case, every receiver has to join at least two multicast groups: one for the video stream and one for the feedback channel. In both cases we can lower the bandwidth utilization and the source's computational load by using feedback suppression: when a receiver gets a feedback packet which is similar or nearly identical (within given parameters) to what it would have sent, it does not send any feedback information, or sends only a smaller packet containing the parts that differ. There are clearly some disadvantages to the multicast propagation of the feedback information. The most important one is the question of what kind of multicast should be used. Depending on the routing algorithm used in the given protocol, the time at which each receiver gets the feedback packets from the other receivers can vary widely. With core-based algorithms, the placement of the core is a very important factor in this respect.
This is clearly a drawback, because in a real-life scenario we do not know in advance where the receivers will be.

2.2.2.3.2 Feedback aggregators
Another method to solve the feedback implosion problem is to put feedback aggregators into the network. In this case we still use unicast propagation, but the task performed by the source can be divided between the feedback aggregators and the source. With the feedback aggregators, a functionality similar to the one already present in the channel coders can be introduced into the network: the channel coders send only an aggregated CSI packet toward the source. This packet contains only the information relevant to the source's core functionality (adaptation); on the other hand, it aggregates the CSI information of all receivers connected to the given access point, so only one CSI packet is sent per access point rather than per receiver. With this aggregation we can significantly lower the network utilization, and the computational problem at the source is also solved. The aggregators could also filter out outliers, i.e., notorious receivers that tend to send very bad or very good feedback about the network or the wireless channel. This is beneficial for several security reasons. First, a receiver that wants to decrease the quality of a given group can try to send a lot of feedback with very bad parameters. This would cause the source to decrease its quality, so everyone in the group would get a worse stream than they could normally receive. An even bigger problem arises when an attacker sends feedback reporting very good conditions: when the source switches to better qualities, the network becomes congested and the quality of the service decreases significantly. If a receiver is really having trouble, it should first switch to a lower main quality (until it reaches the worst quality, or one which is suitable for it).
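The aggregation-with-outlier-filtering idea can be sketched as a trimmed mean: the aggregator discards the most extreme reports, whether they come from genuinely broken links or from malicious receivers, before averaging, so a single implausible feedback packet cannot drag the source's quality decision in either direction. The trimming fraction below is an assumption of ours, not a project parameter:

```python
# Sketch of a feedback aggregator that trims outliers before averaging.
# Discarding the extremes keeps one malicious "all is terrible" (or
# "all is perfect") report from steering the source's adaptation.

def aggregate(loss_reports, trim_fraction=0.2):
    """Trimmed mean of per-receiver packet-loss reports."""
    ordered = sorted(loss_reports)
    k = int(len(ordered) * trim_fraction)    # drop the k lowest and k highest
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Nine honest receivers around 2% loss, one attacker claiming 90% loss.
reports = [0.02, 0.01, 0.03, 0.02, 0.02, 0.01, 0.03, 0.02, 0.02, 0.90]
print(round(aggregate(reports), 3))  # 0.022 – the attacker's report is discarded
```

The same trimming naturally covers the opposite attack, a receiver reporting implausibly good conditions to provoke congestion.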
If a feedback aggregator can filter these receivers out, they will not be able to degrade the quality of the whole service. Placing these aggregators inside the network means putting extra intelligence into the network, as we do with the access points (channel coders). As with the channel coders, once these nodes are in the network we could use them not only to aggregate the feedback information, but also to help the system adapt more quickly and more efficiently. Like the channel coders, they could contain another coder, able to slightly change the coding scheme to better fit the network conditions on that branch of the multicast tree. Some of the adaptive multicast distribution schemes can benefit from these intelligent nodes. Of course, this method has its disadvantages as well. One of them is that to enable this feature we have to get inside the network, which is not always possible. Even if we can put our own software on the inner network nodes, these nodes may already be overloaded, and it is not certain that they can handle the extra computational needs of a transcoder, or even an aggregator. Of course, if we are able to put our own feedback or transcoder server machine into the network, then the only thing we must do is obtain an address from the network operator, and we can use our own servers.

2.2.2.4 Feedback aggregator placement
Another question concerns network engineering: where should we put these aggregators? One solution is to put the aggregator nodes into the branches of the multicast tree. But the multicast tree can change dynamically as users join or leave the group, so we do not know where the branches will be. If we know the actual topology, we can distribute the inner coders so that every third or fourth level of the multicast tree has one or more aggregators. Another method is to put the aggregators anywhere in the network, independently of the video distribution topology. In this case, we simply place some aggregators in the given networks: some in the core network for general reachability. One aggregator should be put in the home network (where the source is) to avoid problems when there are many receivers near the source. Also, when a network constantly has many receivers, it can be beneficial to put an aggregator into that network. It also depends on whether or not the aggregators are transcoders as well. The aggregators are mainly responsible for handling feedback information; therefore, even if an aggregator receives a lot of feedback, it can still handle it because it has no other tasks (or the service operator can easily upgrade the hardware if the quality of the service decreases significantly). Placing new feedback aggregators mainly serves load balancing between the aggregator units.

2.2.2.4.1 Aggregator communication
The placement mainly depends on the communication form (e.g., unicast, multicast, anycast) with which the receivers send data to the aggregators.

2.2.2.4.1.1 Unicast
In the unicast case, the receiver should know which aggregator to send its feedback to. But it is not clear how the receiver could know which feedback aggregator is the nearest one (by any metric). Although it is possible to autoconfigure the receiver when it joins the service, it still needs reconfiguration when changing networks (moving into another feedback aggregator's range). This also requires extra communication between the source (or the service) and the receiver to inform it about the unicast address of the feedback aggregator or transcoder (perhaps with some kind of TTL-limited router advertisement).

2.2.2.4.1.2 Multicast
Another solution is to propagate the feedback information back to the aggregators via multicast.
From the aggregators back to the source it is still unicast, because using other methods would only add complexity (and reliability must be ensured). Losing some feedback from the receivers is not a problem (and there are methods for reliable multicasting), but the aggregator-to-source delivery must be reliable; otherwise the source would not function correctly, or the adaptation would be slower or even incorrect. This kind of distribution would probably need sparse mode multicasting (many sources and only a few receivers). The disadvantage is that the transcoders would also have to take care of group membership registration. If ASM (Any Source Multicast) were used, security issues could arise, because attackers could easily send messages to the given multicast group. Also, this way every feedback aggregator would receive every user's feedback, so the source would get almost the same aggregated messages from every aggregator, which is not a good solution. Using SSM would take care of the security problem, but then each aggregator, to be able to join all users, would have to know about them before they start sending their feedback. The other problem could be solved here as well, because we could subscribe each aggregator to only a given subset of the users. Altogether, multicast may not be the best method for the communication with the aggregators.

2.2.2.4.1.3 Anycast
There is also the possibility of using anycast for forwarding the feedback or for discovering the aggregators in the network. As mentioned for the unicast solution, it is hard for a receiver to find the nearest aggregator node; anycast was designed for exactly this kind of service. When a message is transmitted to an anycast group, it is received by the nearest member (by whatever metric the anycast routing protocol or scheme supports; mainly hop count, but it can be anything else as well).
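Anycast delivery can be modeled very simply: all aggregators announce the same address, and the routing system forwards each feedback packet to the instance with the lowest metric from the sender. A toy model of this behavior, with hop counts and node names invented for illustration:

```python
# Toy model of anycast: several aggregators share one address, and the
# network delivers each packet to the nearest live instance by hop count.

# Hypothetical hop counts from each receiver to each aggregator instance.
hops = {
    "receiver-A": {"agg-1": 2, "agg-2": 7},
    "receiver-B": {"agg-1": 6, "agg-2": 3},
}

def anycast_deliver(sender, live_aggregators):
    """Pick the reachable aggregator with the smallest hop count."""
    candidates = {a: h for a, h in hops[sender].items() if a in live_aggregators}
    return min(candidates, key=candidates.get)

print(anycast_deliver("receiver-A", {"agg-1", "agg-2"}))  # agg-1
# If agg-1 goes down, routing re-converges and agg-2 takes over transparently.
print(anycast_deliver("receiver-A", {"agg-2"}))           # agg-2
```

The second call illustrates the failover property discussed below: when an aggregator disappears, packets are simply delivered to the next-nearest instance without any reconfiguration at the receiver.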
Currently there are three main usage areas of anycast: service discovery, query/reply services (the way the root DNS service is currently implemented) and routing services. This way, the user application only needs to know the anycast address used by the aggregator service, and the network takes care of routing the packets to the nearest node. The biggest problem with anycast is that there is very little anycast deployment, although it was introduced in 1993. Putting a new aggregator into the network would not require any changes to software: the anycast address used simply needs to be assigned to the new node, and it can then function as an aggregator. As mentioned, anycast can be used either for service discovery or for the service itself. In the first case, the receiver application uses it to find the nearest aggregator and then sends its feedback via normal unicast. In the second case, the receiver always sends its feedback via anycast and no service discovery is needed. The latter can be the better solution because, when an aggregator is switched off (for any reason), the routing will automatically deliver the packets to another aggregator node, while in the first case the packets are lost until the next service discovery, when another aggregator address is received.

2.2.2.5 Proposed solution
The most suitable solution is to use anycast to transmit from the receivers to the aggregator nodes. It can significantly decrease the complexity of the system (the task of finding the nearest feedback aggregator is handled by the routing system), while from the aggregators back to the source normal unicast is sufficient. This part of the communication can be replaced with anycast as well, creating a multi-level aggregation method. This can be useful when the number of aggregators increases to the point where the source again experiences feedback implosion; but with only a few aggregators (10-30), a one-level aggregation method is enough. One aggregator should be placed in the network where the source is (in a video distribution scenario). In a video conference scenario this is not possible, because the sources can be in any network. The other aggregators can be placed in any other network. When considering scenarios other than the single-hop one (Figure 2-15), it is advisable to aggregate the feedback information before sending it back to the source over the wireless channel.

2.2.2.5.1 Tasks of the aggregators
The main task of the aggregator nodes is to lessen the network utilization in the upstream part of the communication by aggregating the feedback information of several receivers and sending back only one feedback packet per aggregator. Thus, the aggregators should have some of the functionality of the source: they must be able to calculate some kind of average of the received parameters. They should also be able to handle outliers (mentioned earlier in chapter 2.2.2.3.2). There is also the possibility of using more intelligent nodes, which not only aggregate feedback information but also transcode or change the quality of the multimedia stream passing through them (active agents). With these kinds of nodes it is possible to create a multimedia overlay network and a more efficient adaptation method. Tasks overview:
• Aggregate feedback from the receivers
  o Calculate averaged values for packet loss, delay, jitter, etc.
  o Handle outlier feedback
• In case of active agent nodes
  o Transcode the multimedia stream to a given quality based on the feedback

2.2.3 Adaptive video distribution
One of the most important factors in the project is how the server can adapt itself to the needs of the various receivers, which depends on the way the information is delivered to them. In the following chapter we describe different adaptive multicast distribution methods.

2.2.3.1 Video distribution schemes
The following schemes give different solutions to the adaptation problem. There are currently five different approaches [31]:
• Single-rate non-adaptive
• Single-rate adaptive
• Simulcast
• Layered multicast
• Agent-based multicast
Table 2-5 shows a small comparison of these video distribution schemes.

Table 2-5 Comparison of different adaptive video distribution schemes

Distribution scheme | Adaptation mechanism | Network requirement | Coding requirement
Single-rate non-adaptive | None | - | None
Single-rate adaptive | Scalable feedback control | - | Rate control
Simulcast | Scalable feedback control | - | Rate control, Transcoding
Layered: Network driven | Priority dropping | Priority identification | Scalable coding
Layered: Receiver driven | Joining/leaving groups | - | Scalable coding
Agent-based multicast | Transcoding in agents | Active service node | Transcoding

2.2.3.1.1 Single-rate non-adaptive
This is the simplest multicast distribution method. Here the server has only one quality of the given video stream and no adaptation is available from the server. Because of this, the method is out of the scope of the project.

2.2.3.1.2 Single-rate adaptive
This scheme is a modified version of the one above, where the source has several versions of the video stream in different qualities. This enables it to adapt to the underlying network parameters. The source still sends only one stream, but the quality of the stream can change over time. The quality is determined by the feedback the receivers send back to the source. For fast adaptation, the receivers need to send their feedback very often: the more often they send it, the more accurately the source can set the quality parameter. But this also increases bandwidth consumption, which means we need high bandwidth in the upstream part as well, and it can lead to the flooding of the source. One solution to this problem is that the source generates a 16-bit random key and sends it together with a number indicating how many bits of the key are significant. Each receiver generates a key as well, but only responds to the solicitation if the keys and the states match. In small networks, fewer matching bits are enough, while in large networks the source should use a higher number of significant bits. This works because in larger networks the probability of receiving appropriate and sufficient feedback is higher. The scalability of this method is good. Although this distribution scheme is very simple and the source implosion problem can easily be eliminated, it has one very big disadvantage: the source has to adapt itself to the worst receiver. This means that if we have one bad receiver, all the other receivers will get the same bad quality stream, even though they would be able to receive a better quality one. This is not acceptable, because especially in wireless networks there is a high probability of hosts with low-quality channels and low bandwidth.
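The 16-bit random-key mechanism described above can be sketched as follows: the source advertises its key and the number of significant bits, each receiver draws its own key, and a receiver answers only when the two keys agree in those bits, so on average roughly 1 in 2^n receivers respond. We assume here that the low-order bits are the significant ones; the function names are ours:

```python
import random

# Sketch of the 16-bit random-key feedback suppression described above:
# a receiver answers a solicitation only if its own random key matches the
# source's key in the advertised number of significant (low-order) bits.

def should_respond(source_key, significant_bits, rng):
    receiver_key = rng.getrandbits(16)
    mask = (1 << significant_bits) - 1
    return (receiver_key & mask) == (source_key & mask)

rng = random.Random(42)          # fixed seed, for repeatability
source_key = rng.getrandbits(16)

# With 4 significant bits, roughly 1 receiver in 16 responds.
responders = sum(should_respond(source_key, 4, rng) for _ in range(10_000))
print(responders)  # close to 10_000 / 16 ≈ 625
```

Raising the number of significant bits thins out the responders exponentially, which is exactly why large networks should use more bits than small ones.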
2.2.3.1.3 Simulcast

The base concept of this distribution scheme is that the source sends out several streams of the same video at different qualities, and each receiver chooses the quality best suited to its network conditions. A working implementation of this scheme is the DSG (Destination Group Setting) protocol. In this protocol there are three different video qualities. All of them are forwarded towards the receivers, and the receivers decide which one to actually receive. If the network conditions change, the receiver switches to a better or worse quality. Changing between the qualities means joining and leaving the corresponding multicast groups. Each multicast group carries only the quality assigned to it, so the members of a group receive only one stream and not all three. The only parts of the network where several streams have to be forwarded are those where the multicast trees intersect. No support is needed from the network, because normal IP multicasting is enough to handle the changes in group memberships. The flooding problem does not arise here, because no feedback needs to travel back to the source: the receivers adapt themselves to the network conditions. Of course, there is a problem with very good and very bad receivers: when a receiver has worse conditions than the worst quality or better conditions than the best quality, there is no way to adapt to it. This is a bigger problem for the bad receivers, because they will get the worst-quality stream with high packet loss, while the very good receivers simply get the best available quality (even though they could receive better) without problems.

2.2.3.1.4 Layered adaptation

Layered multicasting is a distribution method that can send several qualities of a video stream at once. Its essence is that the video stream is separated into layers: a base layer and one or more enhancement layers.
The base layer must be decodable by itself; the enhancement layers only add extra information to it, raising the quality of the video stream. Figure 2-16 shows an example of a simple layered coding scheme for a video stream. There are two approaches to determining how the layers are handled:
• Network-driven approach
• Receiver-driven approach

Figure 2-16 Example of layer coded video stream
2.2.3.1.4.1 Network-driven adaptation

When there is not enough bandwidth, the routers can drop as many packets from the enhancement layers as they need. This method guarantees that the receivers will get the best quality they can receive. The scheme can be further enhanced by adding priorities to the enhancement layers. The base layer must always reach the receivers, and the priority information tells the routers which enhancement layers add the most extra information to the base layer, and thus which layers should be forwarded with higher priority. This means the method gives the routers the task of deciding whether they can forward the given packets, so it requires support from routers that can handle the priority information.

2.2.3.1.4.2 Receiver-driven adaptation

When the receiver itself can set which layers it wants to receive, the scheme is called receiver-driven layered multicasting. In this case no support is needed from the network. A receiver periodically joins a higher layer to explore the available bandwidth. If packet loss exceeds some threshold after the join experiment, the receiver leaves the group; otherwise it stays at the new subscription level. Receiver-driven congestion control is also possible: the sender temporarily increases the sending rate on a layer, and a receiver joins a higher layer only if there is no packet loss during this experiment.

2.2.3.1.5 Agent based adaptation

The significance of this method is that the video stream is controlled all the way along its path from the source to the receivers. For this, some intelligence must be put into the network; this intelligence is called an agent. The tasks of the agents are to send feedback to the source, to the receivers and to the other agents, and also to handle the multicast stream. There are two types of agents:
• Passive: it only collects information and sends out its feedback.
• Active: it can actively change the video stream. This should not be done often, because it takes a lot of computational resources, and changing the coding of the stream affects the remaining part of the network.

With this method the coding of the video stream can be optimized at every part of the network. Some of these tasks are already built into the RTP protocol, as the following services:
• RTP-level mixer
• Translator

2.2.3.2 Proposed solution: Adaptive simulcast

While the methods described above were mainly developed for the wired Internet, the mobile environment creates additional requirements that must be met as well. The methods using feedback information must face the fact that the upstream part of a wireless network has much less bandwidth than a wired network. Another important fact is that the parameters of the wireless channel change more often, so adaptation must be much faster. The base of our work is the joint source and channel coding method. The concept of this technique is to control and adapt both the source and the channel coding parameters according to the network parameters. However, in our case, to allow faster adaptation and more bandwidth saving, the coders are separated: the source coder is located at the server node, while the channel coder sits right before the wireless part of the network, at the access points. This helps the whole process in several respects. First of all, the channel coders obtain the channel parameters of the wireless links more quickly, so they can adapt rapidly. They will also send back to the source only as much information about the wireless channel as the source coder needs to fulfill its task [32] [33]. Another task of the channel coder is to gather the parameters of the wired network part needed by the source coder and send them back. In this way, the channel coders act like active agents.
Although they cannot modify the source coding parameters of the media flow, they can modify the channel coding parameters in real time. This mainly means choosing the channel coding method best suited to the given receivers and setting the redundancy used in that method. As the base of the video distribution method we chose simulcast. Of the two most popular techniques, simulcast and layered multicast, we decided in favor of simulcast for two reasons. Firstly, layered multicast needs redundant coding, while simulcast only needs the media transcoded into the required qualities, which is much simpler. Secondly, the layered coding technique creates a very big overhead: it can reach a hundred percent, which means that its bandwidth consumption can equal or exceed the bandwidth used by the simultaneously sent simulcast flows. It is also easier to create end-user software for simulcast, because there is no need to synchronize the data coming from the several layers of the layered approach [34] [35].
Our main extension to simulcast is to add adaptation features on the server side as well. While the user application still chooses the quality it wishes to receive, the server fine-tunes the delivered media feed based on the feedback it receives from the channel coders (network-level parameters such as packet loss, jitter, etc.) and from the receivers (wireless channel parameters such as bit error rate, etc.). This fine tuning is needed because of the coarse spacing of the different simulcast qualities: with large user numbers we cannot afford a different quality for every second or third user, as that would effectively amount to falling back to unicast. The connection between these methods can be seen in Figure 2-17. The adaptation capabilities can be extended further by letting, for example, the users with the best channel parameters receive a slightly better stream if they are able to. This is done by extending simulcast to send the different feeds not at fixed qualities but within quality ranges. For example, instead of a 500 kbit/s feed, the server offers a 450 kbit/s – 550 kbit/s quality range; it changes the real-time coding parameters within this range based on the feedback information and thus tries to adapt to the users.

Figure 2-17 The comparison between unicast, simulcast and normal multicast (Source: [35])
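The quality-range idea above can be sketched as follows. The band width, step size and loss threshold used here are illustrative assumptions, not values from the deliverable; a real server would drive the coder from the aggregated channel-coder and receiver feedback.

```python
class SimulcastFeed:
    """One simulcast feed with a nominal rate plus a tolerance band,
    e.g. 500 kbit/s nominal -> a 450-550 kbit/s quality range."""

    def __init__(self, nominal_kbps, band_kbps=50, step_kbps=10):
        self.low = nominal_kbps - band_kbps
        self.high = nominal_kbps + band_kbps
        self.rate = nominal_kbps
        self.step = step_kbps

    def adapt(self, loss_rate):
        """Lower the real-time coder rate on reported loss, creep back
        up when the path is clean, but never leave the feed's range."""
        if loss_rate > 0.02:  # assumed loss threshold
            self.rate = max(self.low, self.rate - self.step)
        else:
            self.rate = min(self.high, self.rate + self.step)
        return self.rate
```

Because each feed only moves inside its own band, the coarse simulcast qualities stay well separated while individual feeds still track their receivers' conditions.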
2.2.4 Multicast distribution architecture

In this chapter we briefly summarize the architectural elements we have chosen as best suited to the Phoenix project.

2.2.4.1 Routing and multicast model

Two parameters must be considered when selecting a multicast protocol: the multicast model we want to use, and which mode fits the applications in the project better.

2.2.4.1.1 Dense-Sparse mode

One of the main assumptions of the dense mode protocols is that there is enough bandwidth and that the users are numerous and densely distributed. Because of the wireless parts we cannot count on high bandwidth, so we must use sparse mode protocols. Among the sparse mode protocols, PIM-SM is the best choice because of the advantages already mentioned in chapter 2.2.1.

2.2.4.1.2 Multicast model

The SSM model, and with it the PIM-SSM and MLDv2 protocols, would be more suitable for us, but there are currently no working implementations of these protocols; in fact, the architectural description of PIM-SSM is still only an Internet Draft. That is why we use the PIM-SM and MLDv1 protocols for multicast routing and group management.

2.2.4.2 Feedback propagation

For feedback propagation we suggest using a feedback aggregation mechanism. The feedback aggregators should be reached by anycast, or by anycast-based service discovery followed by normal unicast.

2.2.4.3 Video adaptation

The video distribution method we use is the so-called adaptive simulcast method: the simulcast approach extended with adaptation capabilities. The other extension to simulcast is the agent-based one, an example of which is the channel coder located at the access points, which can change the channel coding parameters in real time.
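As an illustration of the feedback aggregation suggested in 2.2.4.2, and of the outlier handling expected of aggregator nodes, the sketch below condenses many receiver loss reports into a single summary and discards the worst few reports so that one pathological receiver cannot drag the whole group's quality down. The clipping rule and the summary fields are assumptions for illustration only.

```python
import statistics

def aggregate_feedback(loss_rates, clip=0.9):
    """Condense per-receiver loss reports into one summary for the
    source, dropping the worst (1 - clip) fraction as outliers."""
    ordered = sorted(loss_rates)
    kept = ordered[: max(1, int(len(ordered) * clip))]
    return {
        "median": statistics.median(kept),  # typical receiver condition
        "worst_kept": kept[-1],             # worst non-outlier receiver
    }
```

With aggregation in place, the source receives one small report per aggregator instead of one per receiver, which is essential given the limited upstream bandwidth of the wireless parts.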
Another possible use of agents is the placement of active transcoders (combined with the feedback aggregators) in the network.

2.3 IPv6

IPv6 (initially called "IPng", Internet Protocol next generation) is the new version of the IP protocol, developed by the IETF (Internet Engineering Task Force, the main organization concerned with Internet data transmission standards) at the beginning of the 90s. The previous version, IPv4, is the protocol implemented in most computers and networks today. The future Internet needs security, authentication and privacy mechanisms, and users need enough capacity to support new multimedia applications (which use more and more bandwidth) with guaranteed video and audio flow delays. IPv6 was developed for several reasons. The most important one is the ever-increasing number of machines connected to the Internet. In the near future, the 32 bits used in IPv4 (which allow 2^32 different IPv4 addresses) will not be enough to address them all. In IPv6 the number of usable addresses has been dramatically increased, to 2^128 (128 bits per address). This deficit of addresses in IPv4 goes back to the original development of the IP protocol in the 70s, when its creators did not foresee the protocol's huge success in so many fields, not only scientific or educational but also in daily life. The problem could partially be solved using address reassignment or NAT (Network Address Translation). The latter solution consists of using one public IP address for the whole network and private IP addresses inside it. However, with this solution many applications would be confined to intranets, since many protocols, such as real-time protocols or IPsec, are not capable of traversing NAT devices. In addition, another problem would remain: the great size of the routing tables in the Internet backbone, which adversely affects response times.
After IPv4 was created, the many new applications using it made it necessary to add "patches" to the protocol. The best known are those related to Quality of Service (QoS), security (IPsec) and mobility (Mobile IP). The inconvenience of these extensions arises when they are used simultaneously, because they were designed independently and added afterwards.
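The scale of the jump from 32-bit to 128-bit addressing mentioned above can be checked directly with Python's standard ipaddress module (the 2001:db8:: prefix used below is the IPv6 documentation prefix):

```python
import ipaddress

# The whole IPv4 space holds 2**32 addresses, the IPv6 space 2**128.
assert ipaddress.IPv4Network("0.0.0.0/0").num_addresses == 2**32
assert ipaddress.IPv6Network("::/0").num_addresses == 2**128

# A single customary IPv6 /64 subnet already contains 2**64 addresses,
# i.e. about four billion times the entire IPv4 address space.
subnet = ipaddress.IPv6Network("2001:db8::/64")
assert subnet.num_addresses == 2**64
```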
The main new characteristics of IPv6 are:
o Expanded addressing capabilities. IPv6 increases the size of the IP address from 32 bits to 128 bits. Solutions designed to work around the lack of addresses, like NAT, are no longer necessary with IPv6. The scalability of multicast routing is improved by adding a "scope" field to multicast addresses. In addition, a new type of address called an "anycast address" is defined, used to send a packet to any one of a group of nodes.
o Flexible and simplified header format. The fixed size of the IPv6 header (40 bytes) and the optional extension headers placed after it allow the routers to save processing time and make packet routing more efficient.
o Enhanced support for extensions and options. The changes in the encoding of header options make the limits less stringent and introducing new options in the future more flexible. Moreover, IP packets are delivered more efficiently because there is no fragmentation in the routers; hence routing in the network backbone is more efficient and faster.
o Autoconfiguration. Hosts can be configured without needing servers: devices can build their own IPv6 addresses from the information they obtain from the network router. Moreover, there are reconfiguration facilities.
o Security. In IPv4, the only ways to address security problems were SSL for transport-level security, SSH or HTTP-based mechanisms for application-level security, or IPsec for network-level security, which is not commonly used. In IPv6, support for IPsec is mandatory.
o Quality of Service (QoS) and Class of Service (CoS). Packets belonging to particular traffic flows can be labeled, and senders can request special treatment for them, such as quality of service or real-time service. It is possible to reserve network resources for multimedia applications with guaranteed bandwidth and delays.
o Mobility.
More and more, the tendency is to have network connectivity everywhere and to enjoy the same functionality regardless of the place of connection. With protocols like MIP (Mobile IP) or HMIP (Hierarchical MIP) it is possible to use multimedia services like voice over IP or video on demand without interrupting the active connections, even when changing networks, as will be explained later. Again, these are patches for the IPv4 protocol, but in IPv6 mobility is a mandatory functionality, so every system that uses IPv6 must implement it, because it has been included in the protocol from the start. This feature will be very important when UMTS mobile telephone networks start to operate.
o Authentication and privacy capabilities. IPv6 specifies extensions providing authentication, data integrity and confidentiality.

It must be remarked that these are basic characteristics: the structure of the protocol itself allows it to grow and to scale according to the requirements of new applications or services. Scalability is the most important feature of IPv6 compared with IPv4. IPv6 is a fundamental ingredient of the vision of the mobile information society. The number of wireless phones already surpasses the number of fixed Internet terminals. At present, IPv6 is seen as the only viable architecture that can accommodate the new wave of Internet-capable cellular devices. In addition, IPv6 enables the supply of the services and benefits demanded by mobile infrastructures (GPRS, General Packet Radio Service, or UMTS), broadband networks, consumer electronics and terminals, and the associated interoperability and management.

2.4 Mobile IPv6

Mobile IPv6 provides transparent layer 3 mobility for the upper layers (e.g. UDP), so a mobile node remains reachable at its home address regardless of whether it is connected to its home network or to another one [36].
The transition, or handover, between networks is transparent for the upper layers; the connectivity loss produced during the handover is due to the exchange of the corresponding signaling messages. Every mobile node (MN) has a home address, which is its original network address. This address remains valid even when the mobile node moves to another network. Packets sent to the mobile node while it stays in its original network are routed normally, as if the node were not mobile. The prefix of this address is the same as the prefix of the network where the node is originally connected. When a mobile node moves to a network different from its original one, it obtains a new "guest" address (care-of address, CoA) belonging to the address space of the visited network. The mobile node can acquire its care-of address through conventional IPv6 mechanisms, such as stateless or stateful autoconfiguration. From then on, it can also be reached through this new address (apart from the home address). After obtaining the new address, the mobile node contacts a router in its original network, the home agent (HA), and registers its current CoA with it.
Afterwards, when a packet is sent to the mobile node's home address, the home agent intercepts it and tunnels it to the mobile node's CoA. This correspondence between the mobile node's home address and its current CoA (while it stays in the new network) is called a binding. With this mechanism the packets reach the mobile node's current location: the CoA belongs to the address space of the subnet where the node is connected, so normal IP routing delivers the packet sent by the home agent (which carries the original IP packet inside the encapsulation) to the mobile node. The mobile node can have more than one CoA. This scenario corresponds, for example, to the normal structure of wireless networks (such as cellular telephone systems), where one mobile node can be connected simultaneously to several networks (for example several overlapping cells) and must be reachable through any of them. Normally, a mobile node obtains a CoA through stateless autoconfiguration, although it can also use stateful methods (like DHCPv6) or static preassignment. A typical packet exchange between a mobile node and a correspondent node is shown in Figure 2-18. A packet sent by the correspondent node to the mobile node, while the latter is visiting a foreign network, arrives at the home network, where the home agent intercepts it and forwards it to the mobile node's current location. It can also be observed that packets sent by the mobile node are delivered directly. This has been changed in the latest specifications of Mobile IPv6 (due to security problems): in the most basic cases, packets sent by the mobile node go through a reverse tunnel. MIPv6 also provides mechanisms that allow a direct connection between the mobile and the correspondent node, without the intervention of the home agent being necessary. When the home agent is needed for that type of communication, it is called "triangle routing" and it is less efficient.
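A minimal sketch of the binding and interception behaviour described above is given below. The addresses are taken from the IPv6 documentation prefix, and the dictionary "tunnel" is only a stand-in: a real home agent performs IPv6-in-IPv6 encapsulation of the intercepted packet.

```python
class HomeAgent:
    """Toy model of a MIPv6 home agent's binding cache."""

    def __init__(self):
        self.bindings = {}  # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        """Binding Update received from the mobile node after handover."""
        self.bindings[home_addr] = care_of_addr

    def intercept(self, packet):
        """Packets addressed to a registered home address are tunneled
        to the care-of address; all other packets pass unchanged."""
        coa = self.bindings.get(packet["dst"])
        if coa is None:
            return packet
        return {"dst": coa, "inner": packet}  # stands in for encapsulation

ha = HomeAgent()
ha.register("2001:db8:home::10", "2001:db8:visited::77")
```

The binding entry is exactly the home-address-to-CoA correspondence defined in the text: as long as it exists, any packet arriving for the home address is re-addressed to the visited network.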
Triangle routing also appears in IPv4, but IPv6 offers a route optimization mechanism that surpasses it. This route optimization procedure allows the mobile node to put its CoA (its temporary address while visiting a network) as the source address of the IPv6 packets it sends to correspondent nodes. The mobile node includes an IPv6 destination option called "home address", which contains its home address, so the correspondent nodes can identify which mobile node the packets come from. In the other direction, the packets sent by a correspondent node to the mobile node carry the CoA of that mobile node as the destination address. Again, to achieve transparent mobility, these packets carry a special IPv6 routing header containing a single hop: the mobile node's home address.

Figure 2-18 Route optimization in Mobile IPv6
3 Quality of Service

In this section, first a brief description of the Quality of Service (QoS) concept in the IP world is provided, in terms of general definitions and specifications. Second, the descriptions of the traffic and of the reference working scenarios are introduced. Then a summary of the main principles and of the analysis and simulation results is reported and discussed, including some considerations about the design choice of deploying and examining a specific scheduling discipline (WFQ). Finally, the application of the measurement process to the specific WFQ proposal is described.

3.1 Providing Quality of Service in IP networks

In the previous release of this deliverable (D3.2a), the issue of providing Quality of Service (QoS) in an IP network was tackled by focusing on a single interface and considering a specific family of scheduling disciplines, namely GPS (Generalized Processor Sharing). The concerns related to resource management for achieving target QoS guarantees (more specifically, delay and loss) were analyzed with reference to a well-known scheduling scheme, WFQ (Weighted Fair Queuing). A dynamic version of this algorithm was proposed to better exploit the available transmission resources, and the collected simulation results have shown the benefits of such a dynamic WFQ. It is possible to obtain relative delay differentiation for the various traffic aggregates at a given interface, in relation to the numerical factors assigned to each associated queue (termed P_i for the i-th queue). With proper planning and resource provisioning of the network, it is even possible to achieve absolute QoS guarantees. However, the possible network scenarios and the design and configuration parameters are enormous in number, and further investigations into the behavior of a dynamic GPS-like scheduler are needed to effectively provide the desired QoS in all working conditions.
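To make the role of the per-queue factors P_i concrete, the following is a minimal sketch of the weighted selection underlying a WFQ scheduler; it is not the deliverable's dynamic algorithm. Each arriving packet is stamped with a virtual finish time that grows by size/P_i for its queue, and the scheduler always serves the packet with the smallest stamp, so a queue with a larger P_i accumulates finish time more slowly and receives proportionally more service. The sketch deliberately omits the system virtual clock of a full GPS emulation.

```python
import heapq

class WfqScheduler:
    """Simplified WFQ: per-queue weights P_i and virtual finish times."""

    def __init__(self, weights):
        self.weights = weights                 # queue id -> P_i
        self.finish = {q: 0.0 for q in weights}
        self.heap = []                         # (finish_time, queue, size)

    def enqueue(self, queue_id, size):
        # Larger weight -> smaller finish-time increment -> more service.
        self.finish[queue_id] += size / self.weights[queue_id]
        heapq.heappush(self.heap, (self.finish[queue_id], queue_id, size))

    def dequeue(self):
        """Serve the packet with the smallest virtual finish time."""
        if not self.heap:
            return None
        _, queue_id, size = heapq.heappop(self.heap)
        return queue_id, size
```

A dynamic WFQ, as studied here, would additionally adjust the P_i values at run time from measurements of each aggregate, which is exactly where the measurement process discussed below comes in.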
It will be highlighted how the measurement process plays a fundamental role in obtaining both an effective and a consistent system. Moreover, the related computational overhead must be properly taken into account, since it is the most expensive drawback of the proposed novel scheduling module. The designed scheme was essentially devised to investigate the performance achievable with a dynamic version of WFQ, but it is foreseeable that better algorithms exist, both in terms of resource exploitation and of stricter control of the target QoS guarantees. After a very brief summary of the analysis already conducted and of the results achieved, the following sections first provide additional simulation results and studies on the basic proposal of a dynamic WFQ, covering a wider set of working scenarios, design choices and configuration settings; in this way, conclusions helpful to a network designer or administrator can be drawn. A light and consistent measurement process is then proposed and evaluated for different types of traffic aggregates, relevant figures (e.g. buffer dimensions, peak/mean rates) and setup options. In particular, the application of such a mechanism to the basic dynamic WFQ is studied and assessed against the already deployed measurement processes, also with traffic aggregates variable in average rate and in the real-time characteristics of their component flows. Finally, a more sophisticated and better performing dynamic WFQ scheduling scheme is conceived and analyzed in detail, as the final solution for the resulting PHOENIX system.

3.2 Traffic and reference working scenarios descriptions

First of all, for the sake of completeness, we recall very briefly the considered network traffic and the reference scenario, including the scheduler's main configuration parameters and the link interface characteristics. As already stated in D3.2a, the generated traffic must represent a typical aggregate in a Differentiated Services network.
For the first set of simulations we employed H.263 video flows at different bit rates, ranging from 64 to 256 kbit/s in mean value, created from real traces of video streaming and conferencing applications. Figure 3-19 shows the traffic generated by such a source. Given the nature of a typical compressed video flow, the bit rate can be highly variable, with a burstiness factor (peak to mean rate ratio) of up to 10.
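The burstiness factor mentioned above is simply the peak rate divided by the mean rate over a trace of per-interval bit rates; the sample values below are made up for illustration, not taken from the H.263 traces.

```python
def burstiness(rates_kbps):
    """Peak-to-mean rate ratio of a trace of per-interval bit rates."""
    return max(rates_kbps) / (sum(rates_kbps) / len(rates_kbps))

# A flow idling at 100 kbit/s with a single 700 kbit/s burst:
# mean = 250 kbit/s, peak = 700 kbit/s, burstiness = 2.8.
```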