3G networks faced fundamental issues in trying to accommodate the projected demand for mobile
Internet service. One such issue is the high cost of either expanding the network or the network
operation in general. Such costs became a substantial consideration when addressing the 3G
network performance in densely populated areas or when trying to overcome coverage dead
spots. Of particular importance is performance at the cell edge, that is, connection quality in the overlaps between the coverage areas of neighboring cells, which has repeatedly been observed to be poor in 3G networks. Such problems would usually be addressed by increasing the
deployment of Base Stations (BS), which in addition to their high costs entail additional
interconnection and frequency optimization challenges. Certain performance limitations of 3G networks were also expected to become more pronounced as the networks scaled, for example, degraded delay performance under increased traffic demand.
General support for different levels of mobility also suffered greatly in WCDMA-based networks. Perhaps most critical was the indoor and dead-spot performance of 3G networks, especially since various studies have indicated that the bulk of network usage occurs either at the office or at home. Combined, the above issues made it cumbersome for
operators to respond to the ever-increasing demand. Meanwhile, handling certain heterogeneities has made it harder for both operators and user equipment vendors to maintain homogeneous and streamlined service and production structures. For example, the spectrum mismatch between even neighboring countries in 3G deployments prevented users from roaming between different networks, and at times even required them to use different handsets. At the same time, despite the long availability of multi-mode user equipment, it has thus far been difficult to maintain handovers across the different technologies.
1.3 The ITU-R Requirements for IMT-Advanced Networks
The general requirements for IMT-Advanced surpass the performance levels of 3G networks.
Enhanced support of basic services (i.e., conversational, interactive, streaming and background)
is expected. While data rates are perhaps the key defining characteristic of IMT-Advanced networks, the requirements in general will enable such networks to exhibit other important features, including the following:
• A high degree of commonality of functionality worldwide with flexibility to support a wide range of services and applications in a cost-efficient manner. Emphasis here is on ease of service and application distribution and deployment.
• Compatibility of services within IMT and with fixed networks. In other words, IMT-Advanced should fully realize extending broadband Internet activity over wireless and on the move.
• Capability of interworking with other radio access systems. This is an advantage for both operators and users, as it expands the viability of using the RIT most appropriate for a certain location, traffic load and mobility level. It also strengthens the economic stance of the users.
• High-quality mobile services. Emphasis here is not just on high data rates, but on sustainable high data rates, that is, connection performance that overcomes both mobility and medium challenges.
• User equipment suitable for worldwide use. A clear emphasis on eliminating, as much as possible, handset and user equipment incompatibility across the different regions.
• User-friendly applications, services and equipment. Ease and clarity of use in both the physical and the virtual interfaces.
• Worldwide roaming capability. An emphasis on exploiting harmonized spectrum allocations.
• Enhanced peak data rates to support advanced services and applications (100 Mbit/s for high mobility and 1 Gbit/s for low mobility). Such values are to be considered the minimum supported rates, with higher rates encouraged from the contending candidates.
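As a rough illustration of what the peak-rate targets imply, the minimum rates can be converted into required channel bandwidth for an assumed peak spectral efficiency. The 15 bit/s/Hz figure below is an assumption for illustration, not a value from the ITU-R requirements:

```python
# Illustrative back-of-the-envelope check: channel bandwidth needed to reach a
# peak rate at an assumed peak spectral efficiency (15 bit/s/Hz is an assumed
# value, not a figure from the requirements).

def required_bandwidth_hz(peak_rate_bps: float, spectral_eff_bps_per_hz: float) -> float:
    """Minimum bandwidth needed to sustain peak_rate at the given efficiency."""
    return peak_rate_bps / spectral_eff_bps_per_hz

bw_low_mobility = required_bandwidth_hz(1e9, 15)     # 1 Gbit/s low-mobility target
bw_high_mobility = required_bandwidth_hz(100e6, 15)  # 100 Mbit/s high-mobility target
print(f"1 Gbit/s needs ~{bw_low_mobility / 1e6:.1f} MHz")    # ~66.7 MHz
print(f"100 Mbit/s needs ~{bw_high_mobility / 1e6:.1f} MHz") # ~6.7 MHz
```

Even at such optimistic efficiencies, the 1 Gbit/s target already calls for the wide, possibly aggregated, channels that the IMT-Advanced candidates adopted.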
WiMAX, the Worldwide Interoperability for Microwave Access, is a telecommunications
technology that provides for the wireless transmission of data in a variety of ways, ranging from
point-to-point links to full mobile cellular-type access. The WiMAX forum describes WiMAX as
a standards-based technology enabling the delivery of last mile wireless broadband access as an
alternative to cable and Digital Subscriber Line (DSL). WiMAX network operators face a significant challenge in enabling interoperability between vendors, which brings lower costs, greater flexibility and freedom. It is therefore important for network operators to understand the methods of establishing interoperability and how different products, solutions and applications from different vendors can coexist in the same WiMAX network.
Introduction to WiMAX Technology
WiMAX is a metropolitan area network service that typically uses one or more base stations, each of which can provide service to users within a 30-mile radius, to distribute broadband wireless data over wide geographic areas. WiMAX offers a rich set of features with a great deal of flexibility
in terms of deployment options and potential service offerings. It can provide two forms of wireless service:
• Non-Line-of-Sight (NLoS) service – This is a WiFi-like service. Here a small antenna on the computer connects to the WiMAX tower. In this mode, WiMAX uses a lower frequency range (∼2 GHz to 11 GHz), similar to WiFi.
• Line-of-Sight (LoS) service – Here a fixed dish antenna points straight at the WiMAX tower
from a rooftop or pole. The LoS connection is stronger and more stable, so it’s able to send a lot
of data with fewer errors. LoS transmissions use higher frequencies, with ranges reaching a
possible 66 GHz.
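The frequency split between the two service forms can be illustrated with the free-space (Friis) path-loss formula: higher carrier frequencies lose substantially more power over the same distance, which is one reason the upper bands need LoS operation. A small sketch (the 1-km distance and the 3.5/60 GHz sample frequencies are illustrative choices):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space (Friis) path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 1000.0                      # a 1-km link (illustrative)
loss_low = fspl_db(d, 3.5e9)    # in the sub-11 GHz NLoS range
loss_high = fspl_db(d, 60e9)    # near the upper LoS frequencies
print(f"3.5 GHz: {loss_low:.1f} dB, 60 GHz: {loss_high:.1f} dB")
# The higher band loses an extra 20*log10(60/3.5) ≈ 24.7 dB over the same path,
# before counting rain and oxygen absorption, which also worsen at high bands.
```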
A WiMAX system consists of two parts:
• A WiMAX Base Station (BS) – According to the IEEE 802.16 specification, the range of a WiMAX base station is a 30-mile (50-km) radius.
• A WiMAX receiver – The receiver and antenna could be a small box or Personal Computer
Memory Card International Association (PCMCIA) card, or they could be built into a laptop the
way WiFi access is today. Figure 1.1 explains the basic block diagram of WiMAX technology.
A WiMAX base station can provide coverage to a very large area up to a radius of six miles. Any
wireless device within the coverage area would be able to access the Internet. It uses the MAC
layer defined in the IEEE 802.16 standard. This common interface, which makes the networks interoperable, allocates uplink and downlink bandwidth to subscribers according to their needs, on an essentially real-time basis. Each base station provides wireless coverage over an
area called a cell. Theoretically, the maximum radius of a cell is 50 km or 30 miles. However,
practical considerations limit it to about 10 km or six miles. The WiMAX transmitter station can
connect directly to the Internet using a high-bandwidth, wired connection (for example, a T3
line). It can also connect to another WiMAX transmitter using LoS microwave link. This
connection to a second base station (often referred to as a backhaul), along with the ability of a
single base station to cover up to 3000 square miles, is what allows WiMAX to provide coverage
to remote rural areas. It is possible to connect several base stations to one another using high-
speed backhaul microwave links. This would also allow for roaming by a WiMAX subscriber
from one base station coverage area to another, similar to the roaming enabled by cell phones. A
WiMAX receiver may have a separate antenna or could be a stand-alone box or a PCMCIA card
sitting on user laptop or computer or any other device.
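The coverage figures quoted above follow directly from the circle-area formula A = πr²; a quick check (illustrative only):

```python
import math

def coverage_area_sq_miles(radius_miles: float) -> float:
    """Circular-cell coverage area, A = pi * r^2."""
    return math.pi * radius_miles ** 2

print(f"30-mile radius: {coverage_area_sq_miles(30):.0f} sq mi")  # ~2827, i.e. "up to 3000"
print(f"6-mile radius:  {coverage_area_sq_miles(6):.0f} sq mi")   # ~113 in practice
```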
A typical WiMAX deployment will comprise WiMAX BSs providing ubiquitous coverage over
a metropolitan area. WiMAX BSs can be connected to the edge network by means of a wireless
point-to-point link or, where available, a fibre link. Combining a wireless router with the
WiMAX terminal will enable wireless distribution within the building premises by means of a
WiFi LAN. Because of the relatively limited spectrum assignments in the lower-frequency
bands, WiMAX deployments will usually have a limited capacity, requiring BS spacing on the
order of two to three km. In lower-density rural areas, deployments will often be range-limited rather than capacity-limited, thus taking advantage of the full coverage capability of WiMAX, which can achieve NLoS coverage over an area of 75 sq km in the 3.5-GHz band. WiMAX has been increasingly
called the technology of the future. Belonging to the IEEE 802.16 series, WiMAX will support
data transfer rates up to 70 Mbps over link distances up to 30 miles. Supporters of this standard
promote it for a wide range of applications in fixed, portable, mobile and nomadic environments,
including wireless backhaul for WiFi hot spots and cell sites, hot spots with wide area coverage,
broadband data services at pedestrian and vehicular speeds, last-mile broadband access, etc. So
WiMAX systems are expected to deliver broadband access services to residential and enterprise
customers in an economical way. WiMAX would operate in a similar manner to WiFi but at
higher speeds, over greater distances and for a greater number of users. WiMAX has the ability
to provide a service even in areas that are difficult for wired infrastructure to reach and with the
ability to overcome the physical limitations of a traditional wired infrastructure.
Need for WiMAX
WiMAX can satisfy a variety of access needs. Potential applications include:
• Extending broadband capabilities to bring them closer to subscribers, filling gaps in cable, DSL
and T1 services, WiFi and cellular backhaul, providing last-100 meter access from fibre to the
curb and giving service providers another cost-effective option for supporting broadband.
• Supporting very high bandwidth solutions where large spectrum deployments (i.e. >10 MHz)
are desired using existing infrastructure, keeping costs down while delivering the bandwidth
needed to support a full range of high-value, multimedia services.
• Helping service providers meet many of the challenges they face owing to increasing consumer
demand. WiMAX can help them in this regard without discarding their existing infrastructure
investments because it has the ability to interoperate seamlessly across various network types.
• Providing wide area coverage and quality of service capabilities for applications ranging from
real-time delay-sensitive Voice-over-Internet Protocol (VoIP) to real-time streaming video and
non-real-time downloads, ensuring that subscribers obtain the performance they expect for all
types of communications.
• Being an IP-based wireless broadband technology, WiMAX can be integrated into both wide-
area third-generation (3G) mobile and wireless and wireline networks, allowing it to become part
of a seamless anytime, anywhere broadband access solution.
• Ultimately, serving as the next step in the evolution of 3G mobile phones, via a potential combination of WiMAX and Code Division Multiple Access (CDMA) standards called Fourth Generation (4G).
IEEE 802.16 Standards and Connectivity Modes
A WiMAX network topology is organized in a cellular-like architecture. The network is
deployed to provide access to a large urban or rural area. Figure 2.1 shows the WiMAX network
topology, where a cell is composed of one or several base stations, denoted by BSs, and a set of
Mobile or Subscriber Stations, denoted by MSs or SSs respectively. Depending on the version of
the IEEE 802.16 standard and the frequency employed, the Subscriber Stations may or may not be in
the Line-of-Sight of the Base Station antenna. To extend the network and connect to backhaul,
the architecture supports the use of Repeater Stations (RSs).
The 802.16 specification has evolved during the last decade and undergone gradual expansion.
The original specification  was approved in 2001, making the IEEE 802.16 a wireless MAN
standard. It was developed to provide a high data rate and Point To Point (PTP) communication
between fixed subscriber stations. The standard introduced the use of licensed frequencies
ranging from 10 to 66 GHz so that interference is reduced. However, due to the short wavelength
of the used signal, a Line of Sight (LoS) condition is required. Moreover, in this standard
multipath propagation was not supported. A typical usage of this standard consists of connecting, via a point-to-fixed-point backhaul, a tower to a fixed location which is connected to a wired
network. In this specification, directional antennae are used at both sides, and Frequency
Division Duplexing (FDD) or Time Division Duplexing (TDD) is supported. Using highly
directional antennae, the network could support a data rate of 32–134 Mbps with a channel of 28
MHz. The cell radius ranges between 2 and 5 km. The security protection techniques are rudimentary and rely on antenna directivity to protect against intrusions.
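The LoS requirement in the 10–66 GHz range follows from the short wavelength, λ = c/f; a quick computation of the band-edge wavelengths:

```python
def wavelength_m(freq_hz: float) -> float:
    """Wavelength lambda = c / f."""
    return 3e8 / freq_hz

# At the band edges used by the original specification:
print(f"10 GHz -> {wavelength_m(10e9) * 100:.1f} cm")  # 3.0 cm
print(f"66 GHz -> {wavelength_m(66e9) * 1000:.1f} mm") # 4.5 mm
```

At centimetre and millimetre wavelengths, signals are easily blocked by obstacles and diffract poorly around them, which is why a clear line of sight is required.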
WiMAX is based on the IEEE 802.16 standards and the WiMAX Forum Network Working Group
(NWG) specification. The specification of the physical and MAC layer of the radio link, which is
the focus of the IEEE 802.16 standards, is not sufficient to build an interoperable broadband
wireless network. In fact, end-to-end services such as IP connectivity, QoS and security and
mobility management are a requisite. In this context, the standardization and development of the
end-to-end related aspects, including requirements, architecture and protocols, are beyond the
802.16 standards and are the responsibility of the WiMAX Forum’s Network Working Group
(NWG). A Network Reference Model (NRM), which provides a unified network architecture, was developed to:
• promote interoperability between WiMAX operators and equipment by defining key functional
entities and the interfaces between them (referred to as reference points, over which a network
interoperability framework is defined);
• support modularity and flexibility by allowing several types of decomposition of functions;
• support fixed, nomadic and mobile deployments and provide decomposition of access network
and connectivity networks;
• share the network between several business models, namely the Network Access Provider (NAP), the Network Service Provider (NSP) and the Application Service Provider (ASP). The first owns the network and its operation, the second provides IP connectivity and core network services
to the WiMAX network and the last provides application services. Figure 2.2 shows the
important functional entities of WiMAX reference model, namely, the Subscriber Station (SS) or
the Mobile Station (MS), the Access Service Network (ASN) and the Connectivity Service
Network (CSN). The first two are owned by the NAP while the last is owned by the NSP. Since the architecture has changed as 802.16 versions have progressed, we focus on describing the model that corresponds to the latest operational version, namely IEEE 802.16e (mobile WiMAX). The mobile
station is the equipment used by the end user to access the network. It could be a mobile station
or any device that supports multiple hosts. The ASN represents the WiMAX access network for
subscribers, which provides the interface between the MS and the CSN. It comprises one or
several Base Stations (BS) and one or several ASN gateways. The ASN is in charge of managing
radio resources including handover control and execution, performing layer-two connectivity
with the MSs, interoperating with other ASNs, relaying functionalities between the CSN and the MS to establish IP-layer connectivity, and performing paging and location management. The
base station equipment is what actually provides the interface between the MS and the WiMAX
network. It implements functionalities related to WiMAX PHY and MAC layers in compliance
with 802.16 standard, and is characterized by a coverage radius and a frequency assignment. The
number of base stations within an ASN is equal to the number of assigned frequencies and their
deployment depends on the required bandwidth or the geographic coverage. The coverage radius
of a BS ranges between 500 and 900 metres in an urban area, and is planned by operators to
cover 4 km in a rural area. Such a radius is comparable to that covered by base stations in GSM and UMTS networks today.
The ASN-GW represents a layer-two traffic aggregation point within the ASN. It supports
functions related to connection and mobility management, paging, DHCP Proxying/relaying,
authentication of subscribers’ identities, AAA (Authentication, Authorization, and Accounting)
client functionality, service flow management, caching of subscribers’ profile and encryption
keys and routing to the CSN. The ASN-GW may also implement the Foreign Agent (FA) in
mobile IP, which provides connectivity to mobile users who visit the network and stores information about them. It advertises care-of addresses in order to route packets which are sent to
the mobile node. The ASN-GW plays the role of authenticator and key distributor by
relaying signaling to the AAA server and verifying user credentials during network entry and re-entry
using EAP (Extensible Authentication Protocol). A security context is created when the AAA
session is established and keys are generated and shared between the MS and the BS. The AAA
client in the ASN-GW collects per-flow accounting information, including the number of bits transmitted or received, and the duration. The ASN-GW is also responsible for managing profiles and policy functions, which include, but are not limited to, the allowed QoS rate and the type of flow. In addition, a context is maintained per mobile subscriber and BS. This context,
which includes subscriber profile, security context and characteristics of the mobile equipment,
is exchanged between serving BSs during handover. The CSN represents the core network of WiMAX and is responsible for transport, authentication and switching. It executes functions
related to admission control and billing and enables IP connectivity to the WiMAX subscribers
by allocating IP addresses to them. The CSN also hosts the DHCP and AAA servers, the VoIP (Voice over IP) gateways and the Mobile IP Home Agent (HA). The CSN is responsible for roaming through links to other CSNs and for interconnecting with non-WiMAX networks, including the Public Switched Telephone Network (PSTN) and 3G cellular networks (it hosts gateways to
these networks). The CSN enables Inter-CSN tunneling to support roaming between NSPs. The
set of reference points represents conceptual links that are used to connect two different
functional entities. The first reference point, defined by R1, connects the MS to the BS. It
represents the air interface defined in the physical layer and the Medium Access Control sublayer
and implements the IEEE 802.16 standard. The second reference point, R2, is a logical
interface that exists between the MS and ASN-GW or CSN. It is associated with authentication
and authorization, and is used for IP host configuration management, and mobility and service
management. The third reference point, R3, is the interface between the ASN and CSN. It
supports authentication, authorization, policy and mobility management between ASN and CSN.
It also implements a tunnel between the ASN and CSN. The interface R4 exists between two
ASNs and ensures the interworking of ASNs when a mobile station moves between them. The
R5 reference point is an interface between two CSNs and is used for internetworking between
the home and visited NSPs when a mobile station is in a visited network. This interface carries activities such as roaming.
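For quick reference, the five reference points described above can be summarized as a small lookup table. This is a documentation aid reflecting the text, not a normative data structure:

```python
# Summary of the WiMAX NRM reference points as described in the text.
REFERENCE_POINTS = {
    "R1": ("MS", "BS", "air interface (802.16 PHY/MAC)"),
    "R2": ("MS", "ASN-GW/CSN", "authentication, IP host configuration, mobility"),
    "R3": ("ASN", "CSN", "AAA, policy, mobility management, tunneling"),
    "R4": ("ASN", "ASN", "inter-ASN mobility / handover interworking"),
    "R5": ("CSN", "CSN", "roaming between home and visited NSP"),
}

def endpoints(rp: str) -> tuple:
    """Return the pair of functional entities a reference point connects."""
    a, b, _ = REFERENCE_POINTS[rp]
    return a, b

print(endpoints("R3"))  # ('ASN', 'CSN')
```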
The IEEE 802.16 protocol architecture is composed of two main layers as shown in Figure 2.3,
namely the Physical (PHY) layer and the Medium Access Control (MAC) layer. The MAC layer
is itself composed of three sublayers.
The first sublayer is the Service Specific Convergence Sub-layer (CS). It communicates with higher layers, receives external network data through the CS Service Access Point (SAP), transforms it into MAC Service Data Units (SDUs) and maps higher-level transmission parameters into 802.16 MAC-layer flows and associations. Several higher-level protocols can be implemented in different CSs. At present, there are two types of CS: the Asynchronous Transfer Mode (ATM)
Convergence Sub-layer which is used for ATM networks and services, and the Packet
Convergence Sub-layer which is used for packet services including Ethernet, Point-to-Point
(PPP) protocol, and TCP/IP.
The second sublayer is the Common Part Sub-layer (CPS). It represents the core of the standard and is responsible for system access, bandwidth allocation, connection management and packing or fragmenting multiple MAC SDUs into MAC PDUs. Functions such as uplink scheduling, bandwidth request and grant, and connection control are also defined here. Using the packing and fragmentation features, in addition to Packet Header Suppression (PHS), repetitive information, especially datagram headers, is removed, thus saving available bandwidth. In this context, it could be argued that the IEEE 802.16 standard does not fully respect the OSI model, which requires appending a header to the forwarded datagram and guaranteeing transparency and independence between layers.
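The Packet Header Suppression idea mentioned above can be sketched in a few lines: bytes that match a per-connection template agreed by both ends are dropped by the sender and restored by the receiver. The template value and function names here are illustrative, not taken from the standard:

```python
# Minimal sketch of the PHS idea; field values and names are illustrative.

def phs_suppress(packet: bytes, template: bytes) -> bytes:
    """Sender side: drop the leading bytes that match the agreed template."""
    n = len(template)
    assert packet[:n] == template, "header changed; suppression not applicable"
    return packet[n:]

def phs_restore(payload: bytes, template: bytes) -> bytes:
    """Receiver side: reinsert the suppressed header bytes."""
    return template + payload

template = bytes.fromhex("45000054")    # e.g. a repetitive IP-header prefix
pkt = template + b"user data"
wire = phs_suppress(pkt, template)      # only the 9 payload bytes cross the link
assert phs_restore(wire, template) == pkt
```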
The third sublayer, the security sublayer, addresses authentication, authorization, key establishment and exchange, and the encryption and decryption of data exchanged between the PHY and the MAC layers. The 802.16 standard offers several security measures to defeat a wide
variety of security attacks. The details of security properties, mechanisms and threats will be
discussed in greater detail later in this chapter. Roughly speaking, secure key distribution is done through the use of Privacy Key Management (PKM) and node identification is performed using X.509 certificates. The security of connections is maintained through the use of Security Associations (SAs), which support several algorithms (e.g. AES or DES encryption) and can be of two types: data SA or authorization SA. Every SA is assigned an SA Identifier (SAID) by the BS. Authentication between the base station and mobile/subscriber stations relies on the use of PKM.
Network Entry Procedure
To gain access to the network and perform initialization, the SS goes through a multi-step procedure.
First, an SS must scan for a suitable downlink signal from the BS and try to synchronize with it by detecting the periodic frame preambles. This downlink channel will later be used for establishing channel parameters. If a channel was in use previously, the SS tries to reuse the operational parameters already determined. Otherwise, it will scan the channel using all the frequencies in the supported band.
Second, the SS looks for the Downlink Channel Descriptor (DCD) and the Uplink Channel
Descriptor (UCD), which are broadcast by the BS and contain information regarding the
characteristics of the uplink and downlink channels, the modulation type and the Forward Error Correction (FEC) scheme. The SS also listens for the uplink-map and downlink-map messages, denoted UL-MAP and DL-MAP respectively, and detects their burst profiles.
Third, the SS performs initial ranging, which allows it to set the physical parameters properly
(e.g. acquiring the timing offset of the network, requesting power adjustment). The SS performs
the initial ranging by sending a Ranging Request packet (RNG-REQ) in the initial ranging
contention slot. If the RNG-REQ message is received correctly by the BS, it responds with a
ranging response packet (RNG-RSP) to adjust the SS transmission time, frequency and power
and inform the SS of its primary management Connection ID (CID). The subsequent RNG-REQ and RNG-RSP messages will be exchanged using this CID. Note that ranging is also performed periodically by the SS; in that case the RNG-REQ is sent in the data intervals granted by the BS in order to adjust power levels and time and frequency offsets. Once ranging is completed, the SS informs the BS of its physical capabilities (e.g., modulation and coding schemes, half- or full-duplex support in FDD). At this stage, the BS can decide to accept or reject these capabilities.
Fourth, the SS requests authorization to enter the network by executing the PKM protocol. The
procedure of authenticating the SS and performing key exchange will be discussed in the
description of the PKM protocol. Upon completion of this phase, the SS has a set of
authentication and encryption keys.
Fifth, the SS registers to the BS by sending a registration request message. The BS responds with
a registration response message which contains a secondary management CID and the IP version
used for that connection. The reception of the registration response message means that the SS is
now registered to the network and allowed to access it. Next, the SS invokes the Dynamic Host
Configuration Protocol (DHCP) to get parameters related to IP connectivity (e.g., IP address),
and obtains the network time of day. Optionally, the BS may proceed to set up connections for
the service flows pre-provisioned during SS initialization.
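The five entry steps above can be sketched as a simple ordered state machine. The step names paraphrase the text, and the control flow is a deliberate simplification of the real protocol:

```python
# Network-entry steps as an ordered list; an illustrative simplification.
ENTRY_STEPS = [
    "scan_and_sync",     # find a DL signal, detect frame preambles
    "learn_channels",    # read DCD/UCD and UL-MAP/DL-MAP messages
    "initial_ranging",   # RNG-REQ / RNG-RSP, obtain the primary management CID
    "authorize",         # PKM authentication and key exchange
    "register_and_ip",   # registration, then DHCP for IP parameters
]

def advance(state: str) -> str:
    """Return the next entry step, or 'operational' after the last one."""
    i = ENTRY_STEPS.index(state)
    return ENTRY_STEPS[i + 1] if i + 1 < len(ENTRY_STEPS) else "operational"

s = ENTRY_STEPS[0]
while s != "operational":
    s = advance(s)
print(s)  # operational
```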
WiMAX PHY FRAME
The PHY frames are divided into downlink and uplink subframes in TDD mode. A downlink subframe is separated from the uplink subframe by a TTG gap, and the gap between the uplink subframe and the subsequent downlink subframe is called the RTG gap, as shown in Figure Z.1.
Z.2.1 OFDM PHY Downlink Subframe
The content of each downlink (DL) subframe is as follows.
● The preamble is used for synchronization; it is the first OFDM symbol of the frame.
● The frame control header (FCH) is one OFDM symbol long and follows the preamble. It provides frame configuration information such as the MAP message length, the coding scheme and the available subchannels. It also contains the downlink frame prefix (DLFP), which specifies the burst profiles and lengths of one or several downlink bursts immediately following the FCH.
● The downlink map (DL-MAP) indicates the burst profile, location, and duration of zones
within the DL frame. If present, the DL-MAP message is the first MAP PDU transmitted after
FCH. It is BPSK modulated and protected with rate 1/2 code by the mandatory coding scheme.
● The uplink map (UL-MAP) provides the subchannel and slot allocation and other control information for the uplink (UL) subframe. It should immediately follow the DL-MAP (if one is transmitted) or the DLFP.
● The downlink channel descriptor (DCD) is transmitted by the BS at periodic intervals to define
the characteristics of a downlink frame.
● The uplink channel descriptor (UCD) is transmitted by the BS at periodic intervals to define
the characteristics of an uplink frame.
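One property worth noting in the layout above is the mandated relative order of the DL subframe elements. A minimal sketch that checks this ordering (an illustration of the rule, not a real frame parser):

```python
# Mandated relative order of DL subframe elements, per the description above.
EXPECTED_ORDER = ["preamble", "FCH", "DL-MAP", "UL-MAP", "bursts"]

def valid_dl_subframe(elements: list) -> bool:
    """True if the given elements appear in the mandated relative order."""
    positions = [EXPECTED_ORDER.index(e) for e in elements]
    return positions == sorted(positions)

print(valid_dl_subframe(["preamble", "FCH", "DL-MAP", "UL-MAP", "bursts"]))  # True
print(valid_dl_subframe(["FCH", "preamble", "bursts"]))                      # False
```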
OFDM PHY Uplink Subframe
The UL subframe is quite different as it requires coordination between the various SSs transmitting upwards. In this book, we consider only the downlink scenario and thus the DL subframes only. UL subframes are only briefly described here. Their content is as follows.
1. Contention slots allowing initial ranging.
2. Contention slots allowing bandwidth requests.
3. One or more uplink PHY PDUs, each transmitted on a burst. Each of these PDUs is an uplink
subframe transmitted from a different SS. A PDU may transmit an SS MAC message.
WiMAX MAC PDUS
As seen in Section Z.2, in DL subframes, several MAC PDUs are aggregated in bursts. Section
8.3, page 232, presents techniques for performing a reliable burst segmentation in the presence of
noise. The content of each MAC PDU is introduced below. Each MAC PDU begins with a fixed-
size header, followed by a variable length payload and an optional CRC. There are two types of
MAC PDU headers.
● The generic MAC header (GMH) is the header of MAC frames containing either MAC
management messages or CS data. The CS data may be user data or other higher-layer management data. The generic MAC header is the only one used in the downlink.
● The bandwidth request header (BRH) is not followed by any MAC PDU payload or CRC. This frame name was introduced by the 802.16e amendment; previously, in 802.16-2004, the BRH was defined to request additional bandwidth. The content of these headers is presented below.
Z.3.1 Generic MAC Header
As we are considering only the downlink case, where the connection is already established and the MAC PDUs contain only CS data, only the GMH is possible inside the downlink bursts. The format of the GMH, as specified in IEEE 802.16-2004 (IEEE, 2004), is illustrated in Figure Z.2. The various fields of a GMH are as follows.
● The Header Type (HT) consists of a single bit set to 0 for GMH.
● The Encryption Control (EC) field specifies whether the payload is encrypted or not and is set
to 0 when the payload is not encrypted and to 1 when it is encrypted.
● The Type field indicates the subheaders and special payload types present in the message
payload (five possible subheader types).
● The Reserved (Rsv) field is two bits long and is set to 00 (binary).
● The Extended Subheader Field (ESF) consists of a single bit set to 1 if an extended subheader is present and immediately follows the GMH (applicable in both the downlink and the uplink).
● The CRC Indicator (CI) field is a single bit set to 1 if a CRC is included and is set to 0 if no
CRC is included.
● The Encryption Key Sequence (EKS) field is two bits. It is the index of the Traffic Encryption
Key (TEK) and initialization vector used to encrypt the payload. Obviously, this field is only
meaningful if the EC field is set to 1.
● The Length (LEN) field is 11 bits long. It specifies the length in bytes of the MAC PDU
including the MAC header and the CRC, if present.
● The Connection IDentifier (CID) field is 16 bits long and represents the connection identifier
of the user.
● The Header Check Sequence (HCS) field is 8 bits long and is used to detect errors in the header.
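Given the field widths above, a GMH can be packed into its 6 bytes and the HCS computed as a CRC-8 with generator x⁸ + x² + x + 1 over the first 5 header bytes. The sketch below assumes the IEEE 802.16e bit layout (the exact split of reserved bits differs between 802.16-2004 and the 802.16e amendment) and is illustrative rather than validated implementation code:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 with generator x^8 + x^2 + x + 1, as used for the 802.16 HCS."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def pack_gmh(ec: int, type_: int, esf: int, ci: int, eks: int,
             length: int, cid: int) -> bytes:
    """Pack a 6-byte Generic MAC Header (illustrative sketch).

    Assumed 802.16e bit layout: HT(1) EC(1) Type(6) | ESF(1) CI(1) EKS(2)
    Rsv(1) LEN[10:8](3) | LEN[7:0](8) | CID(16) | HCS(8).
    """
    b0 = (0 << 7) | (ec << 6) | (type_ & 0x3F)                  # HT = 0 for a GMH
    b1 = (esf << 7) | (ci << 6) | (eks << 4) | ((length >> 8) & 0x07)
    b2 = length & 0xFF
    b3, b4 = (cid >> 8) & 0xFF, cid & 0xFF
    first_five = bytes([b0, b1, b2, b3, b4])
    return first_five + bytes([crc8(first_five)])               # append the HCS

hdr = pack_gmh(ec=1, type_=0, esf=0, ci=1, eks=2, length=64, cid=0x1234)
assert len(hdr) == 6 and crc8(hdr[:5]) == hdr[5]
```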
Bandwidth Request Header
Bandwidth request headers, shown in Figure Z.3, have the same size as generic MAC frame headers, but their content differs. A bandwidth request PDU consists only of a header and does not contain any payload. The fields of the header are provided below.
● The HT field is a single bit set to 1 for BRHs.
● The EC is a single bit set to 0, indicating no encryption.
● The type field is 3 bits long and indicates the type of BRH. It can take two values, namely 000
for incremental and 001 for aggregate bandwidth request.
● The bandwidth request (BR) field is 19 bits long and indicates the number of bytes requested.
● The CID field is 16 bits long and represents the CID of the connection for which uplink
bandwidth is requested.
● The HCS field is 8 bits long and is used to detect errors in the header.
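Because the two header types share the same 6-byte size, a receiver distinguishes them by the HT bit. A small sketch of this demultiplexing, using the BRH field widths listed above (illustrative, not production code):

```python
def parse_header(hdr: bytes) -> dict:
    """Demultiplex a 6-byte 802.16 MAC header on the HT bit (illustrative)."""
    assert len(hdr) == 6
    if hdr[0] >> 7 == 0:                # HT = 0: generic MAC header
        return {"kind": "GMH"}
    # HT = 1: bandwidth request header, laid out as
    # HT(1) EC(1) Type(3) BR(19) CID(16) HCS(8) = 48 bits.
    type_ = (hdr[0] >> 3) & 0x07        # 000 incremental, 001 aggregate
    br = ((hdr[0] & 0x07) << 16) | (hdr[1] << 8) | hdr[2]
    cid = (hdr[3] << 8) | hdr[4]
    return {"kind": "BRH", "type": type_, "bytes_requested": br, "cid": cid}

# An aggregate request for 1500 bytes on CID 0x0021 (HCS byte left as a stub):
h = bytes([0x88, 0x05, 0xDC, 0x00, 0x21, 0x00])
print(parse_header(h))  # {'kind': 'BRH', 'type': 1, 'bytes_requested': 1500, 'cid': 33}
```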
LTE: Long Term Evolution
The 3GPP Technical Report (TR) 36.913 details the requirements that LTE-Advanced must satisfy. The document stresses backward compatibility with LTE in targeting IMT-Advanced. It does, however, also indicate that support for non-backward-compatible entities will be made if substantial gains can be achieved. Minimizing complexity and cost and enhancing service delivery are strongly emphasized. The objective of reduced complexity is an involved one, but it includes minimizing system complexity in order to stabilize the system and its inter-operability at earlier stages, and to decrease the cost of terminal and core network elements. For these
requirements, the standard will seek to minimize the number of deployment options, abandon
redundant mandatory features and reduce the number of necessary test cases. The latter can be a
result of reducing the number of states of protocols, minimizing the number of procedures, and
offering appropriate parameter range and granularity. Similarly, a low operational complexity of
the UE can be achieved through supporting different RIT, minimizing mandatory and optional
features and ensuring no redundant operational states. Enhanced service delivery, with special
care to Multimedia Broadcast/ Multicast Service (MBMS), will be made. MBMS is aimed at
realizing TV broadcast over the cellular infrastructure. Such services, however, are expected to be undersubscribed in 3G networks. It is hence critical to enhance MBMS services for 4G networks, as they will be a key differentiating and attractive offering.
LTE-Advanced will feature several operational features. These include relaying, where different
levels of wireless multihop relay will be applied, and synchronization between various network
elements without relying on dedicated synchronization sources. Enabling co-deployment (joint
LTE and LTE-Advanced) and co-existence (with other IMT-Advanced technologies) is also to
be supported. Self-organization, self-healing and self-optimization will enable plug-and-play
addition of infrastructure components, especially in the case of relays and indoor BSs. The use of
femtocells, very short-range coverage BSs, will enhance indoor service delivery. Finally, LTE-
Advanced systems will also feature advanced radio resource management functionalities, with
special emphasis on flexibility and opportunism, and advanced antenna techniques, where
multiple antennas and multi-cell MIMO techniques will be applied.
LTE-Advanced will support peak data rates of 1 Gbps for the downlink, and a minimum of 100
Mbps for the uplink. The target uplink data rate, however, is 500 Mbps. For latencies, the
requirements are 50 ms for idle to connected and 10 ms for dormant to connected. The system
will be optimized for 0–190 km/h mobility, and will support up to 500 km/h, depending on
operating band. For spectral efficiency, LTE-Advanced requirements generally exceed those of
IMT-Advanced; for example, the system targets a peak of 30 bps/Hz for the downlink and 15
bps/Hz for the uplink, while average spectrum efficiencies (bps/Hz/cell) are expected to reach 3.7
(4 × 4 configuration) for the downlink and 2.0 (2 × 4 configuration) for the uplink. Support for
both TDD and FDD, including half duplex FDD, will be made possible. The following spectrum
bands are targeted.
• 790–862 MHz (*)
• 4400–4990 MHz (*)
The bands marked with (*) are not within the IMT-Advanced requirements, and some
IMT-Advanced bands may not be supported by LTE-Advanced. These bands are the 1710–2025,
2110–2200 and 2500–2690 MHz bands.
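The peak rate and peak spectral efficiency targets can be related by a back-of-envelope calculation; the 40 MHz aggregated bandwidth used below is an illustrative assumption, not a figure from the requirements.

```python
# Hedged back-of-envelope: peak data rate = peak spectral efficiency x bandwidth.
peak_se_dl = 30       # bps/Hz, downlink target
peak_se_ul = 15       # bps/Hz, uplink target
bandwidth_hz = 40e6   # illustrative aggregated bandwidth assumption

peak_dl = peak_se_dl * bandwidth_hz  # 1.2 Gbps
peak_ul = peak_se_ul * bandwidth_hz  # 600 Mbps

print(peak_dl >= 1e9)    # consistent with the 1 Gbps downlink requirement
print(peak_ul >= 500e6)  # consistent with the 500 Mbps uplink target
```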
3GPP’s Long Term Evolution (LTE) is a mobile broadband access technology founded as a
response to the need to support the increasing demand for high data rates.
The standard for LTE is a milestone in the development of 3GPP technologies. It came as an
answer to the competition in performance and cost of IEEE 802.16-2009, to maintain the 3GPP
systems’ share of the cellular communications market.
Overview of LTE Networks
LTE is the Radio Access Network (RAN) of the Evolved Packet System (EPS). The network
core component of EPS, called Evolved Packet Core (EPC) or System Architecture Evolution
(SAE), is designed to be a completely IP-centric network that provides QoS support and ensures
revenue and security. Figure 9.1 shows the basic architecture components of LTE, which consists
of enhanced nodeBs (eNBs) at the RAN, and Mobility Management Entities (MMEs) and
Serving Gateways (S-GW) at the core. The eNBs interconnect through an interface called the X2
interface, while they are connected to entities at the core (MMEs and S-GWs) using the S1
interface.
The LTE architecture depends on a network configuration that is simpler than that of its
predecessor, the UMTS Terrestrial Radio Access Network (UTRAN). In LTE, which is also
called evolved UTRAN
(EUTRAN), RAN considerations and decisions are all handled by the eNB, while relevant
considerations for the core network are processed at the core. The split further identifies the
boundaries between the two network management and control strata, where the Access
Stratum (AS) is mostly handled by the eNBs and the Non-Access Stratum (NAS) is handled by
the various entities at the core.
The LTE Channel Model in Downlink Direction
All higher layer signaling and user data traffic are organized in channels. As in UMTS, logical
channels, transport channels and physical channels have been defined as shown in Figure 4.9.
Their aim is to offer different pipes for different kinds of data on the logical layer and to separate
the logical data flows from the properties of the physical channel below.
On the logical layer, data for each user is transmitted in a logical Dedicated Traffic Channel
(DTCH). Each user has an individual DTCH. On the air interface, however, all dedicated
channels are mapped to a single shared channel that occupies all resource blocks. As described
above, some symbols in each resource block are assigned for other purposes and hence cannot be
used for user data. Which resource blocks are assigned to which user is decided by the scheduler
in the eNode-B for each subframe, that is, once per millisecond.
Mapping DTCHs to a single shared channel is done in two steps. First, the logical DTCHs of all
users are mapped to a transport layer Downlink Shared Channel (DL-SCH). In the second step,
this data stream is then mapped to the Physical Downlink Shared Channel (PDSCH).
Transport channels can not only multiplex data streams from several users but also multiplex
several logical channels of a single user before they are finally mapped to a physical channel. An
example is as follows: A UE that has been assigned a DTCH also requires a control channel for
the management of the connection. Here, the messages that are required, for example, for
handover control, neighbor cell measurements and channel reconfigurations are sent. The DTCH
and the Dedicated Control Channel (DCCH) are multiplexed on the DL-SCH before they are
mapped to the PDSCH, that is, to
individual resource blocks. In addition, even most of the cell-specific information that is sent on
the logical broadcast control channel (BCCH) is also multiplexed on the transport downlink
shared channel as shown in Figure 4.9. In LTE, all higher layer data flows are eventually mapped
to the physical shared channel, including the Paging Control Channel (PCCH), which is used for
contacting mobile devices that are in a dormant state to inform them of new IP packets arriving
from the network side. The PCCH is first mapped to the transport layer Paging Channel (PCH),
which is then mapped to the PDSCH. The only exception to the general mapping of all higher
layer information to the shared channel is the transmission of a small number of system
parameters that are required by mobile devices to synchronize to the cell. They are transmitted
on the Physical Broadcast Channel (PBCH), which occupies three symbols on 72 subcarriers (=
6 RBs) in the middle of a channel every fourth frame. Hence, it is broadcast every 40
milliseconds and follows the primary and secondary synchronization signals.
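The downlink channel mappings described above can be summarized in a small lookup sketch. The channel names are taken from the text; splitting the BCCH into an MIB part and a remaining SI part is a simplification for illustration.

```python
# Sketch of the downlink logical -> transport -> physical channel mappings.
logical_to_transport = {
    "DTCH": "DL-SCH",        # user data
    "DCCH": "DL-SCH",        # dedicated signaling
    "BCCH (SI)": "DL-SCH",   # most system information
    "BCCH (MIB)": "BCH",     # minimal parameters for cell synchronization
    "PCCH": "PCH",           # paging
}
transport_to_physical = {"DL-SCH": "PDSCH", "PCH": "PDSCH", "BCH": "PBCH"}

def physical_channel(logical: str) -> str:
    """Resolve a logical channel to the physical channel it is eventually mapped to."""
    return transport_to_physical[logical_to_transport[logical]]
```

As the text notes, every mapping except the MIB ends up on the shared PDSCH.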
Downlink Management Channels
As discussed in the previous section, most channels are mapped to a single PDSCH, which
occupies all resource blocks in the downlink direction except for those symbols in each resource
block which are statically assigned for other purposes. Consequently, a mechanism is required to
indicate to each mobile device when, where and what kind of data is scheduled for it on
the shared channel and which resource blocks they are allowed to use in the uplink direction.
This is done via Physical Downlink Control Channel (PDCCH) messages.
The downlink control information occupies the first one to four symbols over the whole channel
bandwidth in each subframe. The number of symbols that are used for this purpose is broadcast
via the Physical Control Format Indicator Channel (PCFICH), which occupies 16 symbols. This
flexibility was introduced so that the system can react to changing signaling bandwidth
requirements, that is, the number of users that are to be scheduled in a subframe. This topic is
discussed in more detail in Section 4.5. Downlink control data is organized in Control Channel
Elements (CCEs). One or more CCEs contain one signaling message that is either addressed to
one device, or, in the case of broadcast information, to all mobile devices in the cell. To reduce
the processing requirements and power consumption of mobile devices for decoding the control
information, the control region is split into search spaces. A mobile device therefore does not
have to decode all CCEs to find a message addressed to itself but only those in the search spaces
assigned to it. And finally, some symbols are reserved to acknowledge the proper reception of
uplink data blocks or to signal to the mobile device that a block was not received correctly. This
functionality is referred to as Hybrid Automatic Retransmission Request (HARQ) and the
corresponding channel is the Physical Hybrid Automatic Retransmission Request Indicator
Channel (PHICH).
In summary, it can be said that the PDSCH is transmitted in all resource blocks over the
complete system bandwidth. In each resource block, however, some symbols are reserved for
other purposes such as the reference signals, the synchronization signals, the broadcast channel,
the control channel, the physical control format indicator channel and the HARQ indicator
channel. The number of symbols that are not available to the shared channel depends on the
resource block and its location in the resource grid. For each signal and channel, a mathematical
formula is given in 3GPP TS 36.211 so that the mobile device can calculate where it can
find a particular kind of information.
4.3.7 System Information Messages
As in GSM and UMTS, LTE uses system information messages to convey information that is
required by all mobile devices that are currently in the cell. Unlike in previous systems, however,
only the Master Information Block (MIB) is transported over the broadcast channel. All other SI
is scheduled in the PDSCH and their presence is announced on the PDCCH in a search space that
has to be observed by all mobile devices.
Table 4.4 gives an overview of the different system information blocks, their content and
example repetition periods as described in 3GPP TS 36.331. The most important SI is
contained in the MIB and is repeated every 40 milliseconds. Cell-related parameters are
contained in the system information block 1 that is repeated every 80 milliseconds. All other
System Information Blocks (SIBs) are grouped into SI messages whose periodicities are variable.
Which SIBs are contained in which SI message and their periodicities are announced in SIB 1.
Not all of the SIBs shown in Table 4.4 are usually broadcast as some of them are functionality
dependent. In practice, the MIB and, in addition, SIB-1 and SIB-2 are always broadcast because
they are mandatory. They are followed by SIB-3 and optionally SIB-4. SIB-5 is required only if
the LTE network uses more than one carrier frequency. The broadcast of SIB-6 to SIB-8 then
depends on the other radio technologies used by the network operator.
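A minimal sketch of this scheduling logic is given below. The SI message groupings and periods are illustrative examples only; in a real network they are announced in SIB-1, not fixed.

```python
# MIB and SIB1 have fixed repetition periods; the remaining SIBs are grouped
# into SI messages whose periods vary. The values here are illustrative.
periodicity_ms = {"MIB": 40, "SIB1": 80}
si_messages = {160: ["SIB2"], 320: ["SIB3", "SIB4"]}  # period -> grouped SIBs

def broadcast_at(t_ms: int) -> list:
    """Return everything scheduled for broadcast at time t (in ms)."""
    out = [name for name, p in periodicity_ms.items() if t_ms % p == 0]
    for period, sibs in si_messages.items():
        if t_ms % period == 0:
            out.extend(sibs)
    return out
```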
The LTE Channel Model in Uplink Direction
In the uplink direction, a similar channel model is used as in the downlink direction. There are
again logical, transport and physical channels to separate logical data streams from the physical
transmission over the air interface and to multiplex different data streams onto a single channel.
As shown in Figure 4.10, the most important channel is the Physical Uplink Shared Channel
(PUSCH). Its main task is to carry user data in addition to signaling information and signal
quality feedback. Data from the PUSCH are split into three different logical channels. The
channel that transports the user data is referred to as the DTCH. In addition, the DCCH is used
for higher layer signaling information. During connection establishment, signaling messages are
transported over the Common Control Channel (CCCH).
A mobile device that is not synchronized with the network has to request the assignment of
resources on the PUSCH. This is required in the following cases:
• The mobile has been dormant for some time and wants to reestablish the connection.
• A radio link failure has occurred and the mobile has found a suitable cell again.
• During a handover process, the mobile needs to synchronize with a new cell before user data
traffic can be resumed.
• Optionally for requesting uplink resources.
Synchronizing and requesting initial uplink resources is performed with a random access
procedure on the Physical Random Access Channel (PRACH). In most cases, the network does
not know in advance that a mobile device wants to establish communication. In these cases, a
contention-based procedure is performed as it is possible that several devices try to access the
network with the same Random Access Channel (RACH) parameters at the same time. This will
result in either only one signal being received or, in the network, no transmissions being received
at all. In both cases, a contention resolution procedure ensures that the connection establishment
attempt is repeated. Figure 4.11 shows the message exchange during a random access procedure.
In the first step, the mobile sends one of the 64 possible random access preambles on the RACH.
If correctly received, the network replies with a random access response on the PDSCH, which
contains:
• a timing advance value so that the mobile can synchronize its transmissions;
• a scheduling grant to send data via the PUSCH;
• a temporary identifier for the UE that is only valid in the cell to identify further messages
between a particular mobile device and the eNode-B.
The mobile device then returns the received temporary UE identifier to the network. If properly
received, that is, if only a single mobile tries to reply with the temporary UE identifier, the
network finalizes the procedure with a contention resolution message. During handovers, the
new eNode-B is aware that the mobile is attempting a random access procedure and can thus
reserve
dedicated resources for the process. In this case, there is no risk of several devices using the
same resources for the random access procedure and no contention resolution is required. As this
random access procedure is contention free, only the first two messages shown in Figure 4.11 are
required.
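The exchange can be sketched as a short message sequence; the message names below paraphrase Figure 4.11.

```python
# Sketch of the random access message exchange: four messages in the
# contention-based case, only the first two when contention free.
CONTENTION_BASED = [
    "1: random access preamble (UE -> eNode-B, PRACH)",
    "2: random access response (eNode-B -> UE, PDSCH)",
    "3: UE returns temporary identifier (PUSCH)",
    "4: contention resolution (eNode-B -> UE)",
]

def rach_messages(contention_free: bool) -> list:
    """Messages exchanged for a given random access variant."""
    return CONTENTION_BASED[:2] if contention_free else CONTENTION_BASED
```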
On the network side, the eNode-B measures the time of the incoming transmission relative to its
own timing. The farther the mobile device is from the base station, the later the signal arrives. As
in GSM, the network then informs the mobile device to adjust its transmission time. The farther
the mobile device is from the base station, the earlier it has to start its transmissions for the signal
to reach the base station at the right time. This is also referred to as the timing advance and the
system can compensate transmission distances of up to 100 km.
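The 100 km figure can be checked with a back-of-envelope calculation. The step size (16 Ts with Ts = 1/30.72 MHz) and the maximum initial timing advance command value (1282) used below are assumptions based on the Release 8 numerology.

```python
# Back-of-envelope check of the ~100 km timing advance range.
C = 3e8                    # speed of light, m/s
TS = 1 / 30.72e6           # basic LTE time unit, s (assumption)
max_ta_s = 1282 * 16 * TS  # maximum compensable round-trip delay, ~668 us

# The signal travels to the eNode-B and back, hence the division by two.
max_distance_km = max_ta_s * C / 2 / 1000
print(round(max_distance_km))  # roughly 100 km
```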
When a mobile device has been granted resources, that is, it has been assigned resource blocks
on the PUSCH, the shared channel is used for transmitting user data and also for transmitting
lower layer signaling data, which is required to keep the uplink connection in place and to
optimize the data transmission over it. The following lower layer information is sent alongside
user data packets:
• The Channel Quality Indicator (CQI) that the eNode-B uses to adapt the modulation and coding
scheme for the downlink direction.
• MIMO-related parameters (see Section 4.3.9).
• HARQ acknowledgments so that the network can quickly retransmit faulty data blocks.
In some cases,
lower layer signaling information has to be transferred in the uplink direction, while no resources
are assigned to a mobile device on the uplink shared channel. In this case, the mobile can send its
signaling data such as HARQ acknowledgments and scheduling requests on the Physical Uplink
Control Channel (PUCCH), which uses the first and the last resource blocks of a carrier. In other
words, the PUCCH is located at the edges of the transmission channel. As in the downlink
direction, some information is required for the receiver, in this case the eNode-B, to estimate the
uplink channel characteristics per mobile device. This is done in two ways:
During uplink transmissions, Demodulation Reference Signals (DRS) are embedded in all
resource blocks for which a mobile has received uplink scheduling grants. These symbols use the
fourth symbol row of each resource block for this purpose. As the eNode-B knows the content of
the
DRS symbols, it can estimate the way the data transmission on each subcarrier is altered by the
transmission path. As the same frequency is used by all mobile devices, independent of the
eNode-B with which they communicate, different phase shifts are used for the DRS symbols
depending on the eNode-B for which the transmission is intended. This way, the eNode-B can
filter out transmissions intended for other eNode-Bs, which are, from its point of view, noise.
A second optional reference is the Sounding Reference Signal (SRS) that allows the network to
estimate the uplink channel quality in different parts of the overall channel for each mobile
device. As the uplink quality is not necessarily homogeneous for a mobile device over the
complete channel, this can help the scheduler to select the best RBs for each mobile device.
When activated by the network, mobile devices transmit the SRS in the last symbol of
configured subframes. The bandwidth and the number of SRS stripes for a UE can be chosen by
the
network. Further, the SRS interval can be configured between 2 and 160 milliseconds.
4.3.9 MIMO Transmission
In addition to higher order modulation schemes such as 64-QAM, which encodes 6 bits in a
single transmission step, 3GPP Release 8 specifies and requires the use of multiantenna
techniques, also referred to as Multiple Input Multiple Output (MIMO) in the downlink
direction. While this functionality has also been specified for HSPA in the meantime, it is not yet
widely used for this technology because of backward compatibility issues and the necessity to
upgrade the hardware of already installed UMTS base stations. With LTE, however, MIMO has
been rolled out with the first network installations.
The basic idea behind MIMO techniques is to send several independent data streams over the
same air interface channel simultaneously. In 3GPP Release 8, the use of two or four
simultaneous streams is specified. In practice, up to two data streams are used today. MIMO is
only used for the shared channel and only to transmit those resource blocks assigned to users that
experience very good signal conditions. For other channels, only a single-stream operation with a
robust modulation and coding is used as the eNode-B has to ensure that the data transmitted over
those channels can reach all mobile devices independent of their location and current signal
conditions. Transmitting simultaneous data streams over the same channel is possible only if the
streams remain largely independent of each other on the way from the transmitter to the receiver.
This can be achieved if two basic requirements are met.
On the transmitter side, two or four independent hardware transmit chains are required to create
the simultaneous data streams. In addition, each data stream requires its own antenna. For two
streams, two antennas are required. In practice, this is done within a single antenna casing by
having one internal antenna that transmits a vertically polarized signal while the other antenna is
positioned in such a way as to transmit its signal with a horizontal polarization. It should be
noted at this point that polarized signals are already used today in other radio technologies such
as UMTS to create diversity, that is, to improve the reception of a single signal stream.
A MIMO receiver also requires two or four antennas and two or four independent reception
chains. For small mobile devices such as smartphones, this is challenging because of their
limited size. For other mobile devices, such as notebooks or netbooks, antennas for MIMO
operation with good performance are much easier to design and integrate. Here, antennas do not
have to be printed on the circuit board but can, for example, be placed around the screen or
through the casing of the device. The matter is further complicated because each radio interface
has to support more than one frequency band and possibly other radio technologies such as
GSM, UMTS and CDMA, which have their own frequencies and bandwidths.
The second requirement that has to be fulfilled for MIMO transmission is that the signals have to
remain as independent as possible on the transmission path between the transmitter and the
receiver. This can be achieved, for example, as shown in Figure 4.12, if the simultaneous
transmissions reach the mobile device via several independent paths. This is possible even in
environments where no direct line of sight exists between the transmitter and the receiver. Figure
4.12 is a simplification; however, as in most environments, the simultaneous transmissions
interfere with each other to some degree, which reduces the achievable speeds.
In theory, the use of two independent transmission paths can double the achievable throughput
and four independent transmission paths can quadruple the throughput. In practice, however,
throughput gains will be lower because of the signals interfering with each other. Once the
interference gets too strong, the modulation scheme has to be lowered, that is, instead of using
64-QAM and MIMO together, the modulation is reduced to 16-QAM. Whether it is more
favorable to use only one stream with 64-QAM or two streams with 16-QAM and MIMO
depends on the characteristics of the channel, and it is the eNode-B’s task to make a proper
decision on how to transmit the data. Only in very ideal conditions, that is, very short distances
between the transmitter and the receiver, can 64-QAM and MIMO be used simultaneously. As
modulation, coding and the use of MIMO can be changed every millisecond on a per device
basis, the system can react very quickly to changing radio conditions.
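The trade-off between a single high-order stream and two lower-order streams can be illustrated with a simple count of raw bits per transmission step; this deliberately ignores coding rate and inter-stream interference, which drive the actual decision in the eNode-B.

```python
# Raw bits per transmission step for different modulation/stream combinations.
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def raw_bits(modulation: str, streams: int) -> int:
    """Uncoded bits carried per transmission step (interference ignored)."""
    return BITS_PER_SYMBOL[modulation] * streams

single_64qam = raw_bits("64QAM", 1)  # 6 bits per step
dual_16qam = raw_bits("16QAM", 2)    # 8 bits per step
```

Dual-stream 16-QAM carries more raw bits than single-stream 64-QAM, so abandoning MIMO pays off only when interference would force an even lower modulation.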
In some documents, the terms 2 × 2 MIMO or 4 × 4 MIMO are used to describe the two or four
transmitter chains and two or four receiver chains. In the LTE specifications, the term ‘rank’ is
often used to describe the use of MIMO. Rank 1 transmission mode signifies a single-stream
transmission while rank 2 signifies a two-stream MIMO transmission for a resource block. The
term is derived from the rank of the channel matrix that describes the effects that the independent
data streams have on each other when they interfere with each other on the way from the
transmitter to the receiver. On a somewhat more abstract level, the 3GPP specifications use the
term ‘layers’ to describe the number of simultaneous data streams. MIMO is not used when only
a single layer is transmitted. A 2 × 2 MIMO requires two layers. Each transmission layer uses its
own reference symbols as described in Section 4.3.4. The corresponding symbols on the other
layer(s) have to be left empty. This slightly reduces the overall throughput of MIMO as the
number of symbols that can be used in a resource block for the shared channel is lower compared
to that in single-stream transmission. Two different MIMO operation schemes can be used by the
eNode-B for downlink transmissions. In the closed-loop MIMO operation mode, a precoding
matrix is applied on the data streams before they are sent over the air to change the modulation
of the two signal paths in a favorable way to increase the overall throughput at the receiver side.
The mobile device can inform the eNode-B as to which parameters in the precoding matrix
would give the best results. This is done by sending a Precoding Matrix Indicator (PMI), a Rank
Indicator (RI) and a CQI over the control channel. The RI informs the eNode-B about the
number of data streams that can be sent over the channel from the receiver’s point of view. The
CQI information is used by the eNode-B to decide as to which modulation (QPSK, 16-QAM, 64-
QAM) and which coding rate, that is, the ratio between user data bits and error detection bits in
the data stream, should be used for the transmission. For fast moving users, it is difficult to adapt
the precoding matrix quickly enough. Such scenarios are thus better handled with open-loop
MIMO, for which only the RI and the CQI are reported by the mobile device to the network.
And finally, multiple antennas can also be used for transmit and receive diversity. Here, the same
data stream is transmitted over several antennas with a different coding scheme on each. This
does not increase the transmission speed beyond what is possible with a single stream but it helps
the receiver to better decode the signal and, as a result, enhances data rates beyond what would be
possible with a single transmit antenna. In total, 12 different possibilities exist for using MIMO
or transmit diversity subfeatures so that the eNode-B has a wide range of options to adapt to
changing signal conditions.
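The notion of channel rank can be sketched for a symmetric 2 × 2 channel matrix [[a, b], [b, a]], whose singular values are |a + b| and |a − b|. The matrix values and the tolerance threshold below are illustrative assumptions, not values from the specification.

```python
# Effective rank of a symmetric 2x2 channel matrix [[a, b], [b, a]]:
# count a stream only if its singular value exceeds a fraction (tol)
# of the strongest one.
def effective_rank(a: float, b: float, tol: float = 0.05) -> int:
    sv = sorted([abs(a + b), abs(a - b)], reverse=True)
    return sum(1 for s in sv if s > tol * sv[0])

independent_paths = effective_rank(1.0, 0.1)   # weakly coupled paths: rank 2
correlated_paths = effective_rank(1.0, 0.99)   # strongly coupled: rank 1
```

A strongly correlated channel thus supports only a single stream, matching the rank-1 versus rank-2 distinction in the text.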
In the uplink direction, only single-stream transmission from the point of view of the mobile
device has been defined at this point. The eNode-B, however, can instruct several mobile devices
to transmit within the same resource block. The receiver then uses MIMO techniques to separate
the individually transmitted signals. This is not visible to the mobile devices and is referred to as
multiuser MIMO. The eNode-B is able to tell the data streams coming from different mobile
devices apart by instructing the mobile devices to apply a different cyclic shift to their reference
signals.
Protocol Layer Overview
In the previous sections, some of the most important functions of the different air interface
protocol layers such as HARQ, ARQ and ciphering have been discussed. These functions are
spread over different layers of the stack. Figure 4.14 shows how these layers are built on each
other and where the different functions are located.
On the vertical axis, the protocol stack is split into two parts. On the left side of Figure 4.14, the
control protocols are shown. The top layer is the NAS protocol that is used for mobility
management and other purposes between the mobile device and the MME. NAS messages are
tunneled through the radio network, and the eNode-B just forwards them transparently. NAS
messages are always encapsulated in Radio Resource Control (RRC) messages over the air
interface. The other purpose of RRC messages is to manage the air interface connection and they
are used, for example, for handover or bearer modification signaling. As a consequence, an RRC
message does not necessarily have to include a NAS message. This is different on the user data
plane shown on the right of Figure 4.14. Here, IP packets are always transporting user data and
are sent only if an application wants to transfer data. The first unifying protocol layer to transport
IP, RRC and NAS signaling messages is the PDCP layer. As discussed in the previous section, it
is responsible for encapsulating IP packets and signaling messages, for ciphering, header
compression and lossless handover support. One layer below is the RLC. It is responsible for
segmentation and reassembly of higher layer packets to adapt them to a packet size that can be
sent over the air interface. Further, it is responsible for detecting and retransmitting lost packets
(ARQ). Just above the physical layer is the MAC. It multiplexes data from different radio bearers
and ensures QoS by instructing the RLC layer about the number and the size of packets to be
provided. In addition, the MAC layer is responsible for the HARQ packet retransmission
functionality. And finally, the MAC header provides fields for addressing individual mobile
devices and for functionalities such as bandwidth requests and grants, power management and
time advance control.
The main functionalities carried out in each layer are summarized in the following [8–11].
• NAS (Non-Access Stratum)
– Connection/session management between UE and the core network.
– Bearer context activation/deactivation.
– Location registration management.
• RRC (Radio Resource Control)
– Broadcast system information related to Non-Access Stratum (NAS) and Access Stratum (AS).
– Establishment, maintenance, and release of RRC connection.
– Security functions including key management.
– Mobility functions.
– QoS management functions.
– UE measurement reporting and control of the reporting.
– NAS direct message transfer between UE and NAS.
• PDCP (Packet Data Convergence Protocol)
– Header compression.
– In-sequence delivery and retransmission of PDCP Service Data Units (SDUs) for
acknowledged mode radio bearers at handover.
– Duplicate detection.
– Ciphering and integrity protection.
• RLC (Radio Link Control)
– Error correction through Automatic Repeat request (ARQ).
– Segmentation according to the size of the transport block and re-segmentation in case a
retransmission is needed.
– Concatenation of SDUs for the same radio bearer.
– Protocol error detection and recovery.
– In-sequence delivery.
• MAC (Medium Access Control)
– Multiplexing/demultiplexing of RLC Packet Data Units (PDUs).
– Scheduling information reporting.
– Error correction through Hybrid ARQ (HARQ).
– Logical Channel Prioritization.
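The user-plane encapsulation order described above can be sketched as nested wrapping; the header contents are placeholders, and only the layer ordering follows the text.

```python
# Sketch of user-plane encapsulation: an IP packet passes PDCP, RLC and MAC
# on its way to the physical layer. Header bytes here are mere placeholders.
def pdcp(sdu: bytes) -> bytes:
    return b"PDCP|" + sdu  # ciphering, header compression

def rlc(sdu: bytes) -> bytes:
    return b"RLC|" + sdu   # segmentation, ARQ

def mac(sdu: bytes) -> bytes:
    return b"MAC|" + sdu   # multiplexing, HARQ

frame = mac(rlc(pdcp(b"ip-packet")))
```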