Chalmers & TELECOM Bretagne

Coding for phase noise channels

IVAN LELLOUCH
Department of Signals & Systems
Chalmers University of Technology
Gothenburg, Sweden 2011
Master's Thesis 2011:1

Abstract

Acknowledgements
Contents

1 Introduction
2 Bounds and capacity
  2.1 Capacity and information density
  2.2 Binary hypothesis testing
  2.3 Bounds
    2.3.1 Dependence testing bound [5]
    2.3.2 Meta converse bound [5]
3 AWGN channel
  3.1 The AWGN channel
    3.1.1 Information density
    3.1.2 Dependence testing bound
    3.1.3 Meta converse bound
4 Phase noise channels
  4.1 Uniform phase noise channel
    4.1.1 Information density
    4.1.2 Dependence testing bound
  4.2 Uniform phase noise AWGN channel
    4.2.1 Information density
    4.2.2 Dependence testing bound
  4.3 Tikhonov phase noise channel
    4.3.1 Information density
    4.3.2 Dependence testing bound
    4.3.3 Meta converse bound
5 Conclusion
Bibliography
List of Figures

3.1 DT and converse bounds for the AWGN channel, SNR = 0 dB, P_e = 10^-3
4.1 DT and constrained capacities for three uniform AM constellations
4.2 Tikhonov probability density function
4.3 Two 64-QAM constellations in the AWGN phase noise channel
4.4 Robust circular QAM constellation with phase noise
4.5 DT curves for two 64-QAM constellations in the AWGN phase noise channel, SNR = 0 dB
4.6 DT curves for two 64-QAM constellations in the AWGN phase noise channel, SNR = 15 dB
4.7 Comparison of DT curves for different phase noise powers
4.8 Comparison of DT curves for different probabilities of error
1 Introduction

Since Shannon's landmark paper [1], there have been many studies of the channel capacity, which is the amount of information we can reliably send through a channel. This result is a theoretical limit that requires an infinite block length. In practice, we want to know, for a given communication system, how far our system is from this upper bound.
When we design a system, there are two main parameters to determine: the error probability the system can tolerate and the delay constraint, which is related to the size of the message we want to send, i.e., the block length. Therefore, given these parameters, we want to find the new upper bound for our system. Thus we will work with a finite block length and a given probability of error.
Two bounds are defined in order to determine this new limit: the achievability bound and the converse bound.
The achievability bound is a lower bound on the size of the codebook, given a block length and error probability.
The converse bound is an upper bound on the size of the codebook, given a block length and error probability.
By using both of these bounds, we can determine an approximation of the theoretical limit on the information we can send through a channel, for a given block length and probability of error.
Achievability bounds already exist in the information theory literature. Three main bounds were derived by Feinstein [2], Shannon [3] and Gallager [4]. An optimization of auxiliary constants was needed in order to compute those bounds. Thanks to this work we have new insights into how far systems can operate from the capacity of the channel with a finite block length.
In a recent work [5], Polyanskiy et al. defined a new achievability bound that does not require any auxiliary constant optimization and is tighter than the three bounds in [2, 3, 4], as well as a converse bound.
This thesis is in the framework of the MAGIC project involving Chalmers University of Technology, Ericsson AB and Qamcom Technology AB; the context is microwave backhauling for IMT-Advanced and beyond. Part of this project is to investigate modulation and coding techniques for channels impaired by phase noise. In digital communication systems, the use of low-cost oscillators at the receiver causes phase noise, which can become a severe problem for high symbol rates and large constellation sizes. For channels impaired by phase noise, the codes we usually use do not perform as well as they do in classical channels.
In this thesis, we deal with two bounds from [5], an achievability bound and a converse bound. We apply those bounds to phase noise channels and see how far we are from the capacity and which rates we can reach given a block length and an error probability.
The outline of the thesis is as follows.
In Chapter 2, we introduce the capacity and the bounds that will be used in the following chapters. The main result is the expression of the dependence testing (DT) bound that we use to find a lower bound on the maximal coding rate. We explain how Polyanskiy et al. derived this bound and show how we can use it for our channel.
In Chapter 3, we first apply our results to the additive white Gaussian noise (AWGN) channel. It is useful to see how the equations work for a continuous noise channel, but also to determine the impact the phase noise will have on the maximal coding rate.
In Chapter 4, we present the main results of the thesis. We apply the DT bound to several partially coherent additive white Gaussian noise (PC-AWGN) channels, i.e., AWGN channels impaired by phase noise, and compare them with the bound for the AWGN channel and the constrained capacity. Therefore, the loss induced by the phase noise is estimated for a given channel, and this leads to an approximation of the maximal coding rate for the PC-AWGN channels we investigated. We also present a constellation designed for phase noise channels and show the performance improvements.
2 Bounds and capacity

2.1 Capacity and information density

We denote by A and B the input and output sets. If X and Y are random variables on A and B respectively, then x and y are their particular realizations. \mathcal{X} and \mathcal{Y} denote the probability spaces, and P_X and P_Y are the probability density functions of X and Y respectively. We also define P_{Y|X}, the conditional probability from A to B. Given a codebook (c_1,...,c_M), we denote by M its size.
Since we are interested in finite block length analysis, a realization x of a random variable X represents an n-dimensional vector, i.e., x = (x_1,...,x_n).
The capacity C of a channel is the maximum amount of information we can reliably send through it, with a vanishing error probability and an infinite block length. For input and output X and Y distributed according to P_X and P_Y respectively, the capacity C is given by

    C = \max_{P_X} I(X;Y)    (2.1)

where I(X;Y) is the mutual information between X and Y; the expression is maximized with respect to the choice of the input distribution P_X:

    I(X;Y) = \int_{\mathcal{X},\mathcal{Y}} p_{XY}(x,y) \log_2 \frac{p_{XY}(x,y)}{p_X(x)\,p_Y(y)} \,dx\,dy

where the logarithmic term is called the information density:

    i(x;y) = \log_2 \frac{p_{X,Y}(x,y)}{p_X(x)\,p_Y(y)}    (2.2)

It is proven in [6] that the capacity of the AWGN channel is achieved by a Gaussian input distribution and is given by
    C = \frac{1}{2}\log_2(1 + \mathrm{SNR})    (2.3)

where SNR is the signal-to-noise ratio, i.e., SNR = P/N, where P and N are the input and noise power respectively.
The capacity can be computed when we know which distribution maximizes (2.1). In this case we say that the distribution is capacity achieving. For some channels, such as phase noise channels, we have little information regarding the capacity. For those channels, we will choose an input and use it for our calculations. Thus, we will work with a constrained capacity, i.e., a capacity constrained to a specific input distribution, which is an upper bound on the information that can be sent through the channel for that input distribution.
The capacity can also be defined by using the rate R = \frac{\log_2 M}{n}:

    C = \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n}\log_2 M^*(n,\epsilon)    (2.4)

where n is the block length, \epsilon the probability of error, and M^* is defined as follows:

    M^*(n,\epsilon) = \max\{M : \exists\,(n,M,\epsilon)\text{-code}\}    (2.5)

2.2 Binary hypothesis testing

Later in this thesis we will need a binary hypothesis test in order to define an upper bound on the rate. We consider a random variable R defined as follows:

    R : \{P,Q\} \to \{1,0\}    (2.6)

where 1 indicates that P is chosen. We also consider the random transformation

    P_{Z|R} : R \to \{1,0\}    (2.7)

We define \beta_\alpha(P,Q) as the minimum probability of deciding P under Q, over all tests whose probability of deciding P under P is at least \alpha:

    \beta_\alpha(P,Q) = \inf_{P_{Z|R} :\; \sum_{r\in R} P_{Z|R}(1|r)P(r) \ge \alpha} \;\sum_{r\in R} P_{Z|R}(1|r)Q(r)    (2.8)

2.3 Bounds

Polyanskiy defined in [5] the DT bound and the meta converse bound over random codes. In this chapter we will start by describing those bounds and apply them to continuous channels.
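As a quick numerical companion to (2.3), the AWGN capacity can be evaluated directly; the function name and the SNR-in-dB convention below are illustrative choices, not part of the thesis.

```python
import math

def awgn_capacity(snr_db: float) -> float:
    """Capacity (2.3) of the real-valued AWGN channel, in bits per channel use."""
    snr = 10.0 ** (snr_db / 10.0)   # convert dB to linear scale
    return 0.5 * math.log2(1.0 + snr)

# At SNR = 0 dB (linear SNR = 1) the capacity is exactly 0.5 bit/ch.use,
# the limit that the DT and converse curves of Chapter 3 approach.
print(awgn_capacity(0.0))
```

This is the horizontal asymptote against which the finite-blocklength bounds of the following chapters are compared.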
2.3.1 Dependence testing bound [5]

We present the technique proposed in [5] for bounding the error probability for any input distribution, given a channel.

Theorem 1 (Dependence testing bound). Given an input distribution P_X on A, there exists a code with codebook size M whose average probability of error \epsilon is bounded by

    \epsilon \le \mathbb{E}\left[\exp\left(-\left|i(x,y) - \log_2\frac{M-1}{2}\right|^+\right)\right]    (2.9)

where

    |u|^+ = \max(0,u)    (2.10)

Proof. Let Z_x(y) be the following function:

    Z_x(y) = 1_{\left(i(x,y) > \log_2\frac{M-1}{2}\right)}    (2.11)

where 1_A(\cdot) is an indicator function:

    1_A(x) = \begin{cases} 1, & \text{if } x \in A, \\ 0, & \text{otherwise.} \end{cases}    (2.12)

For a given codebook (c_1,...,c_M), the decoder computes (2.11) for the codewords c_j, starting with c_1, until it finds Z_{c_j}(y) = 1, or else the decoder returns an error. Therefore, there is no error with probability

    \Pr\left[\{Z_{c_j}(y) = 1\} \cap \bigcap_{i<j}\{Z_{c_i}(y) = 0\}\right]    (2.13)

Then, we can write the error for the j-th codeword as

    \epsilon(c_j) = \Pr\left[\{Z_{c_j}(y) = 0\} \cup \bigcup_{i<j}\{Z_{c_i}(y) = 1\}\right]    (2.14)

Using the union bound on this expression and (2.11),

    \epsilon(c_j) \le \Pr[Z_{c_j}(y) = 0] + \sum_{i<j}\Pr[Z_{c_i}(y) = 1]    (2.15)

    \epsilon(c_j) \le \Pr\left[i(c_j,y) \le \log_2\frac{M-1}{2}\right] + \sum_{i<j}\Pr\left[i(c_i,y) > \log_2\frac{M-1}{2}\right]    (2.16)

The codebook is generated randomly according to the distribution P_X, and we denote by \bar{y} a realization of the random variable Y that is independent of the transmitted codeword. Thus, the probability of error if we send the codeword c_j is bounded by

    \epsilon(c_j) \le \Pr\left[i(x,y) \le \log_2\frac{M-1}{2}\right] + (j-1)\Pr\left[i(x,\bar{y}) > \log_2\frac{M-1}{2}\right]    (2.17)

Then, if we suppose that \Pr(c_j) = \frac{1}{M}, we have

    \epsilon = \frac{1}{M}\sum_{j=1}^{M}\epsilon(c_j)    (2.18)

and

    \epsilon \le \frac{1}{M}\sum_{j=1}^{M}\left(\Pr\left[i(x,y) \le \log_2\frac{M-1}{2}\right] + (j-1)\Pr\left[i(x,\bar{y}) > \log_2\frac{M-1}{2}\right]\right)    (2.19)

which finally gives the following expression for the average error probability:

    \epsilon \le \Pr\left[i(x,y) \le \log_2\frac{M-1}{2}\right] + \frac{M-1}{2}\Pr\left[i(x,\bar{y}) > \log_2\frac{M-1}{2}\right]    (2.20)

We know that

    \exp\left(-|i(x,y) - \log u|^+\right) = 1_{(i(x,y)\le\log u)} + u\,\frac{p(x)p(y)}{p(x,y)}\,1_{(i(x,y)>\log u)}    (2.21)

By averaging over p(x,y), and using \bar{y}, we have

    \mathbb{E}\left[\exp\left(-|i(x,y)-\log u|^+\right)\right] = \Pr(i(x,y)\le\log u) + u\sum_{x}\sum_{\bar{y}} p(x)p(\bar{y})\,1_{(i(x,\bar{y})>\log u)}    (2.22)

and knowing that \bar{y} is independent of x, this leads to

    \mathbb{E}\left[\exp\left(-|i(x,y)-\log u|^+\right)\right] = \Pr(i(x,y)\le\log u) + u\sum_{x}\sum_{\bar{y}} p(x,\bar{y})\,1_{(i(x,\bar{y})>\log u)}    (2.23)

and finally

    \mathbb{E}\left[\exp\left(-|i(x,y)-\log u|^+\right)\right] = \Pr(i(x,y)\le\log u) + u\,\Pr(i(x,\bar{y})>\log u)    (2.24)

Thus, replacing u = \frac{M-1}{2} and using (2.20), we obtain (2.9), which completes the proof.

This expression needs no auxiliary constant optimization and can be computed for a given channel once the information density is known. Applications to AWGN channels and phase noise channels are shown in the following chapters.
2.3.2 Meta converse bound [5]

The meta converse bound is an upper bound on the size of the codebook for a given error probability and block length. To define this bound, we use the binary hypothesis test defined in (2.8).

Theorem 2. Denote by A and B the input and output alphabets respectively. Consider two random transformations P_{Y|X} and Q_{Y|X} from A to B, and a code (f,g) with average probability of error \epsilon under P_{Y|X} and \epsilon' under Q_{Y|X}. The probability distribution induced by the encoder is P_X = Q_X. Then we have

    \beta_{1-\epsilon}(P_{Y|X}, Q_{Y|X}) \le 1 - \epsilon'    (2.25)

where \beta is the binary hypothesis test defined in (2.8).

Proof. We denote by s the input message chosen from (s_1,...,s_M) and by x = f(s) the encoded message. Also, y is the message before decoding and z = g(y) the decoded message. We define the following random variable to represent an error-free transmission:

    Z = 1_{s=z}    (2.26)

First we notice that the conditional distribution of Z given (X,Y) is the same for both channels P_{Y|X} and Q_{Y|X}:

    P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i, z = s_i \,|\, X,Y]    (2.27)

Then, given (X,Y), since the input and output messages are conditionally independent, we have

    P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i|X,Y]\, P[z = s_i|X,Y]    (2.28)

We can simplify the expression as follows:

    P[Z=1|X,Y] = \sum_{i=1}^{M} P[s = s_i|X]\, P[z = s_i|Y]    (2.29)

We recognize in the second term of the product the decoding function, while the first term is independent of the choice of the channel, given the definition of the probability distribution induced by the encoder, P_X = Q_X, that we introduced earlier. Then, using

    P_{Z|XY} = Q_{Z|XY}    (2.30)

we define the following binary hypothesis test:

    \sum_{x\in A}\sum_{y\in B} P_{Z|XY}(1|x,y)\, P_{XY}(x,y) = 1 - \epsilon    (2.31)

    \sum_{x\in A}\sum_{y\in B} P_{Z|XY}(1|x,y)\, Q_{XY}(x,y) = 1 - \epsilon'    (2.32)

and using the definition in (2.8) we get (2.25).

Theorem 3. Every code with M codewords in A and an average probability of error \epsilon satisfies

    M \le \sup_{P_X}\inf_{Q_Y}\frac{1}{\beta_{1-\epsilon}(P_{XY}, P_X \times Q_Y)}    (2.33)

where P_X ranges over all input distributions on A and Q_Y over all output distributions on B.
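For finite alphabets, the quantity \beta_\alpha(P,Q) of (2.8) that Theorem 3 relies on can be computed exactly via the Neyman-Pearson lemma: include outcomes in decreasing order of likelihood ratio P/Q and randomize on the boundary. The following sketch is an illustration I am adding (the function name and interface are not from the thesis), assuming both distributions have full support.

```python
import numpy as np

def beta_alpha(P, Q, alpha):
    """Exact beta_alpha(P,Q) of (2.8) on a finite alphabet via the
    Neyman-Pearson lemma: decide 'P' on the outcomes with the largest
    likelihood ratio P/Q, randomizing on the last outcome so that the
    probability of deciding 'P' under P is exactly alpha."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    order = np.argsort(-(P / Q))           # descending likelihood ratio
    p, q = P[order], Q[order]
    beta, mass = 0.0, 0.0
    for pi, qi in zip(p, q):
        if mass + pi < alpha:              # include the whole outcome
            mass += pi
            beta += qi
        else:                              # randomize on the boundary outcome
            beta += qi * (alpha - mass) / pi
            break
    return beta

# Sanity check: when P == Q the test cannot distinguish them, so beta = alpha.
P = np.array([0.5, 0.3, 0.2])
print(beta_alpha(P, P, 0.9))   # approximately 0.9
```

For well-separated distributions \beta_\alpha is much smaller than \alpha, which via (2.33) is exactly what forces M to be small.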
3 AWGN channel

In this chapter we apply the bounds discussed in the previous chapter to the AWGN channel. We know several results for this channel, such as the capacity. We will see how far the DT and meta converse bounds are from the capacity, and whether they are tight enough to give an idea of the maximum achievable rate for a given block length and error probability. The results for the AWGN channel will be useful in the next chapter to evaluate the effect of the phase noise on the achievable rates for AWGN channels impaired by phase noise.

3.1 The AWGN channel

Let us consider x \in \mathcal{X}, y \in \mathcal{Y} and the transition probability between X and Y, P_{XY}. We have the following expression for the AWGN channel:

    y = x + t    (3.1)

where the noise samples t \sim N(0, \sigma^2 I_n) are independent and identically distributed. Thus, we know the conditional output probability function

    P_{Y|X=x}(y) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} e^{-\frac{(y-x)^T(y-x)}{2\sigma^2}}    (3.2)

where (\cdot)^T is the transpose operation. We know from [6] that the Gaussian input distribution achieves the capacity of this channel, so we consider x \sim N(0, P I_n) and denote by P_X the corresponding probability distribution. Given the conditional output probability function and the input, the output distribution follows (summation of Gaussian variables):

    y \sim N(0, (\sigma^2 + P) I_n)    (3.3)
3.1.1 Information density

We can now define the information density of the channel using the distributions y|x \sim N(x, \sigma^2 I_n) and y \sim N(0, (\sigma^2+P) I_n):

    i(x,y) = \frac{\log_2 e}{2}\left[\frac{y^T y}{P+\sigma^2} - \frac{(y-x)^T(y-x)}{\sigma^2}\right] + \frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2}    (3.4)

which can be rewritten, with t_i = y_i - x_i the noise samples, as

    i(x,y) = \frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2} + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left(\frac{y_i^2}{P+\sigma^2} - \frac{t_i^2}{\sigma^2}\right)    (3.5)

3.1.2 Dependence testing bound

To compute the DT bound for the AWGN channel, we use (3.5) in (2.9):

    \epsilon \le \mathbb{E}\left[\exp\left(-\left|\frac{n}{2}\log_2\frac{P+\sigma^2}{\sigma^2} + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left(\frac{y_i^2}{P+\sigma^2} - \frac{t_i^2}{\sigma^2}\right) - \log_2\frac{M-1}{2}\right|^+\right)\right]    (3.6)

The expectation can then be computed with a Monte Carlo simulation. The samples are generated according to the model described in Section 3.1. For this simulation, we use the input distribution x \sim N(0, P I_n). This leads to the DT bound for the maximal coding rate on this channel, i.e., this bound is an upper bound on all other DT bounds for this channel. In practice, discrete constellations are used in real systems, so we also look at the results for a known discrete input constellation, which will be useful in the next chapter when we compare results for the AWGN and partially coherent AWGN channels.

3.1.3 Meta converse bound

We know that the input distribution and the noise are Gaussian with parameters P and \sigma^2 respectively; in the following we normalize the noise power to \sigma^2 = 1. Since the summation of two Gaussian random variables is a Gaussian random variable, we choose y \sim N(0, \sigma_Y^2 I_n) as the output distribution for the computation of the converse bound. We can now define the information density

    i(x,y) = n\log_2\sigma_Y + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left(\frac{y_i^2}{\sigma_Y^2} - (y_i - x_i)^2\right)    (3.7)

We choose the input such that ||x||^2 = nP. To simplify calculations, we use x = x_0 = (\sqrt{P},...,\sqrt{P}), which is possible because of the symmetry of the problem. Thus, using Z_i \sim N(0,1), H_n and G_n have the following distributions:
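The Monte Carlo evaluation of (3.6) described above can be sketched as follows. The function name and parameters are illustrative, the noise power is normalized to 1, and the computation is done in nats internally so the exponential and the threshold share a base; the threshold uses the approximation ln((M-1)/2) ≈ (nR-1) ln 2 for M = 2^{nR}.

```python
import numpy as np

def dt_bound_awgn(n, snr_db, rate, samples=20_000, seed=0):
    """Monte Carlo estimate of the DT bound (2.9)/(3.6) on the average
    error probability of the real AWGN channel with Gaussian input,
    for blocklength n and rate R = log2(M)/n."""
    rng = np.random.default_rng(seed)
    P = 10.0 ** (snr_db / 10.0)            # signal power; noise power = 1
    x = rng.normal(0.0, np.sqrt(P), (samples, n))
    t = rng.normal(0.0, 1.0, (samples, n))
    y = x + t
    # information density (3.5), converted to nats
    i_nats = (n / 2) * np.log(1 + P) \
             + 0.5 * ((y**2).sum(axis=1) / (1 + P) - (t**2).sum(axis=1))
    thr = (n * rate - 1) * np.log(2)       # ln((M-1)/2), M = 2^(n*rate), approx.
    return float(np.exp(-np.maximum(i_nats - thr, 0.0)).mean())

# Below capacity (0.5 bit/ch.use at SNR = 0 dB) the bound is small and
# shrinks with blocklength; close to capacity it is much larger.
eps_low = dt_bound_awgn(n=200, snr_db=0.0, rate=0.25)
eps_high = dt_bound_awgn(n=200, snr_db=0.0, rate=0.45)
```

The same recipe carries over to the phase noise channels of Chapter 4 by swapping in their information densities.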
    H_n = n\log_2\sigma_Y + \frac{nP}{2\sigma_Y^2}\log_2 e + \frac{\log_2 e}{2\sigma_Y^2}\sum_{i=1}^{n}\left[(1-\sigma_Y^2)Z_i^2 + 2\sqrt{P}\,Z_i\right]    (3.8)

and

    G_n = n\log_2\sigma_Y - \frac{nP}{2}\log_2 e + \frac{\log_2 e}{2}\sum_{i=1}^{n}\left[(1-\sigma_Y^2)Z_i^2 + 2\sqrt{P}\,\sigma_Y Z_i\right]    (3.9)

where H_n and G_n are the information densities under P_{Y|X} and P_Y respectively. Then, by choosing \sigma_Y^2 = 1 + P, we have

    H_n = \frac{n}{2}\log_2(1+P) + \frac{P\log_2 e}{2(1+P)}\sum_{i=1}^{n}\left(1 - Z_i^2 + \frac{2}{\sqrt{P}}Z_i\right)    (3.10)

and

    G_n = \frac{n}{2}\log_2(1+P) - \frac{P\log_2 e}{2}\sum_{i=1}^{n}\left(1 + Z_i^2 - 2\sqrt{1+\frac{1}{P}}\,Z_i\right)    (3.11)

Notice that H_n and G_n are affine functions of non-central \chi^2 random variables; thus we have

    H_n = \frac{n}{2}\left(\log_2(1+P) + \log_2 e\right) - \frac{P\log_2 e}{2(1+P)}\,y_n    (3.12)

with y_n \sim \chi_n^2\!\left(\frac{n}{P}\right), and

    G_n = \frac{n}{2}\left(\log_2(1+P) + \log_2 e\right) - \frac{P\log_2 e}{2}\,y_n    (3.13)

with y_n \sim \chi_n^2\!\left(n + \frac{n}{P}\right).
Finally, to compute the bound, we find \gamma_n such that

    \Pr[H_n \ge \gamma_n] = 1 - \epsilon    (3.14)

which leads to

    M \le \frac{1}{\Pr[G_n \ge \gamma_n]}    (3.15)

These expressions can be computed directly when closed-form expressions are available. For some channels we do not have them, and we then have to compute the bound with a Monte Carlo simulation; we will discuss the issue of estimating the second probability \Pr[G_n \ge \gamma_n], which decreases exponentially to 0, by this method.
In Fig. 3.1 we plot, for a real-valued AWGN channel, the rate in bits per channel use against the block length n. For this example, we use the capacity achieving input distribution x \sim N(0, P I_n) to compute the DT bound. In the following chapters, we will use discrete input distributions for the AWGN channel in order to compare with the results we find for partially coherent AWGN channels.
We see that the gap between the two curves, the DT bound and the converse, gets smaller as the block length gets larger. This result gives a good approximation of the maximal coding rate for this channel given the error probability. We also know, from the definition of the capacity, that both curves tend toward it as n grows to infinity.
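The \chi^2 representation (3.12)-(3.15) can be evaluated by sampling, since NumPy provides a non-central chi-square generator. The sketch below is my own illustration (function name and parameters are not from the thesis); as the text notes, \Pr[G_n \ge \gamma_n] decays exponentially in n, so this plain Monte Carlo approach only works for very short blocklengths, and the function returns None when no sample lands in the tail.

```python
import numpy as np

def converse_rate_awgn(n, P, eps, samples=400_000, seed=1):
    """Monte Carlo sketch of the meta converse (3.12)-(3.15) for the real
    AWGN channel with noise power 1: find gamma_n with
    Pr[H_n >= gamma_n] = 1 - eps, then bound R <= -log2(Pr[G_n >= gamma_n])/n."""
    rng = np.random.default_rng(seed)
    A = (n / 2) * (np.log2(1 + P) + np.log2(np.e))    # common constant
    cH = P * np.log2(np.e) / (2 * (1 + P))
    cG = P * np.log2(np.e) / 2
    H = A - cH * rng.noncentral_chisquare(n, n / P, samples)       # (3.12)
    G = A - cG * rng.noncentral_chisquare(n, n + n / P, samples)   # (3.13)
    gamma = np.quantile(H, eps)          # Pr[H_n >= gamma] = 1 - eps  (3.14)
    tail = np.mean(G >= gamma)           # Pr[G_n >= gamma_n]
    if tail == 0.0:                      # tail too small for this sample size
        return None
    return float(-np.log2(tail) / n)     # upper bound on the rate  (3.15)

# Very short blocklength so the tail is still estimable by Monte Carlo.
r = converse_rate_awgn(n=10, P=1.0, eps=1e-3)
```

For moderate and large n the tail underflows and one must fall back on the closed-form non-central \chi^2 distribution functions instead.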
[Figure 3.1: DT and converse bounds for the AWGN channel, SNR = 0 dB, P_e = 10^-3. Rate (bit/ch.use) versus blocklength n, showing the meta converse bound, the DT bound and the capacity.]
4 Phase noise channels

In this chapter we focus on channels impaired by phase noise. First, we present some results for a uniform phase noise channel and the equations that lead to the DT bound. Then we focus on two more realistic channels: the AWGN channel impaired with uniform phase noise, and with Tikhonov phase noise.

4.1 Uniform phase noise channel

We consider a uniform phase noise channel where \theta is the additive noise on the phase. This noise is distributed uniformly between -a and a, i.e., \theta \sim U(-a,a). If x \in \mathcal{X} and y \in \mathcal{Y} we have the following expressions:

    y = x e^{i\theta}    (4.1)

and

    y_k = x_k e^{i\theta_k}    (4.2)

Using an interleaver, we can assume that the noise is memoryless; then the conditional output distribution factorizes as

    p(y|x) = \prod_{k=1}^{n} p(y_k|x_k)    (4.3)

Notice that, whatever its realization, the phase noise cannot change the magnitude of a sample, i.e., |x_k| = |y_k|. Therefore, for this channel we only consider constellations whose points all belong to the same ring.
4.1.1 Information density

The information density is defined by

    i(x,y) = \log_2\frac{P_{Y|X=x}(y)}{P_Y(y)}    (4.4)

We know that p(y_k) = p(\theta_k); thus we have

    p(\theta_k) = \begin{cases} \frac{1}{2a}, & \text{if } |\theta_k| \le a, \\ 0, & \text{otherwise.} \end{cases}    (4.5)

Using (4.3) and (4.5) we obtain the expression of the conditional output distribution

    P_{Y|X=x}(y) = \prod_{i=1}^{n} p(y_i|x_i) = \begin{cases} \frac{1}{(2a)^n}, & \text{if } y \in \mathcal{Y}_x, \\ 0, & \text{otherwise,} \end{cases}    (4.6)

where

    \mathcal{Y}_x = \left\{ y \in \mathcal{Y} : \forall i \in [1,n],\ \left|\arg\frac{y_i}{x_i}\right| \le a \right\}    (4.7)

Then, using the law of total probability, we obtain the output distribution

    P_Y(y) = \frac{1}{|\mathcal{X}|}\sum_{x} p(y|x) = \frac{|\mathcal{X}_y|}{|\mathcal{X}|(2a)^n}    (4.8)

where

    \mathcal{X}_y = \left\{ x \in \mathcal{X} : \forall i \in [1,n],\ \left|\arg\frac{y_i}{x_i}\right| \le a \right\}    (4.9)

Finally, the information density for the uniform phase noise channel is given by the following expression:

    i(x,y) = \begin{cases} \log_2\frac{|\mathcal{X}|}{|\mathcal{X}_y|}, & \text{if } y \in \mathcal{Y}_x, \\ 0, & \text{otherwise.} \end{cases}    (4.10)

4.1.2 Dependence testing bound

Since the capacity achieving distribution for this channel is not known, we work with a given input constellation. Let the input alphabet be distributed according to an m-PSK modulation. Let n be the block length, M the size of the codebook \mathcal{M}, and E(M) the set of all M-size codebooks. The codebook is randomly chosen.
Given a probability of error \epsilon, we want to find the highest M such that the following expression holds:

    \epsilon \le \mathbb{E}\left[e^{-\left|i(x,y)-\log_2\frac{M-1}{2}\right|^+}\right]    (4.11)

Since we are using a discrete input, we can rewrite the expression by expanding the expectation over P_{XY}:

    \epsilon \le \sum_{x\in\mathcal{X}}\int_{\mathcal{Y}} p(x,y)\, e^{-\left|i(x,y)-\log_2\frac{M-1}{2}\right|^+}\, dy    (4.12)

Let z(x,y) denote the number of codewords compatible with the output y. We obtain

    \epsilon \le \sum_{z\in\mathbb{N}} P(z)\, e^{-\left|\log_2\frac{2M}{(M-1)z}\right|^+}    (4.13)

Then, the probability P(z) can be expanded as follows:

    P(z) = \sum_{x\in\mathcal{X}}\int_{\mathcal{Y}} P(z|x,y)\, p(y|x)\, p(x)\, dy    (4.14)

which in (4.13) gives

    \epsilon \le \sum_{z\in\mathbb{N}}\left[\sum_{x\in\mathcal{X}}\int_{\mathcal{Y}} P(z|x,y)\, p(y|x)\, p(x)\, dy\right] e^{-\left|\log_2\frac{2M}{(M-1)z}\right|^+}    (4.15)

Since the input is an m-PSK modulation and the phase noise is uniform, we know the expressions of p(y|x) and p(x):

    \epsilon \le \sum_{z\in\mathbb{N}}\left[\sum_{x\in\mathcal{X}}\int_{\mathcal{Y}_x} P(z|x,y)\,\frac{1}{(2a)^n}\frac{1}{m^n}\, dy\right] e^{-\left|\log_2\frac{2M}{(M-1)z}\right|^+}    (4.16)

Then, we can simplify the equation using the symmetry of the problem, by choosing a realization x_0 of X:

    \epsilon \le \sum_{z\in\mathbb{N}}\left[\int_{\mathcal{Y}_{x_0}} P(z|x_0,y)\,\frac{1}{(2a)^n}\, dy\right] e^{-\left|\log_2\frac{2M}{(M-1)z}\right|^+}    (4.17)

and by expanding the integration over y we obtain

    \epsilon \le \frac{1}{(2a)^n}\sum_{z\in\mathbb{N}}\left[\int_{-a}^{a}\cdots\int_{-a}^{a} P(z|x_0,y)\, dy_1\cdots dy_n\right] e^{-\left|\log_2\frac{2M}{(M-1)z}\right|^+}    (4.18)

Let V(x,y) be the number of neighbours in \mathcal{X} of y \in \mathcal{Y}_x, and let V = V(x,y) - 1. Then the probability P(z|x,y) can be written as follows:

    P(z|x,y) = \binom{V}{z}\,\frac{\prod_{j=0}^{z-1}(M-1-j)\ \prod_{j=0}^{V-z-1}(m^n-M-j)}{\prod_{j=0}^{V-1}(m^n-1-j)}    (4.19)

with P(z|x,y) = 0 if z > V(x,y).
Given the phase noise parameter a, and the number of points m in one ring of the constellation, we can determine the function v(y_k): the number of constellation points in one ring that the output y_k can come from. As soon as the points in each ring are equally spaced, v is a simple function taking only two values, v_1 and v_2. Then we can define two constants d_1 and d_2 by the following expressions:

    d_1 = \int_{y_k-a}^{y_k+a} 1(v(y_k) = v_1)\, dy_k    (4.20)

    d_2 = \int_{y_k-a}^{y_k+a} 1(v(y_k) = v_2)\, dy_k    (4.21)

Finally we have the following expression to compute the bound:

    \epsilon \le \frac{1}{(2a)^n}\sum_{z=0}^{\max(v_1^n, v_2^n)}\sum_{u=0}^{n}\binom{n}{u} d_1^u\, d_2^{n-u}\, p\!\left(z \,\middle|\, V = v_1^u v_2^{n-u}\right) e^{-\left|\log_2\frac{2M}{(M-1)(z+1)}\right|^+}    (4.22)

The complexity of this calculation depends on the complexity of p(z|V = v_1^u v_2^{n-u}). This is a product of V terms, so the complexity is O(2^n); the expression is therefore not computable in practice. In the next sections, we use partially coherent AWGN channels, whose expressions do not require the probability p(z|x,y), which makes the computation much faster.
4.2 Uniform phase noise AWGN channel

We consider an AWGN channel impaired with a uniform phase noise \theta \sim U(-a,a). If x \in \mathcal{X}, y \in \mathcal{Y} and t \sim N(0,\sigma^2), we have

    y = x e^{i\theta} + t    (4.23)

and

    y_k = x_k e^{i\theta_k} + t_k    (4.24)

For this channel we can define the information density as follows.

4.2.1 Information density

We need both expressions of P_Y and P_{Y|X} to determine the expression of i(x,y). First, we know that the noise is memoryless, which allows us to write

    P_{Y|X=x}(y) = \prod_{k=1}^{n} p(y_k|x_k)    (4.25)

where x_k, y_k and t_k can be written in polar coordinates as

    x_k = a_k e^{i b_k}, \quad t_k = c_k e^{i d_k}, \quad y_k = \alpha_k e^{i\beta_k}

which gives the following expression for the conditional output distribution (using polar coordinates):

    p(y_k|x_k) = \alpha_k \int_{\theta_k} p(\theta_k|x_k)\, p(t_k|\theta_k,x_k)\, d\theta_k    (4.26)

             = \alpha_k \int_{-a}^{a} \frac{1}{2a}\,\frac{1}{2\pi\sigma^2}\exp\left(-\frac{|t_k|^2}{2\sigma^2}\right) d\theta_k    (4.27)

where

    |t_k|^2 = |y_k - x_k e^{i\theta_k}|^2    (4.28)

We develop (4.28) as follows:

    |t_k|^2 = \left|\alpha_k\cos\beta_k + i\alpha_k\sin\beta_k - a_k\cos(\theta_k+b_k) - i a_k\sin(\theta_k+b_k)\right|^2    (4.29)

           = \left(\alpha_k\cos\beta_k - a_k\cos(\theta_k+b_k)\right)^2 + \left(\alpha_k\sin\beta_k - a_k\sin(\theta_k+b_k)\right)^2    (4.30)

           = \alpha_k^2 + a_k^2 - 2\alpha_k a_k\cos(\theta_k + b_k - \beta_k)    (4.31)

which used in (4.26) gives

    p(y_k|x_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right) d\theta_k    (4.32)

Then, using the law of total probability, we obtain the expression of the output probability density:

    p(y_k) = \sum_{u=0}^{m-1} p(x_{u,k})\, p(y_k|x_{u,k})    (4.33)

Now we need to choose the input distribution to determine the expression of the information density. We consider a set of M codewords (c_1,...,c_M) with the same probability. Then we have

    p(y_k) = \sum_{u=0}^{m-1}\frac{\alpha_k}{m}\,\frac{\exp\left(-\frac{a_{u,k}^2+\alpha_k^2}{2\sigma^2}\right)}{(2a)(2\pi\sigma^2)}\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k+b_{u,k}-\beta_k)}{\sigma^2}\right) d\theta_k    (4.34)

which finally gives the following expression for the information density:

    i(x,\alpha e^{i\beta}) = \sum_{k=1}^{n}\log_2\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right)d\theta_k}{\sum_{u=0}^{m-1}\frac{1}{m}\exp\left(-\frac{a_{u,k}^2+\alpha_k^2}{2\sigma^2}\right)\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k+b_{u,k}-\beta_k)}{\sigma^2}\right)d\theta_k}    (4.35)

and with some simplifications we obtain

    i(x,\alpha e^{i\beta}) = \sum_{k=1}^{n}\log_2\frac{\exp\left(-\frac{a_k^2}{2\sigma^2}\right)\int_{-a}^{a}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right)d\theta_k}{\sum_{u=0}^{m-1}\frac{1}{m}\exp\left(-\frac{a_{u,k}^2}{2\sigma^2}\right)\int_{-a}^{a}\exp\left(\frac{\alpha_k a_{u,k}\cos(\theta_k+b_{u,k}-\beta_k)}{\sigma^2}\right)d\theta_k}    (4.36)

We recognize in the information density expression the following integral:

    \int_0^s e^{\kappa\cos x}\, dx, \quad s \le \pi    (4.37)

A closed-form expression exists only for s = \pi, via the modified Bessel function of the first kind. In this case (4.32) becomes

    p(\alpha_k,\beta_k|a_k,b_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{2\pi\sigma^2}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(\frac{\alpha_k a_k\cos(\theta_k+b_k-\beta_k)}{\sigma^2}\right) d\theta_k    (4.38)

and, using the periodicity of the integrand, we can rewrite the expression as

    p(\alpha_k,\beta_k|a_k,b_k) = \alpha_k\,\frac{\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right)}{2\pi\sigma^2}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left(\frac{\alpha_k a_k\cos\theta_k}{\sigma^2}\right) d\theta_k    (4.39)
We notice that the expression is independent of \beta_k, the angle of y:

    p(\alpha_k|a_k,b_k) = \int_{-\pi}^{\pi} p(\alpha_k,\beta_k|a_k,b_k)\, d\beta_k    (4.40)

                       = 2\pi\, p(\alpha_k,\beta_k|a_k,b_k)    (4.41)

which leads to

    p(\alpha_k|a_k) = \frac{\alpha_k}{\sigma^2}\exp\left(-\frac{a_k^2+\alpha_k^2}{2\sigma^2}\right) I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right)    (4.42)

where I_0(\cdot) is the modified Bessel function of the first kind of order zero.

m-PSK input

Considering an m-PSK input, we have a_k = \sqrt{P} for all points, and P_Y can be written as

    P_Y(\alpha_k) = \sum_{u=1}^{m} p(a_{u,k})\, p(\alpha_k|a_k) = p(\alpha_k|a_k)    (4.43)

which leads to i(x,y) = 0. Given that we have a non-coherent AWGN channel with an m-PSK modulation, it is clear that no information can be sent through the channel.

Amplitude modulation

Now, we consider an amplitude modulation. If we have R points in our constellation, and a_r = \sqrt{P_r}, then

    i(x,y) = i(a,\alpha) = \sum_{k=1}^{n}\log_2\frac{\exp\left(-\frac{a_k^2}{2\sigma^2}\right) I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right)}{\frac{1}{R}\sum_{r=1}^{R}\exp\left(-\frac{P_r}{2\sigma^2}\right) I_0\!\left(\frac{\alpha_k\sqrt{P_r}}{\sigma^2}\right)}    (4.44)

Once again, given the channel, there is no information in the phase, so we can work with only the magnitude of each point.

4.2.2 Dependence testing bound

Now we want to determine the bound for this channel. First, we pick an input constellation, and then we use (4.44) in the DT bound (2.9), which leads to the expression

    \epsilon \le \mathbb{E}\left[\exp\left(-\left|\sum_{k=1}^{n}\log_2\frac{\exp\left(-\frac{a_k^2}{2\sigma^2}\right) I_0\!\left(\frac{\alpha_k a_k}{\sigma^2}\right)}{\frac{1}{R}\sum_{r=1}^{R}\exp\left(-\frac{P_r}{2\sigma^2}\right) I_0\!\left(\frac{\alpha_k\sqrt{P_r}}{\sigma^2}\right)} - \log_2\frac{M-1}{2}\right|^+\right)\right]    (4.45)

We use a Monte Carlo simulation to calculate this expression.
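The amplitude density (4.42) is a Rician-type law, and a quick numerical check that it integrates to 1 is a useful sanity test before plugging it into (4.44); NumPy's np.i0 provides the modified Bessel function I_0. The function name and the parameter values below are illustrative.

```python
import numpy as np

def p_alpha_given_a(alpha, a, sigma2):
    """Conditional amplitude density (4.42) of the non-coherent AWGN channel:
    a Rician-type law involving the modified Bessel function I0."""
    return (alpha / sigma2) * np.exp(-(a**2 + alpha**2) / (2 * sigma2)) \
           * np.i0(alpha * a / sigma2)

# Sanity check: the density should integrate to 1 over alpha in [0, inf);
# the mass is negligible beyond alpha = 12 for these moderate parameters.
alpha = np.linspace(0.0, 12.0, 4000)
pdf = p_alpha_given_a(alpha, a=1.0, sigma2=0.5)
dx = alpha[1] - alpha[0]
print((pdf * dx).sum())   # close to 1
```

The same routine, evaluated at the R amplitudes of the constellation, gives the numerator and denominator terms of (4.44) for the Monte Carlo evaluation of (4.45).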
Amplitude modulation input

For this constellation, we consider m points equally spaced with average power P = 1. In Fig. 4.1 we plot the rate, in bits per channel use, against the block length n. We present three constellations, with 8, 16 and 32 points. The Gaussian noise is set by SNR = 15 dB, and the error probability is P_e = 10^{-3}. For each constellation, the dependence testing bound and the constrained capacity are plotted.

Figure 4.1: DT and constrained capacities for three uniform AM constellations.

We see in Fig. 4.1 that, for a given constellation, the two curves, the DT bound and the constrained capacity, are tight when the block length is large. We can also notice that the gap between the two curves closes faster when the constellation has fewer points.

4.3 Tikhonov phase noise channel

A more realistic model of a system impaired by phase noise is the Tikhonov AWGN channel. We have a closed-form expression for the noise density and, using Lapidoth's result in [7], we can also obtain one for the conditional output.
We choose to study this model because it is a good approximation of the phase noise induced by a first-order phase-locked loop [? ].

We consider t ∼ N(0,σ²), the Gaussian noise, and θ the phase noise, distributed according to the Tikhonov distribution presented below. Then

y = x e^{i\theta} + t    (4.46)

and

y_k = x_k e^{i\theta_k} + t_k    (4.47)

Tikhonov distribution

The Tikhonov distribution, also known as the von Mises distribution [? ], is an approximation of the wrapped Gaussian, which is defined as follows

p_W(\theta) = \sum_{k\in\mathbb{Z}} p_\Theta(\theta+2k\pi) = \frac{1}{\sqrt{2\pi\sigma^2}} \sum_{k\in\mathbb{Z}} e^{-\frac{(\theta-2k\pi)^2}{2\sigma^2}}    (4.48)

Its support is [-\pi,\pi] and it depends on a parameter \rho. The probability density function is given by

p(x|\rho) = \frac{e^{\rho\cos x}}{2\pi I_0(\rho)}    (4.49)

In Fig. 4.2 we show the Tikhonov distribution for three values of the parameter \rho: the larger the parameter, the smaller the noise.

4.3.1 Information density

First, we need to determine the expression of the conditional output distribution. The noise is memoryless, so we can focus on p(y_k|x_k):

p(y_k|x_k) = \int_{-\pi}^{\pi} p_n(y_k|x_k,\theta_k)\, p_\theta(\theta_k)\, \mathrm{d}\theta_k    (4.50)

Using the expressions of the Gaussian pdf and the Tikhonov pdf, we have

p(y_k|x_k) = \int_{-\pi}^{\pi} \frac{1}{2\pi\sigma^2} \exp\left(-\frac{|y_k - x_k e^{j\theta_k}|^2}{2\sigma^2}\right) \frac{e^{\rho\cos\theta_k}}{2\pi I_0(\rho)}\, \mathrm{d}\theta_k    (4.51)
= \frac{1}{(2\pi)^2 \sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left(-\frac{|y_k - x_k e^{j\theta_k}|^2}{2\sigma^2} + \rho\cos\theta_k\right) \mathrm{d}\theta_k    (4.52)
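The quality of the approximation in (4.48)–(4.49) can be checked numerically; a small sketch assuming the usual small-noise matching \rho = 1/\sigma^2 (the function names and grid are ours):

```python
import math

def i0(z, terms=300):
    # power series for the modified Bessel function I0
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * k)
        total += term
        if term < total * 1e-16:
            break
    return total

def tikhonov_pdf(theta, rho):
    # (4.49): von Mises density with concentration parameter rho
    return math.exp(rho * math.cos(theta)) / (2 * math.pi * i0(rho))

def wrapped_gaussian_pdf(theta, sigma2, K=20):
    # (4.48): a N(0, sigma2) density wrapped onto [-pi, pi]
    return sum(math.exp(-(theta - 2 * math.pi * k) ** 2 / (2 * sigma2))
               for k in range(-K, K + 1)) / math.sqrt(2 * math.pi * sigma2)

# with rho = 1/sigma2 the two densities nearly coincide for small sigma2
sigma2 = 0.01  # i.e. rho = 100
worst = max(abs(tikhonov_pdf(t, 1 / sigma2) - wrapped_gaussian_pdf(t, sigma2))
            for t in [i * 0.01 - math.pi for i in range(629)])
print(worst)
```

The discrepancy is largest near the mode and shrinks as \rho grows, consistent with the figure below: larger \rho, smaller noise.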
Figure 4.2: Tikhonov probability density function for ρ = 10, 100 and 500.

We can now expand the expression in the exponential

|y_k - x_k e^{j\theta_k}|^2 = |y_k|^2 + |x_k|^2 - y_k^* x_k e^{j\theta_k} - y_k x_k^* e^{-j\theta_k}    (4.53)
= |y_k|^2 + |x_k|^2 - y_k^* x_k (\cos\theta_k + j\sin\theta_k) - y_k x_k^* (\cos\theta_k - j\sin\theta_k)    (4.54)
= |y_k|^2 + |x_k|^2 - 2\Re(y_k^* x_k)\cos\theta_k + 2\Im(y_k^* x_k)\sin\theta_k    (4.55)

which gives us the following expression for the conditional output distribution

p(y_k|x_k) = \frac{\exp\left(-\frac{|y_k|^2+|x_k|^2}{2\sigma^2}\right)}{(2\pi)^2 \sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left( \left(\frac{\Re(y_k^* x_k)}{\sigma^2}+\rho\right)\cos\theta_k - \frac{\Im(y_k^* x_k)}{\sigma^2}\sin\theta_k \right) \mathrm{d}\theta_k    (4.56)

Because of the symmetry of the problem, we choose to work in polar coordinates; thus we define x_k and y_k by

x_k = a_k e^{ib_k}
y_k = \alpha_k e^{i\beta_k}
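The expansion (4.53)–(4.55) is easy to verify numerically for arbitrary complex values (the sample values below are ours):

```python
import cmath, math

# numerical check of the expansion (4.53)-(4.55)
y, x, th = 0.8 - 0.3j, 1.2 + 0.5j, 0.7
lhs = abs(y - x * cmath.exp(1j * th)) ** 2
w = y.conjugate() * x  # y_k^* x_k
rhs = (abs(y) ** 2 + abs(x) ** 2
       - 2 * w.real * math.cos(th) + 2 * w.imag * math.sin(th))
print(abs(lhs - rhs))  # agrees up to floating-point rounding
```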
Then we have

y_k^* x_k = a\alpha\, e^{i(b-\beta)}    (4.57)

and

\Re(y_k^* x_k) = a\alpha \cos(b-\beta)    (4.58)
\Im(y_k^* x_k) = a\alpha \sin(b-\beta)    (4.59)

Using both equations we define

u = \frac{a\alpha}{\sigma^2}    (4.60)

and

A = \left(\frac{\Re(y_k^* x_k)}{\sigma^2}+\rho\right)^2 + \left(\frac{\Im(y_k^* x_k)}{\sigma^2}\right)^2    (4.61)

which leads to

p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2 \sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left( \sqrt{A} \left[ \cos\theta_k\, \frac{u\cos(b-\beta)+\rho}{\sqrt{A}} - \sin\theta_k\, \frac{u\sin(b-\beta)}{\sqrt{A}} \right] \right) \mathrm{d}\theta_k    (4.62)

We defined A such that

A = (u\cos(b-\beta)+\rho)^2 + (u\sin(b-\beta))^2    (4.63)

so we can find z such that

\cos z = \frac{u\cos(b-\beta)+\rho}{\sqrt{A}}    (4.64)

\sin z = \frac{u\sin(b-\beta)}{\sqrt{A}}    (4.65)

Then we have

p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2 \sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left(\sqrt{A}\cos(\theta_k+z)\right) \mathrm{d}\theta_k    (4.66)

which is equal to

p(y_k|x_k) = \frac{\alpha \exp\left(-\frac{a^2+\alpha^2}{2\sigma^2}\right)}{(2\pi)^2 \sigma^2 I_0(\rho)} \int_{-\pi}^{\pi} \exp\left(\sqrt{A}\cos\theta_k\right) \mathrm{d}\theta_k    (4.67)

since the integrand is 2\pi-periodic, so the shift by z leaves the integral over a full period unchanged.

Finally, we recognize in (4.67) the integral representation of the modified Bessel function of the first kind, which gives the following expression for the conditional output distribution
p_{Y|X}(y_k|x_k) = \frac{\alpha}{2\pi\sigma^2} \exp\left(-\frac{\alpha^2+a^2}{2\sigma^2}\right) \frac{I_0(\sqrt{A})}{I_0(\rho)}    (4.68)

To find an expression for the information density, we also need the output distribution P_Y. In this thesis we consider a discrete input constellation with M points (c_1,...,c_M) and P(c_i) = 1/M. Given this input, the output distribution can be computed as follows

P_Y(y_k) = \frac{1}{M} \sum_{i=1}^{M} p_{Y|X}(y_k|c_i)    (4.69)

and the information density is

i(x,y) = i(a e^{ib}, \alpha e^{i\beta}) = \log_2 \frac{\exp\left(-\frac{a^2}{2\sigma^2}\right) I_0(\sqrt{A})}{\sum_{i=1}^{M} \frac{1}{M} \exp\left(-\frac{a_i^2}{2\sigma^2}\right) I_0(\sqrt{A_i})}    (4.70)

where

A = \frac{a^2\alpha^2}{\sigma^4} + 2\rho\frac{a\alpha}{\sigma^2}\cos(b-\beta) + \rho^2    (4.71)

4.3.2 Dependence testing bound

We compare two constellations for the AWGN channel with Tikhonov phase noise: the classic 64-QAM constellation and a robust circular QAM constellation designed specifically for this channel [8]. The second constellation is designed to maximize the minimum distance between two points of the constellation; the algorithm presented in [8] gives an example of the constellation for a given phase noise.

In Fig. 4.3 we plot both constellations and their received versions after an AWGN channel with SNR = 30 dB impaired by a phase noise with ρ = 625 (σ_ph = 0.04).

In Fig. 4.4 we plot the robust circular 64-QAM constellation impaired by a Tikhonov phase noise with parameter ρ = 625.

In Fig. 4.5 we plot the DT curve and the constrained capacity for both constellations. We choose SNR = 0 dB, ρ = 625 and P_e = 10^{-3} for this simulation.

In Fig. 4.6 we plot the DT bound and the constrained capacity for both constellations. We choose SNR = 15 dB, ρ = 625 and P_e = 10^{-3} for this simulation.

In Fig. 4.7 we plot the DT bound for the robust circular 64-QAM constellation for two phase noise powers. We also plot the DT bound and the constrained capacity without phase noise, i.e., Gaussian noise only, and both the constrained and unconstrained capacities for this channel. We choose SNR = 15 dB and P_e = 10^{-3} for this simulation.
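As a sanity check, the conditional density (4.68), with A given by (4.71), should integrate to one over the output plane in polar coordinates; a numerical sketch (the parameter values and grid are arbitrary choices of ours):

```python
import math

def i0(z, terms=400):
    # power series for the modified Bessel function I0
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * k)
        total += term
        if term < total * 1e-16:
            break
    return total

def p_out(alpha, beta, a, b, sigma2, rho):
    # conditional output density (4.68) in polar coordinates (alpha, beta),
    # with A computed from (4.71)
    u = a * alpha / sigma2
    A = u * u + 2 * rho * u * math.cos(b - beta) + rho * rho
    return (alpha / (2 * math.pi * sigma2)
            * math.exp(-(alpha ** 2 + a ** 2) / (2 * sigma2))
            * i0(math.sqrt(A)) / i0(rho))

# midpoint-rule integration over alpha in [0, 3], beta in [-pi, pi]
a, b, sigma2, rho = 1.0, 0.3, 0.05, 50.0
d_alpha, d_beta = 3.0 / 300, 2 * math.pi / 300
total = sum(p_out((i + 0.5) * d_alpha, -math.pi + (j + 0.5) * d_beta,
                  a, b, sigma2, rho) * d_alpha * d_beta
            for i in range(300) for j in range(300))
print(total)  # close to 1
```

The same routine evaluates the numerator and denominator of (4.70) for a given constellation, which is what the DT simulations below rely on.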
Figure 4.3: Two 64-QAM constellations, without and with noise, in the AWGN phase noise channel.

In Fig. 4.8 we plot the DT bound for the robust circular 64-QAM constellation for two probabilities of error. We also plot the constrained capacity for this channel. We choose SNR = 0 dB and ρ = 100 for this simulation.
Figure 4.4: Robust circular QAM constellation with phase noise.

Figure 4.5: DT curves for two 64-QAM constellations in the AWGN phase noise channel with SNR = 0 dB.
Figure 4.6: DT curves for two 64-QAM constellations in the AWGN phase noise channel with SNR = 15 dB.

Figure 4.7: Comparison of DT curves for different phase noise powers.
Figure 4.8: Comparison of DT curves for different probabilities of error.

In Fig. 4.3 we see both constellations used in our simulations. The robust circular 64-QAM has been designed for phase noise channels with a noise power ρ = 625. The optimization criterion used is the maximization of the minimum distance between two adjacent rings.

In Fig. 4.5 we notice that the DT curves are the same for both constellations. Despite the differences between these constellations, the power of the Gaussian noise is too high for any difference to show. We can also notice that, even for a large block length (n = 2000), there is still a gap between the DT bound and the constrained capacity.

In Fig. 4.6 we do notice a difference between the two constellations: for small block lengths (n ≤ 100) the robust circular 64-QAM performs better than the regular 64-QAM. We also notice that the DT bound and the constrained capacity are tight, which gives us a better approximation of the maximal coding rate for this channel. From these curves we can also see that for high SNR we approach the capacity much faster than for small SNR; the gap between the DT bound and the constrained capacity is smaller for large SNR.

In Fig. 4.7 we see the impact of the phase noise power on the DT bound, i.e., the loss of coding rate between two channels with different phase noise powers. We also see that with the parameter ρ = 625, the maximal coding rate is very close to the coding rate without any phase noise. We also notice the loss induced by our constellation with respect to the capacity-achieving distribution.

In Fig. 4.8 we see the impact of the probability of error on the coding rate. We notice
that the difference between these curves appears at small block lengths, and that a larger block length is needed to reach a smaller probability of error.

4.3.3 Meta converse bound

As derived earlier, the conditional output distribution of our channel is

p_{Y|X}(R,\psi|r,\phi) = \frac{R}{2\pi\sigma^2} \exp\left(-\frac{R^2+r^2}{2\sigma^2}\right) \frac{I_0(\nu)}{I_0(\rho)}    (4.72)

where

\nu = \frac{R^2 r^2}{\sigma^4} + 2\rho\frac{Rr}{\sigma^2}\cos(\phi-\psi) + \rho^2

The meta converse bound requires us to pick an output distribution. For our case, we use the following distribution, which is capacity achieving at high SNR [7]:

R^2 \sim \chi_1^2    (4.73)
\psi \sim U(-\pi,\pi)    (4.74)
P_Y(R,\psi) = \frac{1}{2\pi} \frac{\exp\left(-\frac{R^2}{2}\right)}{\sqrt{2R^2}\,\Gamma\left(\frac{1}{2}\right)}    (4.75)

Thus, we can define the information density given those two distributions

i(x,y) = \sum_{i=1}^{N} \log_2 \frac{p_{Y|X=x_i}(y_i)}{P_Y(y_i)}    (4.76)

i(x,y) = \sum_{i=1}^{N} \log_2 \frac{\frac{R_i}{2\pi\sigma^2} \exp\left(-\frac{R_i^2+r_i^2}{2\sigma^2}\right) \frac{I_0(\nu_i)}{I_0(\rho)}}{\frac{1}{2\pi} \frac{\exp\left(-\frac{R_i^2}{2}\right)}{\sqrt{2R_i^2}\,\Gamma\left(\frac{1}{2}\right)}}    (4.77)

i(x,y) = N \log_2 \frac{\sqrt{2}\,\Gamma\left(\frac{1}{2}\right)}{\sigma^2 I_0(\rho)} + \sum_{i=1}^{N} \log_2 \left( R_i^2\, I_0(\nu_i) \exp\left(\frac{R_i^2}{2} - \frac{R_i^2+r_i^2}{2\sigma^2}\right) \right)    (4.78)

Then, we denote by G_n and H_n the information density under P_Y and P_{Y|X} respectively. To compute the converse bound, we have to find the parameter \gamma_n satisfying

P[H_n \ge \gamma_n] = 1 - \epsilon    (4.79)

and then use this parameter to determine the probability

P[G_n \ge \gamma_n]    (4.80)

The main issue for this bound is the calculation of the probability P[G_n \ge \gamma_n]: this value decreases exponentially to 0, and we have no closed-form expression to compute it. In the real-valued Gaussian case, we found a closed-form expression using the chi-square distribution.
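The two-step procedure (4.79)–(4.80) can be sketched on a toy binary hypothesis test, N(1,1) against N(0,1), standing in for P_{Y|X} and P_Y (this is our simplification, not the Tikhonov channel itself; the function names are ours):

```python
import math, random

def llr_sum(samples):
    # log-likelihood ratio log2(p1/p0) for N(1,1) vs N(0,1), summed over a block
    return sum((x - 0.5) / math.log(2) for x in samples)

def quantile_gamma(n, eps, trials=4000, seed=7):
    # estimate gamma_n with P[H_n >= gamma_n] = 1 - eps, H_n drawn under p1 (4.79)
    rng = random.Random(seed)
    hs = sorted(llr_sum([rng.gauss(1.0, 1.0) for _ in range(n)])
                for _ in range(trials))
    return hs[int(eps * trials)]

def tail_under_p0(gamma, n, trials=4000, seed=11):
    # naive Monte Carlo estimate of P[G_n >= gamma_n], G_n drawn under p0 (4.80);
    # this probability decays exponentially in n, so for the block lengths of
    # interest plain Monte Carlo returns 0 -- the computational issue noted above.
    # Here n is kept small so the estimate remains visible.
    rng = random.Random(seed)
    hits = sum(llr_sum([rng.gauss(0.0, 1.0) for _ in range(n)]) >= gamma
               for _ in range(trials))
    return hits / trials

n, eps = 20, 1e-3
g = quantile_gamma(n, eps)
tail = tail_under_p0(g, n)
print(g, tail)
```

For realistic n, the tail probability must instead be computed by importance sampling or an analytical approximation, which is exactly the open point left for future work.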
5 Conclusion

In this work we applied an achievability bound to phase noise channels in order to determine the maximum coding rate for such channels. First, we focused on a simple model with a uniform phase noise: we managed to find a closed-form expression for the DT bound, but the computational complexity was an issue. Then we moved on to two partially coherent AWGN channels. For the AWGN channel impaired by uniform phase noise, a closed-form expression was found for the non-coherent case, which gave some results. Finally, we obtained results for the AWGN channel impaired by a Tikhonov phase noise, and investigated the impact of all parameters (noise powers and probability of error) on the curves. Through both applications to phase noise channels, we can see that the DT bound and the constrained capacity associated with a constellation are very close at high SNR. This gives us a good idea of the achievable rate for a given block length and error probability. We also investigated the impact of different phase noise powers and the rate loss they induce. Moreover, we can see from the curves that for large block lengths (n > 500) and high SNR (SNR > 15 dB), more than 95% of the constrained capacity is already achieved. Given this information, we can evaluate the performance of codes and discuss the interest of using larger blocks. For small SNR, the gap between the achievability bound and the constrained capacity is still large; therefore, we do not have a tight approximation of the maximal coding rate.

As future work, we could study the meta converse bound and try to find an approximation for the binary hypothesis testing; in that way we could compute the upper bound and obtain a tighter approximation. Another extension of this thesis would be to take existing codes and evaluate their performance over PC-AWGN channels.
Bibliography

[1] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379–423, 1948.

[2] A. Feinstein, "A new basic theorem of information theory," IRE Trans. Inform. Theory, vol. 4, pp. 2–22, 1954.

[3] C. E. Shannon, "Certain results in coding theory for noisy channels," Inf. Contr., vol. 1, pp. 6–25, 1957.

[4] R. G. Gallager, "A simple derivation of the coding theorem and some applications," IEEE Trans. Inf. Theory, vol. 11, no. 1, pp. 3–18, 1965.

[5] Y. Polyanskiy, H. V. Poor, S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, 2010.

[6] T. Cover, J. Thomas, Elements of Information Theory, Wiley, 2006.

[7] A. Lapidoth, "On phase noise channels at high SNR," IEEE Trans. Inf. Theory.

[8] A. Papadopoulos, K. N. Pappi, G. K. Karagiannidis, H. Mehrpouyan, "Robust circular QAM constellations in the presence of phase noise."
