Encryption encoding and secrecy codes
                                         Channel encoding
                    Channel modelling and achievable rates
                   Channel estimation and symbol detection



Nonlinear communications:
achievable rates, encryption, estimation and decoding

                                                N. Kalouptsidis

                    Dept. of Informatics & Telecommunications, University of Athens


                                Second Greek Signal Processing Jam


              Coworkers: B. Babadi, A. Katsiotis, N. Kolokotronis, G. Mileounis, I.
                                        Sason, V. Tarokh, K. Xenoulis

N. Kalouptsidis   SP JAM 2012      Nonlinear communications: achievable rates, encryption, estimation and decoding   1 / 46



Outline


        Encryption encoding and secrecy codes


        Channel encoding


        Channel modelling and achievable rates


        Channel estimation and symbol detection












Requirements:                            Approach:
  MIMO and nonlinearities                Sieve structures and finite memory
  Time varying channel                   Adaptive methods
  Reliability                            Capacity approaching codes
  Data integrity and confidentiality     Secrecy codes, encryption
  Complexity                             Simplifications (EM, relaxation, sparse
                                         models and sparsity aware schemes)





Secrecy Codes

[Block diagram: a message m enters the encryption encoder, which outputs v
to the channel; the encryption/decryption keys come either from symmetric
key cryptography (a shared secret key k) or from public key cryptography;
the decryption decoder recovers m̂ from the channel output y, while an
eavesdropper observes ỹ.]

Symmetric key cryptography: a common secret key is shared by the
encoder and the decoder.
Public key cryptography: each user has a public key and a private key.
The sender encrypts with the public key of the receiver; the receiver
decrypts with its own private key.



Information theoretic secrecy

Symmetric key cryptography:
  A $(2^{nR}, 2^{nR_k}, n)$ randomized encoder generates codewords
  $v(m,k) \sim P(v \mid m,k)$ for each message-key pair
  $(m,k) \in [1:2^{nR}] \times [1:2^{nR_k}]$
  Decoder: assigns a message $\hat{m}(y,k)$ to each received vector $y$
  and key $k$
  Decoding rule: joint input-output typicality
  Performance characteristics:
    Probability of error for the secrecy code:
      $P_e^n = P[\hat{m}(y,k) \neq m]$
    Information leakage rate:
      $R_l^n = \tfrac{1}{n} I(M;Y)$




Information theoretic secrecy

A rate $R$ is achievable at key rate $R_k$ if there is a sequence of
secrecy codes with $P_e^n \to 0$ and $R_l^n \to 0$.

Secrecy capacity $C(R_k)$ for the DMC: the supremum of achievable
rates at key rate $R_k$.

Theorem.
  $C(R_k) = \min\left\{ R_k,\; \max_{P(v)} I(V;Y) \right\}$

Secure communication is limited by the key rate until saturated by
the channel capacity.
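As a numerical illustration of the theorem (my own example, not from the slides), take the DMC to be a binary symmetric channel BSC(p), for which $\max_{P(v)} I(V;Y) = 1 - h_2(p)$ bits; the helper names are mine.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def secrecy_capacity_bsc(Rk, p):
    """C(Rk) = min{Rk, max I(V;Y)} for a BSC(p), whose capacity is 1 - h2(p)."""
    return min(Rk, 1.0 - h2(p))

# Key rate is the bottleneck at low Rk; channel capacity saturates at high Rk.
low = secrecy_capacity_bsc(0.1, 0.11)    # limited by the key rate
high = secrecy_capacity_bsc(10.0, 0.11)  # limited by the channel capacity
```

For p = 0.11 the BSC capacity is about 0.5 bit per use, so a key rate of 0.1 is the bottleneck, while a key rate of 10 leaves the channel capacity as the limit.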





The wiretap channel

[Block diagram: a message m is encoded into v and sent over the channel
$P(y, \tilde{y} \mid v)$; the legitimate decoder observes y and outputs
m̂, while the eavesdropper observes ỹ.]

Information leakage rate: $R_l^n = \tfrac{1}{n} I(M; \tilde{Y})$
If the channel to the eavesdropper is a physically degraded version of
the channel to the receiver,
  $P(y, \tilde{y} \mid v) = P(y \mid v)\, P(\tilde{y} \mid y)$
then the secrecy capacity is
  $C_s = \max_{P(v)} \left( I(V;Y) - I(V;\tilde{Y}) \right)$
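A quick numerical instance of $C_s$ (my own illustration, not from the slides): physically degrade a BSC(p) main channel with an extra independent BSC(ε) toward the eavesdropper, so the eavesdropper sees a BSC with crossover $q = p + \varepsilon - 2p\varepsilon$; with a uniform input, $C_s = h_2(q) - h_2(p)$.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_wiretap_cs(p, eps):
    """Secrecy capacity of a BSC(p) main channel whose eavesdropper sees
    the legitimate output through an extra BSC(eps): the cascade is a
    BSC with crossover q = p + eps - 2*p*eps."""
    q = p + eps - 2 * p * eps
    return h2(q) - h2(p)

cs_none = bsc_wiretap_cs(0.1, 0.0)   # no degradation -> no secrecy
cs_some = bsc_wiretap_cs(0.05, 0.1)  # degraded eavesdropper -> positive Cs
```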






[Block diagram: a message m passes through the encryption encoder
(output u) and then the channel encoder (output v) before entering the
channel; at the receiver, the channel decoder produces û from y and the
decryption decoder recovers m̂; the encryption/decryption keys come from
symmetric or public key cryptography; the eavesdropper observes ỹ.]




Encryption encoder: one-way function $u = E(m,k)$
Channel encoder: (typically linear) adds redundancy to combat the
channel noise
Channel decoder: $\hat{u} = \arg\max_u P(u \mid y, k)$
Decryption decoder: $\hat{m} = E^{-1}(\hat{u}, k)$
Both must be computationally tractable
Eavesdropper (symmetric key): the problems
  $\max_k P(k \mid \tilde{y}), \qquad \max_m P(m \mid \tilde{y})$
must be computationally hard






Public key cryptography

Encryption encoder: $u = G(m, k_{pub})$
Decryption decoder: $\hat{m} = G^{-1}(\hat{u}, k_{sec})$
Eavesdropper: the problems
  $\max_{k_{sec}} P(k_{sec} \mid \tilde{y}, k_{pub}), \qquad \max_m P(m \mid \tilde{y}, k_{pub})$
must be computationally hard







McEliece public key encryption scheme

Key generation
  k, n, t fixed common integers
  Choose a k × n generator matrix G of a code that can correct t errors
  and for which an efficient decoding algorithm is known (RS codes with
  Berlekamp-Massey decoding, LDPC codes with sum-product decoding)
  Draw a k × k non-singular matrix S
  Draw an n × n permutation matrix P
  Compute $\hat{G} = SGP$
  Public key: $(\hat{G}, t)$
  Private key: $(S, G, P)$






Encryption
  Use the public key of the intended recipient, $k = (\hat{G}, t)$
  Represent the message m as a binary vector of length k
  Draw a random binary vector z of weight t
  Compute $u = E(m,k) = m\hat{G} + z$
Decryption
  Compute $uP^{-1} = mSG + zP^{-1}$; note that $w(zP^{-1}) = w(z)$
  because P is a permutation
  Use the decoding algorithm to determine mS
  Compute $mS\,S^{-1} = m$
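A toy end-to-end run of the scheme (my own illustration, using a Hamming(7,4) code that corrects t = 1 error; real McEliece uses large Goppa codes, and all helper names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying code: systematic Hamming(7,4), G = [I | A], H = [A^T | I], t = 1.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def gf2_inv(M):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])  # raises if singular
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, n:]

def hamming_decode(c):
    """Correct up to one error; return the information word (first 4 bits)."""
    s = c @ H.T % 2
    if s.any():
        # for a single error, the syndrome equals the H-column at its position
        pos = next(j for j in range(7) if np.array_equal(H[:, j], s))
        c = c.copy()
        c[pos] ^= 1
    return c[:4]

# Key generation: Ghat = S G P.
S = np.array([[1,1,0,1],[0,1,1,0],[1,1,1,0],[1,0,0,1]])  # fixed non-singular S
Sinv = gf2_inv(S)
perm = rng.permutation(7)
P = np.eye(7, dtype=int)[:, perm]                        # permutation matrix
Ghat = S @ G @ P % 2                                     # public key (Ghat, t=1)

# Encryption: u = m Ghat + z with a random weight-1 error z.
m = np.array([1, 0, 1, 1])
z = np.zeros(7, dtype=int)
z[rng.integers(7)] = 1
u = (m @ Ghat + z) % 2

# Decryption: u P^{-1} = m S G + z P^{-1}; decode, then undo S.
c = u @ P.T % 2          # P^{-1} = P^T for a permutation matrix
mS = hamming_decode(c)
m_hat = mS @ Sinv % 2
```

The systematic form of G makes extracting mS from the decoded codeword trivial (its first 4 bits); multiplying by $S^{-1}$ over GF(2) then recovers m.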






Fast decoding of regular LDPC codes

(n, k) linear block code with parity check matrix H
Codewords: n-dimensional binary vectors v with $vH^T = 0$
Syndrome: $s = rH^T$, with r the received vector
The error $e = r - v$ satisfies
  $s = eH^T$
Number of errors: $\ell = \|e\|_0 \ll n$
Minimum distance decoding:
  $\min \|e\|_0 \;\text{ subject to }\; s = eH^T$
relaxed to
  $\min \|e\|_1 \;\text{ subject to }\; s = eH^T$
              “Kalouptsidis, Kolokotronis. Fast decoding of regular LDPC codes using greedy approximation algorithms.”


The error determines an epsilon-sparse representation of the syndrome
vector in the dictionary generated by the columns of H.
The proposed algorithm is motivated by Matching Pursuit but operates
mostly over finite fields.
Basic idea: select the columns of H most correlated with the residual:
  $\hat{s}_v = \bigoplus_{i=1}^{v} h_{\lambda_i} \in \mathbb{F}_2^m$
Performance guarantees for regular $(\gamma, \rho)$ LDPC codes:
  H is sparse
  Each column contains $\gamma$ ones
  Each row contains $\rho$ ones
  The minimum distance of the code satisfies $d_{min} \geq \gamma + 1$
  The code can correct $\ell \leq \lfloor \gamma/2 \rfloor$ errors




   input: parity-check matrix H, received word r, maximum number ν of
   iterations
   initialization: Λ = ∅, i = 0
1. $s = rH^T \bmod 2$                          (syndrome)
2. while $(s \neq 0) \wedge (i < \nu)$
3.   $\lambda \in \arg\max \{ \langle s, h_\omega \rangle : \omega \notin \Lambda \}$   (ties broken randomly)
4.   $s = s \oplus h_\lambda$
5.   $\Lambda = \Lambda \cup \{\lambda\}$
6.   $i = i + 1$
7. end
   output: residual s, error locations Λ
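A runnable sketch of the loop above (my own Python rendering with my own tie-breaking and naming; the slides give only pseudocode, and the demo matrix is a Hamming parity-check matrix rather than a regular LDPC one):

```python
import numpy as np

def greedy_syndrome_decode(H, r, nu=None, rng=None):
    """Greedy syndrome decoding: while the residual syndrome s is nonzero,
    XOR in the H-column with the largest integer correlation <s, h_w>
    among columns not yet selected (ties broken randomly)."""
    rng = rng or np.random.default_rng(0)
    m, n = H.shape
    nu = n if nu is None else nu
    s = r @ H.T % 2                # syndrome
    Lam = set()                    # estimated error locations
    for _ in range(nu):
        if not s.any():
            break
        corr = [int(s @ H[:, w]) if w not in Lam else -1 for w in range(n)]
        best = max(corr)
        lam = int(rng.choice([w for w in range(n) if corr[w] == best]))
        s = s ^ H[:, lam]          # s = s XOR h_lambda
        Lam.add(lam)
    return s, sorted(Lam)

# Single error through a Hamming(7,4) parity-check matrix (illustration).
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
r = np.array([0, 0, 0, 1, 0, 0, 0])   # all-zero codeword + error at position 3
residual, locations = greedy_syndrome_decode(H, r)
```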







Theorem.
Let C be a $(\gamma, \rho)$-regular LDPC (n, k) code. The proposed
algorithm is capable of correcting all error patterns e satisfying
  $\ell \leq \frac{\gamma}{2}$
where $\ell = \|e\|_1$.







NL AWGN

Multi-Input Multi-Output nonlinear channel with additive white
Gaussian noise:
  $y(t) = D[v](t) + \xi(t)$
  $\xi(t)$ i.i.d. $\sim N(0, Q)$, $Q > 0$

Channel operator D: shift invariant, causal, BIBO stable, with fading
memory
Then D can be approximated by a finite memory architecture







NL AWGN

[Block diagram: each input $v_1(t), \ldots, v_T(t)$ feeds a shift
register; the register contents drive static nonlinearities $h_i$ that
produce the outputs $y_i(t)$.]

Canonical finite memory form






Modeling options: nonparametric, semi-nonparametric, parametric
Focus of this presentation: parametric forms
Each function $h_i(\cdot)$ is a polynomial in several variables
More generally, $h_i(\cdot)$ is approximated by a member of a sieve
family

Examples of linear sieves: tensor products of Fourier series, splines,
wavelets
Consequence: models linear in the parameters

Examples of nonlinear sieves: neural networks, radial basis functions



Polynomial AWGN

Multi-index $i = (i_1, \ldots, i_\ell)$, $i_r \in \mathbb{N}$, and
$x^i = x_{i_1} x_{i_2} \cdots x_{i_k}$.  Then
  $h(x_{i_1}, x_{i_2}, \ldots, x_{i_k}) = \sum_{i \in I} h_i x^i$
Each output:
  $y_j(t) = \sum_{\ell=1}^{L} \sum_{k \in K} \sum_{i \in I} h_i^{(j,k)} v_{k_1}(t - i_1)\, v_{k_2}(t - i_2) \cdots v_{k_\ell}(t - i_\ell)$
Examples:
Linear systems:
  $y_j(t) = \sum_{k=1}^{T} \sum_{i=0}^{q} h_i^{(j,k)} v_k(t - i)$
Quadratic:
  $y_j(t) = \sum_{k=1}^{T} \sum_{i=0}^{q} h_i^{(j,k)} v_k(t-i) + \sum_{k_1=1}^{T} \sum_{k_2=1}^{T} \sum_{i_1=0}^{q} \sum_{i_2=0}^{q} h_{i_1 i_2}^{(j,k_1,k_2)} v_{k_1}(t-i_1)\, v_{k_2}(t-i_2)$

Sparsity: most of the coefficients $h_i^{(j,k)}$ in each $h_j(\cdot)$
are zero.
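A minimal simulation of the quadratic case (my own illustration, simplified to a single input, T = 1; the function and kernel values are assumptions, not from the slides):

```python
import numpy as np

def quadratic_volterra(v, h1, h2, noise_std=0.0, rng=None):
    """y(t) = sum_i h1[i] v(t-i) + sum_{i1,i2} h2[i1,i2] v(t-i1) v(t-i2) + noise,
    with v(t) = 0 for t < 0 (zero initial conditions)."""
    rng = rng or np.random.default_rng(0)
    q = len(h1) - 1
    y = np.zeros(len(v))
    for t in range(len(v)):
        # delayed samples [v(t), v(t-1), ..., v(t-q)]
        past = np.array([v[t - i] if t - i >= 0 else 0.0 for i in range(q + 1)])
        y[t] = h1 @ past + past @ h2 @ past
    if noise_std > 0:
        y = y + rng.normal(0.0, noise_std, len(v))
    return y

h1 = np.array([1.0, 0.5])                 # linear kernel, memory q = 1
h2 = np.array([[0.0, 0.2], [0.0, 0.0]])   # sparse quadratic kernel
y = quadratic_volterra(np.array([1.0, -1.0, 1.0]), h1, h2)
```

The mostly-zero `h2` mirrors the sparsity remark: only one cross-term coefficient is active.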



Achievable rates for SISO polynomial channels

SISO polynomial channel:
  $y_t = D[v]_t + \xi_t$
  $D[v]_t = h_0 + \sum_{j=1}^{L} \sum_{i_1=0}^{q} \cdots \sum_{i_j=0}^{q} h_j(i_1, \ldots, i_j)\, v_{t-i_1} \cdots v_{t-i_j}$
  $\xi_t \sim N(0, \sigma^2)$

Let $v_m$ denote the transmitted codeword and $y$ the received vector.
A maximum likelihood error occurs if
  $P(y \mid v_{m'}) \geq P(y \mid v_m) \;\text{ for some } m' \neq m$






Since the noise is Gaussian, this is equivalent to
  $\| Dv_m - Dv_{m'} + \xi \|_2^2 \leq \| \xi \|_2^2$
Chernoff's bound and Gallager's upper bound imply, for a specific
(N, R) code C,
  $P_e(m \mid C) \leq \sum_{m' \neq m} \exp\left( -\rho\, \frac{\| Dv_m - Dv_{m'} \|_2^2}{8\sigma^2} \right), \quad 0 \leq \rho \leq 1$
Using a random coding argument on the average error probability over
an ensemble of codes C, we obtain
  $\bar{P}_e \leq e^{N\left(R - \frac{\rho^2 D_v(Q)}{4\sigma^2}\right)}\, E\!\left[ e^{\frac{\rho}{8\sigma^2} \sum_{i=1}^{N} Z_i} \right], \quad 0 \leq \rho \leq 1$




Let $\mathcal{F}_t$ be the minimal σ-algebra generated by
$V_1, \tilde{V}_1, \ldots, V_t, \tilde{V}_t$ (with $\mathcal{F}_0 = \{\emptyset, \Omega\}$).
Then $(Z_t, \mathcal{F}_t)$ is a martingale difference sequence:
  $Z_t = E\!\left[ \sum_{j=t}^{t+q} w_j^2 \,\middle|\, \mathcal{F}_{t-1} \right] - E\!\left[ \sum_{j=t}^{t+q} w_j^2 \,\middle|\, \mathcal{F}_t \right], \qquad w_j = [Dv]_j - [D\tilde{v}]_j$

Output covariance:
  $D_v(Q) = \frac{1}{n} \sum_{j=1}^{n} \left( E\!\left[ ([Dv]_j)^2 \right] - \left( E[Dv]_j \right)^2 \right)$

Martingales enable the development of several concentration
inequalities. For instance, Bennett's inequality:
  $E\!\left[ \exp\!\left( \frac{\rho}{8\sigma^2} \sum_{i=1}^{N} Z_i \right) \right] \leq \left( \frac{\gamma_2\, e^{\frac{\rho d}{8\sigma^2}} + e^{-\gamma_2 \frac{\rho d}{8\sigma^2}}}{1 + \gamma_2} \right)^{N}$



Suppose
  $\max_{v_i, \tilde{v}_i,\, j \leq i} |Z_i| \leq d, \qquad \max_{v_j, \tilde{v}_j,\, j \leq i-1} E\!\left[ Z_i^2 \mid \mathcal{F}_{i-1} \right] \leq \mu^2, \qquad \gamma_2 = \frac{\mu^2}{d^2}$
Then the bound on the average error probability becomes
  $\bar{P}_e \leq \exp\!\left( -N \left[ R_2(\sigma^2) - R \right] \right)$
where
  $R_2(\sigma^2) = \max_Q \begin{cases} D_{KL}\!\left( \dfrac{\gamma_2}{1+\gamma_2} + \dfrac{2 D_v(Q)}{d(1+\gamma_2)} \,\middle\|\, \dfrac{\gamma_2}{1+\gamma_2} \right), & \dfrac{2 D_v(Q)}{d} < \dfrac{\gamma_2 \left( e^{\frac{(1+\gamma_2) d}{8\sigma^2}} - 1 \right)}{1 + \gamma_2\, e^{\frac{(1+\gamma_2) d}{8\sigma^2}}} \\[2ex] \dfrac{D_v(Q)}{4\sigma^2} - \ln\!\left( \dfrac{\gamma_2\, e^{\frac{d}{8\sigma^2}} + e^{-\frac{\gamma_2 d}{8\sigma^2}}}{1+\gamma_2} \right), & \text{otherwise} \end{cases}$

with $D_{KL}$ the binary Kullback-Leibler divergence
  $D_{KL}(p \| q) = p \log\frac{p}{q} + (1-p) \log\frac{1-p}{1-q}$

             “Xenoulis,Kalouptsidis,Sason. New achievable rates for nonlinear Volterra channels
                                           via martingale inequalities.”


Example: discrete memoryless binary-input AWGN channel
(input $u \in \{-A, A\}$ with $Q(u = A) = \alpha$, SNR $= A^2/\sigma^2$):
  $R_2(\mathrm{SNR}) = \ln 2 - \ln\!\left( 1 + e^{-\mathrm{SNR}/2} \right)$ nats per channel use
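The closed form above is easy to evaluate numerically (the helper name is mine):

```python
import math

def R2(snr):
    """R2(SNR) = ln 2 - ln(1 + exp(-SNR/2)), in nats per channel use."""
    return math.log(2.0) - math.log(1.0 + math.exp(-snr / 2.0))

rates = [R2(s) for s in (0, 2, 5, 10)]
```

The formula gives R2(0) = 0 and increases monotonically toward ln 2 ≈ 0.693 nats as the SNR grows.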
[Figure: achievable rates in nats per channel use versus SNR (0 to 10),
comparing the capacity curve with $R_2(\mathrm{SNR})$.]



Sparse joint channel state and parameter estimation




                   Joint state and parameter estimation
                   Blind estimation via EM and smoothing algorithms







. Joint detection and estimation
        Alternating state estimation and training-based parameter
        estimation
                   Channel parameters: θ = [h, Q]
                   Channel input-output form: y_t = h(x_t) + ξ_t with
                   x_t = [v_t, . . . , v_{t−q}]

                   Maximum likelihood:
                       max_{v∈C, θ} log P(y_{1:n} | x_{1:n}; θ)
                         = max_θ max_{v∈C} log P(y_{1:n} | x_{1:n}; θ)
                         = max_θ max_{x_{1:n}∈S} log P(y_{1:n} | x_{1:n}; θ)



       Stage 1: State Estimation
           Parameter estimate at step i: θ^(i) = [h^(i), Q^(i)]
           State estimate: v̂ = arg max_{x_{1:n}∈S} log P(y_{1:n} | x_{1:n}; θ^(i))

           Convolutional codes of memory < q lead to a Hidden Markov
           Process (HMP) (y_{1:n}, x_{1:n})
       The HMP framework implies

           x̂ = arg max_{x_{1:n}∈S} Σ_{t=1}^n log P(y_t | x_t)

       The optimization can be carried out by dynamic programming and the
       Viterbi algorithm.

       Several relaxations over the real numbers are available for special
       cases (decoding by linear programming, semidefinite relaxation).
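The Viterbi recursion can be sketched in a few lines of Python; the two-state toy model below (made-up transition and emission tables, not the channel of the slides) only illustrates the dynamic program:

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Return arg max over state paths of log P(path) + sum_t log P(y_t | x_t)."""
    delta = {s: log_init[s] + log_emit[s][obs[0]] for s in states}  # best score ending in s
    back = []                                                       # backpointers per step
    for y in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] + log_trans[r][s])
            delta[s] = prev[best] + log_trans[best][s] + log_emit[s][y]
            ptr[s] = best
        back.append(ptr)
    path = [max(states, key=lambda s: delta[s])]                    # best final state
    for ptr in reversed(back):                                      # backtrack
        path.append(ptr[path[-1]])
    return path[::-1]

# Sticky two-state chain whose emissions usually match the state.
L = math.log
states = (0, 1)
log_init = {0: L(0.5), 1: L(0.5)}
log_trans = {0: {0: L(0.9), 1: L(0.1)}, 1: {0: L(0.1), 1: L(0.9)}}
log_emit = {0: {0: L(0.9), 1: L(0.1)}, 1: {0: L(0.1), 1: L(0.9)}}
print(viterbi([0, 0, 0, 1, 1], states, log_init, log_trans, log_emit))
```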




       Example: Binary-input memoryless channel

           Σ_{t=1}^n log P(y_t | x_t) = Σ_t v_t γ_t + const.,
           γ_t = log ( P(y_t | 1) / P(y_t | 0) )

       Relax the constraints by replacing the convex hull of the codebook
       with the intersection of the convex hulls of the parity-check
       equations.
       The problem is converted to a linear programming problem.
       Decoding by the interior-point algorithm.
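Before relaxation, the task is to maximize the linear objective Σ_t v_t γ_t over the codebook; a brute-force sketch with a hypothetical 3-bit repetition codebook shows the objective that LP decoding then relaxes to a linear program:

```python
def ml_decode(gamma, codebook):
    """Exhaustive ML decoding: maximize the linear objective sum_t v_t * gamma_t."""
    return max(codebook, key=lambda v: sum(vt * gt for vt, gt in zip(v, gamma)))

codebook = [(0, 0, 0), (1, 1, 1)]      # hypothetical length-3 repetition code
gamma = [2.0, -0.5, 1.3]               # per-symbol LLRs log P(y_t|1)/P(y_t|0)
print(ml_decode(gamma, codebook))      # positive total LLR favours the all-ones word
```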






              STEP 2: Parameter Estimation
              Given the transmitted message estimate, the decoder updates the
              channel parameters by the rule

                      θ^(i+1) = arg max_θ P(y_{1:n} | x̂_{1:n}^{(i+1)}; θ)








             The solutions are given as follows:
           1. Table look-up:

                      ĥ(x) = Σ_{t=1}^n y_t δ(x_t, x) / Σ_{t=1}^n δ(x_t, x)

                      Q̂ = (1/n) Σ_{t=1}^n ( y(t) − ĥ(x_t) ) ( y(t) − ĥ(x_t) )^H

           2. h linear:  ( Σ_{t=1}^n x_t x_t^H ) ĥ = Σ_{t=1}^n y_t x_t

           3. h polynomial:  ( Σ_{t=1}^n φ(x_t) φ(x_t)^H ) ĥ = Σ_{t=1}^n y_t φ(x_t)

                  where φ(x_t) = [x_t, ⊗² x_t, . . . , ⊗^L x_t] with
                  x_t = [v_1(t), . . . , v_1(t − q), · · · , v_T(t), . . . , v_T(t − q)]
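Case 1 above is simply a per-state average of the received samples; a minimal real-valued sketch (toy state/output data, not from the slides):

```python
from collections import defaultdict

def table_lookup(states, outputs):
    """h_hat(x): average of y_t over the time instants with x_t = x."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y in zip(states, outputs):
        sums[x] += y
        counts[x] += 1
    return {x: sums[x] / counts[x] for x in sums}

xs = [(1, -1), (1, 1), (1, -1), (1, 1)]   # state vectors x_t (tuples, so hashable)
ys = [0.5, 2.5, 1.5, 1.5]                 # channel outputs y_t
print(table_lookup(xs, ys))               # -> {(1, -1): 1.0, (1, 1): 2.0}
```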







       Adding a sparsity term to the log-likelihood,

               θ^(i+1) = arg max_θ { log P(y_{1:n} | x_{1:n}; θ) − γ ‖h‖₁ },

       a convex program results that can be solved by CS methods.
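For Gaussian noise the penalized M-step is an ℓ1-regularized least-squares problem; a minimal proximal-gradient (ISTA) sketch in plain Python illustrates the soft-thresholding iteration such CS solvers rely on (a generic stand-in with toy data, not the authors' implementation):

```python
def soft(z, t):
    """Soft threshold: the proximal operator of t * |.|."""
    return max(z - t, 0.0) - max(-z - t, 0.0)

def ista(X, y, gam, mu, iters=500):
    """Minimize 0.5*||y - X h||^2 + gam*||h||_1 by proximal gradient."""
    n, p = len(X), len(X[0])
    h = [0.0] * p
    for _ in range(iters):
        resid = [y[i] - sum(X[i][j] * h[j] for j in range(p)) for i in range(n)]
        grad = [-sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        h = [soft(h[j] - mu * grad[j], mu * gam) for j in range(p)]
    return h

# Orthonormal toy design: the minimizer is the soft-thresholded data.
X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(ista(X, [3.0, 0.05, -2.0], gam=0.1, mu=0.5))
```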







. Greedy algorithms: CoSaMP/SP
       The main ingredients of the CoSaMP/SP algorithms are outlined
       below:
         1. locate the largest components of the proxy
         2. form the union of two sets of indices
         3. estimate via LS on the merged set
         4. prune the LS estimates to the s largest components
         5. update the error residual

       The proposed algorithm modifies the identification, estimation and
       error-residual steps in order to:
            sequentially track system variations
            reduce the computational complexity
            while maintaining the superior performance of CoSaMP/SP



. The SpAdOMP Algorithm
                       Algorithm description                                       Complexity
          h(0) = 0, w(0) = 0, p(0) = 0
          r(0) = y(0)
          0 < λ ≤ 1
          0 < µ < 2/λ_max
          For n := 1, 2, . . . do
          1:   p(n) = λ p(n − 1) + v*(n − 1) r(n − 1)                                  q
          2:   Ω = supp(p_2s(n))                                                       q
          3:   Λ = Ω ∪ supp(h(n − 1))                                                  s
          4:   ε(n) = y(n) − v^T(n) w|_Λ(n − 1)                                        s
          5:   w|_Λ(n) = w|_Λ(n − 1) + µ v*|_Λ(n) ε(n)                                 s
          6:   Λ_s = max(|w|_Λ(n)|, s)                                                 s
          7:   h|_Λs(n) = w|_Λs(n),  h|_Λs^c(n) = 0                                    s
          8:   r(n) = y(n) − v^T(n) h(n)                                               s
          end For                                                                    O(q)
             “Mileounis, Babadi, Kalouptsidis, Tarokh. An Adaptive Greedy Algorithm With Application to Nonlinear
             Communications.”
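A plain-Python sketch of the loop above for real-valued signals (so the conjugates drop out); the toy dimensions, step sizes and test channel are illustrative assumptions, not the paper's settings:

```python
import random

def spadomp(vs, ys, q, s, lam=0.9, mu=0.05):
    """SpAdOMP-style adaptive greedy estimate of an s-sparse length-q filter h
    from regressors v(n) and outputs y(n) = v(n)^T h + noise."""
    h, w, p = [0.0] * q, [0.0] * q, [0.0] * q
    r_prev, v_prev = 0.0, [0.0] * q
    for v, y in zip(vs, ys):
        # 1: leaky proxy p(n) = lam*p(n-1) + v(n-1)*r(n-1)
        p = [lam * pj + vj * r_prev for pj, vj in zip(p, v_prev)]
        # 2-3: merge the 2s largest proxy indices with the current support
        omega = sorted(range(q), key=lambda j: -abs(p[j]))[:2 * s]
        Lam = set(omega) | {j for j in range(q) if h[j] != 0.0}
        # 4-5: one LMS step restricted to the merged support
        err = y - sum(v[j] * w[j] for j in Lam)
        for j in Lam:
            w[j] += mu * v[j] * err
        # 6-7: prune to the s largest entries of w on Lam
        keep = set(sorted(Lam, key=lambda j: -abs(w[j]))[:s])
        h = [w[j] if j in keep else 0.0 for j in range(q)]
        # 8: residual feeding the next proxy update
        r_prev = y - sum(v[j] * h[j] for j in range(q))
        v_prev = v
    return h

# Noiseless toy run: recover a 2-sparse 16-tap filter from +/-1 inputs.
random.seed(0)
q, s, N = 16, 2, 3000
h_true = [0.0] * q
h_true[3], h_true[11] = 0.8, -0.5
x = [random.choice((-1.0, 1.0)) for _ in range(N + q)]
vs = [x[n:n + q] for n in range(N)]
ys = [sum(a * b for a, b in zip(v, h_true)) for v in vs]
h_hat = spadomp(vs, ys, q, s)
print(max(abs(a - b) for a, b in zip(h_hat, h_true)))
```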



. Steady-State MSE of SpAdOMP
       Theorem.
       The SpAdOMP algorithm produces an s-sparse approximation
       h(n) that satisfies the following steady-state bound

                  ‖h − h(n)‖₂ ≤ C₁(n) ‖ξ(n)‖₂ + C₂(n) ‖v|_Λ(n)‖₂ |e_o(n)|,

       where
                  e_o(n) is the estimation error of the optimum Wiener filter
                  C₁(n), C₂(n) are constants independent of h

                  The first term is analogous to the steady-state error of the
                  CoSaMP/SP algorithm
                  The second term is induced by performing a single LMS
                  iteration (instead of using the LS estimate)



. Simulations on sparse ARMA channels
  ARMA channel: y_n = a₁ y_{n−6} + a₂ y_{n−48} + v_n + b₁ v_{n−13} + b₂ v_{n−34} + ξ_n,
                and 500 samples from CN(0, 1/5).

       [Figure: a. NMSE (dB) learning curves (LMS, LOG-LMS, SpAdOMP) over 1000
       iterations at SNR = 23 dB;  b. time evolution of the estimate of a₁.]

              NMSE = 10 log₁₀ ( E{‖h(n) − h‖₂²} / E{‖h‖₂²} )
              SpAdOMP converges very fast
              SpAdOMP achieves an average gain of nearly 19 dB



. Blind estimation via EM and smoothing algorithms

                   Transmitted sequence unknown
                   Likelihood maximization is intractable
                   The Expectation-Maximization (EM) method and the
                   underlying iterative algorithm provide an option that
                   inherently addresses symbol detection
                         Augmented likelihood function formed by the state and
                         received sequences. Given θ′, the expectation step forms

                                        Q(θ, θ′) = E_{θ′} { log P(x_{1:n}, y_{1:n}; θ) | y_{1:n} }

                         an expectation over the state sequence given the received
                         sequence.





       Marginal output likelihood:

                                          L_n(θ) = log P(y_{1:n}; θ)

       Jensen's inequality implies

                                L_n(θ) − L_n(θ′) ≥ Q(θ, θ′) − Q(θ′, θ′)

       This suggests the second step.
       Let θ^(i) denote the estimate at step i. Then

                  θ^(i+1) = arg max_θ Q(θ, θ^(i))   and   L_n(θ^(i+1)) ≥ L_n(θ^(i))

       For Gaussian noise, P(y(t) | x(t)) is log-concave, the maximizer of
       Q is unique, and the sequence θ^(i) converges to a stationary point of
       the likelihood.


       The EM method leads to the following estimates

           ĥ^(i+1)(x) = Σ_{t=1}^n P(x_t = x; θ^(i) | y_{1:n}) y_t / Σ_{t=1}^n P(x_t = x; θ^(i) | y_{1:n})

           Q̂^(i+1) = (1/n) Σ_{t=1}^n Σ_{l=1}^{|M|^{q+1}} P(x_t = x_l; θ^(i) | y_{1:n}) ( y_t − ĥ(x_l) ) ( y_t − ĥ(x_l) )^H

       Determination of the smoothing probabilities P(x_t | y_{1:n}) by the
       forward-backward recursions (Chang and Hancock):

           P(x_t, y_{1:n}) = α(x_t, y_{1:t}) β(y_{t+1:n} | x_t)

           α(x_t, y_{1:t}) = b(y_t | x_t) Σ_{x_{t−1}=1}^M α(x_{t−1}, y_{1:t−1}) a_{x_{t−1} x_t}

           β(y_{t+1:n} | x_t) = Σ_{x_{t+1}=1}^M a_{x_t x_{t+1}} β(y_{t+2:n} | x_{t+1}) b(y_{t+1} | x_{t+1})






       Normalized stable versions (for instance Lindgren):

             α(x_t | y_{1:t}) = P(x_t | y_{1:t−1}) b(y_t | x_t) / Σ_{x_t=1}^M P(x_t | y_{1:t−1}) b(y_t | x_t)

             P(x_t | y_{1:t−1}) = Σ_{x_{t−1}=1}^M a_{x_{t−1} x_t} α(x_{t−1} | y_{1:t−1})

             P(x_t | y_{1:n}) = α(x_t | y_{1:t}) Σ_{x_{t+1}=1}^M a_{x_t x_{t+1}} P(x_{t+1} | y_{1:n}) / P(x_{t+1} | y_{1:t})

       Sparsity can be incorporated in the maximization step
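The normalized recursions translate directly into code; the sketch below uses a two-state toy HMP with made-up transition and emission tables and returns the smoothed marginals P(x_t | y_{1:n}) for every instant:

```python
def fb_smoother(obs, A, B, pi):
    """Normalized forward-backward smoothing: returns P(x_t | y_{1:n}) for all t.
    A[i][j] = transition prob, B[i][y] = emission prob, pi = initial distribution."""
    M, n = len(pi), len(obs)
    alphas = []                 # filtered probabilities alpha(x_t | y_{1:t})
    pred = [list(pi)]           # one-step predictions P(x_t | y_{1:t-1})
    for t, y in enumerate(obs):
        unnorm = [pred[t][i] * B[i][y] for i in range(M)]
        z = sum(unnorm)
        alphas.append([u / z for u in unnorm])
        pred.append([sum(A[i][j] * alphas[t][i] for i in range(M)) for j in range(M)])
    smooth = [None] * n
    smooth[n - 1] = alphas[n - 1]
    for t in range(n - 2, -1, -1):   # backward pass of the normalized recursion
        smooth[t] = [alphas[t][i] * sum(A[i][j] * smooth[t + 1][j] / pred[t + 1][j]
                                        for j in range(M))
                     for i in range(M)]
    return smooth

A = [[0.9, 0.1], [0.2, 0.8]]     # hypothetical transition matrix
B = [[0.8, 0.2], [0.3, 0.7]]     # hypothetical emission probabilities
pi = [0.5, 0.5]
for row in fb_smoother([0, 1, 0], A, B, pi):
    print(row)
```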






. The Sparse BW Algorithm

          Algorithmic description
          For ℓ := 0, 1, . . . do
          1: {r^(ℓ), R^(ℓ)} := Run {Forward/Backward recursions}                {symbol detector}
          2: h_i^(ℓ+1) = ( sgn(r_i^(ℓ)) / R_{i,i}^(ℓ) ) ( |r_i^(ℓ)| − γ/2 )_+    {channel estimator}
          3: σ_q^{2,(ℓ+1)} = (1/n) Σ_{t=1}^n ‖y_t − x_t^{(ℓ+1)T} h^(ℓ+1)‖²       {noise variance estimator}
          end For

             r^(ℓ) = Σ_{i=1}^n y_i* E{x_i | y_{1:n}; ĥ^(ℓ)},     R^(ℓ) = Σ_{i=1}^n E{x_i x_i^H | y_{1:n}; ĥ^(ℓ)}

             “Mileounis, Kalouptsidis, Babadi, Tarokh. Blind identification of sparse channels
             and symbol detection via the EM algorithm.”


  Adaptive Channel Coding Based on Flexible Trellis
. (Convolutional) Codes
                   Popular adaptive coding schemes → variable-rate punctured
                   convolutional codes
                   The IEEE 802.22 standard for cognitive WRAN uses a rate-1/2
                   convolutional code and a set of puncturing matrices that lead
                   to rates 2/3, 3/4 and 5/6.

                                      Flexible Convolutional Codes

                   They can vary both their rate and their decoding complexity →
                   efficient management of the system resources.
                   Constructed by combining the techniques of path pruning and
                   puncturing.
                   Varying quantities associated with the complexity profile of
                   the trellis diagram → varying decoding complexity.



. Flexible Convolutional Codes
                   Consider an (n, 1, m) mother convolutional code.
                   Let u_t be the information bit and û_t the input bit of the
                   mother encoder at time instant t.
            Every T_pr time units the single input bit of the encoder is not
            an information bit; rather, it is computed as a linear
            combination of bits of the current state
            S_t = {û_{t−1}, · · · , û_{t−m}}.

                        û_t = u_{t₁(T_pr−1)+t₂}             if t₂ ≠ 0
                        û_t = Σ_{i=1}^{d̂} c_i û_{t₁ T_pr − i}    if t₂ = 0

        where t₁ = ⌊t / T_pr⌋, t₂ = t mod T_pr, t = 1, 2, . . . , and d̂ is the
        degree of the polynomial c(X) = Σ_{i=1}^m c_i X^i.
              “Katsiotis,Rizomiliotis,Kalouptsidis. Flexible Convolutional Codes: Variable Rate and Complexity.”
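The input-selection rule above can be sketched as follows; T_pr, the combination coefficients c_i and the bit stream are hypothetical, and the mother encoder's generator polynomials (which would map each input bit to n output bits) are omitted:

```python
def pruned_inputs(info_bits, T_pr, c):
    """Build the mother-encoder input stream: at t = T_pr, 2*T_pr, ... the input
    is a mod-2 combination sum_i c_i * u_hat[t - i] of previous input bits (the
    state); at every other instant it is the next information bit."""
    u_hat, k, t = [], 0, 1
    while k < len(info_bits):
        if t % T_pr == 0:                         # pruned instant: parity input
            bit = 0
            for i, ci in enumerate(c, start=1):   # c = [c_1, ..., c_dhat]
                if ci and t - i >= 1:
                    bit ^= u_hat[t - i - 1]
            u_hat.append(bit)
        else:                                     # ordinary instant: fresh info bit
            u_hat.append(info_bits[k])
            k += 1
        t += 1
    return u_hat

print(pruned_inputs([1, 0, 1, 1, 0, 0], T_pr=3, c=[1, 1]))   # parity bits at t = 3, 6
```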



. Flexible Convolutional Codes


                   The final step involves periodic puncturing of the encoded bits
                   with period Tpu = pTpr , in order to adjust the rate.

                   The complexity profile of the resulting trellis depends solely on
                                           ˆ
                   the parameters m, Tpr , d and the puncturing matrix.

                   Large families of high-performance codes of various rates and
                   values of decoding complexity are constructed.







. Extending the Constructions
                                            Flexible Turbo Codes

                   Extending the analysis to the case where recursive mother
                   encoders are used.
                   The goal is to construct flexible, powerful, parallel-concatenated
                   coding schemes.
                   Preliminary results indicate that varying the complexity profile
                   of the trellis can be more efficient than simply varying the
                   number of decoding iterations.

                                            Flexible Secret Codes

        Embedding secret keys in the pruning and puncturing procedures
        can result in robust and flexible secret encoders.



. Publications
        G. Mileounis, N. Kalouptsidis, "A sparsity driven approach to cumulant
        based identification," in Proc. IEEE SPAWC 2012, Turkey.
        K. Xenoulis, N. Kalouptsidis, I. Sason, "New achievable rates for nonlinear
        Volterra channels via martingale inequalities," in Proc. IEEE ISIT 2012.
        A. Katsiotis, P. Rizomiliotis, and N. Kalouptsidis, "Flexible Convolutional
        Codes: Variable Rate and Complexity," IEEE Trans. Commun., vol. 60,
        no. 3, pp. 608-613, March 2012.
        N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, "Adaptive
        Algorithms for Sparse System Identification," Signal Process., vol. 91,
        no. 8, pp. 1910-1919, Aug. 2011.
        K. Xenoulis and N. Kalouptsidis, "Tight performance bounds for
        permutation invariant binary linear block codes over symmetric channels,"
        IEEE Trans. Inf. Theory, vol. 57, pp. 6015-6024, Sep. 2011.
        K. Limniotis, N. Kolokotronis, and N. Kalouptsidis, "Constructing
        Boolean functions in odd number of variables with maximum algebraic
        immunity," in Proc. 2011 IEEE ISIT, pp. 2662-2666, 2011.



. Publications (contd.)
        N. Kalouptsidis and N. Kolokotronis, "Fast decoding of regular LDPC
        codes using greedy approximation algorithms," in Proc. 2011 IEEE ISIT,
        pp. 2011-2015, 2011.
        A. Katsiotis and N. Kalouptsidis, "On (n, n-1) punctured convolutional
        codes and their trellis modules," IEEE Trans. Commun., vol. 59, pp.
        1213-1217, 2011.
        K. Xenoulis and N. Kalouptsidis, "Achievable rates for nonlinear Volterra
        channels," IEEE Trans. Inform. Theory, vol. 57, pp. 1237-1248, 2011.
        A. Katsiotis, P. Rizomiliotis, and N. Kalouptsidis, "New constructions of
        high-performance low-complexity convolutional codes," IEEE Trans.
        Commun., vol. 58, pp. 1950-1961, 2010.
        G. Mileounis, B. Babadi, N. Kalouptsidis, and V. Tarokh, "An Adaptive
        Greedy Algorithm with Application to Nonlinear Communications," IEEE
        Trans. Signal Proc., vol. 58, no. 6, June 2010.
        B. Babadi, N. Kalouptsidis, and V. Tarokh, "SPARLS: The Sparse RLS
        Algorithm," IEEE Trans. Signal Proc., vol. 58, no. 8, August 2010.



. Publications (contd.)
        N. Kolokotronis, K. Limniotis, and N. Kalouptsidis, "Best affine and
        quadratic approximations of particular classes of Boolean functions,"
        IEEE Trans. Inform. Theory, vol. 55, pp. 5211-5222, 2009.
        T. Etzion, N. Kalouptsidis, N. Kolokotronis, K. Limniotis and K. G.
        Paterson, "Properties of the error linear complexity spectrum," IEEE
        Trans. Inform. Theory, vol. 55, pp. 4681-4686, 2009.
        B. Babadi, N. Kalouptsidis, and V. Tarokh, "Asymptotic Achievability of
        the Cramer-Rao Bound for Noisy Compressive Sampling," IEEE Trans.
        Signal Proc., vol. 57, no. 3, March 2009.
        G. Mileounis, P. Koukoulas, N. Kalouptsidis, "Input-output identification
        of nonlinear channels using PSK, QAM and OFDM inputs," Signal
        Process., vol. 89, no. 7, pp. 1359-1369, Jul. 2009.
        K. Xenoulis and N. Kalouptsidis, "Improvement of Gallager upper bound
        and its variations for discrete channels," IEEE Trans. Inform. Theory,
        vol. 55, pp. 4204-4210, 2009.

Jamming in Wireless Sensor Networks
 
Mixture Models for Image Analysis
Mixture Models for Image AnalysisMixture Models for Image Analysis
Mixture Models for Image Analysis
 
Sparse and Redundant Representations: Theory and Applications
Sparse and Redundant Representations: Theory and ApplicationsSparse and Redundant Representations: Theory and Applications
Sparse and Redundant Representations: Theory and Applications
 
Networked 3-D Virtual Collaboration in Science and Education: Towards 'Web 3....
Networked 3-D Virtual Collaboration in Science and Education: Towards 'Web 3....Networked 3-D Virtual Collaboration in Science and Education: Towards 'Web 3....
Networked 3-D Virtual Collaboration in Science and Education: Towards 'Web 3....
 
Machine Learning Tools and Particle Swarm Optimization for Content-Based Sear...
Machine Learning Tools and Particle Swarm Optimization for Content-Based Sear...Machine Learning Tools and Particle Swarm Optimization for Content-Based Sear...
Machine Learning Tools and Particle Swarm Optimization for Content-Based Sear...
 
Artificial Intelligence and Human Thinking
Artificial Intelligence and Human ThinkingArtificial Intelligence and Human Thinking
Artificial Intelligence and Human Thinking
 

Similar to Nonlinear Communications: Achievable Rates, Estimation, and Decoding

Implementation of reed solomon codes basics
Implementation of reed solomon codes basicsImplementation of reed solomon codes basics
Implementation of reed solomon codes basicsRam Singh Yadav
 
Combining cryptography with channel coding to reduce complicity
Combining cryptography with channel coding to reduce complicityCombining cryptography with channel coding to reduce complicity
Combining cryptography with channel coding to reduce complicityIAEME Publication
 
Hossein Taghavi : Codes on Graphs
Hossein Taghavi : Codes on GraphsHossein Taghavi : Codes on Graphs
Hossein Taghavi : Codes on Graphsknowdiff
 
Ldpc based error correction
Ldpc based error correctionLdpc based error correction
Ldpc based error correctionVijay Balaji
 
2 d barcodes(NSM)
 2 d barcodes(NSM) 2 d barcodes(NSM)
2 d barcodes(NSM)selva1090
 
Packet hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksPacket hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksJPINFOTECH JAYAPRAKASH
 
Packet hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksPacket hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksJPINFOTECH JAYAPRAKASH
 
02 ldpc bit flipping_decoding_dark knight
02 ldpc bit flipping_decoding_dark knight02 ldpc bit flipping_decoding_dark knight
02 ldpc bit flipping_decoding_dark knightDevanshi Piprottar
 
Basics of channel coding
Basics of channel codingBasics of channel coding
Basics of channel codingDrAimalKhan
 

Similar to Nonlinear Communications: Achievable Rates, Estimation, and Decoding (15)

Implementation of reed solomon codes basics
Implementation of reed solomon codes basicsImplementation of reed solomon codes basics
Implementation of reed solomon codes basics
 
Combining cryptography with channel coding to reduce complicity
Combining cryptography with channel coding to reduce complicityCombining cryptography with channel coding to reduce complicity
Combining cryptography with channel coding to reduce complicity
 
Turbo Codes
Turbo CodesTurbo Codes
Turbo Codes
 
Presentation
PresentationPresentation
Presentation
 
Cryptoghraphy
CryptoghraphyCryptoghraphy
Cryptoghraphy
 
Hossein Taghavi : Codes on Graphs
Hossein Taghavi : Codes on GraphsHossein Taghavi : Codes on Graphs
Hossein Taghavi : Codes on Graphs
 
Ldpc based error correction
Ldpc based error correctionLdpc based error correction
Ldpc based error correction
 
Phd Defence
Phd DefencePhd Defence
Phd Defence
 
2 d barcodes(NSM)
 2 d barcodes(NSM) 2 d barcodes(NSM)
2 d barcodes(NSM)
 
Packet hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksPacket hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacks
 
Reed solomon code
Reed solomon codeReed solomon code
Reed solomon code
 
Packet hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacksPacket hiding methods for preventing selective jamming attacks
Packet hiding methods for preventing selective jamming attacks
 
Channel Coding.ppt
Channel Coding.pptChannel Coding.ppt
Channel Coding.ppt
 
02 ldpc bit flipping_decoding_dark knight
02 ldpc bit flipping_decoding_dark knight02 ldpc bit flipping_decoding_dark knight
02 ldpc bit flipping_decoding_dark knight
 
Basics of channel coding
Basics of channel codingBasics of channel coding
Basics of channel coding
 

More from Distinguished Lecturer Series - Leon The Mathematician (6)

Defying Nyquist in Analog to Digital Conversion
Defying Nyquist in Analog to Digital ConversionDefying Nyquist in Analog to Digital Conversion
Defying Nyquist in Analog to Digital Conversion
 
Farewell to Disks: Efficient Processing of Obstinate Data
Farewell to Disks: Efficient Processing of Obstinate DataFarewell to Disks: Efficient Processing of Obstinate Data
Farewell to Disks: Efficient Processing of Obstinate Data
 
Artificial Intelligence and Human Thinking
Artificial Intelligence and Human ThinkingArtificial Intelligence and Human Thinking
Artificial Intelligence and Human Thinking
 
Descriptive Granularity - Building Foundations of Data Mining
Descriptive Granularity - Building Foundations of Data MiningDescriptive Granularity - Building Foundations of Data Mining
Descriptive Granularity - Building Foundations of Data Mining
 
Success Factors in Industry - Academia Collaboration - An Empirical Study
 Success Factors in Industry - Academia Collaboration - An Empirical Study   Success Factors in Industry - Academia Collaboration - An Empirical Study
Success Factors in Industry - Academia Collaboration - An Empirical Study
 
Compressed Sensing In Spectral Imaging
Compressed Sensing In Spectral Imaging  Compressed Sensing In Spectral Imaging
Compressed Sensing In Spectral Imaging
 

Recently uploaded

Concurrency Control in Database Management system
Concurrency Control in Database Management systemConcurrency Control in Database Management system
Concurrency Control in Database Management systemChristalin Nelson
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptxmary850239
 
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...JojoEDelaCruz
 
Measures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataMeasures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataBabyAnnMotar
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfPatidar M
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Seán Kennedy
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
The Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsThe Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsRommel Regala
 
Textual Evidence in Reading and Writing of SHS
Textual Evidence in Reading and Writing of SHSTextual Evidence in Reading and Writing of SHS
Textual Evidence in Reading and Writing of SHSMae Pangan
 
Presentation Activity 2. Unit 3 transv.pptx
Presentation Activity 2. Unit 3 transv.pptxPresentation Activity 2. Unit 3 transv.pptx
Presentation Activity 2. Unit 3 transv.pptxRosabel UA
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfErwinPantujan2
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptxmary850239
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSJoshuaGantuangco2
 
Activity 2-unit 2-update 2024. English translation
Activity 2-unit 2-update 2024. English translationActivity 2-unit 2-update 2024. English translation
Activity 2-unit 2-update 2024. English translationRosabel UA
 
EMBODO Lesson Plan Grade 9 Law of Sines.docx
EMBODO Lesson Plan Grade 9 Law of Sines.docxEMBODO Lesson Plan Grade 9 Law of Sines.docx
EMBODO Lesson Plan Grade 9 Law of Sines.docxElton John Embodo
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 

Recently uploaded (20)

Concurrency Control in Database Management system
Concurrency Control in Database Management systemConcurrency Control in Database Management system
Concurrency Control in Database Management system
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx
 
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
 
Measures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataMeasures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped data
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdf
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
The Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsThe Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World Politics
 
Textual Evidence in Reading and Writing of SHS
Textual Evidence in Reading and Writing of SHSTextual Evidence in Reading and Writing of SHS
Textual Evidence in Reading and Writing of SHS
 
Presentation Activity 2. Unit 3 transv.pptx
Presentation Activity 2. Unit 3 transv.pptxPresentation Activity 2. Unit 3 transv.pptx
Presentation Activity 2. Unit 3 transv.pptx
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
 
Activity 2-unit 2-update 2024. English translation
Activity 2-unit 2-update 2024. English translationActivity 2-unit 2-update 2024. English translation
Activity 2-unit 2-update 2024. English translation
 
Paradigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTAParadigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTA
 
EMBODO Lesson Plan Grade 9 Law of Sines.docx
EMBODO Lesson Plan Grade 9 Law of Sines.docxEMBODO Lesson Plan Grade 9 Law of Sines.docx
EMBODO Lesson Plan Grade 9 Law of Sines.docx
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 

Nonlinear Communications: Achievable Rates, Estimation, and Decoding

  • 1. Nonlinear communications: achievable rates, encryption, estimation and decoding. N. Kalouptsidis, Dept. of Informatics & Telecommunications, University of Athens. Second Greek Signal Processing Jam. Coworkers: B. Babadi, A. Katsiotis, N. Kolokotronis, G. Mileounis, I. Sason, V. Tarokh, K. Xenoulis. (N. Kalouptsidis, SP JAM 2012, 1 / 46)
  • 2. Outline: Encryption encoding and secrecy codes; Channel encoding; Channel modelling and achievable rates; Channel estimation and symbol detection.
  • 3. Requirements: MIMO and nonlinearities; time-varying channel; reliability; data integrity and confidentiality; complexity.
  • 4. Requirements and the corresponding approach: MIMO and nonlinearities → sieve structures and finite memory; time-varying channel → adaptive methods; reliability → capacity-approaching codes; data integrity and confidentiality → secrecy codes, encryption; complexity → simplifications (EM, relaxation, sparse models and sparsity-aware schemes).
  • 5. Secrecy codes. [Diagram: the message m is encrypted with key k, encoded to v, sent over the channel, then decoded and decrypted to m̂; an eavesdropper observes the channel output ỹ.] Symmetric key cryptography: a common secret key is shared by encoder/decoder. Public key cryptography: each user has a public key and a private key; the sender encrypts with the public key of the receiver, and the receiver decrypts with its own private key.
  • 6. Information theoretic secrecy. Symmetric key cryptography: a (2^{nR}, 2^{nR_k}, n) randomized encoder generates codewords v(m, k) ~ P(v|m, k) for each message-key pair (m, k) ∈ [1 : 2^{nR}] × [1 : 2^{nR_k}]. The decoder assigns a message m̂(y, k) to each received vector y and key k; the decoding rule is joint input-output typicality. Performance characteristics: probability of error for the secrecy code P_e^n = P[m̂(y, k) ≠ m]; information leakage rate R_l^n = (1/n) I(M; Y).
  • 7. Information theoretic secrecy. A rate R is achievable at key rate R_k if there is a sequence of secrecy codes with P_e^n → 0 and R_l^n → 0. The secrecy capacity C(R_k) of the DMC channel is the supremum of achievable rates at key rate R_k. Theorem: C(R_k) = min{ R_k, max_{P(v)} I(V; Y) }. Secure communication is limited by the key rate until saturated by the channel capacity.
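As a quick numeric illustration of C(R_k) = min{ R_k, max_{P(v)} I(V; Y) }, the sketch below assumes the channel is a binary symmetric channel with crossover p (an assumption for illustration, not from the slides), so the capacity term is 1 − h_2(p) bits per use:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def secrecy_capacity(Rk, p):
    """C(Rk) = min{Rk, max_P(v) I(V;Y)} for an assumed BSC(p),
    where the channel-capacity term is 1 - h2(p) bits per use."""
    return min(Rk, 1.0 - h2(p))

# Key rate below channel capacity: secrecy is limited by the key rate.
print(secrecy_capacity(0.2, 0.1))
# Large key rate: saturates at the channel capacity 1 - h2(0.1).
print(round(secrecy_capacity(2.0, 0.1), 3))
```

The two prints show exactly the two regimes named on the slide: key-limited for small R_k, capacity-limited once R_k exceeds 1 − h_2(p).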
  • 8. The wiretap channel. [Diagram: the encoder maps m to v; the channel P(y, ỹ|v) delivers y to the decoder, which outputs m̂, and ỹ to the eavesdropper.] Information leakage rate: R_l^n = (1/n) I(M; Ỹ). If the channel to the eavesdropper is a physically degraded version of the channel to the receiver, P(y, ỹ|v) = P(y|v) P(ỹ|y), then the secrecy capacity is C_s = max_{P(v)} ( I(V; Y) − I(V; Ỹ) ).
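For a concrete instance of C_s = max_{P(v)} ( I(V; Y) − I(V; Ỹ) ), consider a degraded binary symmetric wiretap channel (an assumed example, not from the slides): main channel BSC(p_main), eavesdropper BSC(p_eve) with p_eve > p_main. A uniform input maximizes both mutual informations, giving C_s = h_2(p_eve) − h_2(p_main):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_wiretap_secrecy_capacity(p_main, p_eve):
    """Secrecy capacity of a degraded binary symmetric wiretap channel
    (assumed example): Cs = h2(p_eve) - h2(p_main), p_eve > p_main."""
    return h2(p_eve) - h2(p_main)

# A noisier eavesdropper channel yields a strictly positive secrecy rate.
print(round(bsc_wiretap_secrecy_capacity(0.05, 0.2), 3))
```

When the eavesdropper sees exactly what the receiver sees (p_eve = p_main), the secrecy capacity collapses to zero, matching the intuition behind the formula.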
  • 9. [Block diagram: the message m passes through an encryption encoder (key k; symmetric or public key cryptography) producing u, then through a channel encoder producing v; at the receiver, a channel decoder produces û and a decryption decoder recovers m̂, while the eavesdropper observes ỹ.]
  • 10. Encryption encoder: one-way function u = E(m, k). Channel encoder: (typically linear) adds redundancy to combat the channel noise. Channel decoder: û = argmax_u P(u|y, k). Decryption decoder: m̂ = E^{-1}(û, k). Both must be computationally tractable. For the eavesdropper (symmetric key), max_k P(k|ỹ) and max_m P(m|ỹ) must be computationally hard.
  • 11. Public key cryptography. Encryption encoder: u = G(m, k_pub). Decryption decoder: m̂ = G^{-1}(û, k_sec). Eavesdropper: max_{k_sec} P(k_sec|ỹ, k_pub) and max_m P(m|ỹ, k_pub).
  • 12. McEliece public key encryption scheme. Key generation: with k, n, t fixed common integers, choose a k × n matrix G which can correct t errors and for which an efficient decoding algorithm is known (RS codes + BM decoding, LDPC + sum product). Draw a k × k non-singular S and an n × n permutation P. Compute Ĝ = SGP. Public key: (Ĝ, t). Private key: (S, G, P).
  • 13. Encryption: use the key of the intended recipient, k = (Ĝ, t); represent the message m as a binary vector of length k; draw a random binary vector z of weight t; compute u = E(m, k) = mĜ + z. Decryption: compute uP^{-1} = mSG + zP^{-1}; note w(zP^{-1}) = w(z) because P is a permutation. Use the decoding algorithm to determine mS, then compute mSS^{-1} = m.
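The key-generation, encryption, and decryption steps above can be sketched end to end. This is a toy illustration only, far too small to be secure: the underlying code is assumed to be a Hamming(7,4) correcting t = 1 error, and the seed and matrices are illustrative, but it exercises every step on the slide: Ĝ = SGP, u = mĜ + z, uP^{-1}, syndrome decoding, and mSS^{-1} = m.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy McEliece sketch (NOT secure): systematic Hamming(7,4), t = 1.
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])  # parity part
G = np.hstack([np.eye(4, dtype=int), A])                     # generator, G H^T = 0
H = np.hstack([A.T, np.eye(3, dtype=int)])                   # parity-check matrix
t = 1

def gf2_inv(M):
    """Invert a square binary matrix over GF(2) by Gaussian elimination."""
    n = M.shape[0]
    aug = np.hstack([M % 2, np.eye(n, dtype=int)])
    for col in range(n):
        piv = col + int(np.argmax(aug[col:, col]))
        if aug[piv, col] == 0:
            raise ValueError("singular")
        aug[[col, piv]] = aug[[piv, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]          # row reduction = XOR over GF(2)
    return aug[:, n:]

# Key generation: Ghat = S G P; public key (Ghat, t), private key (S, G, P).
while True:
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except ValueError:
        pass                                # redraw until S is non-singular
P = np.eye(7, dtype=int)[rng.permutation(7)]
Ghat = S @ G @ P % 2

def encrypt(m):
    z = np.zeros(7, dtype=int)
    z[rng.integers(7)] = 1                  # random error of weight t = 1
    return (m @ Ghat + z) % 2

def decrypt(u):
    r = u @ P.T % 2                         # u P^{-1} (permutation inverse = transpose)
    s = r @ H.T % 2                         # syndrome of the single error
    if s.any():                             # flip the bit whose H-column matches s
        r[int(np.argmax((H.T == s).all(axis=1)))] ^= 1
    mS = r[:4]                              # systematic code: message part of codeword
    return mS @ S_inv % 2                   # m S S^{-1} = m

m = np.array([1, 0, 1, 1])
assert np.array_equal(decrypt(encrypt(m)), m)
print(decrypt(encrypt(m)))
```

The permutation keeps the error weight at t, so the private decoder always sees a correctable word; the attacker only sees the scrambled Ĝ.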
  • 14. Fast decoding of regular LDPC codes. An (n, k) linear block code with parity-check matrix H has as codewords the n-dimensional binary vectors v with vH^T = 0. Syndrome: s = rH^T, with r the received vector; the error e = r − v satisfies s = eH^T. Number of errors: ℓ = ||e||_0 << n. Minimum distance decoding: min ||e||_0 subject to s = eH^T, relaxed to min ||e||_1 subject to s = eH^T. “Kalouptsidis, Kolokotronis. Fast decoding of regular LDPC codes using greedy approximation algorithms.”
  • 15. The error determines an ℓ-sparse representation of the syndrome vector in the dictionary generated by the columns of H. The proposed algorithm is motivated by Matching Pursuit but operates mostly over finite fields. Basic idea: select the columns of H most correlated with the residual, s_v = ⊕_{i=1}^{v} h_{λ_i} ∈ F_2^m. Performance guarantees for regular (γ, ρ) LDPC codes: H is sparse, each column contains γ ones, each row contains ρ ones; the minimum distance of the code satisfies d_min ≥ γ + 1, and the code can correct ≤ ⌊γ/2⌋ errors.
  • 16. Greedy decoding algorithm.
      input: parity-check matrix H, received word r, maximum number ν of iterations
      initialization: Λ = ∅, i = 0
      1. s = rH^T mod 2 (syndrome)
      2. while (s ≠ 0) ∧ (i < ν)
      3.   λ ∈ argmax{ ⟨s, h_ω⟩ : ω ∉ Λ } (ties broken randomly)
      4.   s = s ⊕ h_λ
      5.   Λ = Λ ∪ {λ}
      6.   i = i + 1
      7. end
      output: residual s, error locations Λ
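The listing above translates almost line by line into code. A minimal sketch (the small (3, 2)-regular parity-check matrix in the demo is an assumed example, not from the talk; ties are broken by first index rather than randomly, which is one valid choice of the rule):

```python
import numpy as np

def greedy_syndrome_decode(H, r, max_iter=50):
    """Greedy (matching-pursuit-style) syndrome decoder over F2:
    repeatedly XOR into the residual the column of H most correlated
    with it. Returns (residual syndrome, sorted error locations)."""
    s = r @ H.T % 2                       # step 1: syndrome s = r H^T mod 2
    chosen = set()
    i = 0
    while s.any() and i < max_iter:       # step 2: while s != 0 and i < nu
        corr = (H.T * s).sum(axis=1)      # <s, h_w> over the integers, per column
        if chosen:
            corr[list(chosen)] = -1       # restrict to w not already in Lambda
        lam = int(np.argmax(corr))        # step 3 (ties: first index)
        s = (s + H.T[lam]) % 2            # step 4: s = s XOR h_lambda
        chosen.add(lam)                   # step 5
        i += 1                            # step 6
    return s, sorted(chosen)

# Assumed (gamma=3, rho=2)-regular parity-check matrix: every column has 3 ones,
# every row has 2 ones, and any two columns overlap in at most one position.
H = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])

# Zero codeword plus a single error at position 2.
res, locs = greedy_syndrome_decode(H, np.array([0, 0, 1, 0]))
print(res.any(), locs)
```

With γ = 3 the guarantee of the next slide applies: any single error (ℓ ≤ γ/2) is located exactly, and the residual syndrome empties.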
  • 17. Theorem. Let C be a (γ, ρ)-regular LDPC (n, k) code. The proposed algorithm is capable of correcting all error patterns e satisfying ℓ ≤ γ/2, where ℓ = ||e||_1.
  • 18. [Figure-only slide; no text content.]
  • 19. NL AWGN: a Multi-Input Multi-Output nonlinear channel with additive white Gaussian noise, y(t) = D[v](t) + ξ(t), where ξ(t) is i.i.d. ~ N(0, Q), Q > 0. If the channel operator D is shift invariant, causal, and BIBO stable with fading memory, then D can be approximated by a finite memory architecture.
  • 20. NL AWGN: canonical finite memory form. [Diagram: each input v_1(t), ..., v_T(t) feeds a shift register; the register contents enter static nonlinear functions h_i that produce the outputs y_i(t).]
  • 21. Modeling options: nonparametric, semi-nonparametric, parametric. Focus of this presentation: parametric forms, where each function h_i(·) is a polynomial in several variables. More generally, h_i(·) is approximated by a member of a sieve family. Examples of linear sieves: tensor products of Fourier series, splines, wavelets; consequence: models linear in the parameters. Examples of nonlinear sieves: neural networks, radial basis functions.
  • 22. Polynomial AWGN. Multi-index i = (i_1, ..., i_ℓ), i_r ∈ N, and x^i = x_{i_1} x_{i_2} ··· x_{i_k}. Then h(x_{i_1}, x_{i_2}, ..., x_{i_k}) = Σ_{i∈I} h_i x^i. Each output: y_j(t) = Σ_{ℓ=1}^{L} Σ_{k∈K} Σ_{i∈I} h_i^{(j,k)} v_{k_1}(t − i_1) v_{k_2}(t − i_2) ··· v_{k_ℓ}(t − i_ℓ). Examples: linear systems, y_j(t) = Σ_{k=1}^{T} Σ_{i=0}^{q} h_i^{(j,k)} v_k(t − i); quadratic, y_j(t) = Σ_{k=1}^{T} Σ_{i=0}^{q} h_i^{(j,k)} v_k(t − i) + Σ_{k_1=1}^{T} Σ_{k_2=1}^{T} Σ_{i_1=0}^{q} Σ_{i_2=0}^{q} h_{i_1 i_2}^{(j,k_1,k_2)} v_{k_1}(t − i_1) v_{k_2}(t − i_2). Sparsity: most of the coefficients h_i^{(j,k)} in each h_j(·) are zero.
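A minimal sketch of such a parametric model: a single-input quadratic (Volterra-type) channel with memory q = 2 and a sparse kernel. All coefficient values here are assumed toy numbers chosen for illustration:

```python
import numpy as np

# Assumed toy kernels for a SISO quadratic channel with memory q = 2:
# y(t) = sum_i h1[i] v(t-i) + sum_{i1,i2} h2[i1,i2] v(t-i1) v(t-i2) + noise.
q = 2
h1 = np.array([1.0, 0.5, 0.0])           # linear kernel
h2 = np.zeros((q + 1, q + 1))
h2[0, 1] = 0.2                           # the only nonzero quadratic term (sparsity)

def channel_output(v, sigma=0.0, rng=None):
    """Evaluate the quadratic channel on input sequence v (zero initial state)."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = np.zeros(len(v))
    for t in range(len(v)):
        # regressor x = [v(t), v(t-1), ..., v(t-q)] with zero-padding
        x = np.array([v[t - i] if t - i >= 0 else 0.0 for i in range(q + 1)])
        y[t] = h1 @ x + x @ h2 @ x
        if sigma:
            y[t] += sigma * rng.standard_normal()
    return y

v = np.array([1.0, -1.0, 1.0, 1.0])
print(channel_output(v))
```

Because only one quadratic coefficient is nonzero, the model is linear in a short parameter vector, which is exactly what the sparsity-aware estimation schemes mentioned on slide 4 exploit.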
  • 23. Achievable rates for SISO polynomial channels. SISO polynomial channel: y_t = D[v]_t + ξ_t, with D[v]_t = h_0 + Σ_{j=1}^{L} Σ_{i_1=0}^{q} ··· Σ_{i_j=0}^{q} h_j(i_1, ..., i_j) v_{t−i_1} ··· v_{t−i_j}, and ξ_t ~ N(0, σ^2). Let v_m denote the transmitted codeword and y the received vector. A maximum likelihood error occurs if P(y|v_{m′}) ≥ P(y|v_m) for some m′ ≠ m.
  • 24. Since the noise is Gaussian, this is equivalent to ||Dv_m − Dv_{m′} + ξ||_2^2 ≤ ||ξ||_2^2. Chernoff's bound and Gallager's upper bound imply, for a specific (N, R) code C, P_e(m|C) ≤ Σ_{m′≠m} exp( −ρ ||Dv_m − Dv_{m′}||_2^2 / (8σ^2) ), 0 ≤ ρ ≤ 1. Using a random coding argument, the average error probability over an ensemble of codes C obeys P̄_e ≤ e^{N(R − ρ D_v(Q)/(4σ^2))} E[ e^{(ρ/(8σ^2)) Σ_{i=1}^{N} Z_i} ], 0 ≤ ρ ≤ 1.
  • 25. Let F_t be the minimal σ-algebra generated by V_1, Ṽ_1, ..., V_t, Ṽ_t (with F_0 = {∅, Ω}). Then (Z_t, F_t) is a martingale difference sequence: Z_t ≜ −E[ Σ_{j=t}^{t+q} w_j | F_t ] + E[ Σ_{j=t}^{t+q} w_j | F_{t−1} ], where w_j ≜ ||[Dv]_j − [Dṽ]_j||^2. Output covariance: D_v(Q) ≜ (1/n) Σ_{j=1}^{n} ( E[ ([Dv]_j)^2 ] − ( E[Dv]_j )^2 ). Martingales enable the development of several concentration inequalities; for instance, Bennett's inequality gives E[ exp( (ρ/(8σ^2)) Σ_{i=1}^{N} Z_i ) ] ≤ ( ( γ_2 e^{ρd/(8σ^2)} + e^{−γ_2 ρd/(8σ^2)} ) / (1 + γ_2) )^N.
  • 26. Suppose max_{v_i, ṽ_i} |Z_i| ≤ d and max E[ Z_i^2 | F_{i−1} ] ≤ μ^2, and set γ_2 ≜ μ^2/d^2. Then the bound on the average error probability becomes P̄_e ≤ exp( −N [ R_2(σ^2) − R ] ), where R_2(σ^2) = max_Q of a two-case expression: D_KL( γ_2/(1+γ_2) + 2D_v(Q)/(d(1+γ_2)) ∥ γ_2/(1+γ_2) ) when 2D_v(Q)/d < 1, and D_v(Q)/(4σ^2) − ln( ( γ_2 e^{d/(8σ^2)} + e^{−γ_2 d/(8σ^2)} ) / (1+γ_2) ) otherwise. Here D_KL is the binary Kullback-Leibler divergence, D_KL(p||q) = p log p/q + (1 − p) log (1 − p)/(1 − q). “Xenoulis, Kalouptsidis, Sason. New achievable rates for nonlinear Volterra channels via martingale inequalities.”
• 27. Example: discrete memoryless binary-input AWGN channel (input $u \in \{-A, A\}$ with $Q(u = A) = \alpha$, $\mathrm{SNR} = A^2/\sigma^2$):
$$R_2(\mathrm{SNR}) = \ln 2 - \ln\big(1 + e^{-\frac{\mathrm{SNR}}{2}}\big) \quad \text{nats per channel use.}$$
[Figure: achievable rates in nats per channel use versus SNR (1 to 10); $R_2(\mathrm{SNR})$ plotted against the channel capacity.]
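The closed-form rate above can be checked numerically; a quick sketch (not from the slides) evaluating $R_2$ over the SNR range of the plot. It increases with SNR and saturates at $\ln 2$, the entropy of the binary input.

```python
import math

def R2(snr):
    # R2(SNR) = ln 2 - ln(1 + exp(-SNR/2)), in nats per channel use
    return math.log(2) - math.log(1 + math.exp(-snr / 2))

rates = [R2(s) for s in range(1, 11)]   # SNR = 1 ... 10, as in the plot
```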
• 28. Sparse joint channel state and parameter estimation:
Joint state and parameter estimation
Blind estimation via EM and smoothing algorithms
• 29. Joint detection and estimation: alternating state estimation and training-based parameter estimation. Channel parameter: $\theta = [h, Q]$. Channel input-output form:
$$y_t = h(x_t) + \xi_t, \qquad x_t = [v_t, \ldots, v_{t-q}].$$
Maximum likelihood:
$$\max_{v \in \mathcal{C},\, \theta} \log P(y_{1:n}|x_{1:n}; \theta) = \max_{\theta}\, \max_{v \in \mathcal{C}} \log P(y_{1:n}|x_{1:n}; \theta) = \max_{\theta}\, \max_{x_{1:n} \in S} \log P(y_{1:n}|x_{1:n}; \theta).$$
• 30. Stage 1: state estimation. Given the parameter estimate $\theta^{(i)} = [h^{(i)}, Q^{(i)}]$ at step $i$, the state estimate is
$$\hat{v} = \arg\max_{x_{1:n} \in S} \log P(y_{1:n}|x_{1:n}; \theta^{(i)}).$$
Convolutional codes of memory $< q$ lead to a hidden Markov process (HMP) $(y_{1:n}, x_{1:n})$. The HMP framework implies
$$\hat{x}_{1:n} = \arg\max_{x_{1:n} \in S} \sum_{t=1}^{n} \log P(y_t|x_t).$$
The optimization can be carried out by dynamic programming and the Viterbi algorithm. Several relaxations over the real numbers are available for special cases (decoding by linear programming, semidefinite relaxation).
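The Viterbi maximization above can be sketched as follows. This is a minimal illustration, not the slides' implementation: the trellis structure `trans` and the per-state log-likelihoods `loglik` are hypothetical placeholders for the code trellis and $\log P(y_t|x_t)$.

```python
import numpy as np

def viterbi(loglik, trans):
    """loglik: T x M array of log P(y_t | x_t = j);
    trans[j]: list of states allowed to precede state j (the code trellis)."""
    T, M = loglik.shape
    score = loglik[0].copy()                # best path metric per state
    back = np.zeros((T, M), dtype=int)      # backpointers
    for t in range(1, T):
        new = np.full(M, -np.inf)
        for j in range(M):
            preds = trans[j]
            k = preds[int(np.argmax(score[preds]))]  # best predecessor of j
            new[j] = score[k] + loglik[t, j]
            back[t, j] = k
        score = new
    path = [int(np.argmax(score))]          # trace back the maximizing sequence
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# toy example: 2 states, fully connected trellis
llr = np.log(np.array([[0.9, 0.1], [0.1, 0.9], [0.9, 0.1]]))
path = viterbi(llr, {0: [0, 1], 1: [0, 1]})
```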
• 31. Example: binary-input memoryless channel. Up to a constant,
$$\sum_{t=1}^{n} \log P(y_t|x_t) = \sum_{t} v_t\, \gamma_t, \qquad \gamma_t = \log\frac{P(y_t|1)}{P(y_t|0)}.$$
Relax the constraints by replacing the convex hull of the codebook with the intersection of the convex hulls of the parity-check equations. The problem is converted to a linear program; decoding by the interior-point algorithm.
• 32. Stage 2: parameter estimation. Given the transmitted message estimate, the decoder updates the channel parameters by the rule
$$\theta^{(i+1)} = \arg\max_{\theta} P(y_{1:n}|\hat{x}_{1:n}^{(i+1)}; \theta).$$
• 33. The solutions are given as follows:
1. Table look-up:
$$\hat{h}(x) = \frac{\sum_{t=1}^{n} y_t\, \delta(x_t, x)}{\sum_{t=1}^{n} \delta(x_t, x)}, \qquad \hat{Q} = \frac{1}{n}\sum_{t=1}^{n}\big(y_t - \hat{h}(x_t)\big)\big(y_t - \hat{h}(x_t)\big)^H.$$
2. $h$ linear:
$$\Big(\sum_{t=1}^{n} x_t x_t^H\Big)\hat{h} = \sum_{t=1}^{n} y_t\, x_t.$$
3. $h$ polynomial:
$$\Big(\sum_{t=1}^{n} \varphi(x_t)\varphi(x_t)^H\Big)\hat{h} = \sum_{t=1}^{n} y_t\, \varphi(x_t),$$
where $\varphi(x_t) = [x_t, \otimes^2 x_t, \ldots, \otimes^L x_t]$ and $x_t = [v_1(t), \ldots, v_1(t-q), \cdots, v_T(t), \ldots, v_T(t-q)]$.
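For the linear case, the normal equations above amount to a least-squares fit of the channel taps from training data. A minimal sketch with synthetic data; the memory, tap values and noise level are illustrative choices, not the slides':

```python
import numpy as np

rng = np.random.default_rng(1)
q = 4                                    # channel memory
h_true = np.array([1.0, 0.5, -0.3, 0.2, 0.1])
v = rng.choice([-1.0, 1.0], size=200)    # known training symbols
# build regressors x_t = [v_t, ..., v_{t-q}]
X = np.array([v[t - q:t + 1][::-1] for t in range(q, len(v))])
y = X @ h_true + 0.01 * rng.standard_normal(len(X))
# solve the normal equations (sum x_t x_t^H) h = sum y_t x_t
h_hat = np.linalg.solve(X.T @ X, X.T @ y)
```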
• 34. Adding a sparsity term to the likelihood,
$$\theta^{(i+1)} = \arg\max_{\theta}\, \big\{\log P(y_{1:n}|x_{1:n}; \theta) - \gamma \|h\|_1\big\},$$
a convex program results that can be solved by compressed sensing (CS) methods.
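For Gaussian noise, the penalized problem reduces to an $\ell_1$-regularized least-squares (LASSO) program. A minimal iterative soft-thresholding (ISTA) sketch, one standard CS solver; the data setup and penalty are illustrative, not the slides' choices:

```python
import numpy as np

def ista(X, y, gamma, n_iter=500):
    # minimize 0.5*||y - X h||^2 + gamma*||h||_1 by iterative soft thresholding
    step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1 / Lipschitz constant
    h = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = h + step * (X.T @ (y - X @ h))           # gradient step
        h = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)  # shrink
    return h

rng = np.random.default_rng(2)
h_true = np.zeros(20)
h_true[[3, 12]] = [1.0, -0.7]                        # 2-sparse channel
X = rng.standard_normal((100, 20))
y = X @ h_true + 0.01 * rng.standard_normal(100)
h_hat = ista(X, y, gamma=0.5)
```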
• 35. Greedy algorithms: CoSaMP/SP. The main ingredients of the CoSaMP/SP algorithms are:
1. locate the largest components of the proxy
2. form the union of two sets of indices
3. estimate via least squares (LS) on the merged set
4. prune the LS estimates to the $s$ largest components
5. update the error residual
The proposed algorithm modifies the identification, estimation and error-residual steps in order to: sequentially track system variations; reduce the computational complexity while maintaining the superior performance of CoSaMP/SP.
• 36. The SpAdOMP algorithm. Initialize $h(0) = 0$, $w(0) = 0$, $p(0) = 0$, $r(0) = y(0)$, with $0 < \lambda \le 1$ and $0 < \mu < 2\lambda_{\max}^{-1}$. For $n = 1, 2, \ldots$ do
1. $p(n) = \lambda p(n-1) + v^*(n-1)\, r(n-1)$
2. $\Omega = \mathrm{supp}(p_{2s}(n))$
3. $\Lambda = \Omega \cup \mathrm{supp}(h(n-1))$
4. $\varepsilon(n) = y(n) - v_{|\Lambda}^T(n)\, w_{|\Lambda}(n-1)$
5. $w_{|\Lambda}(n) = w_{|\Lambda}(n-1) + \mu\, v_{|\Lambda}^*(n)\, \varepsilon(n)$
6. $\Lambda_s = $ indices of the $s$ largest $|w_{|\Lambda}(n)|$
7. $h_{|\Lambda_s}(n) = w_{|\Lambda_s}(n)$, $h_{|\Lambda_s^c}(n) = 0$
8. $r(n) = y(n) - v^T(n)\, h(n)$
The per-iteration complexity is $O(q)$.
"Mileounis, Babadi, Kalouptsidis, Tarokh. An Adaptive Greedy Algorithm With Application to Nonlinear Communications."
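The eight steps above can be sketched in code. This is a simplified real-valued rendition on a synthetic sparse linear channel; the dimensions, tap values and step sizes are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
L, s, N = 16, 3, 3000
h_true = np.zeros(L)
h_true[[2, 7, 11]] = [1.0, -0.8, 0.6]              # s-sparse true channel
u = rng.choice([-1.0, 1.0], size=N + L)            # BPSK input stream
lam, mu = 0.98, 0.1                                # forgetting factor, LMS step
h, w, p = np.zeros(L), np.zeros(L), np.zeros(L)
v_prev, r_prev = np.zeros(L), 0.0
for n in range(N):
    v = u[n:n + L][::-1]                           # regressor [v_n, ..., v_{n-L+1}]
    y = v @ h_true + 0.01 * rng.standard_normal()  # channel output
    p = lam * p + v_prev * r_prev                  # 1: proxy update
    Omega = np.argsort(np.abs(p))[-2 * s:]         # 2: 2s largest proxy entries
    Lam = np.union1d(Omega, np.flatnonzero(h))     # 3: merge supports
    eps = y - v[Lam] @ w[Lam]                      # 4: error on merged support
    w[Lam] += mu * v[Lam] * eps                    # 5: single LMS iteration
    keep = Lam[np.argsort(np.abs(w[Lam]))[-s:]]    # 6: prune to s largest
    h[:] = 0.0
    h[keep] = w[keep]                              # 7: s-sparse estimate
    r = y - v @ h                                  # 8: residual update
    v_prev, r_prev = v, r
```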
• 37. Steady-state MSE of SpAdOMP.
Theorem. The SpAdOMP algorithm produces an $s$-sparse approximation $h(n)$ that satisfies the steady-state bound
$$\|h - h(n)\|_2 \le C_1(n)\, \|\xi(n)\|_2 + C_2(n)\, \|v_{|\Lambda}(n)\|_2\, |e_o(n)|,$$
where $e_o(n)$ is the estimation error of the optimum Wiener filter and $C_1(n)$, $C_2(n)$ are constants independent of $h$.
The first term is analogous to the steady-state error of the CoSaMP/SP algorithms; the second term is induced by performing a single LMS iteration (instead of using the LS estimate).
• 38. Simulations on sparse ARMA channels. ARMA channel:
$$y_n = a_1 y_{n-6} + a_2 y_{n-48} + v_n + b_1 v_{n-13} + b_2 v_{n-34} + \xi_n,$$
with 500 samples drawn from $\mathcal{CN}(0, 1/5)$.
[Figure: (a) NMSE learning curves at SNR = 23 dB and (b) time evolution of the estimate of $a_1$, for LMS, LOG-LMS and SpAdOMP.]
$$\mathrm{NMSE} = 10\log_{10}\big(\mathbb{E}\{\|h(n) - h\|_2^2\}/\mathbb{E}\{\|h\|_2^2\}\big)$$
SpAdOMP converges very fast and achieves an average gain of nearly 19 dB.
• 39. Blind estimation via EM and smoothing algorithms. The transmitted sequence is unknown, so direct likelihood maximization is intractable. The Expectation-Maximization (EM) method and the underlying iterative algorithm provide an option that inherently addresses symbol detection. The augmented likelihood function is formed by the state and received sequences. Given $\theta'$, the expectation step forms
$$Q(\theta, \theta') = \mathbb{E}_{\theta'}\{\log P(x_{1:n}, y_{1:n}; \theta)\,|\,y_{1:n}\},$$
the expectation taken over the state sequence given the received sequence.
• 40. Marginal output likelihood: $L_n(\theta) = \log P(y_{1:n}; \theta)$. Jensen's inequality implies
$$L_n(\theta) - L_n(\theta') \ge Q(\theta, \theta') - Q(\theta', \theta').$$
This suggests the second step: let $\theta^{(i)}$ denote the estimate at step $i$; then
$$\theta^{(i+1)} = \arg\max_{\theta} Q(\theta, \theta^{(i)}) \quad \text{and} \quad L_n(\theta^{(i+1)}) \ge L_n(\theta^{(i)}).$$
For Gaussian noise $P(y_t|x_t)$ is log-concave, the maximizer of $Q$ is unique, and the sequence $\theta^{(i)}$ converges to a stationary point of the likelihood.
• 41. The EM method leads to the estimates
$$\hat{h}^{(i+1)}(x) = \frac{\sum_{t=1}^{n} P(x_t = x; \theta^{(i)}|y_{1:n})\, y_t}{\sum_{t=1}^{n} P(x_t = x; \theta^{(i)}|y_{1:n})}$$
$$\hat{Q}^{(i+1)} = \frac{1}{n}\sum_{t=1}^{n}\sum_{l=1}^{|M|^{q+1}} P(x_t = x_l; \theta^{(i)}|y_{1:n})\big(y_t - \hat{h}(x_l)\big)\big(y_t - \hat{h}(x_l)\big)^H.$$
The smoothing probabilities $P(x_t|y_{1:n})$ are determined by the forward-backward recursions (Chang and Hancock):
$$P(x_t, y_{1:n}) = \alpha(x_t, y_{1:t})\, \beta(y_{t+1:n}|x_t)$$
$$\alpha(x_t, y_{1:t}) = b(y_t|x_t) \sum_{x_{t-1}=1}^{M} \alpha(x_{t-1}, y_{1:t-1})\, \alpha_{x_{t-1} x_t}$$
$$\beta(y_{t+1:n}|x_t) = \sum_{x_{t+1}=1}^{M} \alpha_{x_t x_{t+1}}\, \beta(y_{t+2:n}|x_{t+1})\, b(y_{t+1}|x_{t+1})$$
• 42. Normalized stable versions (for instance Lindgren):
$$\alpha(x_t|y_{1:t}) = \frac{P(x_t|y_{1:t-1})\, b(y_t|x_t)}{\sum_{x_t=1}^{M} P(x_t|y_{1:t-1})\, b(y_t|x_t)}$$
$$P(x_t|y_{1:t-1}) = \sum_{x_{t-1}=1}^{M} \alpha_{x_{t-1} x_t}\, \alpha(x_{t-1}|y_{1:t-1})$$
$$P(x_t|y_{1:n}) = \alpha(x_t|y_{1:t}) \sum_{x_{t+1}=1}^{M} \frac{\alpha_{x_t x_{t+1}}\, P(x_{t+1}|y_{1:n})}{P(x_{t+1}|y_{1:t})}$$
Sparsity can be incorporated in the maximization step.
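The normalized recursions above can be sketched directly. This is a generic illustration, not the slides' implementation: the transition matrix `A`, emission likelihoods `B` and initial distribution `pi` are hypothetical example values.

```python
import numpy as np

def smooth(A, B, pi):
    """Normalized forward-backward smoother.
    A: M x M transition matrix a_{ij}; B: T x M emission likelihoods b(y_t|x_t=j);
    pi: initial distribution. Returns the T x M smoothing probabilities P(x_t|y_1:n)."""
    T, M = B.shape
    filt = np.zeros((T, M))             # alpha(x_t | y_{1:t})
    pred = np.zeros((T, M))             # P(x_t | y_{1:t-1})
    pred[0] = pi
    for t in range(T):
        if t > 0:
            pred[t] = filt[t - 1] @ A   # one-step prediction
        w = pred[t] * B[t]
        filt[t] = w / w.sum()           # normalized forward recursion
    smth = np.zeros((T, M))
    smth[-1] = filt[-1]
    for t in range(T - 2, -1, -1):      # backward smoothing pass
        smth[t] = filt[t] * (A @ (smth[t + 1] / pred[t + 1]))
    return smth

A = np.array([[0.9, 0.1], [0.2, 0.8]])              # illustrative 2-state chain
B = np.array([[0.8, 0.3], [0.1, 0.9], [0.7, 0.4]])  # likelihoods for 3 observations
pi = np.array([0.5, 0.5])
gamma = smooth(A, B, pi)
```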
• 43. The sparse BW algorithm. For $\ell = 0, 1, \ldots$ do
1. $\{r^{(\ell)}, R^{(\ell)}\}$ := run forward/backward recursions (symbol detector)
2. $h_i^{(\ell+1)} = \dfrac{\mathrm{sgn}(r_i^{(\ell)})}{R_{i,i}^{(\ell)}}\Big[|r_i^{(\ell)}| - \dfrac{\gamma}{2}\Big]_+$ (channel estimator)
3. $\sigma^{2\,(\ell+1)} = \dfrac{1}{n}\sum_{t=1}^{n}\big\|y_t - x_t^{(\ell+1)T} h^{(\ell+1)}\big\|^2$ (noise variance estimator)
where
$$r^{(\ell)} = \sum_{i=1}^{n} y_i\, \mathbb{E}\{x_i^*|y_{1:n}; \hat{h}^{(\ell)}\}, \qquad R^{(\ell)} = \sum_{i=1}^{n} \mathbb{E}\{x_i x_i^H|y_{1:n}; \hat{h}^{(\ell)}\}.$$
"Mileounis, Kalouptsidis, Babadi, Tarokh. Blind identification of sparse channels and symbol detection via the EM algorithm."
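The channel-estimator update in step 2 is a coordinate-wise soft thresholding; a one-line sketch (the function name and test values are illustrative):

```python
import numpy as np

def soft_threshold_update(r, R_diag, gamma):
    # h_i = sgn(r_i) / R_ii * max(|r_i| - gamma/2, 0): l1-penalized channel update
    return np.sign(r) / R_diag * np.maximum(np.abs(r) - gamma / 2, 0.0)

h = soft_threshold_update(np.array([3.0, -0.2, 1.0]), np.array([2.0, 1.0, 1.0]), 1.0)
```

Entries whose correlation statistic $|r_i|$ falls below $\gamma/2$ are set exactly to zero, which is how the sparsity penalty enters the M-step.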
• 44. Adaptive channel coding based on flexible trellis (convolutional) codes. Popular adaptive coding schemes use variable-rate punctured convolutional codes; the IEEE 802.22 standard for cognitive WRAN uses a rate-1/2 convolutional code and a set of puncturing matrices that lead to rates 2/3, 3/4 and 5/6.
Flexible convolutional codes:
can vary both their rate and their decoding complexity, allowing efficient management of system resources;
are constructed by combining the techniques of path pruning and puncturing;
vary the decoding complexity by varying quantities associated with the complexity profile of the trellis diagram.
• 45. Flexible convolutional codes. Consider an $(n, 1, m)$ mother convolutional code. Let $u_t$ be the information bit and $\hat{u}_t$ the input bit of the mother encoder at time instant $t$. Every $T_{pr}$ time units the single input bit of the encoder is not an information bit; rather, it is computed as a linear combination of bits of the current state $S_t = \{\hat{u}_{t-1}, \ldots, \hat{u}_{t-m}\}$:
$$\hat{u}_t = \begin{cases} u_{t_1(T_{pr}-1)+t_2}, & \text{if } t_2 \ne 0 \\ \sum_{i=1}^{\hat{d}} c_i\, \hat{u}_{t_1 T_{pr} - i}, & \text{if } t_2 = 0 \end{cases}$$
where $t_1 = \lfloor t / T_{pr} \rfloor$, $t_2 = t \bmod T_{pr}$, $t = 1, 2, \ldots$, and $\hat{d}$ is the degree of the polynomial $c(X) = \sum_{i=1}^{m} c_i X^i$.
"Katsiotis, Rizomiliotis, Kalouptsidis. Flexible Convolutional Codes: Variable Rate and Complexity."
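The input rule above interleaves information bits with state-derived parity bits. A small sketch of generating the encoder-input sequence; the helper name and parameter values are illustrative, not the paper's:

```python
def pruned_inputs(info_bits, Tpr, c):
    """Every Tpr-th encoder input is a GF(2) combination of past encoder inputs
    (the pruned positions, t2 = 0); the rest carry information bits in order."""
    uhat, it, t = [], iter(info_bits), 1
    while True:
        if t % Tpr == 0:                       # pruned position: t2 = 0
            bit = 0
            for i, ci in enumerate(c, start=1):
                bit ^= ci & uhat[t - i - 1]    # c_i * uhat_{t-i} over GF(2)
            uhat.append(bit)
        else:                                  # information position
            try:
                uhat.append(next(it))
            except StopIteration:
                break
        t += 1
    return uhat

# Tpr = 3, c(X) = X + X^2: every third input is uhat_{t-1} XOR uhat_{t-2}
seq = pruned_inputs([1, 0, 1, 1], Tpr=3, c=[1, 1])
```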
• 46. Flexible convolutional codes (continued). The final step involves periodic puncturing of the encoded bits with period $T_{pu} = p\, T_{pr}$, in order to adjust the rate. The complexity profile of the resulting trellis depends solely on the parameters $m$, $T_{pr}$, $\hat{d}$ and the puncturing matrix. Large families of high-performance codes of various rates and values of decoding complexity are constructed.
• 47. Extending the constructions.
Flexible turbo codes: extending the analysis to the case where recursive mother encoders are used, the goal is to construct flexible, parallel-concatenated, powerful coding schemes. Preliminary results indicate that varying the complexity profile of the trellis can be more efficient than simply varying the number of decoding iterations.
Flexible secret codes: embedding secret keys in the pruning and puncturing procedures can result in robust and flexible secret encoders.
• 48. Publications
G. Mileounis, N. Kalouptsidis, "A sparsity driven approach to cumulant based identification," in Proc. IEEE SPAWC 2012, Turkey.
K. Xenoulis, N. Kalouptsidis, I. Sason, "New achievable rates for nonlinear Volterra channels via martingale inequalities," in Proc. IEEE ISIT 2012.
A. Katsiotis, P. Rizomiliotis, and N. Kalouptsidis, "Flexible Convolutional Codes: Variable Rate and Complexity," IEEE Trans. Commun., vol. 60, no. 3, pp. 608-613, March 2012.
N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, "Adaptive Algorithms for Sparse System Identification," Signal Process., vol. 91, no. 8, pp. 1910-1919, Aug. 2011.
K. Xenoulis and N. Kalouptsidis, "Tight performance bounds for permutation invariant binary linear block codes over symmetric channels," IEEE Trans. Inf. Theory, vol. 57, pp. 6015-6024, Sep. 2011.
K. Limniotis, N. Kolokotronis, and N. Kalouptsidis, "Constructing Boolean functions in odd number of variables with maximum algebraic immunity," in Proc. 2011 IEEE ISIT, pp. 2662-2666, 2011.
• 49. Publications (contd.)
N. Kalouptsidis and N. Kolokotronis, "Fast decoding of regular LDPC codes using greedy approximation algorithms," in Proc. 2011 IEEE ISIT, pp. 2011-2015, 2011.
A. Katsiotis and N. Kalouptsidis, "On (n, n-1) punctured convolutional codes and their trellis modules," IEEE Trans. Commun., vol. 59, pp. 1213-1217, 2011.
K. Xenoulis and N. Kalouptsidis, "Achievable rates for nonlinear Volterra channels," IEEE Trans. Inf. Theory, vol. 57, pp. 1237-1248, 2011.
A. Katsiotis, P. Rizomiliotis, and N. Kalouptsidis, "New constructions of high-performance low-complexity convolutional codes," IEEE Trans. Commun., vol. 58, pp. 1950-1961, 2010.
G. Mileounis, B. Babadi, N. Kalouptsidis, and V. Tarokh, "An Adaptive Greedy Algorithm with Application to Nonlinear Communications," IEEE Trans. Signal Process., vol. 58, no. 6, June 2010.
B. Babadi, N. Kalouptsidis, and V. Tarokh, "SPARLS: The Sparse RLS Algorithm," IEEE Trans. Signal Process., vol. 58, no. 8, August 2010.
• 50. Publications (contd.)
N. Kolokotronis, K. Limniotis, and N. Kalouptsidis, "Best affine and quadratic approximations of particular classes of Boolean functions," IEEE Trans. Inf. Theory, vol. 55, pp. 5211-5222, 2009.
T. Etzion, N. Kalouptsidis, N. Kolokotronis, K. Limniotis and K. G. Paterson, "Properties of the error linear complexity spectrum," IEEE Trans. Inf. Theory, vol. 55, pp. 4681-4686, 2009.
B. Babadi, N. Kalouptsidis, and V. Tarokh, "Asymptotic Achievability of the Cramer-Rao Bound for Noisy Compressive Sampling," IEEE Trans. Signal Process., vol. 57, no. 3, March 2009.
G. Mileounis, P. Koukoulas, N. Kalouptsidis, "Input-output identification of nonlinear channels using PSK, QAM and OFDM inputs," Signal Process., vol. 89, no. 7, pp. 1359-1369, Jul. 2009.
K. Xenoulis and N. Kalouptsidis, "Improvement of Gallager upper bound and its variations for discrete channels," IEEE Trans. Inf. Theory, vol. 55, pp. 4204-4210, 2009.