Discrete Memoryless Channel and its Capacity
by
Purnachand Simhadri
Asst. Professor
Electronics and Communication Engineering Department
K L University
Information Theory and Coding
Outline
1 Discrete Memoryless Channel
  Probability Model
  Binary Channel
2 Mutual Information
  Joint Entropy
  Conditional Entropy
  Definition
3 Capacity of DMC
  Transmission Rate
  Definition
Discrete Memoryless Channel
Properties

The input of a DMC is a symbol belonging to an alphabet of $M$ symbols, transmitted with probability $p_i^t$ ($i = 1, 2, 3, \ldots, M$).
The output of a DMC is a symbol belonging to the same alphabet of $M$ symbols, received with probability $p_j^r$ ($j = 1, 2, 3, \ldots, M$).
Due to errors caused by noise in the channel, the output may differ from the input during a symbol interval.
Properties (contd.)

In an ideal channel, the output is equal to the input.
In a non-ideal channel, the output can differ from the input with a given transition probability $p_{ij} = P(Y = y_j / X = x_i)$ ($i, j = 1, 2, 3, \ldots, M$).
In a DMC, the output of the channel depends only on the input of the channel at the same instant, and not on inputs before or after.
Probability Model

All the transition probabilities from $x_i$ to $y_j$ are gathered in a transition matrix (also called the channel matrix) to model the DMC.

$$p_i^t = P(X = x_i), \quad p_j^r = P(Y = y_j), \quad p_{ij} = P(Y = y_j / X = x_i),$$
$$P(x_i, y_j) = P(y_j / x_i)\,P(X = x_i) = p_{ij} \cdot p_i^t$$
$$\Rightarrow \; p_j^r = \sum_{i=1}^{M} p_i^t\, p_{ij} \qquad (1)$$
Probability Model (contd.)

Equation (1) can be written in matrix form as

$$
\underbrace{\begin{bmatrix} p_1^r \\ p_2^r \\ \vdots \\ p_M^r \end{bmatrix}}_{P_Y^r}
=
\underbrace{\begin{bmatrix}
p_{11} & p_{12} & \cdots & p_{1M} \\
p_{21} & p_{22} & \cdots & p_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
p_{M1} & p_{M2} & \cdots & p_{MM}
\end{bmatrix}}_{\text{Channel matrix } P_{Y/X}}
\underbrace{\begin{bmatrix} p_1^t \\ p_2^t \\ \vdots \\ p_M^t \end{bmatrix}}_{P_X^t}
\qquad (2)
$$

Equation (2) can be compactly written as

$$P_Y^r = P_{Y/X}\, P_X^t \qquad (3)$$

Note that

$$\sum_{j=1}^{M} p_{ij} = 1 \quad \text{and} \quad p_e = \sum_{i=1}^{M} p_i^t \left( \sum_{j=1,\, j \neq i}^{M} p_{ij} \right) \qquad (4)$$
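To make equations (1), (3) and (4) concrete, here is a minimal Python/NumPy sketch (not part of the original slides) that computes the output distribution and the symbol error probability for a small 3-ary channel. The transition matrix `P` and the input distribution `p_t` are made-up example numbers; rows of `P` are indexed by the input symbol, so the output distribution is obtained as the row vector $p^t$ times the matrix.

```python
import numpy as np

# Hypothetical 3-ary DMC: P[i, j] = P(Y = y_j | X = x_i); each row sums to 1.
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

p_t = np.array([0.5, 0.3, 0.2])            # input (transmit) distribution p_i^t

# Equation (1)/(3): output distribution p_j^r = sum_i p_i^t * p_ij
p_r = p_t @ P
print("output distribution:", p_r)          # sums to 1

# Equation (4): symbol error probability = sum_i p_i^t * sum_{j != i} p_ij
off_diagonal = P.sum(axis=1) - np.diag(P)    # per-input probability of a wrong output
p_e = np.sum(p_t * off_diagonal)
print("symbol error probability:", p_e)
```

With these made-up numbers the error probability is $0.5(0.10) + 0.3(0.20) + 0.2(0.20) = 0.15$.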
Binary Channel

Channels designed to transmit and receive one of $M$ symbols are called discrete M-ary channels ($M > 2$).
If $M = 2$, the channel is called a binary channel.
In the binary case we can statistically model the channel as below.

[Figure: binary channel transition diagram with inputs 0, 1 and outputs 0, 1, labelled with transition probabilities $p_{00}$, $p_{01}$, $p_{10}$, $p_{11}$, input probabilities $p_0^t$, $p_1^t$, and output probabilities $p_0^r$, $p_1^r$.]

$$P(Y = j / X = i) = p_{ij}, \qquad p_{00} + p_{01} = 1, \qquad p_{10} + p_{11} = 1$$
$$P(X = 0) = p_0^t, \quad P(X = 1) = p_1^t, \quad P(Y = 0) = p_0^r, \quad P(Y = 1) = p_1^r$$
Binary Channel (contd.)

For a binary channel,
$$p_0^r = p_0^t p_{00} + p_1^t p_{10}, \qquad p_1^r = p_0^t p_{01} + p_1^t p_{11},$$
and
$$P_e = p_0^t p_{01} + p_1^t p_{10}.$$

Binary Symmetric Channel
A binary channel is said to be a binary symmetric channel if $p_{00} = p_{11}$ ($\Rightarrow p_{01} = p_{10}$).
Let $p_{00} = p_{11} = p$, so $p_{01} = p_{10} = 1 - p$;
then, for a binary symmetric channel,
$$P_e = p_0^t p_{01} + p_1^t p_{10} = p_0^t (1 - p) + p_1^t (1 - p) = 1 - p.$$
Mutual Information
Joint Entropy

In a DMC, there are two statistical processes at work: the input to the channel and the noise, which in turn affects the output of the channel. So it is worthwhile to consider the joint and conditional distributions of the input and output.
Thus there are a number of entropies, or information contents, that need to be considered when studying the characteristics of a discrete memoryless channel.

First, the entropy of the input is
$$H(X) = -\sum_{i=1}^{M} p_i^t \log_2(p_i^t) \;\text{bits/symbol}$$
The entropy of the output is
$$H(Y) = -\sum_{j=1}^{M} p_j^r \log_2(p_j^r) \;\text{bits/symbol}$$
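A small helper makes these entropy formulas concrete. The following Python sketch (an illustration, not from the slides) computes $H(X)$ and $H(Y)$ for the same hypothetical 3-ary channel and input distribution used in the earlier sketch, treating $0 \log 0$ as 0.

```python
import numpy as np

def entropy_bits(dist):
    """Shannon entropy in bits, H = -sum p log2 p, with 0*log(0) treated as 0."""
    dist = np.asarray(dist, dtype=float)
    nz = dist[dist > 0]
    return float(-np.sum(nz * np.log2(nz)))

p_t = np.array([0.5, 0.3, 0.2])                  # example input distribution
P = np.array([[0.90, 0.05, 0.05],                # example channel matrix P(y_j|x_i)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
p_r = p_t @ P                                     # output distribution from eq. (1)

print("H(X) =", entropy_bits(p_t), "bits/symbol")
print("H(Y) =", entropy_bits(p_r), "bits/symbol")
```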
Joint Entropy (contd.)

The joint distribution of the input and output can be obtained from the transition probabilities and the input distribution as
$$P(x_i, y_j) = P(y_j / x_i)\,P(X = x_i) = p_{ij} \cdot p_i^t$$

The joint entropy $H(X, Y)$ is defined as
$$H(X, Y) = -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j)$$
Joint Entropy - Properties

The joint entropy of a set of variables is greater than or equal to each of the individual entropies of the variables in the set:
$$H(X, Y) \ge \max(H(X), H(Y))$$
The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set:
$$H(X, Y) \le H(X) + H(Y)$$
This inequality is an equality if and only if X and Y are statistically independent.
Conditional Entropy

Let the conditional distribution of X, given that the channel output is $Y = y_j$, be $P(X / Y = y_j)$; then the average uncertainty about X given that $Y = y_j$ is
$$H(X / Y = y_j) = -\sum_{x_i \in X} P(X = x_i / Y = y_j) \log_2 P(X = x_i / Y = y_j)$$
The conditional entropy of X conditioned on Y is the expected value of the entropy of the distribution $P(X / Y = y_j)$:
$$
\begin{aligned}
\Rightarrow H(X/Y) &= E[H(X / Y = y_j)] \\
&= \sum_{y_j \in Y} P(Y = y_j)\, H(X / Y = y_j) \\
&= \sum_{y_j \in Y} P(y_j) \left[ -\sum_{x_i \in X} P(x_i / y_j) \log_2 P(x_i / y_j) \right] \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i / y_j)\, P(y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)
\end{aligned}
$$
Conditional Entropy - Definition

Conditional entropy $H(X/Y)$ is defined as
$$H(X/Y) = -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$$
Similarly, conditional entropy $H(Y/X)$ is defined as
$$H(Y/X) = -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(y_j / x_i)$$
Conditional entropy is also called equivocation.
$H(X/Y)$ gives the amount of uncertainty remaining about the channel input X after the channel output Y has been observed.
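The double-sum definitions above translate directly into code. The sketch below is illustrative only and uses a made-up binary joint distribution `P_xy`; it computes $H(X/Y)$ and $H(Y/X)$ from the joint probability table.

```python
import numpy as np

def conditional_entropy_bits(joint, given_axis):
    """H(A/B) in bits from a joint table P(A, B).

    given_axis = 1 conditions on the column variable (H(rows/cols)),
    given_axis = 0 conditions on the row variable (H(cols/rows)).
    """
    joint = np.asarray(joint, dtype=float)
    marginal = joint.sum(axis=1 - given_axis, keepdims=True)   # P(B)
    cond = np.divide(joint, marginal, out=np.zeros_like(joint), where=marginal > 0)
    nz = joint > 0
    return float(-np.sum(joint[nz] * np.log2(cond[nz])))

# Hypothetical joint distribution P(x_i, y_j) for a binary channel (rows = X, cols = Y).
P_xy = np.array([[0.40, 0.10],
                 [0.05, 0.45]])

print("H(X/Y) =", conditional_entropy_bits(P_xy, given_axis=1), "bits")
print("H(Y/X) =", conditional_entropy_bits(P_xy, given_axis=0), "bits")
```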
Conditional Entropy (contd.)

There is less information in the conditional entropy $H(X/Y)$ than in the entropy $H(X)$:
$$\Rightarrow H(X/Y) - H(X) \le 0$$
Proof:
$$
\begin{aligned}
H(X/Y) - H(X) &= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X} P(x_i) \log_2 P(x_i) \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i) \\
&= \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i)}{P(x_i / y_j)}
\end{aligned}
$$
Mutual Information
Conditional Entropy
Using the inequality, log a ≤ (a − 1),
it follows that:
H(X/Y ) − H(X) ≤
xi∈X yj ∈Y
P(xi, yj)
P(xi)
P(xi/yj)
− 1
=
xi∈X yj ∈Y
P(xi, yj)
P(xi/yj)
P(xi) −
xi∈X yj ∈Y
P(xi, yj)
=
xi∈X
P(xi)
yj ∈Y
P(yj) − 1
=1 − 1
=0
⇒ H(X/Y ) ≤ H(X)
and H(Y/X) ≤ H(Y )
Information Theory and Coding Discrete Memoryless Channel and it’s Capacity
Conditional Entropy - Relation with Joint Entropy

Conditional entropy $H(X/Y)$ is given by
$$
\begin{aligned}
H(X/Y) &= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(y_j)} \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j) + \sum_{y_j \in Y}\sum_{x_i \in X} P(x_i, y_j) \log_2 P(y_j) \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j) + \sum_{y_j \in Y} P(y_j) \log_2 P(y_j) \\
&= H(X, Y) - H(Y) \;\Rightarrow\; H(X, Y) = H(X/Y) + H(Y)
\end{aligned}
$$
Similarly, $H(X, Y) = H(Y/X) + H(X)$.
Mutual Information - Definition

Definition
The mutual information $I(X, Y)$ of X and Y is defined as
$$I(X, Y) = H(X) - H(X/Y)$$
$I(X, Y)$ gives the uncertainty about the input X resolved by observing the output Y. In other words, it is the portion of the information of X that depends on Y.

Properties
Symmetric: $I(X, Y) = I(Y, X)$
$$I(X, Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X, Y)$$
Non-negative: $I(X, Y) \ge 0$
Proof: Property 1 (Symmetry)

$$
\begin{aligned}
I(X, Y) &= H(X) - H(X/Y) \\
&= -\sum_{x_i \in X} P(x_i) \log_2 P(x_i) + \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i) + \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i / y_j)}{P(x_i)} \\
&= \sum_{x_i \in X}\sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i)\,P(y_j)} \\
&= I(Y, X)
\end{aligned}
$$
The last expression is the Kullback-Leibler divergence between the two probability distributions $P(x_i, y_j)$ and $P(x_i)P(y_j)$.
Kullback-Leibler Divergence

In probability theory and information theory, the Kullback-Leibler divergence (also information divergence, information gain, or relative entropy) is a non-symmetric measure of the difference between two probability distributions P and Q:
$$D_{KL}(P \,\|\, Q) = \sum_{i} P(i) \log_2 \frac{P(i)}{Q(i)}$$
The KL divergence measures the expected number of extra bits required to code samples from P when using a code based on Q, rather than a code based on P.
Thus, mutual information gives the number of bits gained by exploiting the dependency between X and Y rather than treating X and Y as independent.
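The identity $I(X, Y) = D_{KL}\big(P(x, y) \,\|\, P(x)P(y)\big)$ is easy to verify numerically. The following sketch is illustrative only, using the same made-up joint table as before; it compares the KL divergence with $H(X) + H(Y) - H(X, Y)$.

```python
import numpy as np

def kl_divergence_bits(p, q):
    """D_KL(P || Q) in bits for probability tables of the same shape."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

P_xy = np.array([[0.40, 0.10],             # hypothetical joint distribution
                 [0.05, 0.45]])
P_x = P_xy.sum(axis=1)
P_y = P_xy.sum(axis=0)
P_indep = np.outer(P_x, P_y)               # product of the marginals

print("I(X,Y) as KL divergence :", kl_divergence_bits(P_xy, P_indep))
print("H(X)+H(Y)-H(X,Y)        :", H(P_x) + H(P_y) - H(P_xy))   # same value
```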
Proof: Property 2 (Non-negativity)

It is known that $H(X) \ge H(X/Y)$
$$\Rightarrow H(X) - H(X/Y) \ge 0$$
Therefore,
$$I(X, Y) = H(X) - H(X/Y) \ge 0,$$
with equality when X and Y are statistically independent, since then $H(X/Y) = H(X) \Rightarrow I(X, Y) = 0$.
Mutual Information - Four Cases

Case 1: X and Y are statistically independent
[Figure: Venn diagram with two disjoint circles $H(X)$ and $H(Y)$.]
$$H(X, Y) = H(X) + H(Y), \qquad I(X, Y) = 0$$
Case 2: Y is completely dependent on X
[Figure: Venn diagram with the $H(Y)$ circle contained in the $H(X)$ circle.]
$$H(X, Y) = H(X), \qquad I(X, Y) = H(Y)$$

Case 3: X is completely dependent on Y
[Figure: Venn diagram with the $H(X)$ circle contained in the $H(Y)$ circle.]
$$H(X, Y) = H(Y), \qquad I(X, Y) = H(X)$$
Case 4: X and Y are neither statistically independent nor is one completely dependent on the other.
[Figure: Venn diagram with overlapping circles $H(X)$ and $H(Y)$; the overlap is $I(X, Y)$, the remainder of $H(X)$ is $H(X/Y)$, and the remainder of $H(Y)$ is $H(Y/X)$.]
$$H(X, Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)$$
$$I(X, Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)$$
Capacity of DMC
Transmission Rate

$H(X)$ is the amount of uncertainty about X; in other words, the information gained about X when we are told X.
$H(X/Y)$ is the amount of uncertainty remaining about X when Y is observed; in other words, the amount of information still required to resolve X after we are told Y.
$I(X, Y)$ is the amount of uncertainty about X resolved by observing the output Y.
So the amount of information that can be transmitted over a channel is exactly the amount of uncertainty resolved by observing the channel output.
Transmission Rate (contd.)

Thus, it is possible to transmit approximately $I(X, Y)$ bits of information per channel use without any uncertainty about the input at the output of the channel:
$$\Rightarrow I_t = I(X, Y) = H(X) - H(X/Y) \;\text{bits/channel use}$$
If the symbol rate of the source is $R_s$, then the rate of information that can be transmitted over the channel, such that the input can be resolved essentially without errors, is
$$D_t = [H(X) - H(X/Y)]\,R_s \;\text{bits/sec}$$
Transmission Rate (contd.)

For an ideal channel, $X = Y$, so there is no uncertainty about X when we observe Y:
$$\Rightarrow H(X/Y) = 0 \;\Rightarrow\; I(X, Y) = H(X) - H(X/Y) = H(X)$$
So all the information is transmitted on each channel use: $I_t = I(X, Y) = H(X)$.

If the channel is so noisy that X and Y are independent, the uncertainty about X remains the same regardless of the observation of Y:
$$\Rightarrow H(X/Y) = H(X) \;\Rightarrow\; I(X, Y) = H(X) - H(X/Y) = 0$$
i.e., no information passes through the channel: $I_t = I(X, Y) = 0$.
Capacity of DMC - Definition

The capacity of a DMC is the maximum rate of information transmission over the channel. The maximum rate of transmission occurs when the source is matched to the channel.

Definition
The capacity of a DMC is defined as the maximum rate of information transmission over the channel, where the maximum is taken over all possible input distributions $P(X)$:
$$
\begin{aligned}
C &= \max_{P(X)} I(X, Y)\,R_s \;\text{bits/sec} \\
&= \max_{P(X)} [H(X) - H(X/Y)]\,R_s \;\text{bits/sec} \\
&= \max_{P(X)} [H(Y) - H(Y/X)]\,R_s \;\text{bits/sec}
\end{aligned}
$$
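For channels without a closed-form answer, the maximization over $P(X)$ can be approximated numerically. The sketch below is illustrative only (the channel matrix is made up): it grid-searches the input distribution of a binary-input channel and reports the largest mutual information in bits per channel use. A proper solution would use the Blahut-Arimoto algorithm, but a simple grid is enough to see the idea.

```python
import numpy as np

def mutual_information_bits(p_x, P):
    """I(X;Y) in bits for input distribution p_x and channel matrix P[i, j] = P(y_j|x_i)."""
    joint = p_x[:, None] * P                       # P(x, y)
    p_y = joint.sum(axis=0)                        # output distribution
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (p_x[:, None] * p_y)[mask])))

# Hypothetical binary-input, binary-output channel (not symmetric).
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])

best_C, best_q = 0.0, None
for q in np.linspace(0.0, 1.0, 1001):              # q = P(X = 0)
    I = mutual_information_bits(np.array([q, 1 - q]), P)
    if I > best_C:
        best_C, best_q = I, q

print("estimated capacity:", best_C, "bits/channel use")
print("achieved with P(X=0) =", best_q)
```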
Example: Noiseless Binary Channel

Consider a noiseless binary channel as shown below.
[Figure: binary channel transition diagram with $P_{00} = 1$, $P_{11} = 1$, $P_{01} = 0$, $P_{10} = 0$, input probabilities $P(x_0)$, $P(x_1)$ and output probabilities $P(y_0)$, $P(y_1)$.]

$$
\begin{aligned}
P(x_0, y_0) &= P(x_0) P_{00} = P(x_0), & P(x_1, y_1) &= P(x_1) P_{11} = P(x_1), \\
P(x_0, y_1) &= P(x_0) P_{01} = 0, & P(x_1, y_0) &= P(x_1) P_{10} = 0, \\
P(y_0) &= P(x_0) P_{00} + P(x_1) P_{10} = P(x_0), & P(y_1) &= P(x_0) P_{01} + P(x_1) P_{11} = P(x_1)
\end{aligned}
$$
$$
\begin{aligned}
P(x_0/y_0) &= \frac{P(x_0, y_0)}{P(y_0)} = \frac{P(x_0)}{P(x_0)} = 1, & P(x_0/y_1) &= \frac{P(x_0, y_1)}{P(y_1)} = \frac{0}{P(x_1)} = 0, \\
P(x_1/y_0) &= \frac{P(x_1, y_0)}{P(y_0)} = \frac{0}{P(x_0)} = 0, & P(x_1/y_1) &= \frac{P(x_1, y_1)}{P(y_1)} = \frac{P(x_1)}{P(x_1)} = 1
\end{aligned}
$$
Noiseless Binary Channel (contd.)

$$
\begin{aligned}
H(X/Y) &= -\sum_{i=0}^{1}\sum_{j=0}^{1} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -[P(x_0, y_0) \log_2 P(x_0/y_0) + P(x_0, y_1) \log_2 P(x_0/y_1) \\
&\qquad + P(x_1, y_0) \log_2 P(x_1/y_0) + P(x_1, y_1) \log_2 P(x_1/y_1)] \\
&= 0
\end{aligned}
$$
$$\Rightarrow I(X, Y) = H(X) - H(X/Y) = H(X)$$
Therefore, the capacity of the noiseless binary channel is
$$C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(X) = 1 \;\text{bit/channel use}$$
i.e., over a noiseless binary channel at most one bit of information can be sent per channel use, which is the maximum information content of a binary source.
Example: Noisy Binary Symmetric Channel

Consider a noisy binary symmetric channel as shown below.
[Figure: binary channel transition diagram with $P_{00} = p$, $P_{11} = p$, $P_{01} = 1 - p$, $P_{10} = 1 - p$, input probabilities $P(x_0)$, $P(x_1)$ and output probabilities $P(y_0)$, $P(y_1)$.]

$$
\begin{aligned}
P(x_0, y_0) &= P(x_0) P_{00} = P(x_0)\,p, & P(x_1, y_1) &= P(x_1) P_{11} = P(x_1)\,p, \\
P(x_0, y_1) &= P(x_0) P_{01} = P(x_0)(1 - p), & P(x_1, y_0) &= P(x_1) P_{10} = P(x_1)(1 - p), \\
P(y_0) &= P(x_0) P_{00} + P(x_1) P_{10} = P(x_0)\,p + P(x_1)(1 - p), \\
P(y_1) &= P(x_0) P_{01} + P(x_1) P_{11} = P(x_0)(1 - p) + P(x_1)\,p
\end{aligned}
$$
Noisy Binary Symmetric Channel (contd.)

$$
\begin{aligned}
H(Y/X) &= -\sum_{i=0}^{1}\sum_{j=0}^{1} P(x_i, y_j) \log_2 P(y_j / x_i) \\
&= -[P(x_0, y_0) \log_2 P(y_0/x_0) + P(x_0, y_1) \log_2 P(y_1/x_0) \\
&\qquad + P(x_1, y_0) \log_2 P(y_0/x_1) + P(x_1, y_1) \log_2 P(y_1/x_1)] \\
&= -[P(x_0)\,p \log_2 p + P(x_0)(1 - p) \log_2(1 - p) \\
&\qquad + P(x_1)(1 - p) \log_2(1 - p) + P(x_1)\,p \log_2 p] \\
&= -[p \log_2 p + (1 - p) \log_2(1 - p)] = H(p, 1 - p)
\end{aligned}
$$
$$\Rightarrow I(X, Y) = H(Y) - H(Y/X) = H(Y) - H(p, 1 - p)$$
Therefore, the capacity of the noisy binary symmetric channel is
$$C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(Y) - H(p, 1 - p) = 1 - H(p, 1 - p) \;\text{bits/channel use}$$
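The closed-form result $C = 1 - H(p, 1 - p)$ is easy to evaluate. The Python sketch below is illustrative only; it computes the capacity for a few values of p and, as a sanity check, compares it with the mutual information obtained when the inputs are equally likely.

```python
import numpy as np

def binary_entropy_bits(p):
    """H(p, 1-p) in bits, with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_capacity_bits(p):
    """C = 1 - H(p, 1-p) for a BSC with correct-transmission probability p."""
    return 1.0 - binary_entropy_bits(p)

for p in [1.0, 0.99, 0.9, 0.75, 0.5]:
    # Cross-check: I(X;Y) with equally likely inputs equals the capacity.
    P = np.array([[p, 1 - p], [1 - p, p]])     # BSC channel matrix
    joint = 0.5 * P                             # P(x, y) with P(x0) = P(x1) = 1/2
    p_y = joint.sum(axis=0)
    prod = np.outer([0.5, 0.5], p_y)            # P(x) P(y)
    mask = joint > 0
    I = float(np.sum(joint[mask] * np.log2(joint[mask] / prod[mask])))
    print(f"p = {p:4.2f}:  C = {bsc_capacity_bits(p):.4f}   I(uniform input) = {I:.4f}")
```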
Noisy Binary Symmetric Channel (contd.)

To achieve the capacity of $1 - H(p, 1 - p)$ over a noisy binary symmetric channel, the input distribution should make $H(Y) = 1$.
$H(Y) = 1$ if $P(y_0) = P(y_1) = \tfrac{1}{2}$
$$\Rightarrow P(x_0)\,p + P(x_1)(1 - p) = \tfrac{1}{2} \quad \text{and} \quad P(x_0)(1 - p) + P(x_1)\,p = \tfrac{1}{2}$$
$$\Rightarrow (1 - 2p)\,(P(x_1) - P(x_0)) = 0 \;\Rightarrow\; P(x_1) = P(x_0) = \tfrac{1}{2}$$
Thus, over a binary symmetric channel, the maximum information rate is achieved when the source symbols are equally likely.
Noisy Binary Symmetric Channel (contd.)

Capacity of the binary symmetric channel vs. p
[Figure: plot of $C = 1 - H(p, 1 - p)$ against p; the capacity is 1 bit/channel use at $p = 0$ and $p = 1$ and falls to 0 at $p = 1/2$.]
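The curve in the figure above can be regenerated with a short script. The sketch below is illustrative only; it sweeps p from 0 to 1 and plots $1 - H(p, 1 - p)$ with matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(0.0, 1.0, 501)
# Binary entropy H(p, 1-p) in bits, with the 0*log(0) terms treated as 0.
with np.errstate(divide="ignore", invalid="ignore"):
    Hb = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
Hb = np.nan_to_num(Hb)                  # endpoints p = 0 and p = 1 give H = 0
C = 1.0 - Hb                            # BSC capacity in bits per channel use

plt.plot(p, C)
plt.xlabel("p (probability of correct transmission)")
plt.ylabel("Capacity C (bits/channel use)")
plt.title("Capacity of a binary symmetric channel vs. p")
plt.grid(True)
plt.show()
```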

More Related Content

What's hot

A genetic algorithm to solve the
A genetic algorithm to solve theA genetic algorithm to solve the
A genetic algorithm to solve the
IJCNCJournal
 
IRJET- Chord Classification of an Audio Signal using Artificial Neural Network
IRJET- Chord Classification of an Audio Signal using Artificial Neural NetworkIRJET- Chord Classification of an Audio Signal using Artificial Neural Network
IRJET- Chord Classification of an Audio Signal using Artificial Neural Network
IRJET Journal
 
352735336 rsh-qam11-tif-11-doc
352735336 rsh-qam11-tif-11-doc352735336 rsh-qam11-tif-11-doc
352735336 rsh-qam11-tif-11-doc
Firas Husseini
 
A method to determine partial weight enumerator for linear block codes
A method to determine partial weight enumerator for linear block codesA method to determine partial weight enumerator for linear block codes
A method to determine partial weight enumerator for linear block codesAlexander Decker
 
Performance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithmPerformance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithm
sipij
 
Energy-Efficient LDPC Decoder using DVFS for binary sources
Energy-Efficient LDPC Decoder using DVFS for binary sourcesEnergy-Efficient LDPC Decoder using DVFS for binary sources
Energy-Efficient LDPC Decoder using DVFS for binary sources
IDES Editor
 
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation ModelsEvolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
Editor IJCATR
 
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
ijngnjournal
 
The performance of turbo codes for wireless communication systems
The performance of turbo codes for wireless communication systemsThe performance of turbo codes for wireless communication systems
The performance of turbo codes for wireless communication systemschakravarthy Gopi
 
Dcp project
Dcp projectDcp project
Dcp project
Chetan Soni
 
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
IJCNCJournal
 
Arithmetic coding
Arithmetic codingArithmetic coding
Arithmetic coding
Vikas Goyal
 

What's hot (17)

A genetic algorithm to solve the
A genetic algorithm to solve theA genetic algorithm to solve the
A genetic algorithm to solve the
 
Baum2
Baum2Baum2
Baum2
 
IRJET- Chord Classification of an Audio Signal using Artificial Neural Network
IRJET- Chord Classification of an Audio Signal using Artificial Neural NetworkIRJET- Chord Classification of an Audio Signal using Artificial Neural Network
IRJET- Chord Classification of an Audio Signal using Artificial Neural Network
 
40120140501016
4012014050101640120140501016
40120140501016
 
352735336 rsh-qam11-tif-11-doc
352735336 rsh-qam11-tif-11-doc352735336 rsh-qam11-tif-11-doc
352735336 rsh-qam11-tif-11-doc
 
A method to determine partial weight enumerator for linear block codes
A method to determine partial weight enumerator for linear block codesA method to determine partial weight enumerator for linear block codes
A method to determine partial weight enumerator for linear block codes
 
Performance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithmPerformance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithm
 
Energy-Efficient LDPC Decoder using DVFS for binary sources
Energy-Efficient LDPC Decoder using DVFS for binary sourcesEnergy-Efficient LDPC Decoder using DVFS for binary sources
Energy-Efficient LDPC Decoder using DVFS for binary sources
 
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation ModelsEvolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
Evolving CSP Algorithm in Predicting the Path Loss of Indoor Propagation Models
 
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
Implementation of Pipelined Architecture for Physical Downlink Channels of 3G...
 
fading-conf
fading-conffading-conf
fading-conf
 
The performance of turbo codes for wireless communication systems
The performance of turbo codes for wireless communication systemsThe performance of turbo codes for wireless communication systems
The performance of turbo codes for wireless communication systems
 
Dcp project
Dcp projectDcp project
Dcp project
 
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
EXTENDED LINEAR MULTI-COMMODITY MULTICOST NETWORK AND MAXIMAL FLOW LIMITED CO...
 
Arithmetic Coding
Arithmetic CodingArithmetic Coding
Arithmetic Coding
 
79 83
79 8379 83
79 83
 
Arithmetic coding
Arithmetic codingArithmetic coding
Arithmetic coding
 

Similar to Dmcpresentation 120904112322 phpapp01

Binary symmetric channel review
Binary symmetric channel reviewBinary symmetric channel review
Binary symmetric channel review
ShilpaDe
 
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdfUnit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
vani374987
 
Unit 5.pdf
Unit 5.pdfUnit 5.pdf
I010125056
I010125056I010125056
I010125056
IOSR Journals
 
Channel Estimation In The STTC For OFDM Using MIMO With 4G System
Channel Estimation In The STTC For OFDM Using MIMO With 4G SystemChannel Estimation In The STTC For OFDM Using MIMO With 4G System
Channel Estimation In The STTC For OFDM Using MIMO With 4G System
IOSR Journals
 
Multiuser MIMO Gaussian Channels: Capacity Region and Duality
Multiuser MIMO Gaussian Channels: Capacity Region and DualityMultiuser MIMO Gaussian Channels: Capacity Region and Duality
Multiuser MIMO Gaussian Channels: Capacity Region and Duality
Shristi Pradhan
 
basicsofcodingtheory-160202182933-converted.pptx
basicsofcodingtheory-160202182933-converted.pptxbasicsofcodingtheory-160202182933-converted.pptx
basicsofcodingtheory-160202182933-converted.pptx
upendrabhatt13
 
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
a_elmoslimany
 
Ec eg intro 1
Ec eg  intro 1Ec eg  intro 1
Ec eg intro 1
abu55
 
On Optimization of Network-coded Scalable Multimedia Service Multicasting
On Optimization of Network-coded Scalable Multimedia Service MulticastingOn Optimization of Network-coded Scalable Multimedia Service Multicasting
On Optimization of Network-coded Scalable Multimedia Service Multicasting
Andrea Tassi
 
Bz25454457
Bz25454457Bz25454457
Bz25454457
IJERA Editor
 
Ch6 information theory
Ch6 information theoryCh6 information theory
Ch6 information theory
Dr. Sachin Kumar Gupta
 
Paper id 22201419
Paper id 22201419Paper id 22201419
Paper id 22201419IJRAT
 
Turbo Detection in Rayleigh flat fading channel with unknown statistics
Turbo Detection in Rayleigh flat fading channel with unknown statisticsTurbo Detection in Rayleigh flat fading channel with unknown statistics
Turbo Detection in Rayleigh flat fading channel with unknown statistics
ijwmn
 
Iterative qr decompostion channel estimation for
Iterative qr decompostion channel estimation forIterative qr decompostion channel estimation for
Iterative qr decompostion channel estimation for
eSAT Publishing House
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptx
KIRUTHIKAAR2
 
Image compression
Image compressionImage compression
Image compression
Bassam Kanber
 
Mimo
MimoMimo

Similar to Dmcpresentation 120904112322 phpapp01 (20)

Binary symmetric channel review
Binary symmetric channel reviewBinary symmetric channel review
Binary symmetric channel review
 
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdfUnit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
 
Unit 5.pdf
Unit 5.pdfUnit 5.pdf
Unit 5.pdf
 
Final ppt
Final pptFinal ppt
Final ppt
 
I010125056
I010125056I010125056
I010125056
 
Channel Estimation In The STTC For OFDM Using MIMO With 4G System
Channel Estimation In The STTC For OFDM Using MIMO With 4G SystemChannel Estimation In The STTC For OFDM Using MIMO With 4G System
Channel Estimation In The STTC For OFDM Using MIMO With 4G System
 
Multiuser MIMO Gaussian Channels: Capacity Region and Duality
Multiuser MIMO Gaussian Channels: Capacity Region and DualityMultiuser MIMO Gaussian Channels: Capacity Region and Duality
Multiuser MIMO Gaussian Channels: Capacity Region and Duality
 
basicsofcodingtheory-160202182933-converted.pptx
basicsofcodingtheory-160202182933-converted.pptxbasicsofcodingtheory-160202182933-converted.pptx
basicsofcodingtheory-160202182933-converted.pptx
 
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
A New Communication Scheme Implying Amplitude-Limited Inputs and Signal-Depen...
 
Ec eg intro 1
Ec eg  intro 1Ec eg  intro 1
Ec eg intro 1
 
BlackSea_Presentation
BlackSea_PresentationBlackSea_Presentation
BlackSea_Presentation
 
On Optimization of Network-coded Scalable Multimedia Service Multicasting
On Optimization of Network-coded Scalable Multimedia Service MulticastingOn Optimization of Network-coded Scalable Multimedia Service Multicasting
On Optimization of Network-coded Scalable Multimedia Service Multicasting
 
Bz25454457
Bz25454457Bz25454457
Bz25454457
 
Ch6 information theory
Ch6 information theoryCh6 information theory
Ch6 information theory
 
Paper id 22201419
Paper id 22201419Paper id 22201419
Paper id 22201419
 
Turbo Detection in Rayleigh flat fading channel with unknown statistics
Turbo Detection in Rayleigh flat fading channel with unknown statisticsTurbo Detection in Rayleigh flat fading channel with unknown statistics
Turbo Detection in Rayleigh flat fading channel with unknown statistics
 
Iterative qr decompostion channel estimation for
Iterative qr decompostion channel estimation forIterative qr decompostion channel estimation for
Iterative qr decompostion channel estimation for
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptx
 
Image compression
Image compressionImage compression
Image compression
 
Mimo
MimoMimo
Mimo
 

Recently uploaded

AP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specificAP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specific
BrazilAccount1
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
karthi keyan
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
VENKATESHvenky89705
 
Planning Of Procurement o different goods and services
Planning Of Procurement o different goods and servicesPlanning Of Procurement o different goods and services
Planning Of Procurement o different goods and services
JoytuBarua2
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
Jayaprasanna4
 
Immunizing Image Classifiers Against Localized Adversary Attacks
Immunizing Image Classifiers Against Localized Adversary AttacksImmunizing Image Classifiers Against Localized Adversary Attacks
Immunizing Image Classifiers Against Localized Adversary Attacks
gerogepatton
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
zwunae
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
FluxPrime1
 
Final project report on grocery store management system..pdf
Final project report on grocery store management system..pdfFinal project report on grocery store management system..pdf
Final project report on grocery store management system..pdf
Kamal Acharya
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
Kamal Acharya
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
MdTanvirMahtab2
 
The role of big data in decision making.
The role of big data in decision making.The role of big data in decision making.
The role of big data in decision making.
ankuprajapati0525
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
Kerry Sado
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
WENKENLI1
 
Runway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptxRunway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptx
SupreethSP4
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
R&R Consult
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Sreedhar Chowdam
 

Recently uploaded (20)

AP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specificAP LAB PPT.pdf ap lab ppt no title specific
AP LAB PPT.pdf ap lab ppt no title specific
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
 
Planning Of Procurement o different goods and services
Planning Of Procurement o different goods and servicesPlanning Of Procurement o different goods and services
Planning Of Procurement o different goods and services
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
 
Immunizing Image Classifiers Against Localized Adversary Attacks
Immunizing Image Classifiers Against Localized Adversary AttacksImmunizing Image Classifiers Against Localized Adversary Attacks
Immunizing Image Classifiers Against Localized Adversary Attacks
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
 
Final project report on grocery store management system..pdf
Final project report on grocery store management system..pdfFinal project report on grocery store management system..pdf
Final project report on grocery store management system..pdf
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
 
The role of big data in decision making.
The role of big data in decision making.The role of big data in decision making.
The role of big data in decision making.
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
 
Runway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptxRunway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptx
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
 

Dmcpresentation 120904112322 phpapp01

  • 1. Discrete Memoryless Channel and it’s Capacity by Purnachand Simhadri Asst. Professor Electronics and Communication Engineering Department K L University Information Theory and Coding Discrete Memoryless Channel and it’s Capacity
  • 2. Outline 1 Discrete Memoryless channel Probability Model Binary Channel 2 Mutual Information Joint Entropy Conditional Entropy Definition 3 Capacity of DMC Transmission Rate Definition Information Theory and Coding Discrete Memoryless Channel and it’s Capacity
  • 3. Discrete Memoryless channel Properties Probability Model Binary Channel Mutual Information Capacity of DMC 1 Discrete Memoryless channel Probability Model Binary Channel 2 Mutual Information Joint Entropy Conditional Entropy Definition 3 Capacity of DMC Transmission Rate Definition Information Theory and Coding Discrete Memoryless Channel and it’s Capacity
  • 4-7. Discrete Memoryless Channel - Properties:
    The input of a DMC is a symbol belonging to an alphabet of M symbols, transmitted with probability $p^t_i$ $(i = 1, 2, 3, \ldots, M)$.
    The output of a DMC is a symbol belonging to the same alphabet of M symbols, received with probability $p^r_j$ $(j = 1, 2, 3, \ldots, M)$.
    Due to errors caused by noise in the channel, the output may differ from the input during a symbol interval.
  • 8-11. Discrete Memoryless Channel - Properties (contd.):
    In an ideal channel, the output is equal to the input.
    In a non-ideal channel, the output can differ from the input with a given transition probability $p_{ij} = P(Y = y_j / X = x_i)$ $(i, j = 1, 2, 3, \ldots, M)$.
    In a DMC the output of the channel depends only on the input of the channel at the same instant, and not on inputs before or after.
  • 12-13. Discrete Memoryless Channel - Probability Model:
    All the transition probabilities from $x_i$ to $y_j$ are gathered in a transition matrix (also called the channel matrix) to model the DMC.
    With $p^t_i = P(X = x_i)$, $p^r_j = P(Y = y_j)$, and $p_{ij} = P(Y = y_j / X = x_i)$,
    $P(x_i, y_j) = P(y_j / x_i)\,P(X = x_i) = p_{ij}\,p^t_i \;\Rightarrow\; p^r_j = \sum_{i=1}^{M} p^t_i\,p_{ij}$.  (1)
  • 14. Equation (1) can be written in matrix form as
    $\begin{bmatrix} p^r_1 \\ p^r_2 \\ \vdots \\ p^r_M \end{bmatrix} = \begin{bmatrix} p_{11} & p_{21} & \cdots & p_{M1} \\ p_{12} & p_{22} & \cdots & p_{M2} \\ \vdots & \vdots & \ddots & \vdots \\ p_{1M} & p_{2M} & \cdots & p_{MM} \end{bmatrix} \begin{bmatrix} p^t_1 \\ p^t_2 \\ \vdots \\ p^t_M \end{bmatrix}$,  (2)
    which can be written compactly as $P^r_Y = P^{\mathsf{T}}_{Y/X}\,P^t_X$, where $P_{Y/X} = [p_{ij}]$ is the channel matrix (the transpose makes (2) agree with the sum in (1)).  (3)
    Note that $\sum_{j=1}^{M} p_{ij} = 1$ (each row of the channel matrix sums to one), and the overall symbol-error probability is $p_e = \sum_{i=1}^{M} p^t_i \left( \sum_{j=1,\, j \ne i}^{M} p_{ij} \right)$.  (4)
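The probability model above maps directly onto a couple of lines of linear algebra. Below is a minimal NumPy sketch (not from the slides; the channel matrix and input distribution are made-up illustrative numbers) that computes the output distribution via Eq. (1)/(3) and the symbol-error probability via Eq. (4).

```python
import numpy as np

# Channel (transition) matrix: P[i, j] = p_ij = P(Y = y_j / X = x_i); each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Input (transmit) distribution: p_t[i] = P(X = x_i).  Illustrative values.
p_t = np.array([0.6, 0.4])

# Output distribution, Eq. (1)/(3): p_r[j] = sum_i p_t[i] * p_ij.
p_r = P.T @ p_t

# Symbol-error probability, Eq. (4): probability mass on off-diagonal transitions.
p_e = sum(p_t[i] * (1.0 - P[i, i]) for i in range(len(p_t)))

print("p_r =", p_r)   # [0.62 0.38]
print("p_e =", p_e)   # 0.6*0.1 + 0.4*0.2 = 0.14
```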
  • 15-16. Discrete Memoryless Channel - Binary Channel:
    Channels designed to transmit and receive one of M symbols are called discrete M-ary channels (M > 2). If M = 2, the channel is called a binary channel. In the binary case the channel is modelled statistically by the diagram connecting inputs 0, 1 to outputs 0, 1 through the transition probabilities $P_{00}, P_{01}, P_{10}, P_{11}$, where
    $P(Y = j / X = i) = p_{ij}$, $p_{00} + p_{01} = 1$, $p_{10} + p_{11} = 1$,
    $P(X = 0) = p^t_0$, $P(X = 1) = p^t_1$, $P(Y = 0) = p^r_0$, $P(Y = 1) = p^r_1$.
  • 17-19. For a binary channel,
    $p^r_0 = p^t_0 p_{00} + p^t_1 p_{10}$, $p^r_1 = p^t_0 p_{01} + p^t_1 p_{11}$, and $P_e = p^t_0 p_{01} + p^t_1 p_{10}$.
    Binary Symmetric Channel: a binary channel is said to be a binary symmetric channel (BSC) if $p_{00} = p_{11}$ (which implies $p_{01} = p_{10}$). Let $p_{00} = p_{11} = p$, so $p_{01} = p_{10} = 1 - p$; then, for a binary symmetric channel,
    $P_e = p^t_0 p_{01} + p^t_1 p_{10} = p^t_0 (1 - p) + p^t_1 (1 - p) = 1 - p$.
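As a quick sanity check of $P_e = 1 - p$ for the BSC, here is a short simulation sketch (my own illustration, not from the slides; the crossover value and sample size are arbitrary) that flips each transmitted bit with probability $1 - p$ and compares the empirical error rate with the analytic one.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.9          # probability of correct reception: p00 = p11 = p
n = 100_000      # number of channel uses

x = rng.integers(0, 2, size=n)        # equally likely input bits
flips = rng.random(n) < (1 - p)       # BSC: independent bit flips with probability 1 - p
y = x ^ flips

print("empirical P_e:", np.mean(x != y))   # close to 0.1
print("analytic  P_e:", 1 - p)             # 1 - p, independent of the input distribution
```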
  • 20. Section 2: Mutual Information (Joint Entropy, Conditional Entropy, Definition, Four Cases).
  • 21. Mutual Information - Joint Entropy:
    In a DMC, two statistical processes are at work: the channel input and the noise, which in turn affects the channel output. It is therefore worth considering the joint and conditional distributions of the input and output, so there are several entropies (information contents) to consider when studying the characteristics of a discrete memoryless channel.
    First, the entropy of the input is $H(X) = -\sum_{i=1}^{M} p^t_i \log_2 p^t_i$ bits/symbol.
    The entropy of the output is $H(Y) = -\sum_{j=1}^{M} p^r_j \log_2 p^r_j$ bits/symbol.
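A small helper makes these entropies concrete. The sketch below (illustrative only; the channel matrix and input distribution are the same made-up numbers as in the earlier example) computes H(X) and H(Y) for a binary-input DMC.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution; zero entries contribute nothing."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p_t = np.array([0.6, 0.4])            # input distribution p^t
P   = np.array([[0.9, 0.1],           # channel matrix, rows indexed by the input symbol
                [0.2, 0.8]])
p_r = P.T @ p_t                       # output distribution p^r

print("H(X) =", entropy(p_t))         # ~0.971 bits/symbol
print("H(Y) =", entropy(p_r))         # ~0.958 bits/symbol
```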
  • 22-23. The joint distribution of input and output is obtained from the transition probabilities and the input distribution as $P(x_i, y_j) = P(y_j / x_i)\,P(X = x_i) = p_{ij}\,p^t_i$.
    Joint Entropy: the joint entropy $H(X, Y)$ is defined as
    $H(X, Y) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j)$.
  • 24-25. Joint Entropy - Properties:
    The joint entropy of a set of variables is greater than or equal to each of the individual entropies: $H(X, Y) \ge \max(H(X), H(Y))$.
    The joint entropy of a set of variables is less than or equal to the sum of the individual entropies: $H(X, Y) \le H(X) + H(Y)$. This inequality is an equality if and only if X and Y are statistically independent.
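Both bounds are easy to verify numerically. The sketch below (again with the same illustrative channel and input distribution, not numbers from the slides) builds the joint distribution $P(x_i, y_j) = p^t_i\,p_{ij}$ and checks $\max(H(X), H(Y)) \le H(X, Y) \le H(X) + H(Y)$.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; flattens the input and ignores zero entries."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p_t = np.array([0.6, 0.4])
P   = np.array([[0.9, 0.1],
                [0.2, 0.8]])

P_xy = p_t[:, None] * P               # joint distribution P(x_i, y_j) = p^t_i * p_ij

H_X  = entropy(p_t)
H_Y  = entropy(P_xy.sum(axis=0))
H_XY = entropy(P_xy)

print("H(X,Y) =", H_XY)                       # ~1.541 bits
print(max(H_X, H_Y) <= H_XY <= H_X + H_Y)     # True
```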
  • 26-27. Mutual Information - Conditional Entropy:
    Let the conditional distribution of X, given that the channel output is $Y = y_j$, be $P(X / Y = y_j)$. The average uncertainty about X given that $Y = y_j$ is
    $H(X / Y = y_j) = -\sum_{x_i \in X} P(X = x_i / Y = y_j) \log_2 P(X = x_i / Y = y_j)$.
    The conditional entropy of X given Y is the expected value of this entropy over the output distribution:
    $H(X/Y) = E[H(X / Y = y_j)] = \sum_{y_j \in Y} P(Y = y_j) H(X / Y = y_j) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i / y_j) P(y_j) \log_2 P(x_i / y_j) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$.
  • 28-29. Conditional Entropy - Definition:
    The conditional entropy $H(X/Y)$ is defined as $H(X/Y) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$;
    similarly, $H(Y/X) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(y_j / x_i)$.
    Conditional entropy is also called equivocation. $H(X/Y)$ gives the amount of uncertainty remaining about the channel input X after the channel output Y has been observed.
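The two equivocations can be computed straight from these definitions. A minimal sketch follows (same illustrative numbers as before; it assumes every joint probability is nonzero so the logarithms are well defined).

```python
import numpy as np

p_t = np.array([0.6, 0.4])
P   = np.array([[0.9, 0.1],
                [0.2, 0.8]])

P_xy = p_t[:, None] * P               # joint distribution P(x_i, y_j)
p_r  = P_xy.sum(axis=0)               # output distribution P(y_j)

# H(X/Y) = -sum_ij P(x_i, y_j) log2 P(x_i / y_j), with P(x_i / y_j) = P(x_i, y_j) / P(y_j)
H_X_given_Y = float(-np.sum(P_xy * np.log2(P_xy / p_r[None, :])))

# H(Y/X) = -sum_ij P(x_i, y_j) log2 P(y_j / x_i), with P(y_j / x_i) = p_ij
H_Y_given_X = float(-np.sum(P_xy * np.log2(P)))

print("H(X/Y) =", H_X_given_Y)        # ~0.583 bits
print("H(Y/X) =", H_Y_given_X)        # ~0.570 bits
```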
  • 30. There is less information in the conditional entropy $H(X/Y)$ than in the entropy $H(X)$, i.e. $H(X/Y) - H(X) \le 0$. Proof:
    $H(X/Y) - H(X) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X} P(x_i) \log_2 P(x_i)$
    $= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i)$
    $= \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \dfrac{P(x_i)}{P(x_i / y_j)}$.
  • 31. Using the inequality $\log a \le a - 1$ (which holds for the natural logarithm; with $\log_2$ the bound only gains the positive factor $\log_2 e$, so the conclusion is unchanged), it follows that
    $H(X/Y) - H(X) \le \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \left[ \dfrac{P(x_i)}{P(x_i / y_j)} - 1 \right] = \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \dfrac{P(x_i)}{P(x_i / y_j)} - \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) = \sum_{x_i \in X} P(x_i) \sum_{y_j \in Y} P(y_j) - 1 = 1 - 1 = 0$
    $\Rightarrow H(X/Y) \le H(X)$ and, similarly, $H(Y/X) \le H(Y)$.
  • 32. Conditional Entropy - Relation with Joint Entropy:
    $H(X/Y) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \dfrac{P(x_i, y_j)}{P(y_j)}$
    $= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j) + \sum_{y_j \in Y} P(y_j) \log_2 P(y_j) = H(X, Y) - H(Y)$
    $\Rightarrow H(X, Y) = H(X/Y) + H(Y)$; similarly, $H(X, Y) = H(Y/X) + H(X)$.
  • 33-34. Mutual Information - Definition:
    The mutual information $I(X, Y)$ of X and Y is defined as $I(X, Y) = H(X) - H(X/Y)$.
    $I(X, Y)$ gives the uncertainty about the input X that is resolved by observing the output Y. In other words, it is the portion of the information of X that depends on Y.
    Properties:
    Symmetric: $I(X, Y) = I(Y, X)$; indeed $I(X, Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X, Y)$.
    Non-negative: $I(X, Y) \ge 0$.
  • 35. Proof of Property 1:
    $I(X, Y) = H(X) - H(X/Y) = -\sum_{x_i \in X} P(x_i) \log_2 P(x_i) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$
    $= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$
    $= \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \dfrac{P(x_i / y_j)}{P(x_i)} = \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \dfrac{P(x_i, y_j)}{P(x_i) P(y_j)} = I(Y, X)$.
    The last expression is the Kullback-Leibler divergence between the two probability distributions $P(x_i, y_j)$ and $P(x_i) P(y_j)$.
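The symmetry is easy to confirm numerically: computing $I(X, Y)$ as $H(X) - H(X/Y)$, as $H(Y) - H(Y/X)$, or as $H(X) + H(Y) - H(X, Y)$ gives the same value. A short sketch (same illustrative channel as in the earlier examples) follows.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p_t = np.array([0.6, 0.4])
P   = np.array([[0.9, 0.1],
                [0.2, 0.8]])
P_xy = p_t[:, None] * P
p_r  = P_xy.sum(axis=0)

H_X, H_Y, H_XY = entropy(p_t), entropy(p_r), entropy(P_xy)

I_a = H_X + H_Y - H_XY        # H(X) + H(Y) - H(X, Y)
I_b = H_X - (H_XY - H_Y)      # H(X) - H(X/Y), using H(X/Y) = H(X, Y) - H(Y)
I_c = H_Y - (H_XY - H_X)      # H(Y) - H(Y/X)

print(I_a, I_b, I_c)          # all ~0.388 bits/channel use
```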
  • 36-38. Kullback-Leibler divergence:
    In probability theory and information theory, the Kullback-Leibler divergence (also called information divergence, information gain, or relative entropy) is a non-symmetric measure of the difference between two probability distributions P and Q:
    $D_{KL}(P \,\|\, Q) = \sum_{i} P(i) \log_2 \dfrac{P(i)}{Q(i)}$.
    $D_{KL}$ measures the expected number of extra bits required to code samples from P when using a code based on Q rather than a code based on P. Thus, the mutual information gives the number of bits gained by exploiting the dependency between X and Y, rather than treating X and Y as independent.
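Putting the last two slides together, $I(X, Y) = D_{KL}\big(P(x, y) \,\|\, P(x)P(y)\big)$. The sketch below (illustrative numbers again) computes the divergence directly and reproduces the mutual information obtained above.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in bits; assumes Q > 0 wherever P > 0."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

p_t = np.array([0.6, 0.4])
P   = np.array([[0.9, 0.1],
                [0.2, 0.8]])

P_xy    = p_t[:, None] * P            # joint distribution
p_r     = P_xy.sum(axis=0)
P_indep = np.outer(p_t, p_r)          # product of the marginals

print(kl_divergence(P_xy, P_indep))   # ~0.388 bits = I(X, Y)
```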
  • 39. Proof of Property 2:
    It is known that $H(X) \ge H(X/Y)$, so $H(X) - H(X/Y) \ge 0$. If X and Y are statistically independent, then $H(X/Y) = H(X)$, so $I(X, Y) = 0$. Therefore $I(X, Y) = H(X) - H(X/Y) \ge 0$, with equality when X and Y are statistically independent.
  • 40. Four Cases. Case 1: X and Y are statistically independent. The entropy diagrams of X and Y do not overlap: $H(X, Y) = H(X) + H(Y)$ and $I(X, Y) = 0$.
  • 41. Case 2: Y is completely dependent on X: $I(X, Y) = H(Y)$ and $H(X, Y) = H(X)$. Case 3: X is completely dependent on Y: $I(X, Y) = H(X)$ and $H(X, Y) = H(Y)$.
  • 42. Case 4: X and Y are neither statistically independent nor is one completely dependent on the other. The entropy diagrams partially overlap: $H(X, Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)$ and $I(X, Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)$.
  • 43. Section 3: Capacity of DMC (Transmission Rate, Definition, Examples).
  • 44. Capacity of DMC - Transmission Rate:
    $H(X)$ is the amount of uncertainty about X; in other words, the information gained when X is revealed.
    $H(X/Y)$ is the amount of uncertainty remaining about X once Y is observed; in other words, the information still required to resolve X after Y is known.
    $I(X, Y)$ is the amount of uncertainty about X resolved by observing the output Y. So the amount of information that can be transmitted over a channel is exactly the amount of uncertainty resolved by observing the channel output.
  • 45. Thus it is possible to transmit approximately $I(X, Y)$ bits of information per channel use without residual uncertainty about the input at the output of the channel:
    $I_t = I(X, Y) = H(X) - H(X/Y)$ bits/channel use.
    If the symbol rate of the source is $R_s$, then the rate at which information can be transmitted over the channel, such that the input can be resolved essentially without errors, is
    $D_t = [H(X) - H(X/Y)]\,R_s$ bits/sec.
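As a tiny worked example of $D_t = I(X, Y) \cdot R_s$ (both numbers are hypothetical: the mutual information is the value from the earlier sketches and the symbol rate is assumed):

```python
I_xy = 0.388          # bits per channel use, from the earlier illustrative channel
R_s  = 2_000_000      # assumed source symbol rate, symbols/sec

D_t = I_xy * R_s
print(D_t, "bits/sec")   # 776000.0 bits/sec
```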
  • 46-47. For an ideal channel, X = Y and there is no uncertainty about X once Y is observed: $H(X/Y) = 0$, so $I(X, Y) = H(X) - H(X/Y) = H(X)$. All of the information is transmitted on each channel use: $I_t = I(X, Y) = H(X)$.
    If the channel is so noisy that X and Y are independent, the uncertainty about X remains the same regardless of the observation of Y: $H(X/Y) = H(X)$, so $I(X, Y) = H(X) - H(X/Y) = 0$, i.e. no information passes through the channel: $I_t = I(X, Y) = 0$.
  • 48-49. Capacity of DMC - Definition:
    The capacity of a DMC is the maximum rate of information transmission over the channel; this maximum rate occurs when the source is matched to the channel.
    Definition: the capacity of a DMC is defined as the maximum rate of information transmission over the channel, where the maximum is taken over all possible input distributions P(X):
    $C = \max_{P(X)} I(X, Y)\,R_s$ bits/sec $= \max_{P(X)} [H(X) - H(X/Y)]\,R_s$ bits/sec $= \max_{P(X)} [H(Y) - H(Y/X)]\,R_s$ bits/sec.
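For a small channel, the maximization over input distributions can be approximated by brute force. The sketch below is my own illustration, not from the slides: the asymmetric channel matrix is made up, and a simple grid search over $P(X = x_0)$ stands in for a proper optimization such as the Blahut-Arimoto algorithm.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_t, P):
    """I(X, Y) in bits/channel use for input distribution p_t and channel matrix P."""
    P_xy = p_t[:, None] * P
    return entropy(p_t) + entropy(P_xy.sum(axis=0)) - entropy(P_xy)

P = np.array([[0.9, 0.1],             # an asymmetric binary channel (illustrative)
              [0.3, 0.7]])

grid   = np.linspace(0.001, 0.999, 999)
I_vals = [mutual_information(np.array([a, 1 - a]), P) for a in grid]
best   = int(np.argmax(I_vals))

print("C ≈", I_vals[best], "bits/channel use, achieved at P(x0) ≈", grid[best])
```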
  • 50-51. Example: Noiseless Binary Channel.
    Consider a noiseless binary channel with $P_{00} = 1$, $P_{11} = 1$, $P_{01} = 0$, $P_{10} = 0$. Then
    $P(x_0, y_0) = P(x_0)P_{00} = P(x_0)$, $P(x_1, y_1) = P(x_1)P_{11} = P(x_1)$, $P(x_0, y_1) = P(x_0)P_{01} = 0$, $P(x_1, y_0) = P(x_1)P_{10} = 0$,
    $P(y_0) = P(x_0)P_{00} + P(x_1)P_{10} = P(x_0)$, $P(y_1) = P(x_0)P_{01} + P(x_1)P_{11} = P(x_1)$,
    $P(x_0/y_0) = P(x_0, y_0)/P(y_0) = 1$, $P(x_0/y_1) = 0$, $P(x_1/y_0) = 0$, $P(x_1/y_1) = P(x_1, y_1)/P(y_1) = 1$.
  • 52-53. $H(X/Y) = -\sum_{i=0}^{1} \sum_{j=0}^{1} P(x_i, y_j) \log_2 P(x_i / y_j) = -[P(x_0, y_0) \log_2 P(x_0/y_0) + P(x_0, y_1) \log_2 P(x_0/y_1) + P(x_1, y_0) \log_2 P(x_1/y_0) + P(x_1, y_1) \log_2 P(x_1/y_1)] = 0$
    $\Rightarrow I(X, Y) = H(X) - H(X/Y) = H(X)$.
    Therefore the capacity of the noiseless binary channel is
    $C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(X) = 1$ bit/channel use,
    i.e. over a noiseless binary channel at most one bit of information can be sent per channel use, which is the maximum information content of a binary source.
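A short check with the identity channel matrix confirms that $I(X, Y) = H(X)$ here and that the maximum of 1 bit is reached with equally likely inputs (a sketch with a few illustrative input distributions):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_t, P):
    P_xy = p_t[:, None] * P
    return entropy(p_t) + entropy(P_xy.sum(axis=0)) - entropy(P_xy)

P_noiseless = np.eye(2)                               # P00 = P11 = 1, P01 = P10 = 0
for a in (0.5, 0.7, 0.9):
    p_t = np.array([a, 1 - a])
    print(a, mutual_information(p_t, P_noiseless))    # equals H(X); maximal (1 bit) at a = 0.5
```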
  • 54-55. Example: Noisy Binary Symmetric Channel.
    Consider a noisy binary symmetric channel with $P_{00} = P_{11} = p$ and $P_{01} = P_{10} = 1 - p$. Then
    $P(x_0, y_0) = P(x_0)P_{00} = P(x_0)p$, $P(x_1, y_1) = P(x_1)P_{11} = P(x_1)p$, $P(x_0, y_1) = P(x_0)P_{01} = P(x_0)(1 - p)$, $P(x_1, y_0) = P(x_1)P_{10} = P(x_1)(1 - p)$,
    $P(y_0) = P(x_0)P_{00} + P(x_1)P_{10} = P(x_0)p + P(x_1)(1 - p)$, $P(y_1) = P(x_0)P_{01} + P(x_1)P_{11} = P(x_0)(1 - p) + P(x_1)p$.
  • 56-57. $H(Y/X) = -\sum_{i=0}^{1} \sum_{j=0}^{1} P(x_i, y_j) \log_2 P(y_j / x_i)$
    $= -[P(x_0, y_0) \log_2 P(y_0/x_0) + P(x_0, y_1) \log_2 P(y_1/x_0) + P(x_1, y_0) \log_2 P(y_0/x_1) + P(x_1, y_1) \log_2 P(y_1/x_1)]$
    $= -[P(x_0)p \log_2 p + P(x_0)(1 - p) \log_2(1 - p) + P(x_1)(1 - p) \log_2(1 - p) + P(x_1)p \log_2 p]$
    $= -[p \log_2 p + (1 - p) \log_2(1 - p)] = H(p, 1 - p)$
    $\Rightarrow I(X, Y) = H(Y) - H(Y/X) = H(Y) - H(p, 1 - p)$.
    Therefore the capacity of the noisy binary symmetric channel is
    $C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(Y) - H(p, 1 - p) = 1 - H(p, 1 - p)$ bits/channel use.
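The closed form $C = 1 - H(p, 1 - p)$ translates directly into code. A minimal sketch (the function names are my own) is:

```python
import numpy as np

def binary_entropy(p):
    """H(p, 1 - p) in bits, with the convention 0 * log2(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with correct-reception probability p."""
    return 1.0 - binary_entropy(p)

for p in (0.5, 0.75, 0.9, 0.99, 1.0):
    print(p, bsc_capacity(p))   # 0 at p = 0.5, approaching 1 bit/channel use as p -> 1
```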
  • 58-59. To achieve the capacity of $1 - H(p, 1 - p)$ over a noisy binary symmetric channel, the input distribution must make $H(Y) = 1$. $H(Y) = 1$ if $P(y_0) = P(y_1) = \frac{1}{2}$:
    $P(x_0)p + P(x_1)(1 - p) = \frac{1}{2}$ and $P(x_0)(1 - p) + P(x_1)p = \frac{1}{2}$
    $\Rightarrow (1 - 2p)(P(x_1) - P(x_0)) = 0 \Rightarrow P(x_1) = P(x_0) = \frac{1}{2}$ (for $p \ne \frac{1}{2}$).
    Thus, over a binary symmetric channel, the maximum information rate is achieved when the source symbols are equally likely.
  • 60. [Figure] Capacity of the binary symmetric channel versus p.
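The figure referenced on this slide is the curve $C(p) = 1 - H(p, 1 - p)$, which is 0 at $p = \frac{1}{2}$ and rises to 1 bit/channel use at $p = 0$ and $p = 1$. A short sketch (assuming matplotlib is available) reproduces it:

```python
import numpy as np
import matplotlib.pyplot as plt

p   = np.linspace(0.001, 0.999, 500)
H_b = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy H(p, 1 - p)
C   = 1 - H_b                                      # BSC capacity in bits/channel use

plt.plot(p, C)
plt.xlabel("p (probability of correct reception)")
plt.ylabel("C (bits/channel use)")
plt.title("Capacity of the binary symmetric channel vs p")
plt.grid(True)
plt.show()
```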