This document discusses discrete memoryless channels and their capacity. It defines a discrete memoryless channel (DMC) as one whose inputs are drawn from a finite alphabet and whose output during each symbol interval depends only on the current input, not on past inputs. It presents the probability model and transition matrix used to model a DMC, then describes binary channels and, in particular, binary symmetric channels, where the two crossover error probabilities are equal. The goal is to determine the maximum error-free transmission rate, or capacity, of a discrete memoryless channel.
1. Discrete Memoryless Channel and its Capacity
by
Purnachand Simhadri
Asst. Professor
Electronics and Communication Engineering Department
K L University
2. Outline
1 Discrete Memoryless Channel
   - Probability Model
   - Binary Channel
2 Mutual Information
   - Joint Entropy
   - Conditional Entropy
   - Definition
3 Capacity of DMC
   - Transmission Rate
   - Definition
4. Discrete Memoryless Channel: Properties
- The input of a DMC is a symbol belonging to an alphabet of $M$ symbols, transmitted with probability $p^t_i$ $(i = 1, 2, 3, \ldots, M)$.
- The output of a DMC is a symbol belonging to the same alphabet of $M$ symbols, received with probability $p^r_j$ $(j = 1, 2, 3, \ldots, M)$.
- Due to errors caused by noise in the channel, the output may differ from the input during a symbol interval.
8. Discrete Memoryless Channel: Properties (contd.)
- In an ideal channel, the output is equal to the input.
- In a non-ideal channel, the output can differ from the input, with a given transition probability $p_{ij} = P(Y = y_j / X = x_i)$ $(i, j = 1, 2, 3, \ldots, M)$.
- In a DMC, the output of the channel depends only on the input of the channel at the same instant, and not on inputs before or after.
12. Discrete Memoryless Channel: Probability Model
All the transition probabilities from $x_i$ to $y_j$ are gathered in a transition matrix (also called the channel matrix) to model the DMC:
$$p^t_i = P(X = x_i), \qquad p^r_j = P(Y = y_j), \qquad p_{ij} = P(Y = y_j / X = x_i)$$
and $P(x_i, y_j) = P(y_j / x_i)\, P(X = x_i) = p_{ij} \cdot p^t_i$
$$\Rightarrow\; p^r_j = \sum_{i=1}^{M} p^t_i\, p_{ij} \qquad (1)$$
14. Discrete Memoryless Channel: Probability Model
Equation (1) can be written in matrix form as
$$
\underbrace{\begin{bmatrix} p^r_1 \\ p^r_2 \\ \vdots \\ p^r_M \end{bmatrix}}_{P^r_Y}
=
\underbrace{\begin{bmatrix}
p_{11} & p_{21} & \cdots & p_{M1} \\
p_{12} & p_{22} & \cdots & p_{M2} \\
\vdots & \vdots & \ddots & \vdots \\
p_{1M} & p_{2M} & \cdots & p_{MM}
\end{bmatrix}}_{P^T_{Y/X}}
\underbrace{\begin{bmatrix} p^t_1 \\ p^t_2 \\ \vdots \\ p^t_M \end{bmatrix}}_{P^t_X}
\qquad (2)
$$
Equation (2) can be compactly written as
$$P^r_Y = P^T_{Y/X}\, P^t_X \qquad (3)$$
(The channel matrix $P_{Y/X}$ has entry $p_{ij}$ in row $i$, column $j$; the transpose in (3) is what makes the matrix product agree with the sum in (1).)
Note that
$$\sum_{j=1}^{M} p_{ij} = 1 \quad\text{and}\quad P_e = \sum_{i=1}^{M} p^t_i \sum_{\substack{j=1 \\ j \neq i}}^{M} p_{ij} \qquad (4)$$
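As a numeric illustration of equations (1)-(4), here is a minimal Python sketch that computes the output distribution and the symbol error probability. The 3-ary channel matrix and input distribution below are assumed values chosen only for the example; they are not from the slides.

```python
# Sketch: output distribution (eq. 1/3) and symbol error probability (eq. 4)
# for a hypothetical 3-ary DMC.
import numpy as np

# Channel matrix P_{Y/X}: row i holds p_ij = P(Y = y_j / X = x_i); each row sums to 1.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed values

pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution p^t

pr = P.T @ pt                     # eq. (3): P^r_Y = P^T_{Y/X} P^t_X
# eq. (4); since each row sums to 1, sum_{j != i} p_ij = 1 - p_ii
Pe = sum(pt[i] * (1.0 - P[i, i]) for i in range(len(pt)))

print("output distribution p^r:", pr)   # [0.45, 0.31, 0.24], sums to 1
print("symbol error probability:", Pe)  # 0.2 here, since every p_ii = 0.8
```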
15. Discrete Memoryless Channel: Binary Channel
- Channels designed to transmit and receive one of $M$ symbols are called discrete M-ary channels ($M > 2$).
- If $M = 2$, the channel is called a binary channel.
- In the binary case we can statistically model the channel as below.

[Diagram: inputs 0 and 1 (probabilities $p^t_0$, $p^t_1$) connect to outputs 0 and 1 (probabilities $p^r_0$, $p^r_1$); the direct branches carry $p_{00}$ and $p_{11}$, the cross branches carry $p_{01}$ and $p_{10}$.]

$$P(Y = j / X = i) = p_{ij}, \qquad p_{00} + p_{01} = 1, \qquad p_{10} + p_{11} = 1$$
$$P(X = 0) = p^t_0, \quad P(X = 1) = p^t_1, \quad P(Y = 0) = p^r_0, \quad P(Y = 1) = p^r_1$$
17. Discrete Memoryless Channel: Binary Channel
For a binary channel,
$$p^r_0 = p^t_0 p_{00} + p^t_1 p_{10}, \qquad p^r_1 = p^t_0 p_{01} + p^t_1 p_{11}$$
and $P_e = p^t_0 p_{01} + p^t_1 p_{10}$.

Binary Symmetric Channel
A binary channel is said to be a binary symmetric channel (BSC) if $p_{00} = p_{11}$ (which implies $p_{01} = p_{10}$).
Let $p_{00} = p_{11} = p \Rightarrow p_{01} = p_{10} = 1 - p$; then, for a binary symmetric channel,
$$P_e = p^t_0 p_{01} + p^t_1 p_{10} = p^t_0 (1 - p) + p^t_1 (1 - p) = 1 - p$$
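The binary-channel relations above can be checked numerically. The sketch below uses an assumed BSC with $p = 0.9$ and an assumed non-uniform input distribution, and verifies that $P_e = 1 - p$ independently of the input distribution.

```python
# Sketch: binary-channel relations for an assumed BSC.
import numpy as np

p = 0.9                                   # assumed P(correct transmission) = p00 = p11
P = np.array([[p, 1 - p],
              [1 - p, p]])                # BSC channel matrix
pt = np.array([0.7, 0.3])                 # assumed (non-uniform) input distribution

pr = P.T @ pt                             # p^r_0, p^r_1
Pe = pt[0] * P[0, 1] + pt[1] * P[1, 0]    # P_e = p^t_0 p01 + p^t_1 p10

print(pr)                                 # [0.66, 0.34]
print(Pe, "== 1 - p:", np.isclose(Pe, 1 - p))   # 0.1, True
```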
21. Mutual Information: Joint Entropy
- In a DMC there are two statistical processes at work: the input to the channel and the noise, which in turn affects the output of the channel. So it is worthwhile to consider the joint and conditional densities of the input and output.
- Thus there are a number of entropies, or information contents, that need to be considered when studying the characteristics of a discrete memoryless channel.
- First, the entropy of the input is
$$H(X) = -\sum_{i=1}^{M} p^t_i \log_2(p^t_i) \ \text{bits/symbol}$$
- The entropy of the output is
$$H(Y) = -\sum_{j=1}^{M} p^r_j \log_2(p^r_j) \ \text{bits/symbol}$$
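Both entropies are straightforward to compute. Below is a small Python sketch, reusing the assumed 3-ary channel from the earlier example (all numbers are illustrative assumptions).

```python
# Sketch: input/output entropies of the assumed 3-ary DMC.
import numpy as np

def entropy(dist):
    """H = -sum p log2 p in bits, ignoring zero-probability symbols (0 log 0 := 0)."""
    dist = np.asarray(dist, dtype=float).ravel()
    nz = dist[dist > 0]
    return -np.sum(nz * np.log2(nz))

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution
pr = P.T @ pt

print("H(X) =", entropy(pt), "bits/symbol")   # about 1.485
print("H(Y) =", entropy(pr), "bits/symbol")
```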
22. Mutual Information: Joint Entropy
The joint distribution of input and output can be obtained from the transition probabilities and the input distribution as
$$P(x_i, y_j) = P(y_j / x_i)\, P(X = x_i) = p_{ij} \cdot p^t_i$$

Joint Entropy
The joint entropy $H(X, Y)$ is defined as
$$H(X, Y) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j)$$
24. Mutual Information: Joint Entropy Properties
- The joint entropy of a set of variables is greater than or equal to each of the individual entropies of the variables in the set:
$$H(X, Y) \geq \max(H(X), H(Y))$$
- The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set:
$$H(X, Y) \leq H(X) + H(Y)$$
This inequality is an equality if and only if $X$ and $Y$ are statistically independent.
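Both bounds can be confirmed numerically; the sketch below does so for the assumed 3-ary channel used in the earlier sketches.

```python
# Sketch: joint entropy of the assumed 3-ary DMC and a numeric check of both bounds.
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    nz = dist[dist > 0]
    return -np.sum(nz * np.log2(nz))

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution

joint = pt[:, None] * P           # P(x_i, y_j) = p^t_i * p_ij
H_X, H_Y = entropy(pt), entropy(P.T @ pt)
H_XY = entropy(joint)             # entropy() flattens the joint distribution

print(H_XY >= max(H_X, H_Y))      # True
print(H_XY <= H_X + H_Y)          # True (equality only if X and Y independent)
```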
26. Mutual Information: Conditional Entropy
Let the conditional distribution of $X$, given that the output of the channel is $Y = y_j$, be $P(X / Y = y_j)$. Then the average uncertainty about $X$ given that $Y = y_j$ is
$$H(X / Y = y_j) = -\sum_{x_i \in X} P(X = x_i / Y = y_j) \log_2 P(X = x_i / Y = y_j)$$
The conditional entropy of $X$ conditioned on $Y$ is the expected value of the entropy of the distribution $P(X / Y = y_j)$:
$$
\begin{aligned}
H(X/Y) &= E[H(X / Y = y_j)] \\
&= \sum_{y_j \in Y} P(Y = y_j)\, H(X / Y = y_j) \\
&= \sum_{y_j \in Y} P(y_j) \left[ -\sum_{x_i \in X} P(x_i / y_j) \log_2 P(x_i / y_j) \right] \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i / y_j)\, P(y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)
\end{aligned}
$$
28. Mutual Information: Conditional Entropy
Conditional Entropy - Definition
The conditional entropy $H(X/Y)$ is defined as
$$H(X/Y) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j)$$
Similarly, the conditional entropy $H(Y/X)$ is defined as
$$H(Y/X) = -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(y_j / x_i)$$
Conditional entropy is also called equivocation. $H(X/Y)$ gives the amount of uncertainty remaining about the channel input $X$ after the channel output $Y$ has been observed.
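Both defining formulas can be evaluated directly from the joint distribution. A minimal sketch, again on the assumed 3-ary channel:

```python
# Sketch: conditional entropies H(X/Y) and H(Y/X) for the assumed 3-ary DMC.
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix (all entries > 0, so log2 is safe)
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution

joint = pt[:, None] * P           # P(x_i, y_j)
pr = joint.sum(axis=0)            # P(y_j)

# H(Y/X) = -sum_ij P(x_i, y_j) log2 P(y_j / x_i), with P(y_j / x_i) = p_ij
H_Y_given_X = -np.sum(joint * np.log2(P))

# H(X/Y) = -sum_ij P(x_i, y_j) log2 P(x_i / y_j), with P(x_i / y_j) = P(x_i, y_j) / P(y_j)
post = joint / pr                 # column j holds P(X / Y = y_j)
H_X_given_Y = -np.sum(joint * np.log2(post))

print("H(Y/X) =", H_Y_given_X)    # every row of P has the same entropy here, so this equals it
print("H(X/Y) =", H_X_given_Y)
```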
30. Mutual Information: Conditional Entropy
There is less information in the conditional entropy $H(X/Y)$ than in the entropy $H(X)$:
$$H(X/Y) - H(X) \leq 0$$
Proof:
$$
\begin{aligned}
H(X/Y) - H(X) &= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X} P(x_i) \log_2 P(x_i) \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i) \\
&= \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i)}{P(x_i / y_j)}
\end{aligned}
$$
31. Mutual Information: Conditional Entropy
Using the inequality $\log a \leq (a - 1)$, it follows that
$$
\begin{aligned}
H(X/Y) - H(X) &\leq \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \left[ \frac{P(x_i)}{P(x_i / y_j)} - 1 \right] \\
&= \sum_{x_i \in X} \sum_{y_j \in Y} \frac{P(x_i, y_j)}{P(x_i / y_j)}\, P(x_i) - \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \\
&= \sum_{x_i \in X} P(x_i) \sum_{y_j \in Y} P(y_j) - 1 \\
&= 1 - 1 = 0
\end{aligned}
$$
$$\Rightarrow H(X/Y) \leq H(X) \quad\text{and, similarly,}\quad H(Y/X) \leq H(Y)$$
32. Mutual Information: Relation of Conditional Entropy to Joint Entropy
The conditional entropy $H(X/Y)$ is given by
$$
\begin{aligned}
H(X/Y) &= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(y_j)} \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j) + \sum_{y_j \in Y} \sum_{x_i \in X} P(x_i, y_j) \log_2 P(y_j) \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i, y_j) + \sum_{y_j \in Y} P(y_j) \log_2 P(y_j) \\
&= H(X, Y) - H(Y)
\end{aligned}
$$
$$\Rightarrow H(X, Y) = H(X/Y) + H(Y) \quad\text{and, similarly,}\quad H(X, Y) = H(Y/X) + H(X)$$
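A quick numeric check of the chain rule $H(X, Y) = H(X/Y) + H(Y)$ on the assumed example channel:

```python
# Sketch: chain-rule check H(X,Y) = H(X/Y) + H(Y) for the assumed 3-ary DMC.
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    nz = dist[dist > 0]
    return -np.sum(nz * np.log2(nz))

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution
joint = pt[:, None] * P
pr = joint.sum(axis=0)

H_XY = entropy(joint)
H_X_given_Y = -np.sum(joint * np.log2(joint / pr))   # direct evaluation of H(X/Y)

print(np.isclose(H_X_given_Y, H_XY - entropy(pr)))   # True: H(X/Y) = H(X,Y) - H(Y)
```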
33. Mutual Information: Definition
Definition
The mutual information $I(X, Y)$ of $X$ and $Y$ is defined as
$$I(X, Y) = H(X) - H(X/Y)$$
$I(X, Y)$ gives the uncertainty about the input $X$ resolved by observing the output $Y$. In other words, it is the portion of the information of $X$ that depends on $Y$.

Properties
- Symmetric: $I(X, Y) = I(Y, X)$
$$I(X, Y) = H(X) - H(X/Y) = H(Y) - H(Y/X) = H(X) + H(Y) - H(X, Y)$$
- Nonnegative: $I(X, Y) \geq 0$
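The three equivalent expressions for $I(X, Y)$ can be verified numerically; the sketch below computes all three on the assumed example channel.

```python
# Sketch: mutual information computed three equivalent ways for the assumed 3-ary DMC.
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    nz = dist[dist > 0]
    return -np.sum(nz * np.log2(nz))

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution
joint = pt[:, None] * P
pr = joint.sum(axis=0)

H_X, H_Y, H_XY = entropy(pt), entropy(pr), entropy(joint)

I1 = H_X - (H_XY - H_Y)           # H(X) - H(X/Y)
I2 = H_Y - (H_XY - H_X)           # H(Y) - H(Y/X)
I3 = H_X + H_Y - H_XY

print(np.allclose([I1, I2], I3), I3)   # True, and all three agree
```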
35. Mutual Information: Definition
Proof: Property 1
$$
\begin{aligned}
I(X, Y) &= H(X) - H(X/Y) \\
&= -\sum_{x_i \in X} P(x_i) \log_2 P(x_i) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -\sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i) + \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i / y_j)}{P(x_i)} \\
&= \sum_{x_i \in X} \sum_{y_j \in Y} P(x_i, y_j) \log_2 \frac{P(x_i, y_j)}{P(x_i) P(y_j)} = I(Y, X)
\end{aligned}
$$
The last expression is the Kullback-Leibler divergence between the two probability distributions $P(x_i, y_j)$ and $P(x_i)P(y_j)$.
36. Mutual Information: Kullback-Leibler Divergence
In probability theory and information theory, the Kullback-Leibler divergence (also called information divergence, information gain, or relative entropy) is a non-symmetric measure of the difference between two probability distributions $P$ and $Q$:
$$D_{KL}(P \,\|\, Q) = \sum_i P(i) \log_2 \frac{P(i)}{Q(i)}$$
The KL divergence measures the expected number of extra bits required to code samples from $P$ when using a code based on $Q$, rather than a code based on $P$.
Thus, mutual information gives the number of bits gained by exploiting the dependency between $X$ and $Y$, rather than treating $X$ and $Y$ as independent.
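The identification of mutual information with a KL divergence suggests a direct computation: $I(X, Y) = D_{KL}(P(x, y) \,\|\, P(x)P(y))$. A sketch on the assumed example channel:

```python
# Sketch: I(X,Y) as the KL divergence between the joint P(x,y) and the product P(x)P(y).
import numpy as np

P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])   # assumed channel matrix
pt = np.array([0.5, 0.3, 0.2])    # assumed input distribution
joint = pt[:, None] * P
pr = joint.sum(axis=0)

def kl(p, q):
    """D_KL(P||Q) in bits; terms with p = 0 contribute nothing."""
    p, q = np.asarray(p).ravel(), np.asarray(q).ravel()
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

product = np.outer(pt, pr)        # P(x_i)P(y_j): the independence hypothesis
print("I(X,Y) =", kl(joint, product), "bits/channel use")
```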
39. Mutual Information: Definition
Proof: Property 2
It is known that $H(X) \geq H(X/Y)$, so
$$H(X) - H(X/Y) \geq 0$$
If $X$ and $Y$ are statistically independent, then $H(X/Y) = H(X) \Rightarrow I(X, Y) = 0$. Therefore,
$$I(X, Y) = H(X) - H(X/Y) \geq 0$$
with equality when $X$ and $Y$ are statistically independent.
44. Capacity of DMC: Transmission Rate
- $H(X)$ is the amount of uncertainty about $X$; in other words, the information gained when we are told about $X$.
- $H(X/Y)$ is the amount of uncertainty remaining about $X$ when $Y$ is observed; in other words, the amount of information still required to resolve $X$ once we are told about $Y$.
- $I(X, Y)$ is the amount of uncertainty about $X$ resolved by observing the output $Y$.
- So, the amount of information that can be transmitted over a channel is precisely the amount of uncertainty resolved by observing the channel output.
45. Capacity of DMC: Transmission Rate
Thus it is possible to transmit approximately $I(X, Y)$ bits of information per channel use without any uncertainty about the input at the output of the channel:
$$I_t = I(X, Y) = H(X) - H(X/Y) \ \text{bits/channel use}$$
If the symbol rate of the source is $R_s$, then the rate of information that can be transmitted over the channel, such that the input can be resolved approximately without errors, is
$$D_t = [H(X) - H(X/Y)]\, R_s \ \text{bits/sec}$$
46. Capacity of DMC: Transmission Rate
For an ideal channel $X = Y$: there is no uncertainty about $X$ when we observe $Y$.
$$\Rightarrow H(X/Y) = 0 \Rightarrow I(X, Y) = H(X) - H(X/Y) = H(X)$$
So all the information is transmitted on each channel use: $I_t = I(X, Y) = H(X)$.

If the channel is so noisy that $X$ and $Y$ are independent, the uncertainty about $X$ remains the same regardless of any observation of $Y$.
$$\Rightarrow H(X/Y) = H(X) \Rightarrow I(X, Y) = H(X) - H(X/Y) = 0$$
i.e., no information passes through the channel: $I_t = I(X, Y) = 0$.
48. Capacity of DMC: Definition
The capacity of a DMC is the maximum rate of information transmission over the channel. The maximum rate of transmission occurs when the source is matched to the channel.

Definition
The capacity of a DMC is defined as the maximum rate of information transmission over the channel, where the maximum is taken over all possible input distributions $P(X)$:
$$
\begin{aligned}
C &= \max_{P(X)} I(X, Y)\, R_s \ \text{bits/sec} \\
&= \max_{P(X)} [H(X) - H(X/Y)]\, R_s \ \text{bits/sec} \\
&= \max_{P(X)} [H(Y) - H(Y/X)]\, R_s \ \text{bits/sec}
\end{aligned}
$$
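For small alphabets the maximization over $P(X)$ can be approximated by brute force (the standard general tool is the Blahut-Arimoto algorithm). The sketch below sweeps the input distribution of an assumed, non-symmetric binary channel to estimate $C$ with $R_s = 1$, i.e. in bits per channel use.

```python
# Sketch: brute-force capacity estimate for an assumed binary channel.
import numpy as np

def entropy(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    nz = dist[dist > 0]
    return -np.sum(nz * np.log2(nz))

def mutual_info(pt, P):
    """I(X,Y) = H(X) + H(Y) - H(X,Y) for input distribution pt and channel matrix P."""
    joint = pt[:, None] * P
    return entropy(pt) + entropy(joint.sum(axis=0)) - entropy(joint)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # assumed non-symmetric binary channel (rows sum to 1)

qs = np.linspace(0.001, 0.999, 999)   # sweep P(X=0) = q
rates = [mutual_info(np.array([q, 1.0 - q]), P) for q in qs]
best = int(np.argmax(rates))
print(f"C ~ {rates[best]:.4f} bits/channel use, achieved near P(X=0) ~ {qs[best]:.3f}")
```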
52. Capacity of DMC: Noiseless Binary Channel
For a noiseless binary channel every conditional probability $P(x_i / y_j)$ is either 0 or 1 (and $P(x_i, y_j) = 0$ whenever $P(x_i / y_j) = 0$), so every term vanishes:
$$
\begin{aligned}
H(X/Y) &= -\sum_{i=0}^{1} \sum_{j=0}^{1} P(x_i, y_j) \log_2 P(x_i / y_j) \\
&= -[P(x_0, y_0) \log_2 P(x_0/y_0) + P(x_0, y_1) \log_2 P(x_0/y_1) \\
&\qquad + P(x_1, y_0) \log_2 P(x_1/y_0) + P(x_1, y_1) \log_2 P(x_1/y_1)] = 0 \\
\Rightarrow I(X, Y) &= H(X) - H(X/Y) = H(X)
\end{aligned}
$$
Therefore, the capacity of the noiseless binary channel is
$$C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(X) = 1 \ \text{bit/channel use}$$
i.e., over a noiseless binary channel at most one bit of information can be sent per channel use, which is the maximum information content of a binary source.
56. Capacity of DMC: Noisy Binary Symmetric Channel
$$
\begin{aligned}
H(Y/X) &= -\sum_{i=0}^{1} \sum_{j=0}^{1} P(x_i, y_j) \log_2 P(y_j / x_i) \\
&= -[P(x_0, y_0) \log_2 P(y_0/x_0) + P(x_0, y_1) \log_2 P(y_1/x_0) \\
&\qquad + P(x_1, y_0) \log_2 P(y_0/x_1) + P(x_1, y_1) \log_2 P(y_1/x_1)] \\
&= -[P(x_0)\, p \log_2 p + P(x_0)(1 - p) \log_2(1 - p) \\
&\qquad + P(x_1)(1 - p) \log_2(1 - p) + P(x_1)\, p \log_2 p] \\
&= -[p \log_2 p + (1 - p) \log_2(1 - p)] = H(p, 1 - p)
\end{aligned}
$$
$$\Rightarrow I(X, Y) = H(Y) - H(Y/X) = H(Y) - H(p, 1 - p)$$
Therefore, the capacity of the noisy binary symmetric channel is
$$C = \max_{P(X)} I(X, Y) = \max_{P(X)} H(Y) - H(p, 1 - p) = 1 - H(p, 1 - p) \ \text{bits/channel use}$$
58. Capacity of DMC: Noisy Binary Symmetric Channel
To achieve the capacity $1 - H(p, 1 - p)$ over a noisy binary symmetric channel, the input distribution should make $H(Y) = 1$. Now $H(Y) = 1$ if $P(y_0) = P(y_1) = \frac{1}{2}$:
$$P(x_0)\, p + P(x_1)(1 - p) = \frac{1}{2} \quad\text{and}\quad P(x_0)(1 - p) + P(x_1)\, p = \frac{1}{2}$$
$$\Rightarrow (1 - 2p)(P(x_1) - P(x_0)) = 0 \Rightarrow P(x_1) = P(x_0) = \frac{1}{2}$$
Thus, over a binary symmetric channel, the maximum information rate is achieved when the source symbols are equally likely.
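As a closing illustration, the sketch below evaluates $C = 1 - H(p, 1 - p)$ for a few values of $p$: $p = 1/2$ gives a useless channel ($C = 0$), and $p = 1$ recovers the noiseless capacity of one bit per channel use.

```python
# Sketch: BSC capacity C = 1 - H(p, 1-p) versus p.
import numpy as np

def H2(p):
    """Binary entropy H(p, 1-p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for p in [0.5, 0.7, 0.9, 0.99, 1.0]:   # p = P(correct transmission), as on these slides
    print(f"p = {p}: C = {1 - H2(p):.4f} bits/channel use")
```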