This document contains solved examples on information theory. It begins with examples calculating the information rate of a telegraph source that sends dots and dashes. It then provides examples calculating the entropy, message rate, and information rate of a PCM voice signal quantized into 16 levels, and of a message source that generates one of four messages. It constructs Shannon-Fano and Huffman codes, works through joint and conditional entropies and mutual information for several discrete channels, and closes with Shannon-Hartley channel-capacity calculations.
Digital Communication (GTU)
Chapter 3 : Information Theory
Section 3.5 :
Ex. 3.5.3 : Consider a telegraph source having two symbols, dot and dash. The dot duration is
0.2 seconds; and the dash duration is 3 times the dot duration. The probability of the dot
occurring is twice that of the dash, and the time between symbols is 0.2 seconds.
Calculate the information rate of the telegraph source.
Soln. :
Given that : 1. Dot duration : 0.2 sec.
2. Dash duration : 3 × 0.2 = 0.6 sec.
3. P (dot) = 2 P (dash).
4. Space between symbols is 0.2 sec.
Information rate = ?
1. Probabilities of dots and dashes :
Let the probability of a dash be “P”. Therefore the probability of a dot will be “2P”. The total
probability of transmitting dots and dashes is equal to 1.
∴ P (dot) + P (dash) = 1
∴ P + 2P = 1 ∴ P = 1/3
∴ Probability of dash = 1/3
and probability of dot = 2/3 …(1)
2. Average information H (X) per symbol :
∴ H (X) = P (dot) · log2 [ 1/P (dot) ] + P (dash) · log2 [ 1/P (dash) ]
∴ H (X) = (2/3) log2 [ 3/2 ] + (1/3) log2 [ 3 ] = 0.3899 + 0.5283 = 0.9182 bits/symbol.
3. Symbol rate (Number of symbols/sec.) :
The total average time per symbol can be calculated as follows :
Average symbol time Ts = [ Tdot × P (dot) ] + [ Tdash × P (dash) ] + Tspace
∴ Ts = [ 0.2 × 2/3 ] + [ 0.6 × 1/3 ] + 0.2 = 0.5333 sec./symbol.
Hence the average rate of symbol transmission is given by :
r = 1/Ts = 1.875 symbols/sec.
4. Information rate (R) :
R = r × H ( X ) = 1.875 × 0.9182 = 1.72 bits/sec. ...Ans.
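As a quick numerical check, the short Python sketch below (our own illustration, not part of the textbook; the variable names are ours) recomputes H (X), the average symbol time and the information rate:

```python
from math import log2

p = {"dot": 2 / 3, "dash": 1 / 3}   # P(dot) = 2 P(dash), P(dot) + P(dash) = 1
T = {"dot": 0.2, "dash": 0.6}       # symbol durations in seconds
T_space = 0.2                       # gap between symbols in seconds

H = sum(pk * log2(1 / pk) for pk in p.values())   # bits/symbol
Ts = sum(T[s] * p[s] for s in p) + T_space        # average time per symbol
r = 1 / Ts                                        # symbols/sec
R = r * H                                         # bits/sec

print(f"H = {H:.4f} bits/symbol, r = {r:.3f} symbols/s, R = {R:.2f} bits/s")
# -> H = 0.9183 bits/symbol, r = 1.875 symbols/s, R = 1.72 bits/s
```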
Ex. 3.5.4 : The voice signal in a PCM system is quantized in 16 levels with the following
probabilities :
P1 = P2 = P3 = P4 = 0.1 P5 = P6 = P7 = P8 = 0.05
P9 = P10 = P11 = P12 = 0.075 P13 = P14 = P15 = P16 = 0.025
Calculate the entropy and information rate. Assume fm = 3 kHz.
Soln. :
It is given that,
1. The number of levels = 16. Therefore number of messages = 16.
2. fm = 3 kHz.
(a) To find the entropy of the source :
The entropy is defined as,
H = Σ pk log2 (1/ pk) ...(1)
As M = 16, Equation (1) expands into sixteen terms, which group by equal probabilities as,
H = Σ (k = 1 to 16) pk log2 (1/ pk)
= 4 [0.1 log2 (1/0.1)] + 4 [0.05 log2 (1/0.05)]
+ 4 [0.075 log2 (1/0.075)] + 4 [0.025 log2 (1/0.025)]
∴ H = 0.4 log2 (10) + 0.2 log2 (20) + 0.3 log2 (13.33) + 0.1 log2 (40)
= 1.3288 + 0.8644 + 1.1211 + 0.5322
∴ H = 3.85 bits/message ...(2) ...Ans.
(b) To find the message rate (r) :
The minimum rate of sampling is Nyquist rate.
Therefore fs = 2 × fm
= 2 × 3 kHz = 6 kHz ...(3)
Hence there are 6000 samples/sec. As each sample is converted to one of the 16 levels, there are
6000 messages/sec.
∴ Message rate r = 6000 messages/sec ...(4)
(c) To find the information rate (R) :
R = r × H = 6000 × 3.85 = 23100 bits/sec ...Ans.
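The same kind of check for this example, assuming only the data given above (Nyquist-rate sampling and the 16 level probabilities); this is our own sketch, not textbook code:

```python
from math import log2

probs = [0.1] * 4 + [0.05] * 4 + [0.075] * 4 + [0.025] * 4
assert abs(sum(probs) - 1.0) < 1e-12        # the 16 levels form a valid distribution

H = sum(p * log2(1 / p) for p in probs)     # entropy, bits/message
fm = 3e3                                    # signal bandwidth, Hz
r = 2 * fm                                  # Nyquist rate: messages (samples) per second
print(f"H = {H:.2f} bits/message, R = {r * H:.0f} bits/s")
# -> H = 3.85 bits/message, R = 23079 bits/s (23100 when H is rounded to 3.85)
```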
Ex. 3.5.5 : A message source generates one of four messages randomly every microsecond. The
probabilities of these messages are 0.4, 0.3, 0.2 and 0.1. Each emitted message is
independent of other messages in the sequence :
1. What is the source entropy ?
2. What is the rate of information generated by this source in bits per second ?
Soln. :
It is given that,
1. Number of messages, M = 4, let us denote them by m1, m2, m3 and m4.
2. Their probabilities are p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.
3. One message is transmitted per microsecond.
∴ Message transmission rate r = 1 / (1 μs) = 1 × 10^6 messages/sec.
(a) To obtain the source entropy (H) :
H = Σ pk log2 ( 1/pk )
∴ H = p1 log2 ( 1/ p1 ) + p2 log2 ( 1/ p2 ) + p3 log2 ( 1/ p3 ) + p4 log2 ( 1/ p4 )
= 0.4 log2 ( 1/0.4 ) + 0.3 log2 ( 1/0.3 ) + 0.2 log2 ( 1/0.2 ) + 0.1 log2 ( 1/0.1 )
∴ H = 1.846 bits/message ...Ans.
(b) To obtain the information rate (R) :
R = H × r = 1.846 × 1 × 10^6 = 1.846 Mbits/sec ...Ans.
Ex. 3.5.6 : A source consists of 4 letters A, B, C and D. For transmission each letter is coded into a
sequence of two binary pulses. A is represented by 00, B by 01, C by 10 and D by 11.
The probability of occurrence of each letter is P (A) = 1/5, P (B) = 1/4, P (C) = 1/4 and
P (D) = 3/10. Determine the entropy of the source and average rate of transmission of
information.
Soln. : The given data can be summarised as shown in the following table :
Message    Probability    Code
A          1/5            00
B          1/4            01
C          1/4            10
D          3/10           11
Assumption : Let us assume that the message transmission rate is r = 4000 messages/sec.
(a) To determine the source entropy :
H = (1/5) log2 (5) + (1/4) log2 (4) + (1/4) log2 (4) + (3/10) log2 (10/3)
∴ H = 1.9855 bits/message ...Ans.
(b) To determine the information rate :
R = r × H = [4000 messages/sec] × [1.9855 bits/message]
R = 7942 bits/sec ...Ans.
(c) Maximum possible information rate :
Number of messages/sec = 4000
But here the number of binary digits/message = 2
∴ Number of binary digits (binits)/sec = 4000 × 2 = 8000 binits/sec.
We know that each binit can convey a maximum average information of 1 bit
∴ H = 1 bit/binit
∴ Maximum rate of information transmission = [8000 binits/sec] × [Hmax = 1 bit/binit]
= 8000 bits/sec ...Ans.
Section 3.6 :
Ex. 3.6.3 : A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities
p ( x1 ) = 0.4, p ( x2 ) = 0.19, p ( x3 ) = 0.16, p ( x4 ) = 0.14 and p ( x 5 ) = 0.11. Construct the
Shannon-Fano code for this source. Calculate the average code word length and coding
efficiency of the source.
Soln. : Follow the steps given below to obtain the Shannon-Fano code.
Step 1 : List the source symbols in the order of decreasing probability.
Step 2 : Partition the set into two sets that are as close to being equiprobable as possible and assign
0 to the upper set and 1 to the lower set.
Step 3 : Continue this process, each time partitioning the sets with as nearly equal probabilities as
possible until further partitioning is not possible.
(a) The Shannon-Fano codes are as given in Table P. 3.6.3.
Table P. 3.6.3 : Shannon-Fano codes
Symbol    Probability    Step 1    Step 2    Step 3    Code word
x1        0.4            0         0         (stop)    00
x2        0.19           0         1         (stop)    01
x3        0.16           1         0         (stop)    10
x4        0.14           1         1         0         110
x5        0.11           1         1         1         111
(b) Average code word length (L) :
The average code word length is given by :
L = Σ pk × (length of mk in bits)
= ( 0.4 × 2 ) + ( 0.19 × 2 ) + ( 0.16 × 2 ) + ( 0.14 × 3 ) + ( 0.11 × 3 )
= 2.25 bits/message
(c) Entropy of the source (H) :
H = Σ pk log2 ( 1 / pk )
= 0.4 log2 ( 1 / 0.4 ) + 0.19 log2 ( 1 / 0.19 ) + 0.16 log2 ( 1 / 0.16 )
+ 0.14 log2 ( 1 / 0.14 ) + 0.11 log2 ( 1 / 0.11 ) = 2.15 bits/message
(d) Coding efficiency (η) :
η = H / L = 2.15 / 2.25 = 0.9556 i.e. η ≈ 95.6 % ...Ans.
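The partition rule of Steps 1 to 3 is easy to express as a recursive function. The sketch below is our own illustration (the helper name shannon_fano is ours, not the textbook's); it reproduces the code words of Table P. 3.6.3:

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs, sorted by decreasing probability."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # pick the split that makes the two partial sums as nearly equal as possible
    best_gap, split, running = float("inf"), 1, 0.0
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        gap = abs(total - 2 * running)
        if gap < best_gap:
            best_gap, split = gap, i
    upper = shannon_fano(symbols[:split])   # upper set gets prefix '0'
    lower = shannon_fano(symbols[split:])   # lower set gets prefix '1'
    return {**{s: "0" + c for s, c in upper.items()},
            **{s: "1" + c for s, c in lower.items()}}

code = shannon_fano([("x1", 0.4), ("x2", 0.19), ("x3", 0.16), ("x4", 0.14), ("x5", 0.11)])
print(code)  # -> {'x1': '00', 'x2': '01', 'x3': '10', 'x4': '110', 'x5': '111'}
```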
Ex. 3.6.6 : A discrete memoryless source has an alphabet of seven symbols with probabilities for its
output as described in Table P. 3.6.6(a).
Table P. 3.6.6(a)
Symbol S0 S1 S2 S3 S4 S5 S6
Probability 0.25 0.25 0.125 0.125 0.125 0.0625 0.0625
Compute the Huffman code for this source moving the “combined” symbol as high as
possible. Explain why the computed source code has an efficiency of 100 percent.
Soln. : The Huffman code for the source alphabets is as shown in Fig. P. 3.6.6.
Fig. P. 3.6.6 : Huffman code
Follow the path indicated by the dotted line to obtain the codeword for symbol S0 as 10.
Similarly we can obtain the codewords for the remaining symbols. These are as listed in
Table P. 3.6.6(b).
Table P. 3.6.6(b)
Symbol    Probability    Codeword    Codeword length
S0        0.25           10          2 bits
S1        0.25           11          2 bits
S2        0.125          001         3 bits
S3        0.125          010         3 bits
S4        0.125          011         3 bits
S5        0.0625         0000        4 bits
S6        0.0625         0001        4 bits
To compute the efficiency :
1. The average code length L = Σ pk × (codeword length of Sk in bits)
From Table P. 3.6.6(b),
L = ( 0.25 × 2 ) + ( 0.25 × 2 ) + ( 0.125 × 3 ) × 3 + ( 0.0625 × 4 ) × 2
∴ L = 2.625 bits/symbol
2. The average information per message H = Σ pk log2 ( 1 / pk )
∴ H = [ 0.25 log2 ( 4 ) ] × 2 + [ 0.125 log2 ( 8 ) ] × 3 + [ 0.0625 log2 ( 16 ) ] × 2
= [ 0.25 × 2 × 2 ] + [ 0.125 × 3 × 3 ] + [ 0.0625 × 4 × 2 ]
∴ H = 2.625 bits/message.
3. Code efficiency η = (H / L) × 100 = (2.625 / 2.625) × 100
∴ η = 100%
Note : As the average information per symbol (H) is equal to the average code length (L), the code
efficiency is 100%.
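For comparison, the Huffman construction for the same seven-symbol alphabet can be written with Python's heapq. This is a generic sketch of the algorithm, not the textbook's hand construction; tie-breaks may yield a different but equally optimal labelling, while L and H are unaffected:

```python
import heapq
from math import log2

probs = {"S0": 0.25, "S1": 0.25, "S2": 0.125, "S3": 0.125, "S4": 0.125,
         "S5": 0.0625, "S6": 0.0625}

# heap entries carry (probability, tie-breaker, partial code table)
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
tie = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)       # merge the two least probable groups
    p2, _, c2 = heapq.heappop(heap)
    merged = {**{s: "0" + c for s, c in c1.items()},
              **{s: "1" + c for s, c in c2.items()}}
    heapq.heappush(heap, (p1 + p2, tie, merged))
    tie += 1

code = heap[0][2]
L = sum(probs[s] * len(c) for s, c in code.items())
H = sum(p * log2(1 / p) for p in probs.values())
print(f"L = {L}, H = {H}, efficiency = {H / L:.0%}")
# -> L = 2.625, H = 2.625, efficiency = 100%
```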
Section 3.11 :
Ex. 3.11.5 : Calculate differential entropy H (X) of the uniformly distributed random variable X with
probability density function.
fX (x) = 1/a 0≤x≤a
= 0 elsewhere
for 1. a = 1, 2. a = 2, 3. a = 1/2.
Soln. :
The uniform PDF of the random variable X is as shown in Fig. P. 3.11.5.
Fig. P. 3.11.5
1. The average amount of information per sample value of x (t) is measured by,
H (X) = ∫ fX (x) · log2 [1/fX (x)] dx bits/sample …(1)
The entropy H (X) defined by the expression above is called as the differential entropy of X.
2. Substituting the value of fX (x) in the expression for H (X) we get,
H (X) = ∫ (from 0 to a) (1/a) · log2 (a) dx = log2 (a) ...(2)
(a) Substitute a = 1 to get, H (X) = log2 (1) = 0 ...Ans.
(b) Substitute a = 2 to get, H (X) = log2 (2) = 1 ...Ans.
(c) Substitute a = 1/2 to get, H (X) = log2 (1/2) = – 1 ...Ans.
These are the values of differential entropy for various values of a.
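A one-line check (our own sketch) of the closed form H (X) = log2 (a) used above, including the negative value that differential entropy can take:

```python
from math import log2

for a in (1, 2, 0.5):
    # integral of (1/a) * log2(a) over [0, a] reduces to log2(a)
    print(f"a = {a}: H(X) = {log2(a):+.0f} bit(s)")
# -> a = 1: +0, a = 2: +1, a = 0.5: -1
```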
Ex. 3.11.6 : A discrete source transmits messages x1, x2, x3 with probabilities p ( x1 ) = 0.3,
p ( x2 ) = 0.25, p ( x3 ) = 0.45. The source is connected to the channel whose conditional
probability matrix is
              y1     y2     y3
P (Y / X) = [ 0.9    0.1    0
              0      0.8    0.2
              0      0.3    0.7 ]
Calculate all the entropies and mutual information with this channel.
Soln. :
Steps to be followed :
Step 1 : Obtain the joint probability matrix P (X, Y).
Step 2 : Obtain the probabilities p (y1), p (y2), p (y3).
Step 3 : Obtain the conditional probability matrix P (X/Y)
Step 4 : Obtain the marginal entropies H (X) and H (Y).
Step 5 : Calculate the conditional entropy H (X/Y).
Step 6 : Calculate the joint entropy H (X , Y).
Step 7 : Calculate the mutual information I (X , Y).
Step 1 : Obtain the joint probability matrix P (X, Y) :
The given matrix P (Y/X) is the conditional probability matrix. We can obtain the joint
probability matrix P (X , Y) as :
P (X, Y) = P [ Y/X ] · P (X), i.e. multiply row i of P (Y/X) by p ( xi ) :
               y1      y2      y3
∴ P (X, Y) = [ 0.27    0.03    0
               0       0.2     0.05
               0       0.135   0.315 ]    ...(1)
Step 2 : Obtain the probabilities p (y1), p (y2) and p (y3) :
The probabilities p ( y1 ), p ( y2 ) and p ( y3 ) can be obtained by adding the column entries of
P (X , Y) matrix of Equation (1).
∴ p ( y1 ) = 0.27 + 0 + 0 = 0.27
p ( y2 ) = 0.03 + 0.2 + 0.135 = 0.365
p ( y3 ) = 0 + 0.05 + 0.315 = 0.365
Step 3 : Obtain the conditional probability matrix P (X/Y) :
The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint
probability matrix P (X , Y) of Equation (1) by p (y1), p (y2) and p (y3) respectively.
               y1    y2        y3
∴ P (X /Y) = [ 1     0.0821    0
               0     0.5479    0.1369
               0     0.3698    0.863 ]    ...(2)
Step 4 : To obtain the marginal entropies H (X) and H (Y) :
H (X) = Σ p ( xi ) log2 [ 1/p ( xi ) ]
= p ( x1 ) log2 [ 1/p ( x1 ) ] + p ( x2 ) log2 [ 1/p ( x2 ) ] + p ( x3 ) log2 [ 1/p ( x3 )]
Substituting the values of p ( x1 ), p ( x2 ) and p ( x3 ) we get,
= 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.45 log2 (1/0.45)
= [ (0.3 × 1.7369) + (0.25 × 2) + (0.45 × 1.152) ]
∴ H (X) = [ 0.521 + 0.5 + 0.5184 ] = 1.5394 bits/message ...Ans.
Similarly H (Y) = p ( y1 ) log2 [ 1/y1 ] + p ( y2 ) log2 [ 1/y2 ] + p ( y3 ) log2 [ 1/y3 ]
= 0.27 log2 [ 1/0.27 ] + 0.365 × 2 × log2 [ 1/0.365 ]
H (Y) = 0.51 + 1.0614 = 1.5714 bits/message ...Ans.
Step 5 : To obtain the conditional entropy H (X / Y) :
H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi /yj )
∴ H (X/Y) = – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )
– p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )
– p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )
Refer to the joint and conditional matrices given in Fig. P. 3.11.6.
             y1     y2      y3
P (X, Y) = [ 0.27   0.03    0
             0      0.2     0.05
             0      0.135   0.315 ]

             y1     y2       y3
P (X /Y) = [ 1      0.0821   0
             0      0.5479   0.1369
             0      0.3698   0.863 ]
Fig. P. 3.11.6 : The joint and conditional probability matrices
Substituting various values from these two matrices we get,
H (X/Y) = – 0.27 log2 1 – 0.03 log2 (0.0821) – 0 – 0 – 0.2 log2 (0.5479)
– 0.05 log2 (0.1369) – 0 – 0.135 log2 (0.3698) – 0.315 log2 (0.863)
= 0 + 0.108 + 0.1736 + 0.1434 + 0.1937 + 0.0669
H (X/Y) = 0.6856 bits / message ...Ans.
Step 6 : To obtain the joint entropy H (X , Y) :
The joint entropy H (X , Y) is given by,
H (X, Y) = – Σi Σj p ( xi , yj ) · log2 p ( xi , yj )
∴ H (X, Y) = – [ p ( x1 , y1 ) log2 p ( x1 , y1 ) + p ( x1 , y2 ) log2 p ( x1 , y2 ) + p ( x1 , y3 )
log2 p ( x1 , y3) + p ( x2 , y1 ) log2 p ( x2 , y1 ) + p ( x2 , y2 ) log2 p ( x2 , y2 )
+ p ( x2 , y3 ) log2 p ( x2 , y3 ) + p ( x3 , y1 ) log2 p ( x3 , y1)
+ p ( x3 , y2 ) log2 p ( x3 , y2 ) + p ( x3 , y3 ) log2 p ( x3 , y3 ) ]
Referring to the joint matrix we get,
∴ H (X, Y) = – [ 0.27 log2 0.27 + 0.03 log2 0.03 + 0 + 0 + 0.2 log2 0.2 + 0.05 log2 0.05 + 0
+ 0.135 log2 0.135 + 0.315 log2 0.315 ]
= [ 0.51 + 0.1517 + 0.4643 + 0.216 + 0.39 + 0.5249]
∴ H (X, Y) = 2.2569 bits/message ...Ans.
Step 7 : To calculate the mutual information :
Mutual information is given by,
I [ X, Y ] = H (X) – H (X/Y) = 1.5394 – 0.6856 = 0.8538 bits.
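Steps 1 to 7 are mechanical enough to automate. The numpy sketch below (our own helper, with our own variable names) reproduces the entropies and the mutual information of this example; the last decimals differ slightly from the hand-rounded values above:

```python
import numpy as np

P_yx = np.array([[0.9, 0.1, 0.0],    # P(Y/X), rows = x1..x3
                 [0.0, 0.8, 0.2],
                 [0.0, 0.3, 0.7]])
p_x = np.array([0.3, 0.25, 0.45])

P_xy = P_yx * p_x[:, None]           # joint matrix: scale row i by p(xi)   (Step 1)
p_y = P_xy.sum(axis=0)               # column sums                          (Step 2)
P_x_given_y = P_xy / p_y             # divide each column by p(yj)          (Step 3)

def H(p):                            # entropy of a probability vector, in bits
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

H_X, H_Y = H(p_x), H(p_y)            # marginal entropies                   (Step 4)
H_XY = H(P_xy.ravel())               # joint entropy                        (Step 6)
H_X_given_Y = H_XY - H_Y             # conditional entropy                  (Step 5)
I = H_X - H_X_given_Y                # mutual information                   (Step 7)
print(f"H(X)={H_X:.4f}  H(Y)={H_Y:.4f}  H(X,Y)={H_XY:.4f}  "
      f"H(X/Y)={H_X_given_Y:.4f}  I(X;Y)={I:.4f}")
# -> H(X)=1.5395  H(Y)=1.5714  H(X,Y)=2.2573  H(X/Y)=0.6858  I(X;Y)=0.8537
```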
Ex. 3.11.7 : For the given channel matrix, find out the mutual information. Given that p ( x1 ) = 0.6,
p ( x2 ) = 0.3 and p ( x3 ) = 0.1.
              y1     y2     y3
P (Y / X) = [ 0.5    0.5    0
              0.5    0      0.5
              0      0.5    0.5 ]
Soln. :
Steps to be followed :
Step 1 : Obtain the joint probability matrix P (X , Y).
Step 2 : Calculate the probabilities p ( y1 ), p ( y2 ), p ( y3 ).
Step 3 : Obtain the conditional probability matrix P (X/Y).
Step 4 : Calculate the marginal entropy H (X).
Step 5 : Calculate the conditional entropy H (X/Y).
Step 6 : Find the mutual information.
Step 1 : Obtain the joint probability matrix P (X , Y) :
We can obtain the joint probability matrix P (X , Y) as
P (X , Y) = P (Y/X) · P (X)
So multiply the rows of the P (Y / X) matrix by p ( x1 ), p ( x2 ) and p ( x3 ) to get,
             y1          y2          y3
P (X, Y) = [ 0.5 × 0.6   0.5 × 0.6   0
             0.5 × 0.3   0           0.5 × 0.3
             0           0.5 × 0.1   0.5 × 0.1 ]

               y1     y2     y3
∴ P (X, Y) = [ 0.3    0.3    0
               0.15   0      0.15
               0      0.05   0.05 ]    …(1)
Step 2 : Obtain the probabilities p ( y1 ), p ( y2 ), p ( y3 ) :
These probabilities can be obtained by adding the column entries of P (X , Y) matrix of
Equation (1).
∴ p ( y1 ) = 0.3 + 0.15 + 0 = 0.45
p ( y2 ) = 0.3 + 0 + 0.05 = 0.35
p ( y3 ) = 0 + 0.15 + 0.05 = 0.20
Step 3 : Obtain the conditional probability matrix P (X/Y) :
The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint
probability matrix P (X , Y) of Equation (1) by p ( y1 ), p ( y2 ) and p ( y3 ) respectively.
               y1            y2            y3
∴ P (X /Y) = [ 0.3 / 0.45    0.3 / 0.35    0
               0.15 / 0.45   0             0.15 / 0.2
               0             0.05 / 0.35   0.05 / 0.2 ]

               y1      y2      y3
∴ P (X /Y) = [ 0.667   0.857   0
               0.333   0       0.75
               0       0.143   0.25 ]    …(2)
Step 4 : Calculate the marginal entropy H (X) :
H (X) = – Σ p ( xi ) log2 p ( xi )
= – p ( x1 ) log2 p ( x1 ) – p ( x2 ) log2 p ( x2 ) – p ( x3 ) log2 p ( x3 )
= – 0.6 log2 (0.6) – 0.3 log2 (0.3) – 0.1 log2 (0.1)
= 0.4421 + 0.5210 + 0.3321
∴ H (X) = 1.2952 bits/message
Step 5 : Obtain the conditional entropy H (X/Y) :
H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi /yj )
= – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )
– p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )
– p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )
Refer to the joint and conditional matrices of Fig. P. 3.11.7.
P (X /Y) :                    P (X, Y) :
  y1      y2      y3            y1     y2     y3
  0.667   0.857   0             0.3    0.3    0
  0.333   0       0.75          0.15   0      0.15
  0       0.143   0.25          0      0.05   0.05
Fig. P. 3.11.7
Substituting various values from these two matrices we get,
H (X/Y) = – 0.3 log2 0.667 – 0.3 log2 0.857 – 0
– 0.15 log2 0.333 – 0 – 0.15 log2 0.75
– 0 – 0.05 log2 0.143 – 0.05 log2 0.25
∴ H (X/Y) = 0.1752 + 0.06678 + 0.2379 + 0.06225 + 0.1402 + 0.1
∴ H (X/Y) = 0.78233 bits/message
Step 6 : Mutual information :
I (X , Y) = H (X) – H (X/Y)
= 1.2952 – 0.78233 = 0.51287 bits ...Ans.
Ex. 3.11.8 : State the joint and conditional entropy. For a signal which is known to have a uniform
density function in the range 0 ≤ x ≤ 5; find entropy H (X). If the same signal is amplified
eight times, then determine H (X).
Soln. : For the definitions of joint and conditional entropy
refer to sections 3.10.1 and 3.10.2.
The uniform PDF of the random variable X is as shown
in Fig. P. 3.11.8.
1. The differential entropy H (X) of the given R.V. X is
given by,
H (X) = ∫ fX (x) log2 [1/fX (x)] dx bits/sample
2. Let us define the PDF fX (x). It is given that fX (x) is uniform in the range 0 ≤ x ≤ 5.
∴ Let fX (x) = k .... 0 ≤ x ≤ 5
= 0 .... elsewhere
But area under fX (x) is always 1.
∴ ∫ fX (x) dx = 1
∴ ∫ (from 0 to 5) k dx = 1 i.e. 5k = 1
∴ k = 1/5
Hence the PDF of X is given by,
fX (x) = 1/5 .... 0 ≤ x ≤ 5
= 0 .... elsewhere
3. Substituting the value of fX (x) we get,
H (X) = ∫ (from 0 to 5) (1/5) log2 (5) dx = log2 (5)
∴ H (X) = 2.322 bits/sample ...Ans.
4. If the same signal is amplified eight times, the range becomes 0 ≤ x ≤ 40, so that
fX (x) = 1/40 over this range, and
H (X) = log2 (40) = 5.322 bits/sample ...Ans.
Ex. 3.11.9 : Two binary symmetrical channels are connected in cascade as shown in Fig. P. 3.11.9.
1. Find the channel matrix of the resultant channel.
2. Find p ( z1 ) and p ( z2 ) if p ( x1 ) = 0.6 and p ( x2 ) = 0.4.
Fig. P. 3.11.9 : Cascaded BSCs for Ex. 3.11.9
Soln. :
Steps to be followed :
Step 1 : Write the channel matrix for the individual channels as P [ Y/X ] for the first one and
P [ Z/Y ] for the second channel.
Step 2 : Obtain the channel matrix for the cascaded channel as,
P [ Z/X ] = P [ Y/X ] · P [ Z/Y ]
Step 3 : Calculate the probabilities P ( z1 ) and P ( z2 ).
1. To obtain the individual channel matrix :
The channel matrix of a BSC consists of the transition probabilities of the channel. That means
the channel matrix for channel – 1 is given by,
P [ Y/X ] = [ P ( y1/x1 )   P ( y2/x1 )
              P ( y1/x2 )   P ( y2/x2 ) ]    ...(1)
Substituting the values we get,
P [ Y/X ] = [ 0.8   0.2
              0.2   0.8 ]    ...(2)
Similarly the channel matrix for the second BSC is given by,
P [ Z/Y ] = [ P ( z1/y1 )   P ( z2/y1 )
              P ( z1/y2 )   P ( z2/y2 ) ]    ...(3)
Substituting the values we get,
P [ Z/Y ] = [ 0.7   0.3
              0.3   0.7 ]    ...(4)
2. Channel matrix of the resultant channel :
The channel matrix of the resultant channel is given by,
P [ Z/X ] = [ P ( z1/x1 )   P ( z2/x1 )
              P ( z1/x2 )   P ( z2/x2 ) ]    ...(5)
The probability P ( z1/x1 ) can be expressed by referring to Fig. P. 3.11.9 as,
P ( z1/x1 ) = P ( z1/y1 ) · P ( y1/x1 ) + P ( z1/y2 ) · P ( y2/x1 )    ...(6)
Similarly we can obtain the expressions for the remaining terms in the channel matrix of the
resultant channel. These expressions are exactly the elements obtained by multiplying the
individual channel matrices :
∴ P (Z/X) = P (Y/X) · P (Z/Y)    …(7)
∴ P (Z/X) = [ 0.8   0.2     [ 0.7   0.3
              0.2   0.8 ] ·   0.3   0.7 ]
= [ (0.8 × 0.7) + (0.2 × 0.3)   (0.8 × 0.3) + (0.2 × 0.7)
    (0.2 × 0.7) + (0.8 × 0.3)   (0.2 × 0.3) + (0.8 × 0.7) ]
= [ 0.62   0.38
    0.38   0.62 ]    ...Ans.
This is the required resultant channel matrix.
3. To calculate P ( z1 ) and P ( z2 ) :
From Fig. P. 3.11.9 we can write the following expression,
P ( z1 ) = P ( z1/ y1 ) P ( y1 ) + P ( z1/ y2 ) · P ( y2 ) …(9)
Substituting P ( y1 ) = P ( x1 ) · P ( y1/ x1 ) + P ( x2 ) · P ( y1/ x2 )
= (0.6 × 0.8) + (0.4 × 0.2) = 0.56
and P ( y2 ) = P ( x1 ) · P ( y2/ x1 ) + P ( x2 ) · P ( y2/ x2 )
= (0.6 × 0.2) + (0.4 × 0.8) = 0.44
and P ( z1/ y1 ) = 0.7 and P ( z1/ y2 ) = 0.3
We get, P ( z1 ) = (0.7 × 0.56) + (0.3 × 0.44)
∴ P ( z1 ) = 0.392 + 0.132 = 0.524 ...Ans.
Similarly P ( z2 ) = P ( z2/ y1 ) P ( y1 ) + P ( z2/ y2 ) · P ( y2 )
= (0.3 × 0.56) + (0.7 × 0.44)
∴ P ( z2 ) = 0.476 ...Ans.
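The cascade rule of Equation (7) is an ordinary matrix product, as the short numpy check below (our own sketch, not textbook code) confirms:

```python
import numpy as np

P_yx = np.array([[0.8, 0.2], [0.2, 0.8]])   # first BSC
P_zy = np.array([[0.7, 0.3], [0.3, 0.7]])   # second BSC
P_zx = P_yx @ P_zy                          # resultant channel matrix P(Z/X)
p_x = np.array([0.6, 0.4])
p_z = p_x @ P_zx                            # output probabilities

print(P_zx)   # -> [[0.62 0.38], [0.38 0.62]] (up to float rounding)
print(p_z)    # -> [0.524 0.476]
```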
Ex. 3.11.10 : A binary channel matrix is given by :
                    y1     y2   → outputs
inputs →    x1  [ 2/3    1/3
            x2    1/10   9/10 ]
Determine H (X), H (X/Y), H (Y/X) and mutual information I (X ; Y)
Soln. : The given channel matrix is
             y1     y2
p (x, y) = [ 2/3    1/3
             1/10   9/10 ]
Step 1 : Obtain the individual probabilities :
The individual message probabilities are given by -
p ( x1 ) = 2/3 + 1/3 = 1
p ( x2 ) = 1/10 + 9/10 = 1
p ( y1 ) = 2/3 + 1/10 = 23/30
p ( y2 ) = 1/3 + 9/10 = 37/30
Step 2 : Obtain the marginal entropies H (X) and H (Y) :
H (X) = p ( x1 ) log2 [ 1/ p ( x1 ) ] + p ( x2 ) log2 [ 1/ p ( x2 ) ]
= 1 log2 (1) + 1 log2 (1)
∴ H (X) = 0
H (Y) = p ( y1 ) log2 [ 1/ p ( y1 ) ] + p ( y2 ) log2 [ 1/ p ( y2 ) ]
= (23/30) log2 [ 30/23 ] + (37/30) log2 [30/37]
H (Y) = 0.2938 – 0.3731 = – 0.07936 ≈ – 0.08
Step 3 : Obtain the joint entropy H (X, Y) :
H (X, Y) = p ( x1 , y1 ) log2 [ 1/ p ( x1 , y1 ) ] + p ( x1 , y2 ) log2 [ 1/ p ( x1 , y2 ) ]
+ p ( x2 , y1 ) log2 [ 1/ p ( x2 , y1 ) ] + p ( x2 , y2 ) log2 [ 1/ p ( x2 , y2 ) ]
∴ H (X, Y) = (2/3) log2 (3/2) + (1/3) log2 (3) + (1/10) log2 (10) + (9/10) log2 (10/9)
= 0.38 + 0.52 + 0.33 + 0.13 = 1.36 bits
Step 4 : Obtain the conditional entropies H (X/Y) and H (Y/X) :
H (X/Y) = H (X , Y) – H (Y)
= 1.36 – (– 0.08) = 1.44 bits.
H (Y/X) = H (X , Y) – H (X)
= 1.36 – 0 = 1.36 bits.
Step 5 : Mutual information :
I (X, Y) = H (X) – H (X/Y)
= 0 – 1.44
= – 1.44 bits/message. ...Ans.
Ex. 3.11.11 : A channel has the following channel matrix :
                y1       y2   y3
[ P (Y/X) ] = [ 1 – P    P    0
                0        P    1 – P ]
1. Draw the channel diagram.
2. If the source has equally likely outputs, compute the probabilities associated with
the channel outputs for P = 0.2.
Soln. :
Part I :
1. The given matrix shows that the number of inputs is two i.e. x1 and x2, whereas the
number of outputs is three i.e. y1, y2 and y3.
2. This channel has two inputs x1 = 0 and x2 = 1 and three outputs y1 = 0, y2 = e and
y3 = 1, as shown in Fig. P. 3.11.11.
Fig. P. 3.11.11 : The channel diagram
The channel diagram is as shown in Fig. P. 3.11.11. This type of channel is called a "binary
erasure channel". The output y2 = e indicates an erasure; that means this output is in doubt and
should be erased.
Part II : Given that the sources x1 and x2 are equally likely
∴ p ( x1 ) = p ( x2 ) = 0.5
It is also given that p = 0.2.
∴ p (y) = p (x) · [ P (y/x) ]
= [ 0.5   0.5 ] · [ 0.8   0.2   0
                    0     0.2   0.8 ]
∴ p (y) = [ 0.4   0.2   0.4 ]
That means p (y1) = 0.4, p ( y2 ) = 0.2 and p ( y3 ) = 0.4
These are the required values of probabilities associated with the channel outputs for p = 0.2.
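The same row-vector by matrix product verifies p (y) for the erasure channel (a minimal sketch of our own):

```python
import numpy as np

P = 0.2
P_yx = np.array([[1 - P, P, 0.0],    # binary erasure channel matrix
                 [0.0,   P, 1 - P]])
p_x = np.array([0.5, 0.5])           # equally likely inputs
print(p_x @ P_yx)                    # -> [0.4 0.2 0.4]
```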
Ex. 3.11.13 : Find the mutual information and channel capacity of the channel as shown in
Fig. P. 3.11.13(a). Given that P ( x1 ) = 0.6 and P ( x2 ) = 0.4.
Fig. P. 3.11.13(a)
Soln. :
Given that : p ( x1 ) = 0.6, p ( x2 ) = 0.4
The conditional probabilities are,
p ( y1/x1 ) = 0.8, p ( y2/x1 ) = 0.2
p ( y1/x2 ) = 0.3 and p ( y2/x2 ) = 0.7
The mutual information can be obtained by
referring to Fig. P. 3.11.13(b).
Fig. P. 3.11.13(b)
As already derived, the mutual information is given by,
I (X ; Y) = Ω [ β + (1 – α – β) p ] – p Ω (α) – (1 – p) Ω (β) ...(1)
Where Ω is called as the horseshoe function which is given by,
Ω (p) = p log2 (1/p) + (1 – p) log2 [1/(1 – p)] ...(2)
Substituting the values we get,
I (X ; Y) = Ω [ 0.3 + (1 – 0.2 – 0.3) 0.6 ] – 0.6 Ω (0.2) – 0.4 Ω (0.3)
∴ I (X ; Y) = Ω (0.6) – 0.6 Ω (0.2) – 0.4 Ω (0.3) …(3)
Using the Equation (2) we get,
I (X ; Y) = [ 0.6 log2 (1/0.6) + 0.4 log2 (1/0.4) ] – 0.6 [0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ]
– 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7)]
∴ I (X ; Y) = 0.1853 bits. ...Ans.
Channel capacity (C) :
For the asymmetric binary channel,
C = 1 – p Ω (α) – (1 – p) Ω (β)
= 1 – 0.6 Ω (0.2) – 0.4 Ω (0.3)
= 1 – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ] – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]
= 1 – 0.433 – 0.352
C = 0.214 bits ...Ans.
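A short numerical sketch (our own; the names alpha, beta, p and omega follow the symbols used above) reproduces both results:

```python
from math import log2

def omega(p):                       # the "horseshoe" (binary entropy) function, Eq. (2)
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

alpha, beta, p = 0.2, 0.3, 0.6      # p = p(x1)
I = omega(beta + (1 - alpha - beta) * p) - p * omega(alpha) - (1 - p) * omega(beta)
C = 1 - p * omega(alpha) - (1 - p) * omega(beta)
print(f"I(X;Y) = {I:.4f} bits, C = {C:.4f} bits")
# -> I(X;Y) = 0.1853 bits, C = 0.2143 bits
```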
Section 3.12
Ex. 3.12.3 : In a facsimile transmission of a picture, there are about 2.25 × 10^6 picture elements per
frame. For good reproduction, twelve brightness levels are necessary. Assuming all these
levels to be equiprobable, calculate the channel bandwidth required to transmit one
picture every three minutes for a signal to noise power ratio of 30 dB. If the SNR
requirement increases to 40 dB, calculate the new bandwidth. Explain the trade-off
between bandwidth and SNR by comparing the two results.
Soln. :
Given : Number of picture elements per frame = 2.25 × 10^6
Number of brightness levels = 12 = M
All the twelve brightness levels are equiprobable.
Number of pictures per minute = 1/3
SNR1 = 30 dB SNR2 = 40 dB
1. Calculate the information rate :
The number of picture elements per frame is 2.25 × 10^6 and these elements can be of any
brightness out of the possible 12 brightness levels.
The information rate (R) = No. of messages/sec. × Average information per message.
R = r×H ...(1)
Where r = (2.25 × 10^6 elements) / (3 × 60 sec.) = 12500 elements/sec. ...(2)
and H = log2 M = log2 12 ...as all brightness levels are
equiprobable. ...(3)
∴ R = 12,500 × log2 12
∴ R = 44.812 k bits/sec. ...(4)
2. Calculate the bandwidth B :
The Shannon’s capacity theorem states that,
R ≤ C where C = B log2 [1 + (S/N)] ...(5)
Substituting S/N = 30 dB = 1000 we get,
∴ 44.812 × 10^3 ≤ B log2 [1 + 1000]
∴ B ≥ 44.812 × 10^3 / log2 (1001)
∴ B ≥ 4.4959 kHz. ...Ans.
3. BW for S/N = 40 dB :
For signal to noise ratio of 40 dB or 10,000 let us calculate new value of bandwidth.
∴ 44.812 × 10^3 ≤ B log2 [1 + 10000]
∴ B ≥ 44.812 × 10^3 / log2 (10001)
∴ B ≥ 3.372 kHz. ...Ans.
Trade-off between bandwidth and SNR : As the signal to noise ratio is increased from
30 dB to 40 dB, the required bandwidth drops from about 4.5 kHz to about 3.37 kHz; a higher
SNR lets the same information rate pass through a smaller bandwidth.
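The bandwidth calculation generalises to a small helper (our own sketch; min_bandwidth is a name we introduce for illustration, derived from the Shannon-Hartley limit R ≤ B log2 (1 + S/N)):

```python
from math import log2

def min_bandwidth(R_bps, snr_db):
    """Smallest channel bandwidth (Hz) that can carry R_bps at the given SNR."""
    snr = 10 ** (snr_db / 10)
    return R_bps / log2(1 + snr)

R = 12500 * log2(12)                                  # facsimile source rate
print(f"R = {R:.0f} bits/s")                          # -> 44812
print(f"B(30 dB) = {min_bandwidth(R, 30):.0f} Hz")    # -> ~4496 Hz
print(f"B(40 dB) = {min_bandwidth(R, 40):.0f} Hz")    # -> ~3372 Hz
```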
Ex. 3.12.4 : An analog signal having bandwidth of 4 kHz is sampled at 1.25 times the Nyquist rate,
with each sample quantised into one of 256 equally likely levels.
1. What is information rate of this source ?
2. Can the output of this source be transmitted without error over an AWGN channel
with a bandwidth of 10 kHz and an SNR of 20 dB ?
3. Find the SNR required for error-free transmission in part (2).
4. Find the bandwidth required for an AWGN channel for error-free transmission of this
source if the SNR happens to be 20 dB.
Soln. :
Given : fm = 4 kHz., fs = 1.25 × 2 × fm = 1.25 × 2 × 4 kHz = 10 kHz.
Quantization levels Q = 256 (equally likely).
1. Information rate (R) :
R = r×H ...(1)
Where r = Number of messages/sec.
= Number of samples/sec. = 10 kHz.
and H = log2 256 ...as all the levels are equally likely
∴ R = 10 × 10^3 × log2 256 = 10 × 10^3 × 8
∴ R = 80 k bits/sec. ...Ans.
2. Channel capacity (C) :
In order to answer the question asked in (ii) we have to calculate the channel capacity C.
Given :
B = 10 kHz and S/N = 20 dB = 100
∴ C = B log2 [1 + (S/N)] = 10 × 10^3 log2 [101]
∴ C = 66.582 k bits/sec.
For error-free transmission it is necessary that R ≤ C. But here R = 80 kb/s and
C = 66.582 kb/s, i.e. R > C, hence error-free transmission is not possible.
3. S/N ratio for error-free transmission in part (2) :
Substituting C = R = 80 kb/s we get,
80 × 10^3 = B log2 [1 + (S/N)]
∴ 80 × 10^3 = 10 × 10^3 log2 [1 + (S/N)]
∴ 8 = log2 [1+ (S/N)]
∴ 256 = 1+ (S/N)
∴ S/N = 255 or 24.06 dB ...Ans.
This is the required value of the signal to noise ratio to ensure the error free transmission.
4. BW required for the error-free transmission :
Given :
C = 80 kb/s, S/N = 20 dB = 100
∴ C = B log2 [1 + (S/N)]
∴ 80 × 10^3 = B log2 [1 + 100]
∴ B ≥ 80 × 10^3 / log2 (101) ≈ 12 kHz. ...Ans.
Ex. 3.12.5 : A channel has a bandwidth of 5 kHz and a signal to noise power ratio 63. Determine the
bandwidth needed if the S/N power ratio is reduced to 31. What will be the signal power
required if the channel bandwidth is reduced to 3 kHz ?
Soln. :
1. To determine the channel capacity :
It is given that B = 5 kHz and S/N = 63. Hence using the Shannon-Hartley theorem the channel
capacity is given by,
C = B log2 [1 + (S/N)] = 5 × 10^3 log2 [1 + 63]
∴ C = 30 × 10^3 bits/sec ...(1)
2. To determine the new bandwidth :
The new value of S/N = 31. Assuming the channel capacity "C" to be constant we can write,
30 × 10^3 = B log2 [1 + 31]
∴ B = 30 × 10^3 / 5 = 6 kHz ...(2)
3. To determine the new signal power :
Given that the new bandwidth is 3 kHz. We know that noise power N = N0 B.
Let the noise power corresponding to a bandwidth of 6 kHz be N1 = 6 N0 and the noise power
corresponding to the new bandwidth of 3 kHz be N2 = 3 N0.
∴ N1 / N2 = 6 N0 / 3 N0 = 2 i.e. N2 = N1 / 2 ...(3)
The old signal to noise ratio S1 / N1 = 31
∴ S1 = 31 N1 ...(4)
The new signal to noise ratio is S2 / N2. We do not know its value, hence let us find it out.
30 × 10^3 = 3 × 10^3 log2 [1 + (S2 / N2)]
∴ S2 / N2 = 2^10 – 1 = 1023 ...(5)
∴ S2 = 1023 N2
But from Equation (3), N2 = N1 / 2, substituting we get,
∴ S2 = 1023 N1 / 2 = 511.5 N1 ...(6)
Dividing Equation (6) by Equation (4) we get,
S2 / S1 = 511.5 / 31 = 16.5
∴ S2 = 16.5 S1 ...Ans.
Thus if the bandwidth is reduced by 50% then the signal power must be increased 16.5 times i.e.
1650% to get the same capacity.
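The exchange can be checked in a few lines (our own sketch), using the fact that noise power scales with bandwidth (N = N0 B):

```python
from math import log2

C = 5e3 * log2(1 + 63)           # fixed capacity = 30000 bits/s
snr_6k = 2 ** (C / 6e3) - 1      # -> 31, checks Equation (2)
snr_3k = 2 ** (C / 3e3) - 1      # -> 1023, checks Equation (5)
# S1 = 31*N1 at 6 kHz; S2 = 1023*N2 at 3 kHz with N2 = N1/2:
print(f"S2/S1 = {snr_3k / 2 / snr_6k:.1f}")   # -> 16.5
```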
Ex. 3.12.6 : A 2 kHz channel has signal to noise ratio of 24 dB :
(a) Calculate maximum capacity of this channel.
(b) Assuming constant transmitting power, calculate maximum capacity when channel
bandwidth is : 1. halved 2. reduced to a quarter of its original value.
Soln. :
Data : B = 2 kHz and (S/N) = 24 dB.
The SNR should be converted from dB to power ratio.
∴ 24 = 10 log10 (S/N)
∴ S/N = 10^2.4 = 251 ...(1)
(a) To determine the channel capacity :
C = B log2 [1 + (S/N)] = 2 × 10^3 log2 [1 + 251] = 2 × 10^3 × 7.977
∴ C = 15.95 × 10^3 bits/sec ...Ans.
(b) 1. Value of C when B is halved :
The new bandwidth B2 = 1 kHz, let the old bandwidth be denoted by B1 = 2 kHz.
We know that the noise power N = N0 B
∴ Noise power with old bandwidth = N1 = N0 B1 ...(2)
and Noise power with new bandwidth = N2 = N0 B2 ...(3)
∴ N2 / N1 = N0 B2 / N0 B1 = B2 / B1 = 1/2
∴ N2 = N1 / 2 ...(4)
As the signal power remains constant, the SNR with the new bandwidth is,
S / N2 = 2 × (S / N1)
But we know that S / N1 = 251 ...See Equation (1)
∴ S / N2 = 2 × 251 = 502 ...(5)
Hence the new channel capacity is given by,
C = B2 log2 [1 + (S / N2)] = 1 × 10^3 log2 (503)
= 1 × 10^3 × 8.974
∴ C = 8.97 × 10^3 bits/sec ...Ans.
2. Value of C when B is reduced to 1/4 of its original value :
Equation (4) gets modified to,
N3 = N1 / 4 ...(6)
∴ S / N3 = 4 × (S / N1) = 4 × 251 = 1004 ...(7)
Hence the new channel capacity is given by,
C = B3 log2 [1 + (S / N3)] = 500 log2 (1005)
∴ C = 4.99 × 10^3 bits/sec ...Ans.