Information Theory: Communication Basics
Session 1
Measure of Information
What is communication?
Communication is the transmission of information from one point to another through a succession of processes.
What are the basic elements to every communication system?
o Transmitter
o Channel and
o Receiver
How are information sources classified?
Source definition
Analog : Emits a continuous-amplitude, continuous-time electrical waveform.
Discrete : Emits a sequence of letters or symbols.
How to transform an analog information source into a discrete one? By sampling the waveform in time and quantizing the sample amplitudes.
What will be the output of a discrete information source?
A string or sequence of symbols.
[Figure: Communication system block diagram - source of information produces the message signal, the transmitter produces the transmitted signal, the channel delivers the received signal to the receiver, and the receiver delivers an estimate of the message signal to the user of information.]
[Figure: Classification of an information source into analog and discrete.]
EVERY MESSAGE PUT OUT BY A SOURCE CONTAINS SOME
INFORMATION. SOME MESSAGES CONVEY MORE INFORMATION THAN
OTHERS.
How to measure the information content of a message quantitatively?
To answer this, we are required to arrive at an intuitive concept of the amount of
information.
Consider the following examples:
A trip to Mercara (Coorg) in the winter time during evening hours,
1. It is a cold day
2. It is a cloudy day
3. Possible snow flurries
Amount of information received is obviously different for these messages.
o Message (1) contains very little information, since the weather in Coorg is cold for most of the time during the winter season.
o The forecast of a cloudy day contains more information, since it is not an event that occurs often.
o In contrast, the forecast of snow flurries conveys even more information, since the occurrence of snow in Coorg is a rare event.
On an intuitive basis, then with a knowledge of the occurrence of an event, what can be
said about the amount of information conveyed?
It is related to the probability of occurrence of the event.
What do you conclude from the above example with regard to quantity of information?
Message associated with an event least likely to occur contains most information.
How can the information content of a message be expressed quantitatively?
The above concepts can now be framed in terms of probabilities as follows:
Say that an information source emits one of q possible messages m_1, m_2, ..., m_q with p_1, p_2, ..., p_q as their probabilities of occurrence.
Based on the above intuition, the information content of the k-th message can be written as
I(m_k) proportional to 1/p_k
Also, to satisfy the intuitive concept of information, I(m_k) must approach zero as p_k approaches 1.
Can I(m_k) be negative? What can I(m_k) be at worst?
What then is the summary from the above discussion?
I(m_k) > I(m_j) if p_k < p_j
I(m_k) -> 0 as p_k -> 1            ------ I
I(m_k) >= 0 when 0 <= p_k <= 1
Another requirement is that when two independent messages are received, the total information content is the sum of the information conveyed by each of the messages. Thus we have
I(m_k and m_q) = I(m_k) + I(m_q)   ------ II
Can you now suggest a continuous function of p_k that satisfies the constraints specified in I and II above?
I(m_k) = log(1/p_k)                ------ III
What is the unit of information measure?
The base of the logarithm determines the unit assigned to the information content:
Natural logarithm base : nat
Base-10 : Hartley / decit
Base-2 : bit
The use of the binary digit as the unit of information is based on the fact that, if two possible binary digits occur with equal probability (p_1 = p_2 = 1/2), then the correct identification of the binary digit conveys an amount of information
I(m_1) = I(m_2) = -log2(1/2) = 1 bit
One bit is the amount of information that we gain when one of two possible and equally likely events occurs.
Illustrative Example
A source puts out one of five possible messages during each message interval. The probabilities of these messages are p_1 = 1/2, p_2 = 1/4, p_3 = 1/8, p_4 = 1/16, p_5 = 1/16.
What is the information content of these messages?
I(m_1) = -log2(1/2) = 1 bit
I(m_2) = -log2(1/4) = 2 bits
I(m_3) = -log2(1/8) = 3 bits
I(m_4) = -log2(1/16) = 4 bits
I(m_5) = -log2(1/16) = 4 bits
HW: Calculate I for the above messages in nats and Hartleys.
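As a quick check of these values (and of the homework that follows), here is a minimal Python sketch that evaluates the information content of each message in bits, nats and Hartleys; the probabilities are the ones listed in the example.

```python
import math

# Message probabilities from the example above
probs = {"m1": 1/2, "m2": 1/4, "m3": 1/8, "m4": 1/16, "m5": 1/16}

for name, p in probs.items():
    bits = -math.log2(p)        # base-2  -> bits
    nats = -math.log(p)         # base-e  -> nats
    hartleys = -math.log10(p)   # base-10 -> Hartleys (decits)
    print(f"{name}: {bits:.0f} bits = {nats:.4f} nats = {hartleys:.4f} Hartleys")
```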
Digital Communication System
What then is the course objective?
[Figure: Digital communication system block diagram - source of information, source encoder, channel encoder, modulator, channel, demodulator, channel decoder, source decoder, user of information; the message signal, source code word, channel code word, waveform and received signal appear on the transmit side, and the estimates of the channel code word, source code word and message signal on the receive side.]
INFORMATION THEORY AND CODING
Session 2
Entropy and rate of Information of an Information Source /
Model of a Markoff Source
1. Average Information Content of Symbols in Long Independent Sequences
Suppose that a source is emitting one of M possible symbols s_1, s_2, ..., s_M in a statistically independent sequence. Let p_1, p_2, ..., p_M be the probabilities of occurrence of the M symbols respectively. Suppose further that during a long period of transmission a sequence of N symbols has been generated.
On an average, s_1 will occur N p_1 times, s_2 will occur N p_2 times, ..., s_i will occur N p_i times.
The information content of the i-th symbol is I(s_i) = log2(1/p_i) bits.
The N p_i occurrences of s_i therefore contribute an information content of
N p_i . I(s_i) = N p_i . log2(1/p_i) bits
The total information content of the message is the sum of the contributions due to each of the M symbols of the source alphabet, i.e.,
I_total = sum over i = 1..M of N p_i log2(1/p_i) bits
The average information content per symbol is
H = I_total / N = sum over i = 1..M of p_i log2(1/p_i) bits per symbol      ---- IV
This is the equation given by Shannon. The average information content per symbol is also called the source entropy.
What is the average information associated with an extremely unlikely message?
What is the average information associated with an extremely likely message?
What is the dependence of H on the probabilities of the messages?
To answer this, consider the situation where you have just two messages with probabilities p and (1 - p).
The average information per message is
H = p log2(1/p) + (1 - p) log2(1/(1 - p))
At p = 0, H = 0 and at p = 1, H = 0 again. The maximum value of H occurs at p = 1/2 and is easily obtained as
H_max = (1/2) log2 2 + (1/2) log2 2 = log2 2 = 1
H_max = 1 bit / message
[Figure: Plot of H versus p - H rises from 0 at p = 0 to a maximum of 1 bit at p = 1/2 and falls back to 0 at p = 1.]
The above observation can be generalized for a source with an alphabet of M symbols. Entropy attains its maximum value when the symbol probabilities are equal, i.e., when p_1 = p_2 = p_3 = ... = p_M = 1/M:
H_max = -sum of p_i log2 p_i = sum over i = 1..M of (1/M) log2 M = log2 M bits / symbol
What do you mean by information rate?
If the source is emitting symbols at a fixed rate of r_s symbols/sec, the average source information rate R is defined as
R = r_s . H bits / sec
Illustrative Examples
1. Consider a discrete memoryless source with a source alphabet A = {s_0, s_1, s_2} with respective probabilities p_0 = 1/4, p_1 = 1/4, p_2 = 1/2. Find the entropy of the source.
Solution: By definition, the entropy of a source is given by
H = sum over i = 1..M of p_i log2(1/p_i) bits/symbol
For this example,
H(A) = sum over i = 0..2 of p_i log2(1/p_i)
Substituting the values given, we get
H(A) = (1/4) log2 4 + (1/4) log2 4 + (1/2) log2 2 = 3/2 = 1.5 bits
If r_s = 1 per sec, then
R = r_s H(A) = 1.5 bits/sec
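The entropy and rate computed above can be verified with a few lines of Python; the probabilities and the symbol rate below are the ones assumed in this example.

```python
import math

def entropy(probs):
    """Source entropy H = sum p_i * log2(1/p_i), in bits/symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

probs = [1/4, 1/4, 1/2]   # p0, p1, p2 of the example
r_s = 1                   # symbols per second

H = entropy(probs)
print(f"H(A) = {H} bits/symbol")   # 1.5
print(f"R = {r_s * H} bits/sec")   # 1.5
```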
2. An analog signal is band-limited to B Hz, sampled at the Nyquist rate, and the samples are quantized into 4 levels. The quantization levels Q_1, Q_2, Q_3 and Q_4 (messages) are assumed independent and occur with probabilities P_1 = P_4 = 1/8 and P_2 = P_3 = 3/8. Find the information rate of the source.
Solution: By definition, the average information H is given by
H = p_1 log2(1/p_1) + p_2 log2(1/p_2) + p_3 log2(1/p_3) + p_4 log2(1/p_4)
Substituting the values given, we get
H = (1/8) log2 8 + (3/8) log2(8/3) + (3/8) log2(8/3) + (1/8) log2 8 = 1.8 bits/message.
The source emits r_s = 2B messages per second (the Nyquist rate), so the information rate of the source is
R = r_s H = 2B (1.8) = (3.6 B) bits/sec
3. Compute the values of H and R if, in the above example, the quantization levels are chosen so that they are equally likely to occur.
Solution:
The average information per message is
H = 4 x (1/4) log2 4 = 2 bits/message
and R = r_s H = 2B (2) = (4 B) bits/sec
Markoff Model for Information Sources
Assumption
A source puts out symbols belonging to a finite alphabet according to probabilities that depend on the preceding symbols as well as on the particular symbol in question.
Define a random process
A statistical model of a system that produces a sequence of symbols as stated above, governed by a set of probabilities, is known as a random process.
Therefore, we may consider a discrete source as a random process, and the converse is also true: a random process that produces a discrete sequence of symbols chosen from a finite set may be considered as a discrete source.
Can you give an example of such a source?
What is a discrete stationary Markoff process?
It provides a statistical model for the symbol sequences emitted by a discrete source. A general description of the model can be given as below:
1. At the beginning of each symbol interval, the source will be in one of n possible states 1, 2, ..., n, where n is bounded by
n <= (M)^m
M = number of symbols/letters in the alphabet of the discrete stationary source,
m = number of symbols for which the source's residual influence lasts, i.e. m represents the order of the source.
m = 2 means a second-order source; m = 1 means a first-order source.
The source changes state once during each symbol interval, from say i to j. The probability of this transition is p_ij. p_ij depends only on the initial state i and the final state j, and does not depend on the states during any of the preceding symbol intervals.
2. When the source changes state from i to j, it emits a symbol. The symbol emitted depends on the initial state i and the transition i -> j.
3. Let s_1, s_2, ..., s_M be the symbols of the alphabet, and let x_1, x_2, x_3, ..., x_k, ... be a sequence of random variables, where x_k represents the k-th symbol in a sequence emitted by the source. Then the probability that the k-th symbol emitted is s_q will depend on the previous symbols x_1, x_2, x_3, ..., x_(k-1) emitted by the source, i.e.,
P(x_k = s_q / x_1, x_2, ..., x_(k-1))
4. The residual influence of x_1, x_2, ..., x_(k-1) on x_k is represented by the state of the system at the beginning of the k-th symbol interval, i.e.,
P(x_k = s_q / x_1, x_2, ..., x_(k-1)) = P(x_k = s_q / S_k)
where S_k is a discrete random variable representing the state of the system at the beginning of the k-th interval.
The term "state" is used to remember past history or residual influence, in the same sense as state variables in system theory or states in sequential logic circuits.
INFORMATION THEORY AND CODING
Session 3
System Analysis with regard to Markoff sources
Representation of Discrete Stationary Markoff sources:
How?
o They are represented in graph form, with the nodes of the graph representing states and the transitions between states represented by directed lines from the initial to the final state.
o The transition probabilities and the symbols emitted corresponding to each transition are marked along the lines of the graph.
A typical example for such a source is given below.
What do we understand from this source?
o It is an example of a source emitting one of three symbols A, B, and C
o The probability of occurrence of a symbol depends on the particular symbol in question and the symbol immediately preceding it.
What does this imply?
o Residual or past influence lasts only for a duration of one symbol.
[Figure: State diagram of the example source - three states 1, 2 and 3 with initial probabilities P_1(1) = P_2(1) = P_3(1) = 1/3; the directed transitions between states are labelled with the symbols A, B and C emitted and with their transition probabilities.]
What is the last symbol emitted by this source?
o The last symbol emitted by the source can be A or B or C. Hence the past history can be represented by three states, one for each of the three symbols of the alphabet.
What do you understand from the nodes of the source?
o Suppose that the system is in state (1), i.e. the last symbol emitted by the source was A.
o The source now emits symbol A with the probability marked on the corresponding branch and returns to state (1),
OR
o the source emits letter B with the corresponding branch probability and goes to state (3),
OR
o the source emits symbol C with the corresponding branch probability and goes to state (2).
State transition and symbol generation can also be illustrated using a tree diagram.
What is a tree diagram?
A tree diagram is a planar graph where the nodes correspond to states and the branches correspond to transitions. Transitions between states occur once every T_s seconds. Along the branches of the tree, the transition probabilities and the symbols emitted are indicated.
[Figure: One stage of the tree - from state 1 the source emits A (returning to state 1), C (to state 2) or B (to state 3), with the corresponding branch probabilities; the initial state probabilities are P_1(1) = P_2(1) = P_3(1) = 1/3.]
What is the tree diagram for the source considered?
[Figure: Two-stage tree diagram for the source - starting from each initial state (1, 2 or 3, each with probability 1/3), the branches show the symbol emitted, the branch probability, the state at the end of the first and second symbol intervals, and the resulting two-symbol sequences AA, AC, AB, CA, CC, CB, BA, BC, BB.]
What is the use of the tree diagram?
Tree diagram can be used to obtain the probabilities of generating various symbol
sequences.
How to generate a symbol sequence, say AB?
This can be generated by any one of the following state-sequence transitions:
1 -> 1 -> 3, or 2 -> 1 -> 3, or 3 -> 1 -> 3
Therefore the probability of the source emitting the two-symbol sequence AB is given by
P(AB) = P(S_1 = 1, S_2 = 1, S_3 = 3)
     or P(S_1 = 2, S_2 = 1, S_3 = 3)      ----- (1)
     or P(S_1 = 3, S_2 = 1, S_3 = 3)
Note that the three transition paths are disjoint. Therefore
P(AB) = P(S_1 = 1, S_2 = 1, S_3 = 3) + P(S_1 = 2, S_2 = 1, S_3 = 3) + P(S_1 = 3, S_2 = 1, S_3 = 3)      ----- (2)
The first term on the RHS of equation (2) can be written as
P(S_1 = 1, S_2 = 1, S_3 = 3)
  = P(S_1 = 1) P(S_2 = 1 / S_1 = 1) P(S_3 = 3 / S_1 = 1, S_2 = 1)
  = P(S_1 = 1) P(S_2 = 1 / S_1 = 1) P(S_3 = 3 / S_2 = 1)
[Figure: The three transition paths that produce AB - state sequences 1 -> 1 -> 3, 2 -> 1 -> 3 and 3 -> 1 -> 3, with the branch probabilities marked.]
Recall the Markoff property: the transition probability to S_3 depends on S_2 but not on how the system got to S_2.
Therefore P(S_1 = 1, S_2 = 1, S_3 = 3) = (1/3) x (the product of the two branch probabilities along the path 1 -> 1 -> 3).
Similarly, the other terms on the RHS of equation (2) can be evaluated. Adding the three path probabilities gives
P(AB) = 4/48 = 1/12
Similarly the probs of occurrence of other symbol sequences can be computed.
What do you conclude from the above computation?
In general the probability of the source emitting a particular symbol sequence can
be computed by summing the product of probabilities in the tree diagram along all the
paths that yield the particular sequences of interest.
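The path-summing rule just stated can be turned into a small sketch. The transition table below is only an illustrative placeholder (the actual branch probabilities should be read off the state diagram of the source), so it demonstrates the method rather than this particular source.

```python
def sequence_probability(seq, transitions, initial):
    """Sum, over all tree paths, of the products of branch probabilities
    that spell out the given symbol sequence."""
    total = 0.0

    def walk(state, idx, prob):
        nonlocal total
        if idx == len(seq):
            total += prob
            return
        for nxt, sym, p in transitions[state]:
            if sym == seq[idx]:
                walk(nxt, idx + 1, prob * p)

    for start, p0 in initial.items():
        walk(start, 0, p0)
    return total

# Hypothetical source: state -> list of (next_state, emitted symbol, probability).
transitions = {
    1: [(1, "A", 0.50), (2, "C", 0.25), (3, "B", 0.25)],
    2: [(2, "B", 0.50), (1, "A", 0.25), (3, "C", 0.25)],
    3: [(3, "C", 0.50), (1, "A", 0.25), (2, "B", 0.25)],
}
initial = {1: 1/3, 2: 1/3, 3: 1/3}   # P_i(1)

print(sequence_probability("AB", transitions, initial))
```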
Illustrative Example:
1. For the information source given draw the tree diagram and find the probs. of messages
of lengths 1, 2 and 3.
Source given emits one of 3 symbols A, B and C
Tree diagram for the source outputs can be easily drawn as shown.
[Figure: State diagram of the source - two states 1 and 2 with initial probabilities p_1 = P_2 = 1/2. From state 1 the source emits A with probability 3/4 and stays in state 1, or emits C with probability 1/4 and moves to state 2; from state 2 it emits B with probability 3/4 and stays in state 2, or emits C with probability 1/4 and moves to state 1.]
Messages of length (1) and their probabilities:
P(A) = 1/2 x 3/4 = 3/8
P(B) = 1/2 x 3/4 = 3/8
P(C) = 1/2 x 1/4 + 1/2 x 1/4 = 1/8 + 1/8 = 1/4
Messages of length (2):
How many such messages are there? Seven.
Which are they? AA, AC, CB, CC, BB, BC and CA.
What are their probabilities?
Message AA: 1/2 x 3/4 x 3/4 = 9/32
Message AC: 1/2 x 3/4 x 1/4 = 3/32
and so on.
[Figure: Tree diagram for the source - starting from states 1 and 2 (each with probability 1/2), the branches show the symbols emitted and the states reached over three symbol intervals, yielding the three-symbol sequences AAA, AAC, ACC, ACB, CCA, CCC, CBC, CBB, CAA, CAC, CCB, BCA, BCC, BBC, BBB.]
Tabulate the various probabilities:
Messages of length (1): A = 3/8, B = 3/8, C = 1/4
Messages of length (2): AA = 9/32, AC = 3/32, CB = 3/32, CC = 2/32, BB = 9/32, BC = 3/32, CA = 3/32
Messages of length (3): AAA = 27/128, AAC = 9/128, ACC = 3/128, ACB = 9/128, BBB = 27/128, BBC = 9/128, BCC = 3/128, BCA = 9/128, CCA = 3/128, CCB = 3/128, CCC = 2/128, CBC = 3/128, CAC = 3/128, CBB = 9/128, CAA = 9/128
A second-order Markoff source
The model shown is an example of a source where the probability of occurrence of a symbol depends not only on the particular symbol in question, but also on the two symbols preceding it.
Number of states: n <= (M)^m; here M = 2 and m = 2, so n <= 4.
m = number of symbols for which the residual influence lasts (a duration of 2 symbols), i.e. the order of the source;
M = number of letters/symbols in the alphabet.
Say the system is in state 3 at the beginning of a symbol interval; then the two symbols most recently emitted by the source were BA. Similar comments apply to the other states.
Write the tree diagram for this source.
[Figure: State diagram of the second-order binary source - four states 1 (AA), 2 (AB), 3 (BA), 4 (BB) with initial probabilities P_1(1), P_2(1), P_3(1), P_4(1); the transitions are labelled with the symbol emitted (A or B) and with probabilities such as 7/8, 1/8, 3/4 and 1/4.]
Entropy and Information Rate of Markoff Sources
Session 4
Definition of the entropy of the source
Assume that the probability of being in state i at the beginning of the first symbol interval is the same as the probability of being in state i at the beginning of the second symbol interval, and so on. The probability of going from state i to state j also does not depend on time. The entropy of state i is defined as the average information content of the symbols emitted from the i-th state:
H_i = sum over j = 1..n of p_ij log2(1/p_ij) bits / symbol      ------ (1)
The entropy of the source is defined as the average of the entropies of the states, i.e.,
H = E(H_i) = sum over i = 1..n of p_i H_i      ------ (2)
where p_i = the probability that the source is in state i.
Using eqn. (1), eqn. (2) becomes
H = sum over i = 1..n of p_i [ sum over j = 1..n of p_ij log2(1/p_ij) ] bits / symbol      ------ (3)
The average information rate for the source is defined as
R = r_s . H bits/sec
where r_s is the number of state transitions per second, i.e. the symbol rate of the source.
The above concepts can be illustrated with an example
Illustrative Example:
1. Consider an information source modelled by a discrete stationary Markoff random
process shown in the figure. Find the source entropy H and the average information
content per symbol in messages containing one, two and three symbols.
[Figure: The two-state source of the earlier sessions - states 1 and 2 with p_1 = P_2 = 1/2; from state 1 emit A with probability 3/4 (stay in 1) or C with probability 1/4 (go to 2); from state 2 emit B with probability 3/4 (stay in 2) or C with probability 1/4 (go to 1).]
The source emits one of three symbols A, B and C.
A tree diagram can be drawn as illustrated in the previous session to understand the
various symbol sequences and their probabilities.
[Figure: State diagram and tree diagram of the source, as drawn in the previous session.]
As per the outcome of the previous session we have
Messages of length (1): A = 3/8, B = 3/8, C = 1/4
Messages of length (2): AA = 9/32, AC = 3/32, CB = 3/32, CC = 2/32, BB = 9/32, BC = 3/32, CA = 3/32
Messages of length (3): AAA = 27/128, AAC = 9/128, ACC = 3/128, ACB = 9/128, BBB = 27/128, BBC = 9/128, BCC = 3/128, BCA = 9/128, CCA = 3/128, CCB = 3/128, CCC = 2/128, CBC = 3/128, CAC = 3/128, CBB = 9/128, CAA = 9/128
By definition, H_i is given by
H_i = sum over j = 1..n of p_ij log2(1/p_ij)
Put i = 1:
H_1 = sum over j = 1..2 of p_1j log2(1/p_1j) = p_11 log2(1/p_11) + p_12 log2(1/p_12)
Substituting the values we get
H_1 = (3/4) log2(4/3) + (1/4) log2 4
H_1 = 0.8113 bits/symbol
Similarly, H_2 = (1/4) log2 4 + (3/4) log2(4/3) = 0.8113 bits/symbol
By definition, the source entropy is given by
H = sum over i = 1..2 of p_i H_i = (1/2)(0.8113) + (1/2)(0.8113)
H = 0.8113 bits / symbol
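A short numerical check of the state entropies and the source entropy obtained above, using the transition probabilities (3/4, 1/4) and state probabilities (1/2, 1/2) of this example:

```python
import math

def row_entropy(row):
    return sum(p * math.log2(1 / p) for p in row if p > 0)

P = [[3/4, 1/4],     # from state 1: p11, p12
     [1/4, 3/4]]     # from state 2: p21, p22
state_prob = [1/2, 1/2]

H_states = [row_entropy(r) for r in P]                       # H1, H2 -> ~0.8113 each
H_source = sum(pi * Hi for pi, Hi in zip(state_prob, H_states))
print(H_states, H_source)                                    # ~0.8113 bits/symbol
```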
To calculate the average information content per symbol in messages containing two symbols:
How many messages of length (2) are present, and what is the information content of these messages?
There are seven such messages and their information contents are:
I(AA) = I(BB) = log2(1/P(AA)) = log2(1/(9/32)) = 1.83 bits
Similarly, calculate for the other messages and verify that
I(AC) = I(CB) = I(BC) = I(CA) = log2(1/(3/32)) = 3.415 bits
I(CC) = log2(1/(2/32)) = 4 bits
Compute the average information content of these messages. Thus we have
H_(two) = sum over i = 1..7 of P_i log2(1/P_i) = sum over i = 1..7 of P_i . I_i bits
where the I_i are the information contents calculated above for the different messages of length two.
Substituting the values we get
H_(two) = (9/32)(1.83) + (3/32)(3.415) + (3/32)(3.415) + (2/32)(4) + (9/32)(1.83) + (3/32)(3.415) + (3/32)(3.415)
H_(two) = 2.56 bits
Compute the average information content per symbol in messages containing two symbols using the relation
G_N = (average information content of the messages of length N) / (number of symbols in the message)
Here N = 2:
G_2 = H_(two) / 2 = 2.56 / 2 = 1.28 bits / symbol
Similarly, compute the other G's of interest for the problem under discussion, viz. G_1 and G_3. You get them as
G_1 = 1.5612 bits / symbol and G_3 = 1.0970 bits / symbol
What do you understand from the values of the G's calculated?
You note that G_1 > G_2 > G_3 > H.
How do you state this in words?
It can be stated that the average information per symbol in the message reduces as the
length of the message increases.
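The G_N values can be checked directly from the message-probability tables of this example; a minimal sketch:

```python
import math

def G(prob_table, N):
    """Average information per symbol of the N-symbol messages:
    G_N = (1/N) * sum p * log2(1/p)."""
    return sum(p * math.log2(1 / p) for p in prob_table) / N

len1 = [3/8, 3/8, 1/4]
len2 = [9/32, 3/32, 3/32, 2/32, 9/32, 3/32, 3/32]
len3 = [27/128, 9/128, 3/128, 9/128, 27/128, 9/128, 3/128, 9/128,
        3/128, 3/128, 2/128, 3/128, 3/128, 9/128, 9/128]

print(G(len1, 1), G(len2, 2), G(len3, 3))   # G1 ~ 1.561, G2 ~ 1.281, and similarly G3
```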
What is the generalized form of the above statement?
If P(m_i) is the probability of a sequence m_i of N symbols from the source, the average information content per symbol in the messages of N symbols is defined by
G_N = -(1/N) sum over i of P(m_i) log2 P(m_i)      -----(2)
For the next example (a three-state source), the state entropies are
H_i = sum over j = 1..3 of p_ij log2(1/p_ij);  i = 1, 2, 3      ------ (3)
Put i = 1:
H_1 = -sum over j = 1..3 of p_1j log2 p_1j = -[p_11 log2 p_11 + p_12 log2 p_12 + p_13 log2 p_13]
Substituting the values, we get
H_1 = -(1/2) log2(1/2) - (1/2) log2(1/2) - 0 = (1/2) log2 2 + (1/2) log2 2
H_1 = 1 bit / symbol
Put i = 2 in eqn. (3); we get
H_2 = -sum over j = 1..3 of p_2j log2 p_2j = -[p_21 log2 p_21 + p_22 log2 p_22 + p_23 log2 p_23]
Substituting the values given, we get
H_2 = -[(1/4) log2(1/4) + (1/2) log2(1/2) + (1/4) log2(1/4)] = (1/4) log2 4 + (1/2) log2 2 + (1/4) log2 4
H_2 = 1.5 bits/symbol
Similarly, calculate H_3; it will be
H_3 = 1 bit / symbol
With the H_i computed, you can now compute H, the source entropy, using
H = sum over i = 1..3 of P_i H_i = p_1 H_1 + p_2 H_2 + p_3 H_3
Substituting the values we get
H = (1/4)(1) + (1/2)(1.5) + (1/4)(1) = 0.25 + 0.75 + 0.25 = 1.25 bits / symbol
H = 1.25 bits/symbol
Now, using the information-rate relation R = r_s . H, we have
Source information rate = R = r_s x 1.25
Taking r_s as one per second we get
R = 1 x 1.25 = 1.25 bits / sec
Encoding of the Source Output
Session 5
Why encoding?
Suppose that there are M = 2^N messages, all equally likely to occur. Then recall that the average information per message interval is H = N bits. Say further that each message is coded into N bits; the coding efficiency is
eta_c = H(S) / N_hat
where N_hat is the average number of bits per message used by the encoder (here N_hat = N = H, so eta_c = 1).
Shannon's Encoding Algorithm
How to formulate the design of the source encoder? It can be formulated as follows:
q messages: m_1, m_2, ..., m_i, ..., m_q
Probabilities of the messages: p_1, p_2, ..., p_i, ..., p_q
n_i: an integer (the number of bits in the code word for m_i)
[Figure: Source encoder - input: a message of N symbols (one of q possible messages); output: a unique binary code word c_i of length n_i bits for the message m_i.]
What should be the objective of the designer?
To find n_i and c_i for i = 1, 2, ..., q such that the average number of bits per symbol H_hat_N used in the coding scheme is as close to G_N as possible, where
H_hat_N = (1/N) sum over i = 1..q of n_i p_i and G_N = (1/N) sum over i = 1..q of p_i log2(1/p_i)
i.e., the objective is to have H_hat_N approach G_N as closely as possible.
What is the algorithm proposed by Shannon and Fano?
Step 1: The messages for a given block size (N), m_1, m_2, ..., m_q, are arranged in decreasing order of probability.
Step 2: The number of bits n_i (an integer) assigned to message m_i is bounded by
log2(1/p_i) <= n_i < log2(1/p_i) + 1
Step 3: The code word is generated from the binary fraction expansion of F_i, defined as
F_i = sum over k = 1..(i-1) of p_k, with F_1 taken to be zero.
Step 4: Choose the first n_i bits in the expansion of step (3).
Say i = 2; then if n_i as per step (2) is 3 and F_i as per step (3) is 0.0011011..., step (4) says that the code word for message (2) is 001. Similar comments apply to the other messages of the source.
The code word for the message m_i is thus the binary fraction expansion of F_i up to n_i bits, i.e.,
c_i = (F_i)_binary, truncated to n_i bits
Step 5: The design of the encoder is completed by repeating the above steps for all the messages of the block length chosen.
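A compact Python sketch of steps 1-5 above. The helper name and data layout are my own choices, not part of the notes; the code assumes the message probabilities are already known.

```python
import math

def shannon_encoder(messages):
    """messages: dict name -> probability. Returns name -> (n_i, code word)."""
    # Step 1: arrange in decreasing order of probability
    ordered = sorted(messages.items(), key=lambda kv: kv[1], reverse=True)
    codes, F = {}, 0.0
    for name, p in ordered:
        # Step 2: number of bits, log2(1/p) <= n_i < log2(1/p) + 1
        n = math.ceil(math.log2(1 / p))
        # Steps 3-4: binary fraction expansion of F_i, truncated to n_i bits
        frac, bits = F, []
        for _ in range(n):
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        codes[name] = (n, "".join(bits))
        F += p            # F_{i+1} = F_i + p_i
    return codes

# Example: the N = 1 case of the source treated below
print(shannon_encoder({"A": 3/8, "B": 3/8, "C": 1/4}))
# -> {'A': (2, '00'), 'B': (2, '01'), 'C': (2, '11')}
```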
Illustrative Example
Design a source encoder for the information source given (the two-state Markoff source of the earlier sessions, with p_1 = P_2 = 1/2 and transition probabilities 3/4 and 1/4). Compare the average output bit rate and efficiency of the coder for N = 1, 2 and 3.
Solution:
The value of N is to be specified.
Case I: Say N = 3 (block size).
Step 1: Write the tree diagram and get the symbol sequences of length 3 (the tree diagram was drawn in session 3). From the previous session we know that the source emits fifteen (15) distinct three-symbol messages. They are listed below:
AAA 27/128, AAC 9/128, ACC 3/128, ACB 9/128, BBB 27/128, BBC 9/128, BCC 3/128, BCA 9/128, CCA 3/128, CCB 3/128, CCC 2/128, CBC 3/128, CAC 3/128, CBB 9/128, CAA 9/128
Step 2: Arrange the messages m_i in decreasing order of probability:
AAA 27/128, BBB 27/128, CAA 9/128, CBB 9/128, BCA 9/128, BBC 9/128, AAC 9/128, ACB 9/128, CBC 3/128, CAC 3/128, CCB 3/128, CCA 3/128, BCC 3/128, ACC 3/128, CCC 2/128
Step 3: Compute the number of bits to be assigned to a message m_i using
log2(1/p_i) <= n_i < log2(1/p_i) + 1;  i = 1, 2, ..., 15
Say i = 1; then the bound on n_1 is
log2(128/27) <= n_1 < log2(128/27) + 1, i.e. 2.245 <= n_1 < 3.245,
so n_1 can be taken as n_1 = 3.
Step 4: Generate the code word using the binary fraction expansion of F_i, defined as
F_i = sum over k = 1..(i-1) of p_k; with F_1 = 0
Say i = 2, i.e. the second message; then calculate n_2 (you should get 3 bits) and
F_2 = p_1 = 27/128.
The complete table for N = 3 is:
Message   p_i      F_i       n_i   F_i (binary)   Code word c_i
AAA       27/128   0         3     .0000000       000
BBB       27/128   27/128    3     .0011011       001
CAA       9/128    54/128    4     .0110110       0110
CBB       9/128    63/128    4     .0111111       0111
BCA       9/128    72/128    4     .1001000       1001
BBC       9/128    81/128    4     .1010001       1010
AAC       9/128    90/128    4     .1011010       1011
ACB       9/128    99/128    4     .1100011       1100
CBC       3/128    108/128   6     .1101100       110110
CAC       3/128    111/128   6     .1101111       110111
CCB       3/128    114/128   6     .1110010       111001
CCA       3/128    117/128   6     .1110101       111010
BCC       3/128    120/128   6     .1111000       111100
ACC       3/128    123/128   6     .1111011       111101
CCC       2/128    126/128   6     .1111110       111111
What is the average number of bits per symbol used by the encoder?
Average number of bits per message = sum of n_i p_i
Substituting the values from the table we get
Average number of bits = 3.89
Here N = 3, so H_hat_3 = 3.89 / 3 = 1.3 bits / symbol
The state entropy is given by
H_i = sum over j = 1..n of p_ij log2(1/p_ij) bits / symbol
Here the number of states the source can be in is two, i.e. n = 2:
H_i = sum over j = 1..2 of p_ij log2(1/p_ij)
Say i = 1; then the entropy of state (1) is
H_1 = p_11 log2(1/p_11) + p_12 log2(1/p_12) = (3/4) log2(4/3) + (1/4) log2 4 = 0.8113
Similarly we can compute H_2 as
H_2 = p_21 log2(1/p_21) + p_22 log2(1/p_22) = (1/4) log2 4 + (3/4) log2(4/3)
H_2 = 0.8113
The entropy of the source by definition is
H = sum over i of p_i H_i, where p_i = probability that the source is in the i-th state
H = p_1 H_1 + p_2 H_2 = (1/2)(0.8113) + (1/2)(0.8113) = 0.8113
eta_c = (H / H_hat_3) x 100 = (0.8113 / 1.3) x 100 = 62.4%
eta_c for N = 3 is 62.4%.
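The N = 3 figures quoted above (3.89 bits per message, H_hat_3 of about 1.3 and an efficiency of about 62.4%) can be reproduced from the code-word table; a small check, taking H = 0.8113 bits/symbol:

```python
n_i = [3, 3, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 6]
p_i = [x / 128 for x in [27, 27, 9, 9, 9, 9, 9, 9, 3, 3, 3, 3, 3, 3, 2]]

avg_bits_per_message = sum(n * p for n, p in zip(n_i, p_i))   # ~3.89
H_hat = avg_bits_per_message / 3                              # N = 3 -> ~1.30 bits/symbol
efficiency = 0.8113 / H_hat * 100                             # ~62.4 %
print(avg_bits_per_message, H_hat, efficiency)
```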
Case II: Say N = 2.
The number of messages of length two and their probabilities (obtained from the tree diagram) can be listed as shown in the table below.
Message   p_i    n_i   c_i
AA        9/32   2     00
BB        9/32   2     01
AC        3/32   4     1001
CB        3/32   4     1010
BC        3/32   4     1100
CA        3/32   4     1101
CC        2/32   4     1111
Calculate H_hat_2 and verify that it is 1.44 bits / symbol.
eta_c = (H / H_hat_N) x 100
Substituting the values we get eta_c = 56.34%.
Case III: N = 1
Proceeding on the same lines you would see that
Message   p_i   n_i   c_i
A         3/8   2     00
B         3/8   2     01
C         1/4   2     11
H_hat_1 = 2 bits / symbol and eta_c = 40.56%
What do you conclude from the above example?
We note that the average output bit rate of the encoder, H_hat_N, decreases as N increases, and hence the efficiency of the encoder increases as N increases.
Operation of the Source Encoder Designed
Session 6
I. Consider a symbol string ACBBCAAACBBB at the encoder input. If the encoder
uses a block size of 3, find the output of the encoder.
Recall from the outcome of session (5) that, for the source given, the possible three-symbol sequences and their corresponding code words are (the determination of the code words and their sizes is as illustrated in the previous session):
Message m_i   n_i   Code word c_i
AAA           3     000
BBB           3     001
CAA           4     0110
CBB           4     0111
BCA           4     1001
BBC           4     1010
AAC           4     1011
ACB           4     1100
CBC           6     110110
CAC           6     110111
CCB           6     111001
CCA           6     111010
BCC           6     111100
ACC           6     111101
CCC           6     111111
The output of the encoder can be obtained by replacing successive groups of three input symbols by the code words shown in the table. The input symbol string ACB BCA AAC BBB is therefore encoded as
1100 1001 1011 001
[Figure: Information source followed by the source encoder, showing the encoder output.]
(For a source emitting five symbols A, B, C, D, E with probabilities 0.4, 0.2, 0.2, 0.1 and 0.1:)
Substituting, we get
H = -[p_1 log p_1 + p_2 log p_2 + p_3 log p_3 + p_4 log p_4 + p_5 log p_5]
  = -[0.4 log 0.4 + 0.2 log 0.2 + 0.2 log 0.2 + 0.1 log 0.1 + 0.1 log 0.1]
H = 2.12 bits/symbol
(ii) Source encoder with N = 2.
The different two-symbol sequences for the source are
(s_1 s_1) AA, (s_1 s_2) AB, (s_1 s_3) AC, (s_1 s_4) AD, (s_1 s_5) AE, and similarly all the remaining pairs BA, BB, ..., EE: a total of 25 messages.
Arrange the messages in decreasing order of probability and determine the number of bits n_i as explained:
Messages               p_i     n_i
AA                     0.16    3
AB, AC, BC, BA, CA     0.08    4
...                    0.04    5
...                    0.02    6
...                    0.01    7
Calculate H_hat_2 = (1/2) sum of n_i p_i; substituting,
H_hat_2 = 2.36 bits/symbol
2. A technique used in constructing a source encoder consists of arranging the messages in decreasing order of probability and dividing them into two almost equally probable groups. The messages in the first group are given the bit 0 and the messages in the second group are given the bit 1. The procedure is then applied again to each group separately, and continued until no further division is possible. Using this algorithm, find the code words for six messages occurring with probabilities 1/24, 1/12, 1/24, 1/6, 1/3, 1/3.
Solution: (1) Arrange in decreasing order of probability and divide successively (the columns show the bits assigned at the 1st, 2nd, 3rd and 4th divisions):
m5   1/3    0 0
m6   1/3    0 1
m4   1/6    1 0
m2   1/12   1 1 0
m1   1/24   1 1 1 0
m3   1/24   1 1 1 1
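A sketch of the partitioning procedure just described (commonly called the Shannon-Fano method). The split point is chosen so that the two group probabilities are as nearly equal as possible; on ties the sketch keeps the later split, which happens to reproduce the hand construction above, though other tie-breaking choices are possible.

```python
def fano(messages):
    """messages: list of (name, probability), sorted in decreasing order.
    Returns dict name -> code word."""
    codes = {name: "" for name, _ in messages}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        run, best_k, best_diff = 0.0, 1, float("inf")
        for k in range(1, len(group)):
            run += group[k - 1][1]
            diff = abs(2 * run - total)   # |first-group prob - second-group prob|
            if diff <= best_diff:         # '<=' keeps the later split on ties
                best_diff, best_k = diff, k
        first, second = group[:best_k], group[best_k:]
        for name, _ in first:
            codes[name] += "0"
        for name, _ in second:
            codes[name] += "1"
        split(first)
        split(second)

    split(list(messages))
    return codes

msgs = [("m5", 1/3), ("m6", 1/3), ("m4", 1/6), ("m2", 1/12), ("m1", 1/24), ("m3", 1/24)]
print(fano(msgs))
# {'m5': '00', 'm6': '01', 'm4': '10', 'm2': '110', 'm1': '1110', 'm3': '1111'}
```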
Example (3)
a) For the source shown, design a source encoding scheme using a block size of two symbols and variable-length code words.
b) Calculate H_hat_2 used by the encoder.
c) If the source is emitting symbols at a rate of 1000 symbols per second, compute the output bit rate of the encoder.
Solution (a)
1. The tree diagram for the source is drawn below.
[Figure: State diagram of the source - three states 1, 2, 3 emitting the symbols L, S and R, with the state probabilities p_1, p_2, p_3 and transition probabilities (for example 1/2) marked on the branches.]
2. Note, there are seven messages of length (2). They are SS, LL, LS, SL, SR, RS & RR.
3. Compute the message probabilities and arrange in descending order.
4. Compute n_i, F_i, F_i (in binary) and c_i as explained earlier and tabulate the results, with the usual notations.
[Figure: Tree diagram for the source, showing the seven two-symbol messages LL, LS, SL, SS, SR, RS, RR and their probabilities.]
Message m_i   p_i   n_i   F_i   F_i (binary)   c_i
SS            1/4   2     0     .0000          00
LL            1/8   3     1/4   .0100          010
LS            1/8   3     3/8   .0110          011
SL            1/8   3     4/8   .1000          100
SR            1/8   3     5/8   .1010          101
RS            1/8   3     6/8   .1100          110
RR            1/8   3     7/8   .1110          111
G_2 = (1/2) sum over i = 1..7 of p_i log2(1/p_i) = 1.375 bits/symbol
(b) H_hat_2 = (1/2) sum over i = 1..7 of p_i n_i = 1.375 bits/symbol
Recall H_hat_N <= G_N + 1/N; here N = 2, so H_hat_2 <= G_2 + 1/2.
(c) Rate = r_s x H_hat_2 = 1000 x 1.375 = 1375 bits/sec.
SOURCE ENCODER DESIGN AND
COMMUNICATION CHANNELS
Session 7
The schematic of a practical communication system is shown.
Fig. 1: BINARY COMMN. CHANNEL CHARACTERISATION
What do you mean by the term Communication Channel?
Carries different meanings and characterizations depending on its terminal points
and functionality.
(i) Portion between points c & g:
Referred to as coding channel
Accepts a sequence of symbols at its input and produces a sequence of symbols
at its output.
Completely characterized by a set of transition probabilities p_ij. These
probabilities will depend on the parameters of (1) The modulator, (2)
Transmission media, (3) Noise, and (4) Demodulator
A discrete channel
(ii) Portion between points d and f:
Provides electrical connection between the source and the destination.
[Figure 1: Schematic of a practical communication system - channel encoder, modulator, electrical communication channel (transmission medium) with noise, demodulator and channel decoder, with points b through h marked along the chain; the coding channel (discrete) spans points c to g, the modulation channel (analog) spans d to f, and the data communication channel (discrete) spans b to h.]
The input to and the output of this channel are analog electrical waveforms.
Referred to as continuous or modulation channel or simply analog channel.
Are subject to several varieties of impairments
Due to amplitude and frequency response variations of the channel
within the passband.
Due to variation of channel characteristics with time.
Non-linearities in the channel.
Channel can also corrupt the signal statistically due to various types of additive
and multiplicative noise.
What is the effect of these impairments?
Mathematical Model for Discrete Communication Channel:
Channel between points c & g of Fig. (1)
What is the input to the channel?
A symbol belonging to an alphabet of M symbols in the general case.
What is the output of the channel?
A symbol belonging to the same alphabet of M input symbols.
Is the output symbol in a symbol interval same as the input symbol during the
same symbol interval?
The discrete channel is completely modeled by a set of probabilities:
p_i^t : the probability that the input to the channel is the i-th symbol of the alphabet (i = 1, 2, ..., M),
and
p_ij : the probability that the i-th symbol is received as the j-th symbol of the alphabet at the output of the channel.
What do you mean by a discrete M-ary channel?
If a channel is designed to transmit and receive one of M possible symbols, it is called
a discrete M-ary channel.
What is a discrete binary channel?
For a binary channel (M = 2) the quantities involved are
p_ij = p(Y = j / X = i),
p_0^t = p(X = 0), p_1^t = p(X = 1), p_0^r = p(Y = 0), p_1^r = p(Y = 1),
with p_00 + p_01 = 1 and p_11 + p_10 = 1.
What is the statistical model of a binary channel?
It is shown in Fig. (2).
What are its features?
X and Y are binary-valued random variables. The input nodes are connected to the output nodes by four paths:
(i) Path on top of the graph: represents an input 0 appearing correctly as 0 at the channel output.
(ii) Path at the bottom of the graph: represents an input 1 appearing correctly as 1 at the channel output.
(iii) Diagonal path from 0 to 1: represents an input bit 0 appearing incorrectly as 1 at the channel output (due to noise).
(iv) Diagonal path from 1 to 0: similar comments.
Errors occur in a random fashion, and the occurrence of errors can be statistically modelled by assigning probabilities to the paths shown in figure (2).
A memoryless channel:
If the occurrence of an error during a bit interval does not affect the behaviour of the system during other bit intervals, the channel is memoryless.
The probability of an error can be evaluated as
p(error) = P_e = P(X not equal to Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
[Fig. 2: Statistical model of the binary channel - transmitted digit X (0 or 1) connected to received digit Y (0 or 1) by four paths, with probabilities P_00 (0 to 0), P_11 (1 to 1), P_01 (0 to 1) and P_10 (1 to 0).]
P_e = P(X = 0) . P(Y = 1 / X = 0) + P(X = 1) . P(Y = 0 / X = 1)
This can also be written as
P_e = p_0^t p_01 + p_1^t p_10      ------ (1)
We also have, from the model,
p_0^r = p_0^t p_00 + p_1^t p_10, and
p_1^r = p_0^t p_01 + p_1^t p_11      ----- (2)
What do you mean by a binary symmetric channel?
If p_00 = p_11 = p (say), then the channel is called a BSC.
What are the parameters needed to characterize a BSC?
Write the model of an M-ary DMC.
This can be analysed on the same lines presented above for the binary channel:
p_j^r = sum over i = 1..M of p_i^t p_ij      ----- (3)
[Fig. 3: Model of an M-ary discrete memoryless channel - input symbols 1, 2, ..., i, ..., M connected to output symbols 1, 2, ..., j, ..., M with transition probabilities p_11, p_12, ..., p_ij, ..., p_iM; p_i^t = p(X = i), p_j^r = p(Y = j), p_ij = p(Y = j / X = i).]
What is p(error) for the M-ary channel?
Generalising equation (1) above, we have
P(error) = P_e = sum over i = 1..M of p_i^t [ sum over j = 1..M, j not equal to i, of p_ij ]      ----- (4)
In a DMC how many statistical processes are involved and which are they?
Two, (i) Input to the channel and
(ii) Noise
Define the different entropies for the DMC.
(i) Entropy of the input X:
H(X) = -sum over i = 1..M of p_i^t log2 p_i^t bits / symbol      ----- (5)
(ii) Entropy of the output Y:
H(Y) = -sum over i = 1..M of p_i^r log2 p_i^r bits / symbol      ----- (6)
(iii) Conditional entropy H(X/Y):
H(X/Y) = -sum over j = 1..M, i = 1..M of P(X = i, Y = j) log2 p(X = i / Y = j) bits/symbol      - (7)
(iv) Joint entropy H(X,Y):
H(X,Y) = -sum over i = 1..M, j = 1..M of P(X = i, Y = j) log2 p(X = i, Y = j) bits/symbol      - (8)
(v) Conditional entropy H(Y/X):
H(Y/X) = -sum over i = 1..M, j = 1..M of P(X = i, Y = j) log2 p(Y = j / X = i) bits/symbol      - (9)
What does the conditional entropy represent?
H(X/Y) represents how uncertain we are of the channel input X, on the average, when we know the channel output Y. Similar comments apply to H(Y/X).
(vi) Joint entropy relation:
H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)      - (10)
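The entropies (5)-(9) and identity (10) are easy to check numerically from a joint probability matrix; the sketch below uses the 2x2 joint matrix that appears in the binary-channel example of the next session.

```python
import math

def plogp(p):
    return -p * math.log2(p) if p > 0 else 0.0

# Joint matrix P(X = i, Y = j); rows are X, columns are Y.
P_XY = [[1/2, 1/4],
        [1/12, 1/6]]

P_X = [sum(row) for row in P_XY]
P_Y = [sum(col) for col in zip(*P_XY)]

H_X  = sum(plogp(p) for p in P_X)
H_Y  = sum(plogp(p) for p in P_Y)
H_XY = sum(plogp(p) for row in P_XY for p in row)
H_X_given_Y = H_XY - H_Y      # equation (7), via the chain rule
H_Y_given_X = H_XY - H_X      # equation (9), via the chain rule

print(H_X, H_Y, H_XY, H_X_given_Y, H_Y_given_X)
print(abs(H_XY - (H_X + H_Y_given_X)) < 1e-12)   # identity (10)
```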
ENTROPIES PERTAINING TO DMC
Session 8
To prove the relation for H(X,Y):
By definition, we have
H(X,Y) = -sum over i, j of p(i, j) log p(i, j)
with i associated with the variable X and j with the variable Y.
H(X,Y) = -sum over i, j of p(i) p(j/i) log [ p(i) p(j/i) ]
       = -[ sum over i, j of p(i) p(j/i) log p(i) + sum over i, j of p(i) p(j/i) log p(j/i) ]
Holding i constant in the inner summation of the first term on the RHS, and using sum over j of p(j/i) = 1, we can write H(X,Y) as
H(X,Y) = -[ sum over i of p(i) log p(i) + sum over i, j of p(i, j) log p(j/i) ]
H(X,Y) = H(X) + H(Y/X)
Hence the proof.
1. For the discrete channel model shown, find the probability of error.
[Figure: Binary symmetric channel - transmitted digit X (0, 1) to received digit Y (0, 1); correct transmission with probability p and, since the channel is symmetric, p(0 to 1) = p(1 to 0) = (1 - p). An error means the situation X not equal to Y.]
P(error) = P_e = P(X not equal to Y) = P(X = 0, Y = 1) + P(X = 1, Y = 0)
         = P(X = 0) . P(Y = 1 / X = 0) + P(X = 1) . P(Y = 0 / X = 1)
Assuming that 0 and 1 are equally likely to occur,
P(error) = (1/2)(1 - p) + (1/2)(1 - p) = (1/2) - (p/2) + (1/2) - (p/2)
P(error) = (1 - p)
2. A binary channel has the following noise characteristics:
P(Y/X):        Y=0    Y=1
    X=0        2/3    1/3
    X=1        1/3    2/3
If the input symbols are transmitted with probabilities 3/4 and 1/4 respectively, find H(X), H(Y), H(X,Y), H(Y/X).
Solution:
Given P(X = 0) = 3/4 and P(X = 1) = 1/4,
H(X) = sum over i of p_i log2(1/p_i) = (3/4) log2(4/3) + (1/4) log2 4 = 0.811278 bits/symbol
Compute the probability of the output symbols. From the channel model,
p(Y = y_1) = p(X = x_1, Y = y_1) + p(X = x_2, Y = y_1)      ----- (1)
To evaluate this, construct the P(X,Y) matrix using P(X,Y) = p(X) . p(Y/X):
                 y_1                  y_2
    x_1    (3/4)(2/3) = 1/2     (3/4)(1/3) = 1/4
    x_2    (1/4)(1/3) = 1/12    (1/4)(2/3) = 1/6      ----- (2)
P(Y = y_1) = 1/2 + 1/12 = 7/12 (the sum of the first column of matrix (2)).
Similarly, P(y_2) = 5/12 (the sum of the second column of P(X,Y)).
Construct the P(X/Y) matrix using P(X,Y) = p(Y) . p(X/Y), i.e. p(X/Y) = p(X,Y)/p(Y):
p(X = x_1 / Y = y_1) = p(x_1, y_1) / p(y_1) = (1/2) / (7/12) = 6/7, and so on, giving
P(X/Y) =   6/7   3/5
           1/7   2/5      -----(3)
H(Y) = sum over i of p_i log2(1/p_i) = (7/12) log2(12/7) + (5/12) log2(12/5) = 0.979868 bits/sym.
H(X,Y) = sum over i, j of p(X,Y) log2(1/p(X,Y)) = (1/2) log2 2 + (1/4) log2 4 + (1/12) log2 12 + (1/6) log2 6, approximately 1.73 bits/sym.
The next example gives the joint probability matrix of a channel directly:
P(X,Y) =
  0.05   0.05   0      0.1
  0      0      0.2    0.1
  0      0.1    0.1    0
  0.05   0      0.2    0.05
Solution:
The row sums of P(X,Y) give the row matrix P(X) = [0.2  0.3  0.2  0.3]; the column sums give P(Y).
The conditional probability matrix P(X/Y) is obtained by dividing each column of P(X,Y) by the corresponding element of P(Y):
P(X/Y) =
  1/2   1/3   0     2/5
  0     0     2/5   2/5
  0     2/3   1/5   0
  1/2   0     2/5   1/5
Now compute the various entropies required using their defining equations:
(i) H(X) = sum over i of p(X_i) log2(1/p(X_i)) = 2 x 0.3 log2(1/0.3) + 2 x 0.2 log2(1/0.2)
H(X,Y) = sum of p(X,Y) log2(1/p(X,Y)) = 4 x 0.05 log2(1/0.05) + 4 x 0.1 log2(1/0.1) + 2 x 0.2 log2(1/0.2)
H(X,Y) = 3.12192 bits/sym.
(iv) H(X/Y) = sum of p(X,Y) log2(1/p(X/Y)); substituting the values gives H(X/Y).
A further example: a channel with two inputs x_1, x_2 and four outputs y_1, ..., y_4 has the noise characteristic
P(Y/X):        y_1   y_2   y_3   y_4
    x_1        1/3   1/3   1/6   1/6
    x_2        1/6   1/6   1/3   1/3
NOTE: the 2nd row of P(Y/X) is the 1st row written in reverse order. If this is the situation, then the channel is called a symmetric one.
Taking P(x_1) = P(x_2) = 1/2, the joint probabilities follow from P(x_i, y_j) = p(x_i) . p(y_j / x_i):
P(x_1, y_1) = (1/2)(1/3) = 1/6, P(x_1, y_2) = 1/6, P(x_1, y_3) = 1/12, and so on, so that
P(X,Y) =
  1/6    1/6    1/12   1/12
  1/12   1/12   1/6    1/6
H(Y/X) = sum of p(X,Y) log2(1/p(Y/X))
Substituting for the various probabilities we get
H(Y/X) = 4 x (1/6) log2 3 + 4 x (1/12) log2 6 = (2/3) log2 3 + (1/3) log2 6, approximately 1.92 bits/symbol
5. Given the joint probability matrix for a channel, compute the various entropies for the input and output random variables of the channel.
P(X,Y) =
  0      0.06   0.02   0.2
  0.04   0.04   0.01   0.06
  0      0.02   0.02   0
  0.1    0.01   0.01   0.01
  0.2    0      0.2    0
Solution:
P(X) = row matrix, the sum of each row of the P(X,Y) matrix: P(X) = [0.28  0.15  0.04  0.13  0.4]
P(Y) = the sum of each column: P(Y) = [0.34  0.13  0.26  0.27]
The conditional probability matrix P(X/Y) is obtained by dividing each column of P(X,Y) by the corresponding element of P(Y). Then
H(X/Y) = sum of p(X,Y) log2(1/p(X/Y)) = 1.26118 bits/sym.
HW: Construct the p(Y/X) matrix and hence compute H(Y/X).
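For the homework just stated, a small sketch: it forms p(Y/X) by dividing each row of the joint matrix by the corresponding row sum P(X) and then evaluates H(Y/X). The matrix is the one tabulated above.

```python
import math

P_XY = [
    [0,    0.06, 0.02, 0.2 ],
    [0.04, 0.04, 0.01, 0.06],
    [0,    0.02, 0.02, 0   ],
    [0.1,  0.01, 0.01, 0.01],
    [0.2,  0,    0.2,  0   ],
]

P_X = [sum(row) for row in P_XY]
# p(Y = j / X = i) = p(X = i, Y = j) / p(X = i)
P_Y_given_X = [[p / px for p in row] for row, px in zip(P_XY, P_X)]

H_Y_given_X = sum(p * math.log2(1 / c)
                  for row, crow in zip(P_XY, P_Y_given_X)
                  for p, c in zip(row, crow) if p > 0)
print(round(H_Y_given_X, 5))
```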
Rate of Information Transmission over a Discrete Channel
For an M-ary DMC accepting symbols at the rate of r_s symbols per second, the average amount of information per symbol going into the channel is given by the entropy of the input random variable X,
i.e., H(X) = -sum over i = 1..M of p_i^t log2 p_i^t      ----- (1)
The assumption is that the symbols in the sequence at the input to the channel occur in a statistically independent fashion.
The average rate at which information is going into the channel is
D_in = H(X) . r_s bits/sec      ----- (2)
Is it possible to reconstruct the input symbol sequence with certainty by operating
on the received sequence?
Given two symbols 0 and 1 that are transmitted at a rate of 1000 symbols (or bits) per second, with p_0^t = p_1^t = 1/2, D_in at the input to the channel = 1000 bits/sec. Assume that the channel is symmetric with the probability of errorless transmission p equal to 0.95.
What is the rate of transmission of information?
Recall H(X/Y) is a measure of how uncertain we are of the input X given output Y.
What do you mean by an ideal errorless channel?
H(X/Y) may be used to represent the amount of information lost in the channel.
Define the average rate of information transmitted over a channel, D_t:
D_t = [ (amount of information going into the channel) - (amount of information lost) ] . r_s
Symbolically,
D_t = [H(X) - H(X/Y)] . r_s bits/sec.
When the channel is very noisy, so that the output is statistically independent of the input, H(X/Y) = H(X); hence all the information going into the channel is lost and no information is transmitted over the channel.
In this session you will -
Understand solving problems on discrete channels through a variety of illustrative examples.
DISCRETE CHANNELS
Session 9
1. A binary symmetric channel is shown in figure. Find the rate of information
transmission over this channel when p = 0.9, 0.8 & 0.6. Assume that the symbol
(or bit) rate is 1000/second.
Example of a BSC
Solution:
H(X) = . sym / bit 1 2 log
2
1
2 log
2
1
2 2
+
sec / bit 1000 ) X ( H r D
s in
By definition we have,
D
t
= [H(X) H(X/Y)]
Where, H(X/Y) =
( ) ( ) [ ]
i j
Y / X p log XY p
. r
s
Where X & Y can take values.
X Y
0
0
1
1
0
1
0
1
Input
X
Output
Y
1 p
1 p
p
p
p(X = 0) = p(X = 1) =
P(y/x)Y P 0123 0 X01 X 1p(1p) 1 2(1p)(p) 2 31 3
74
Where
p(Y = 0) = p(Y = 0 / X = 0) . p(X = 0) + p (X = 1) . p
,
_
1 X
0 Y
= p .
2
1
2
1
+ (1 p)
p(Y = 0) =
2
1
p(X = 0 /Y = 0) = p
Similarly we can calculate
p(X = 1 / Y = 0) = 1 p
p(X = 1 / Y = 1) = p
p(X = 0 / Y = 1) = 1 p
H (X / Y) = -
1
]
1
+
+ +
) p 1 ( log ) p 1 (
2
1
p log p
2
1
) p 1 ( log ) p 1 (
2
1
p log p
2
1
2
2 2
= - [ ] ) p 1 ( log ) p 1 ( p log p
2 2
+
D
t
rate of inforn. transmission over the channel is = [H(X) H (X/Y)] . r
s
P(y/x)Y P 0123 0 X01 X 1p(1p) 1 2(1p)(p) 2 31 3
75
with, p = 0.9, D
t
= 531 bits/sec.
p = 0.8, D
t
= 278 bits/sec.
p = 0.6, D
t
= 29 bits/sec.
What does the quantity (1 - p) represent?
What do you understand from the above example?
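The three D_t values follow from D_t = [H(X) - H(X/Y)] . r_s with H(X) = 1 bit and H(X/Y) equal to the binary entropy of p; a quick check:

```python
import math

def binary_entropy(p):
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

r_s = 1000          # symbols (bits) per second
for p in (0.9, 0.8, 0.6):
    D_t = (1 - binary_entropy(p)) * r_s   # H(X) = 1 bit for equally likely inputs
    print(p, round(D_t))                  # ~531, ~278, ~29 bits/sec
```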
2. A discrete channel has 4 inputs and 4 outputs. The input probabilities are P, Q,
Q, and P. The conditional probabilities between the output and input are:
P(Y/X):      Y=0   Y=1      Y=2      Y=3
    X=0      1     0        0        0
    X=1      0     p        (1-p)    0
    X=2      0     (1-p)    p        0
    X=3      0     0        0        1
Write the channel model.
Solution: The channel model can be deduced as shown below.
Given: P(X = 0) = P, P(X = 1) = Q, P(X = 2) = Q, P(X = 3) = P.
Of course it is true that P + Q + Q + P = 1, i.e. 2P + 2Q = 1.
The channel model is:
[Figure: 4-input, 4-output channel - inputs 0 and 3 are received correctly with probability 1; inputs 1 and 2 form a binary symmetric pair, each received correctly with probability p and as the other with probability q = 1 - p.]
What is H(X) for this?
H(X) = -[2P log P + 2Q log Q]
What is H(X/Y)?
H(X/Y) = -2Q [p log p + q log q] = 2Q alpha, where alpha = -(p log p + q log q).
1. A source delivers the binary digits 0 and 1 with equal probability into a noisy channel at a rate of 1000 digits/second. Owing to noise on the channel, the probability of receiving a transmitted 0 as a 1 is 1/16, while the probability of transmitting a 1 and receiving a 0 is 1/32. Determine the rate at which information is received.
Solution:
The rate of reception of information is given by
R = [H(X) - H(X/Y)] . r_s bits / sec      -----(1)
where H(X) = -sum over i of p(i) log p(i) bits/sym. and H(X/Y) = -sum over i, j of p(i, j) log p(i/j) bits/sym.      -----(2)
H(X) = -[(1/2) log(1/2) + (1/2) log(1/2)] = 1 bit/sym.
The channel model (flow graph) is:
[Figure: Binary channel - 0 to 0 with probability 15/16, 0 to 1 with probability 1/16, 1 to 1 with probability 31/32, 1 to 0 with probability 1/32. The index i refers to the input of the channel and the index j to the output (receiver).]
The probability of having transmitted a symbol i given that symbol j was received is denoted p(i/j).
What do you mean by the probability p(i = 0 / j = 0)? How would you compute p(0/0)?
Recall the probability of a joint event AB:
P(AB) = p(A) p(B/A) = p(B) p(A/B)
i.e., p(i, j) = p(i) p(j/i) = p(j) p(i/j), from which we have
p(i/j) = p(i) p(j/i) / p(j)      -----(3)
What are the different combinations of i and j in the present case?
Say i = 0 and j = 0; then equation (3) is
p(0/0) = p(i = 0) p(j = 0 / i = 0) / p(j = 0)
What do you mean by p(j = 0)? And how do you compute this quantity?
p(j = 0) = p(i = 0) p(j = 0 / i = 0) + p(i = 1) p(j = 0 / i = 1) = (1/2)(15/16) + (1/2)(1/32) = 31/64
Substituting, we find
p(0/0) = (1/2)(15/16) / (31/64) = 30/31 = 0.967
Similarly calculate and check the following:
p(1/0) = 1/31, p(1/1) = 31/33, p(0/1) = 2/33
H(X/Y) = -[ p(0,0) log p(0/0) + p(0,1) log p(0/1) + p(1,0) log p(1/0) + p(1,1) log p(1/1) ]
Substituting for the various probabilities we get
H(X/Y) = -[ (15/32) log(30/31) + (1/32) log(2/33) + (1/64) log(1/31) + (31/64) log(31/33) ]
Simplifying you get
H(X/Y) = 0.27 bit/sym.
R = [H(X) - H(X/Y)] . r_s = (1 - 0.27) x 1000
R = 730 bits/sec.
2. A transmitter produces three symbols A, B, C which are related with the joint probabilities shown:
p(i):  A = 9/27,  B = 16/27,  C = 2/27
p(j/i):        j=A    j=B    j=C
    i=A        0      4/5    1/5
    i=B        1/2    1/2    0
    i=C        1/2    2/5    1/10
Calculate H(X,Y).
Solution:
By definition we have
H(X,Y) = H(X) + H(Y/X)      -----(1)
where H(X) = -sum over i of p(i) log p(i) bits/symbol      -----(2)
and H(Y/X) = -sum over i, j of p(i, j) log p(j/i) bits/symbol      -----(3)
From equation (2), calculate H(X).
From equation (3), calculate H(Y/X) and verify that it is H(Y/X) = 0.934 bits/sym.
Using equation (1), calculate H(X,Y).
(Returning to the 4-input channel with input probabilities P, Q, Q, P:)
Step 1: H(X) = -sum over i = 0..3 of P_i log P_i.
Substituting, we get
H(X) = -[P(x=0) log P(x=0) + P(x=1) log P(x=1) + P(x=2) log P(x=2) + P(x=3) log P(x=3)]
     = -[P log P + Q log Q + Q log Q + P log P]
H(X) = -[2P log2 P + 2Q log2 Q] bits/sym.      ----(4)
Step 2: Calculate H(X/Y) = -sum over i, j of P(x, y) log P(x/y). Only the noisy pair of inputs (1 and 2) contributes, so i and j take the values 1 and 2:
H(X/Y) = -[ P(x=1, y=1) log p(x=1/y=1) + P(x=1, y=2) log p(x=1/y=2) + P(x=2, y=1) log p(x=2/y=1) + P(x=2, y=2) log p(x=2/y=2) ]      ----(5)
Step 3: Calculate P(x/y) using P(x/y) = P(x) . P(y/x) / P(y).
(i) P(x=1 / y=1) = P(x=1) . P(y=1 / x=1) / P(y=1), where
P(y=1) = P(x=1, y=1) + P(x=2, y=1) = P(x=1) P(y=1/x=1) + P(x=2) P(y=1/x=2) = Q.p + Q.q = Qp + Q(1 - p) = Q
so P(x=1 / y=1) = Q.p / Q = p.
Similarly calculate the other p(x/y)'s. They are
(ii) P(x=1 / y=2) = q, (iii) P(x=2 / y=1) = q, (iv) P(x=2 / y=2) = p
Step 4: Form D_t = H(X) - H(X/Y). With H(X/Y) = 2Q alpha, where alpha = -(p log p + q log q), and Q = (1/2 - P),
D_t = -2P log2 P - 2(1/2 - P) log2(1/2 - P) - 2(1/2 - P) alpha      ----- (8)
Step 5: By definition of channel capacity we have
C = Max over the input distribution {P, Q} of D_t
(i) Find d/dP [D_t]:
d/dP [D_t] = d/dP [ -2P log2 P - (1 - 2P) log2(1/2 - P) - (1 - 2P) alpha ]
Carrying out the differentiation (the log2 e terms cancel) and simplifying, we get
d/dP [D_t] = 2 [ -log2 P + log2 Q + alpha ]      -----(9)
Setting this to zero, you get
log2 P = log2 Q + alpha
OR
P = Q . 2^alpha = Q . beta,  where beta = 2^alpha      -----(10)
How to get the optimum values of P and Q?
Substitute P = Q beta in 2P + 2Q = 1, i.e. 2Q beta + 2Q = 1,
OR
Q = 1 / [2(1 + beta)]      -----(11)
Hence, P = Q . beta = beta / [2(1 + beta)]      -----(12)
Step 6: The channel capacity is
C = Max [ -2P log2 P - 2Q log2 Q - 2Q alpha ]
Substituting for P and Q we get
C = -2 [beta/(2(1+beta))] log2[beta/(2(1+beta))] - 2 [1/(2(1+beta))] log2[1/(2(1+beta))] - 2 [1/(2(1+beta))] alpha
Simplifying you get
C = log2[2(1 + beta)] - log2 beta
or
C = log2[ 2(1 + beta) / beta ] bits/sec.      -----(13)
Remember, beta = 2^alpha and alpha = -(p log p + q log q).
What are the extreme values of p?
Case I: Say p = 1.
What does this case correspond to? What is C for this case?
Note: alpha = 0, beta = 2^0 = 1
C = log2[ 2(1 + 1) / 1 ] = log2 4 = 2 bits/sym.
If r_s = 1/sec, C = 2 bits/sec.
Can this case be thought of in practice?
Case II: p = 1/2
Now, alpha = -[(1/2) log2(1/2) + (1/2) log2(1/2)] = 1, beta = 2^1 = 2
C = log2[ 2(2 + 1) / 2 ] = log2 3 = 1.585 bits/sec.
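Equation (13) can be evaluated for any crossover probability p of the middle input pair; the sketch below reproduces the two extreme cases worked out above (C = 2 bits at p = 1 and C = log2 3, about 1.585 bits, at p = 1/2).

```python
import math

def capacity(p):
    q = 1 - p
    alpha = -(p * math.log2(p) + q * math.log2(q)) if 0 < p < 1 else 0.0
    beta = 2 ** alpha
    return math.log2(2 * (1 + beta) / beta)   # equation (13), bits/symbol

for p in (1.0, 0.9, 0.75, 0.5):
    print(p, round(capacity(p), 4))
```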
Which are the symbols often used for the channel under discussion, and why is it so?
OVER A NOISY CHANNEL ONE CAN NEVER SEND INFORMATION WITHOUT ERRORS.
2. For the discrete binary channel shown, find H(X), H(Y), H(X/Y) and H(Y/X) when P(x=0) = 1/4, P(x=1) = 3/4, alpha = 0.75 and beta = 0.9, where alpha and beta are the probabilities of correct reception of a 0 and of a 1 respectively.
[Figure: Binary (asymmetric) channel - 0 to 0 with probability alpha, 0 to 1 with probability (1 - alpha), 1 to 1 with probability beta, 1 to 0 with probability (1 - beta).]
Solution:
What type of channel is this?
H(X) = -sum over i of p_i^t log p_i^t = -[P(x=0) log P(x=0) + P(x=1) log P(x=1)] = -[p log p + (1 - p) log(1 - p)]
where p = P(x=0) and (1 - p) = P(x=1).
Substituting the values, H(X) = 0.8113 bits/sym.
H(Y) = -sum over i of p_i^r log p_i^r = -[P(y=0) log P(y=0) + P(y=1) log P(y=1)]
Recall p_0^r = p_0^t p_00 + p_1^t p_10:
P(y=0) = P(x=0) P_00 + P(x=1) P_10 = p alpha + (1 - p)(1 - beta)
Similarly, P(y=1) = p(1 - alpha) + (1 - p) beta
H(Y) = -[p alpha + (1 - p)(1 - beta)] log[p alpha + (1 - p)(1 - beta)] - [p(1 - alpha) + (1 - p) beta] log[p(1 - alpha) + (1 - p) beta]
     = -Q_1(p) log Q_1(p) - Q_2(p) log Q_2(p)
Substituting the values, H(Y) is approximately 0.82 bits/sym.
H(Y/X) = -sum over i, j of P(x=i, y=j) log P(y=j / x=i)
Simplifying, you get
H(Y/X) = -[ p alpha log alpha + p(1 - alpha) log(1 - alpha) + (1 - p)(1 - beta) log(1 - beta) + (1 - p) beta log beta ]
H(Y/X) = pK + C', where K = -[alpha log alpha + (1 - alpha) log(1 - alpha) - (1 - beta) log(1 - beta) - beta log beta] and C' = -[beta log beta + (1 - beta) log(1 - beta)] (the terms not dependent on p).
Compute K.
Note that the rate at which information is supplied by the channel to the receiver equals the rate at which information is received by the receiver, i.e.
H(X) - H(X/Y) = H(Y) - H(Y/X)
Home Work:
Calculate D_t = H(Y) - H(Y/X).
Find d(D_t)/dp, set it to zero, and obtain the condition on the probability distribution.
Finally compute C, the capacity of the channel.
Answer is: 344.25 b/second.
2. A channel model is shown:
[Figure: Binary erasure channel - input X in {0, 1}, output Y in {0, ?, 1}; 0 to 0 with probability p, 0 to ? with probability q, 1 to 1 with probability p, 1 to ? with probability q, where q = 1 - p.]
What type of channel is this?
Write the channel matrix:
P(Y/X):        y=0   y=?   y=1
    x=0        p     q     0
    x=1        0     q     p
Do you notice something special in this channel?
What is H(X) for this channel?
Say P(x=0) = P and P(x=1) = Q = (1 - P):
H(X) = -P log P - Q log Q = -P log P - (1 - P) log(1 - P)
What is H(Y/X)?
H(Y/X) = -[p log p + q log q]
What is H(Y) for the channel?
H(Y) = -sum of P(y) log P(y) = -[ Pp log(Pp) + q log q + Qp log(Qp) ]
What is H(X/Y) for the channel?
H(X/Y) = -sum of P(x, y) log P(x/y)
The joint probabilities are P(x=0, y=0) = Pp, P(x=0, y=?) = Pq, P(x=1, y=?) = Qq and P(x=1, y=1) = Qp, with P(x=0/y=0) = 1, P(x=0/y=?) = P, P(x=1/y=?) = Q and P(x=1/y=1) = 1, so
H(X/Y) = Pp log 1 + Pq log(1/P) + Qq log(1/Q) + Qp log 1 = q [ P log(1/P) + Q log(1/Q) ] = q . H(X)