
Digital Communications I:

Modulation and Coding Course

Spring - 2013
Jeffrey N. Denenberg
Lecture 6: Linear Block Codes
Last time we talked about:
 Evaluating the average probability of symbol error for different bandpass modulation schemes
 Comparing different modulation schemes based on their error performance
Today, we are going to talk about:
 Channel coding
 Linear block codes
 The error detection and correction capability
 Encoding and decoding
 Hamming codes
 Cyclic codes
Block diagram of a DCS

[Block diagram: Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate (digital modulation) → Channel → Demodulate & Sample → Detect (digital demodulation) → Channel decode → Source decode → Format]
What is channel coding?
 Channel coding: Transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...)
 Waveform coding: Transforming waveforms to better waveforms
 Structured sequences: Transforming data sequences into better sequences, having structured redundancy.
   - "Better" in the sense of making the decision process less subject to errors.
Error control techniques
 Automatic Repeat reQuest (ARQ)
   Full-duplex connection, error detection codes
   The receiver sends feedback to the transmitter, indicating whether an error is detected in the received packet (Negative Acknowledgement, NACK) or not (Acknowledgement, ACK).
   The transmitter retransmits the previously sent packet if it receives a NACK.
 Forward Error Correction (FEC)
   Simplex connection, error correction codes
   The receiver tries to correct some errors
 Hybrid ARQ (ARQ+FEC)
   Full-duplex, error detection and correction codes
Why use error correction coding?
 Error performance vs. bandwidth
 Power vs. bandwidth
 Data rate vs. bandwidth
 Capacity vs. bandwidth

[Figure: bit-error probability P_B versus E_b/N_0 (dB) for coded and uncoded transmission; the labeled points A-F illustrate the trade-offs above.]

 Coding gain: For a given bit-error probability, the reduction in E_b/N_0 that can be realized through the use of the code:
     G [dB] = (E_b/N_0)_u [dB] - (E_b/N_0)_c [dB]
Channel models
 Discrete memory-less channels
 Discrete input, discrete output
 Binary Symmetric channels
 Binary input, binary output
 Gaussian channels
 Discrete input, continuous output

Linear block codes

 Let us first review some basic definitions that are useful in understanding linear block codes.
Some definitions
 Binary field:
   The set {0,1}, under modulo-2 addition and multiplication, forms a field.

     Addition          Multiplication
     0 + 0 = 0         0 . 0 = 0
     0 + 1 = 1         0 . 1 = 0
     1 + 0 = 1         1 . 0 = 0
     1 + 1 = 0         1 . 1 = 1

   The binary field is also called the Galois field, GF(2).
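Since GF(2) addition is just XOR and multiplication is just AND, the tables can be reproduced with a few lines of Python (a minimal illustration of mine, not part of the original slides):

```python
# GF(2): addition is XOR, multiplication is AND (everything modulo 2).
def gf2_add(a, b):
    return (a + b) % 2   # same as a ^ b for single bits

def gf2_mul(a, b):
    return (a * b) % 2   # same as a & b for single bits

if __name__ == "__main__":
    print("Addition        Multiplication")
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} + {b} = {gf2_add(a, b)}     {a} . {b} = {gf2_mul(a, b)}")
```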
Some definitions…
 Fields:
   Let F be a set of objects on which two operations '+' and '.' are defined.
   F is said to be a field if and only if
   1. F forms a commutative group under the + operation. The additive identity element is labeled "0".
        For all a, b in F:  a + b = b + a, and a + b is in F.
   2. F - {0} forms a commutative group under the . operation. The multiplicative identity element is labeled "1".
        For all a, b in F:  a . b = b . a, and a . b is in F.
   3. The operations "+" and "." are distributive:
        a . (b + c) = (a . b) + (a . c)
Some definitions…
 Vector space:
   Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
   1. Commutative:  for all u, v in V,  u + v = v + u is in V
   2. Closure under scalar multiplication:  for all a in F and v in V,  a . v = u is in V
   3. Distributive:  (a + b) . v = a . v + b . v   and   a . (u + v) = a . u + a . v
   4. Associative:  for all a, b in F and v in V,  (a . b) . v = a . (b . v)
   5. For all v in V,  1 . v = v
Some definitions…
 Examples of vector spaces
   The set of binary n-tuples, denoted by Vn:
     V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
           (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
 Vector subspace:
   A subset S of the vector space Vn is called a subspace if:
     The all-zero vector is in S.
     The sum of any two vectors in S is also in S.
   Example:
     {(0000), (0101), (1010), (1111)} is a subspace of V4.
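The two subspace conditions can be checked mechanically. The sketch below (my own illustration; the function names are not from the lecture) verifies them for the V4 example above:

```python
# Check the two subspace conditions for a subset S of V_n over GF(2):
#   (1) the all-zero vector is in S, and (2) S is closed under modulo-2 addition.
def xor_tuples(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def is_subspace(S):
    S = set(S)
    n = len(next(iter(S)))
    if tuple([0] * n) not in S:
        return False
    return all(xor_tuples(u, v) in S for u in S for v in S)

S = {(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)}
print(is_subspace(S))  # True: this subset is a subspace of V4
```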
Some definitions…
 Spanning set:
   A collection of vectors G = {v1, v2, …, vn} is said to be a spanning set for V, or to span V, if linear combinations of the vectors in G include all vectors in the vector space V.
   Example:
     {(1000), (0110), (1100), (0011), (1001)} spans V4.
 Bases:
   The spanning set of V that has minimal cardinality is called the basis for V.
     The cardinality of a set is the number of objects in the set.
   Example:
     {(1000), (0100), (0010), (0001)} is a basis for V4.
Linear block codes
 Linear block code (n,k)
   A set C in Vn with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space Vn:
     Vk → C ⊂ Vn
   Members of C are called codewords.
   The all-zero codeword is a codeword.
   Any linear combination of codewords is a codeword.
Linear block codes – cont’d

[Figure: the encoding mapping takes the message space Vk into C, a k-dimensional subspace of Vn spanned by the bases of C.]
Linear block codes – cont’d
 The information bit stream is chopped into blocks of k bits.
 Each block is encoded to a larger block of n bits.
 The coded bits are modulated and sent over the channel.
 The reverse procedure is done at the receiver.

[Figure: a data block of k bits enters the channel encoder and leaves as a codeword of n bits, containing n - k redundant bits.]

 Code rate:  Rc = k/n
Linear block codes – cont’d
 The Hamming weight of the vector U, denoted by w(U), is the number of non-zero elements in U.
 The Hamming distance between two vectors U and V is the number of elements in which they differ:
     d(U, V) = w(U + V)
 The minimum distance of a block code is
     d_min = min_{i≠j} d(U_i, U_j) = min_i w(U_i)
Linear block codes – cont’d
 The error detection capability is given by
     e = d_min - 1
 The error correction capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword, that is
     t = ⌊(d_min - 1) / 2⌋
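The definitions of weight, distance, minimum distance, and the resulting e and t can be verified with a short brute-force sketch (an illustration of mine, using the codewords of the (6,3) example that appears later in the lecture):

```python
from itertools import combinations
from math import floor

def hamming_weight(u):
    return sum(u)

def hamming_distance(u, v):
    return sum((a + b) % 2 for a, b in zip(u, v))

# Codewords of the (6,3) example code used later in the lecture.
codewords = [
    (0,0,0,0,0,0), (1,1,0,1,0,0), (0,1,1,0,1,0), (1,0,1,1,1,0),
    (1,0,1,0,0,1), (0,1,1,1,0,1), (1,1,0,0,1,1), (0,0,0,1,1,1),
]

# For a linear code, d_min equals the minimum weight of the nonzero codewords.
d_min_pairs  = min(hamming_distance(u, v) for u, v in combinations(codewords, 2))
d_min_weight = min(hamming_weight(u) for u in codewords if any(u))
assert d_min_pairs == d_min_weight

e = d_min_weight - 1                # guaranteed error-detection capability
t = floor((d_min_weight - 1) / 2)   # guaranteed error-correction capability
print(d_min_weight, e, t)           # 3 2 1
```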
Linear block codes – cont’d
 For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
     P_M ≤ Σ_{j=t+1}^{n} (n choose j) p^j (1-p)^(n-j)
   p is the transition probability, or bit error probability, over the channel.
 The decoded bit error probability is
     P_B ≈ (1/n) Σ_{j=t+1}^{n} j (n choose j) p^j (1-p)^(n-j)
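A small sketch (my own, with assumed example values for n, t, and p) that evaluates the two expressions directly:

```python
from math import comb

def block_error_bound(n, t, p):
    """Upper bound on the block (codeword) error probability P_M."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def bit_error_estimate(n, t, p):
    """Approximate decoded bit error probability P_B."""
    return (1 / n) * sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
                         for j in range(t + 1, n + 1))

# Example values (not from the lecture): a t = 1 code of length n = 7
# over a channel with transition probability p = 1e-2.
print(block_error_bound(7, 1, 1e-2))
print(bit_error_estimate(7, 1, 1e-2))
```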
Linear block codes – cont’d
 Discrete, memoryless, symmetric channel model

[Figure: binary symmetric channel; a transmitted bit is received correctly with probability 1-p and inverted with probability p.]

 Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
     p ≈ (2 / log2 M) Q( sqrt(2 log2(M) E_c / N_0) sin(π/M) ) = (2 / log2 M) Q( sqrt(2 log2(M) E_b R_c / N_0) sin(π/M) )
 where E_c is the energy per coded bit, given by E_c = R_c E_b.
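A possible way to evaluate this transition probability numerically, using the standard Q-function written in terms of erfc (the function names and example numbers below are mine, not from the lecture):

```python
from math import erfc, sqrt, sin, pi, log2

def qfunc(x):
    """Gaussian Q-function, Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2))

def mpsk_coded_bit_error(M, EbN0_dB, Rc):
    """Approximate channel transition probability p for M-PSK (M > 2) on AWGN,
    with energy per coded bit Ec = Rc * Eb."""
    EbN0 = 10 ** (EbN0_dB / 10)
    EcN0 = Rc * EbN0
    k = log2(M)
    return (2 / k) * qfunc(sqrt(2 * k * EcN0) * sin(pi / M))

# Example (assumed numbers): 8-PSK, Eb/N0 = 8 dB, rate-1/2 code.
print(mpsk_coded_bit_error(8, 8.0, 0.5))
```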
Linear block codes – cont’d

[Figure: the encoding mapping takes Vk into the codeword subspace C of Vn, spanned by the bases of C.]

 A matrix G is constructed by taking as its rows the vectors of the basis, {V1, V2, …, Vk}:

         [ V1 ]   [ v11 v12 … v1n ]
     G = [ V2 ] = [ v21 v22 … v2n ]
         [ ⋮  ]   [  ⋮   ⋮      ⋮ ]
         [ Vk ]   [ vk1 vk2 … vkn ]
Linear block codes – cont’d
 Encoding in an (n,k) block code:
     U = m G
     (u1, u2, …, un) = (m1, m2, …, mk) [V1; V2; …; Vk]
                     = m1 V1 + m2 V2 + … + mk Vk
 The rows of G are linearly independent.
Linear block codes – cont’d
 Example: Block code (6,3)

         [ V1 ]   [ 1 1 0 1 0 0 ]
     G = [ V2 ] = [ 0 1 1 0 1 0 ]
         [ V3 ]   [ 1 0 1 0 0 1 ]

     Message vector    Codeword
     000               000000
     100               110100
     010               011010
     110               101110
     001               101001
     101               011101
     011               110011
     111               000111
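The codeword table can be regenerated by computing U = mG over GF(2) for every message; a short numpy sketch (my own illustration) follows. The rows come out in binary-counting order rather than the slide's order, but the set of codewords is the same:

```python
import numpy as np
from itertools import product

# Generator matrix of the (6,3) example code.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

# U = m G over GF(2): ordinary matrix product reduced modulo 2.
for m in product([0, 1], repeat=3):
    U = np.mod(np.array(m) @ G, 2)
    print("".join(map(str, m)), "".join(map(str, U)))
```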
Linear block codes – cont’d
 Systematic block code (n,k)
   For a systematic code, the first (or last) k elements in the codeword are information bits.
     G = [P | I_k]
     I_k = k×k identity matrix
     P  = k×(n-k) matrix

     U = (u1, u2, …, un) = (p1, p2, …, p_(n-k), m1, m2, …, mk)
                            parity bits         message bits
Linear block codes – cont’d
 For any linear code we can find an (n-k)×n matrix H such that its rows are orthogonal to the rows of G:
     G H^T = 0
 H is called the parity check matrix and its rows are linearly independent.
 For systematic linear block codes:
     H = [I_(n-k) | P^T]
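For the (6,3) example code, G = [P | I_3] and H = [I_3 | P^T], and the orthogonality condition G H^T = 0 can be checked directly (a small sketch of mine):

```python
import numpy as np

# Systematic G = [P | I_k] for the (6,3) code, so H = [I_{n-k} | P^T].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(3, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# Every row of G is orthogonal to every row of H over GF(2).
print(np.mod(G @ H.T, 2))   # all-zero 3x3 matrix
```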
Linear block codes – cont’d

[Figure: data source → format → channel encoding (m → U) → modulation → channel → demodulation/detection (r) → channel decoding (m̂) → format → data sink.]

     r = U + e
     r = (r1, r2, …, rn)   received codeword or vector
     e = (e1, e2, …, en)   error pattern or vector
 Syndrome testing:
   S is the syndrome of r, corresponding to the error pattern e:
     S = r H^T = e H^T
Linear block codes – cont’d
 Standard array
   For row i = 2, 3, …, 2^(n-k), find a vector in Vn of minimum weight that is not already listed in the array.
   Call this pattern e_i and form the i-th row as the corresponding coset:

     U_1 (zero codeword)   U_2                 …   U_(2^k)
     e_2                   e_2 + U_2           …   e_2 + U_(2^k)
     ⋮                     ⋮                       ⋮
     e_(2^(n-k))           e_(2^(n-k)) + U_2   …   e_(2^(n-k)) + U_(2^k)

     (the first column contains the coset leaders; each row below the first is a coset)
Linear block codes – cont’d
 Standard array and syndrome table decoding
   1. Calculate the syndrome  S = r H^T.
   2. Find the coset leader ê = e_i corresponding to S.
   3. Calculate Û = r + ê and the corresponding m̂.
        Û = r + ê = (U + e) + ê = U + (e + ê)
 Note that
   If ê = e, the error is corrected.
   If ê ≠ e, an undetectable decoding error occurs.
Linear block codes – cont’d
 Example: Standard array for the (6,3) code

     codewords
     000000  110100  011010  101110  101001  011101  110011  000111
     000001  110101  011011  101111  101000  011100  110010  000110
     000010  110110  011000  101100  101011  011111  110001  000101
     000100  110000  011110  101010  101101  011001  110111  000011
     001000  111100    …
     010000  100100    …                                               coset
     100000  010100    …
     010001  100101    …                                       010110

     (first column: coset leaders)
Linear block codes – cont’d

     Error pattern   Syndrome
     000000          000
     000001          101
     000010          011
     000100          110
     001000          001
     010000          010
     100000          100
     010001          111

   U = (101110) is transmitted and r = (001110) is received.
   The syndrome of r is computed:
     S = r H^T = (001110) H^T = (100)
   The error pattern corresponding to this syndrome is
     ê = (100000)
   The corrected vector is estimated as
     Û = r + ê = (001110) + (100000) = (101110)
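The whole procedure, building the syndrome table from the coset leaders and correcting the received vector of this example, fits in a short sketch (my own illustration; variable names are not from the lecture):

```python
import numpy as np

# Parity check matrix of the (6,3) code: H = [I_3 | P^T].
H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])

# Syndrome table: one correctable error pattern (coset leader) per syndrome.
coset_leaders = [
    (0,0,0,0,0,0), (0,0,0,0,0,1), (0,0,0,0,1,0), (0,0,0,1,0,0),
    (0,0,1,0,0,0), (0,1,0,0,0,0), (1,0,0,0,0,0), (0,1,0,0,0,1),
]
table = {tuple(np.mod(np.array(e) @ H.T, 2)): e for e in coset_leaders}

def decode(r):
    S = tuple(np.mod(np.array(r) @ H.T, 2))   # syndrome S = r H^T
    e_hat = np.array(table[S])                # estimated error pattern
    return np.mod(np.array(r) + e_hat, 2)     # corrected codeword U_hat = r + e_hat

r = (0, 0, 1, 1, 1, 0)          # received vector from the slide
print(decode(r))                # [1 0 1 1 1 0] -> the transmitted codeword (101110)
```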
Hamming codes
 Hamming codes
   Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
   Hamming codes are expressed as a function of a single integer m ≥ 2:
     Code length:                  n = 2^m - 1
     Number of information bits:   k = 2^m - m - 1
     Number of parity bits:        n - k = m
     Error correction capability:  t = 1
   The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
Hamming codes
 Example: Systematic Hamming code (7,4)

         [ 1 0 0 0 1 1 1 ]
     H = [ 0 1 0 1 0 1 1 ] = [I_(3×3) | P^T]
         [ 0 0 1 1 1 0 1 ]

         [ 0 1 1 1 0 0 0 ]
     G = [ 1 0 1 0 1 0 0 ] = [P | I_(4×4)]
         [ 1 1 0 0 0 1 0 ]
         [ 1 1 1 0 0 0 1 ]
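A sketch of mine showing how such a parity-check matrix can be built for any m by listing all non-zero m-tuples as columns, then deriving G and checking G H^T = 0. The column ordering differs from the slide's example, but any ordering of the non-zero m-tuples gives an equivalent Hamming code:

```python
import numpy as np

def hamming_parity_check(m):
    """Parity-check matrix of the (2^m - 1, 2^m - m - 1) Hamming code in systematic
    form H = [I_m | P^T]: its columns are all non-zero binary m-tuples."""
    cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, 2 ** m)]
    weight_one = [c for c in cols if sum(c) == 1]   # identity columns first
    others     = [c for c in cols if sum(c) > 1]
    return np.array(weight_one + others).T

H = hamming_parity_check(3)                    # (7,4) Hamming code
Pt = H[:, 3:]                                  # the P^T part of H
G = np.hstack([Pt.T, np.eye(4, dtype=int)])    # G = [P | I_k]
print(H)
print(np.mod(G @ H.T, 2))                      # all zeros: G H^T = 0
```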
Cyclic block codes
 Cyclic codes are a subclass of linear
block codes.
 Encoding and syndrome calculation
are easily performed using
feedback shift-registers.
 Hence, relatively long block codes can
be implemented with a reasonable
complexity.
 BCH and Reed-Solomon codes are
cyclic codes.
Cyclic block codes
 A linear (n,k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords.
     U = (u0, u1, u2, …, u_(n-1))
     U^(i) = (u_(n-i), u_(n-i+1), …, u_(n-1), u0, u1, u2, …, u_(n-i-1))   ("i" cyclic shifts of U)
 Example:
     U = (1101)
     U^(1) = (1110)   U^(2) = (0111)   U^(3) = (1011)   U^(4) = (1101) = U
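A one-function sketch (my own) of the cyclic shift, reproducing the example above:

```python
def cyclic_shift(U, i):
    """Shift the codeword U = (u0, u1, ..., u_{n-1}) cyclically i times,
    so that u_{n-i} moves to the front, as in the definition above."""
    i %= len(U)
    return U[-i:] + U[:-i] if i else U

U = (1, 1, 0, 1)
for i in range(1, 5):
    print(i, cyclic_shift(U, i))
# prints (1,1,1,0), (0,1,1,1), (1,0,1,1), (1,1,0,1) = U, matching the slide
```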
Cyclic block codes
 The algebraic structure of cyclic codes implies expressing codewords in polynomial form:
     U(X) = u0 + u1 X + u2 X^2 + … + u_(n-1) X^(n-1)    (degree n-1)
 Relationship between a codeword and its cyclic shifts:
     X U(X) = u0 X + u1 X^2 + … + u_(n-2) X^(n-1) + u_(n-1) X^n
            = u_(n-1) + u0 X + u1 X^2 + … + u_(n-2) X^(n-1) + u_(n-1) X^n + u_(n-1)
            = U^(1)(X) + u_(n-1) (X^n + 1)
   Hence:
     U^(1)(X) = X U(X)  modulo (X^n + 1)
   By extension:
     U^(i)(X) = X^i U(X)  modulo (X^n + 1)
Cyclic block codes
 Basic properties of cyclic codes:
   Let C be a binary (n,k) linear cyclic code.
   1. Within the set of code polynomials in C, there is a unique monic polynomial g(X) with minimal degree r < n. g(X) is called the generator polynomial:
        g(X) = g0 + g1 X + … + g_r X^r
   2. Every code polynomial U(X) in C can be expressed uniquely as
        U(X) = m(X) g(X)
   3. The generator polynomial g(X) is a factor of X^n + 1.
Cyclic block codes
 The orthogonality of G and H in polynomial form is expressed as g(X)h(X) = X^n + 1. This means that h(X) is also a factor of X^n + 1.
 The i-th row of the generator matrix, i = 1, …, k, is formed by the coefficients of the (i-1)-th cyclic shift of the generator polynomial:

         [ g(X)         ]   [ g0 g1 …  g_r                 ]
         [ X g(X)       ]   [    g0 g1 …  g_r              ]
     G = [   ⋮          ] = [          ⋱                   ]
         [ X^(k-1) g(X) ]   [              g0 g1 …  g_r    ]

     (all other entries are zero)
Cyclic block codes
 Systematic encoding algorithm for an (n,k) cyclic code:
   1. Multiply the message polynomial m(X) by X^(n-k).
   2. Divide the result of Step 1 by the generator polynomial g(X). Let p(X) be the remainder.
   3. Add p(X) to X^(n-k) m(X) to form the codeword U(X).
Cyclic block codes
 Example: For the systematic (7,4) cyclic code with generator polynomial g(X) = 1 + X + X^3:
   1. Find the codeword for the message m = (1011).
        n = 7, k = 4, n - k = 3
        m = (1011)  →  m(X) = 1 + X^2 + X^3
        X^(n-k) m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6
        Divide X^(n-k) m(X) by g(X):
          X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                            quotient q(X)      generator g(X)  remainder p(X)
        Form the codeword polynomial:
          U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6
          U = (1 0 0 1 0 1 1)
               parity bits  message bits
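The three-step systematic encoding algorithm and this worked example can be reproduced with GF(2) polynomial division; the helper below is my own sketch (coefficient lists are in ascending powers, u0 first):

```python
def gf2_poly_divmod(dividend, divisor):
    """Divide GF(2) polynomials given as coefficient lists [c0, c1, ...] (ascending powers).
    Returns (quotient, remainder)."""
    rem = list(dividend)
    deg_div = max(i for i, c in enumerate(divisor) if c)
    quot = [0] * max(1, len(dividend) - deg_div)
    for i in range(len(rem) - 1, deg_div - 1, -1):
        if rem[i]:
            quot[i - deg_div] = 1
            for j, c in enumerate(divisor):
                rem[i - deg_div + j] ^= c   # subtract (XOR) the shifted divisor
    return quot, rem[:deg_div]

def cyclic_encode_systematic(m_bits, g, n):
    """Steps 1-3 of the systematic encoding algorithm: U(X) = p(X) + X^{n-k} m(X)."""
    k = len(m_bits)
    shifted = [0] * (n - k) + list(m_bits)   # X^{n-k} m(X)
    _, p = gf2_poly_divmod(shifted, g)       # remainder p(X)
    return p + list(m_bits)                  # parity bits followed by message bits

g = [1, 1, 0, 1]                                       # g(X) = 1 + X + X^3
print(cyclic_encode_systematic([1, 0, 1, 1], g, 7))    # [1, 0, 0, 1, 0, 1, 1]
```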
Cyclic block codes
 Find the generator and parity check matrices, G and H, respectively.
     g(X) = 1 + 1·X + 0·X^2 + 1·X^3  →  (g0, g1, g2, g3) = (1101)

         [ 1 1 0 1 0 0 0 ]
     G = [ 0 1 1 0 1 0 0 ]    Not in systematic form. We do the following:
         [ 0 0 1 1 0 1 0 ]      row(1) + row(3) → row(3)
         [ 0 0 0 1 1 0 1 ]      row(1) + row(2) + row(4) → row(4)

         [ 1 1 0 1 0 0 0 ]              [ 1 0 0 1 0 1 1 ]
     G = [ 0 1 1 0 1 0 0 ]          H = [ 0 1 0 1 1 1 0 ]
         [ 1 1 1 0 0 1 0 ]              [ 0 0 1 0 1 1 1 ]
         [ 1 0 1 0 0 0 1 ]
           P     I_(4×4)                  I_(3×3)   P^T
Cyclic block codes
 Syndrome decoding for cyclic codes:
   The received codeword in polynomial form is given by
     r(X) = U(X) + e(X)        (received codeword = transmitted codeword + error pattern)
   The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:
     r(X) = q(X) g(X) + S(X)   (S(X) is the syndrome)
   With the syndrome and the standard array, the error is estimated.
     In cyclic codes, the size of the standard array is considerably reduced.
Example of the block codes

[Figure: bit-error probability P_B versus E_b/N_0 [dB] curves for 8PSK and QPSK.]
