Lecture Notes - 5
MODULATOR:
The Modulator converts the input bit stream into an electrical waveform
suitable for transmission over the communication channel. The modulator can be
used to minimize the effects of channel noise, to match the frequency spectrum of
the transmitted signal to the channel characteristics, and to provide the capability
to multiplex many signals.
DEMODULATOR:
The extraction of the message from the information-bearing waveform produced by
the modulator is accomplished by the demodulator. The output of the demodulator
is a bit stream. The important parameter is the method of demodulation.
CHANNEL:
The Channel provides the electrical connection between the source and destination.
The different channels are: Pair of wires, Coaxial cable, Optical fibre, Radio
channel, Satellite channel or combination of any of these.
1. Amount of energy in each digital bit (or pulse): Generally, the more energy a
digital bit (or pulse) has, the better the performance that the system will have.
2. The distance between the transmitter and receiver: Because energy is spread
or attenuated as it travels over the channel and more noise is added due to the
existence of more noise sources over long channels, generally the longer the path that
the digital transmitted signal has to travel, the worse the performance that the system
will have. However, you do not always have control over the distance between the
transmitter and receiver.
3. Amount of noise that is added to the signal: Certainly, the less the noise that
is added to the transmitted signal, the better the performance of the communication
system. We usually have limited control over the added noise.
4. Bandwidth of the transmission channel: By using larger bandwidth, we can
either transmit at a higher transmission bit rate while keeping the same probability of
bit error, or we can transmit at the same transmission bit rate but reduce the
probability of bit error. Generally, the larger the bandwidth allocated to a
communication system, the better the performance it will have.
Bandwidth:
Bandwidth is simply a measure of frequency range. The range of frequencies
contained in a composite signal is its bandwidth. The bandwidth is normally a
difference between two numbers. For example, if a composite signal contains
frequencies between 1000 Hz and 5000 Hz, its bandwidth is 5000 − 1000, or 4000 Hz.
If a range of 2.40 GHz to 2.48 GHz is used by a device, then the bandwidth would be
0.08 GHz (more commonly stated as 80 MHz). It is easy to see that the bandwidth we
define here is closely related to the amount of data you can transmit within it: the
more room in frequency space, the more data you can fit in at a given moment. The
term bandwidth is often used for something we should rather call a data rate, as in
"my Internet connection has 1 Mbps of bandwidth", meaning it can transmit data at
1 megabit per second.
Sampling
A message signal may originate from a digital or analog source. If the
message signal is analog in nature, then it has to be converted into digital form before
it can be transmitted by digital means. The process by which a continuous-time signal
is converted into a discrete-time signal is called sampling.
Sampling operation is performed in accordance with the sampling theorem.
Part I: If a signal x(t) does not contain any frequency component beyond W Hz, then
the signal is completely described by its instantaneous uniform samples with
sampling interval (or period) Ts ≤ 1/(2W) sec.
Part II: The signal x(t) can be accurately reconstructed (recovered) from the set of
uniform instantaneous samples by passing the samples sequentially through an ideal
(brick-wall) lowpass filter with bandwidth B, where W ≤ B < fs − W and fs = 1/Ts.
As the samples are generated at equal (same) interval (Ts) of time, the process of
sampling is called uniform sampling. Uniform sampling, as compared to any non-
uniform sampling, is more extensively used in time-invariant systems as the theory of
uniform sampling (either instantaneous or otherwise) is well developed and the
techniques are easier to implement in practical systems.
Conceptually, one may think that the continuous-time signal x(t) is multiplied by an
(ideal) impulse train, so that equation (1) can be rewritten as

xs(t) = x(t) · Σn δ(t − nTs) = Σn x(nTs) δ(t − nTs)

Now, from the theory of the Fourier transform, we know that the F.T. of Σn δ(t − nTs),
the impulse train in the time domain, is an impulse train in the frequency domain:

F{ Σn δ(t − nTs) } = fs Σn δ(f − nfs)

If Xs(f) denotes the Fourier transform of the energy signal xs(t), we can write, using
Eq. (1.2.4) and the convolution property:

Xs(f) = X(f) ∗ fs Σn δ(f − nfs) = fs Σn ∫ X(λ) δ(f − nfs − λ) dλ = fs Σn X(f − nfs)   (5)

[by the sifting property of δ(t) and considering δ(f) as an even function, i.e. δ(f) = δ(−f)]
Fig. 1.3 Spectra of (a) an analog signal x(t) and (b) its sampled version
Fig. 1.3 indicates that the bandwidth of this instantaneously sampled wave
xs(t) is infinite while the spectrum of x(t) appears in a periodic manner, centered at
discrete frequency values n.fs.
Now, Part I of the sampling theorem is about the condition fs > 2W, i.e. (fs − W) > W.
As seen from Fig. 1.3, when this condition is satisfied, the spectra of xs(t) centered at
f = 0 and f = ±fs do not overlap, and hence the spectrum of x(t) is present in xs(t)
without any distortion. This implies that xs(t), the appropriately sampled version of
x(t), contains all information about x(t) and thus represents x(t).
The second part, Nyquist's theorem, suggests a method of recovering x(t) from its
sampled version xs(t) by using an ideal lowpass filter. As indicated by dotted lines in
Fig. 1.3, an ideal lowpass filter (with brick-wall type response) with a bandwidth B,
where W ≤ B < (fs − W), when fed with xs(t), will allow the portion of Xs(f) centered
at f = 0 and will reject all its replicas at f = n·fs, for n ≠ 0. This implies that the shape
of the continuous-time signal x(t) will be retained at the output of the ideal filter.
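To make Parts I and II concrete, here is a minimal Python sketch (the tone frequencies, the value of W and the window length are arbitrary illustrative choices): it takes instantaneous uniform samples of a signal bandlimited to W Hz at fs = 2.5W > 2W, and reconstructs intermediate values with the sinc kernel, which is the impulse response of the ideal (brick-wall) lowpass filter.

import numpy as np

W = 100.0            # highest frequency in x(t), Hz
fs = 2.5 * W         # sampling rate, satisfies fs > 2W
Ts = 1.0 / fs

# Bandlimited test signal: two tones below W Hz
def x(t):
    return np.sin(2 * np.pi * 60.0 * t) + 0.5 * np.cos(2 * np.pi * 95.0 * t)

n = np.arange(-200, 201)        # sample indices
samples = x(n * Ts)             # instantaneous uniform samples x(nTs)

# Ideal reconstruction: x(t) = sum_n x(nTs) * sinc((t - nTs)/Ts);
# the sinc kernel is the impulse response of the ideal lowpass filter
def reconstruct(t):
    return np.sum(samples * np.sinc((t - n * Ts) / Ts))

for t in (0.0012, 0.0057, 0.0101):
    print(f"x({t}) = {x(t):+.4f}, reconstructed = {reconstruct(t):+.4f}")

With fs > 2W the reconstructed values agree with the original signal, up to the truncation error of the finite sample window.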
Hartley Shannon Law
The theory behind designing and analyzing channel codes is called Shannon's noisy
channel coding theorem. It puts an upper limit on the amount of information you can
send over a noisy channel using a perfect channel code. This is given by the following
equation:

C = B log2(1 + SNR)

where C is the upper bound on the capacity of the channel (bit/s), B is the bandwidth
of the channel (Hz) and SNR is the Signal-to-Noise ratio (unitless).
Bandwidth-S/N Tradeoff
The expression for the channel capacity of the Gaussian channel makes intuitive
sense:

C = B log2(1 + S/N)

Thus we may trade off bandwidth for SNR. For example, if S/N = 7 and B = 4 kHz,
then the channel capacity is C = 12 × 10³ bit/s. If the SNR increases to S/N = 15 and
B is decreased to 3 kHz, the channel capacity remains the same. However, as B tends
to infinity, the channel capacity does not become infinite since, with an increase in
bandwidth, the noise power also increases. If the noise power spectral density is η/2,
then the total noise power is N = ηB, so the Shannon-Hartley law becomes

C = B log2(1 + S/(ηB)) → (S/η) log2 e   as B → ∞
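A short numerical check of this trade-off (Python; the function name and the noise density value are illustrative choices):

import math

def capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# The trade-off from the text: both channels give 12,000 bit/s
print(capacity(4000, 7))    # 12000.0
print(capacity(3000, 15))   # 12000.0

# As B grows with fixed noise density eta, C approaches (S/eta)*log2(e)
S, eta = 1.0, 1e-3
for B in (1e3, 1e5, 1e7):
    print(B, capacity(B, S / (eta * B)))
print("limit:", (S / eta) * math.log2(math.e))   # about 1442.7 bit/s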
Pulse Code Modulation
Introduction
In the simplest model of a telephone speech communication there is a direct,
dedicated, physical connection between the two participants in the conversation, and
this link is held for the duration of the conversation. The analogue electrical signal
produced by the telephone at either end is sent on to the connection without
modification.
In Pulse Amplitude Modulation (PAM), the unmodified electrical signal is not sent
on to the connection. Instead, short samples of the signal are taken at regular
intervals, and these samples are sent on to the connection. The amplitude of each
sample is identical to the signal voltage at the time when the sample was taken.
Typically, 8,000 samples are taken per second, so that the interval between samples
is 125 µs, and the duration of each sample is approximately 4 µs.
Because each sample is very short (~4 µs) there is a lot of time between samples
(~121 µs). Samples from other conversations are put into this spare time. Usually
the samples from 32 separate conversations are put on to a single line. This process is
called Time Division Multiplexing (TDM).
Each sample is very short, and will be distorted as it travels across a communications
network. In order to reconstruct the original analogue signal the only information the
receiver needs to have about a sample is its amplitude, but if this is distorted then all
information about the sample has been lost. To overcome this problem, the pulse is
not transmitted directly; instead its amplitude is measured and converted into an 8-bit
binary number - a sequence of 1s and 0s. At the receiver end, the receiver merely
needs to detect if a 1 or a 0 has been received, so that it can still recover the amplitude
of a PAM pulse even if the 1s and 0s used to describe it have been distorted.
The process of converting the amplitude of each pulse into a stream of 1s and 0s is
called Pulse Code Modulation (PCM).
Note that the process of PAM and PCM (but without the use of TDM) is essentially
used to store music and speech on CDs, but with a higher sample rate, more bits per
sample and complex error correction mechanisms.
Fig. 2.1: PCM transmitter: analogue input → sampler → A-to-D converter → binary
coder → parallel-to-serial converter → pulse generator → PCM output. PCM receiver
(demodulator): PCM input → serial-to-parallel converter → D-to-A converter →
lowpass filter (LPF) → analogue output.
PCM is a true digital process as compared to PAM. In PCM the speech signal is
converted from analogue to digital form.
In quantization the levels are assigned a binary codeword. All sample values falling
between two quantization levels are considered to be located at the centre of the
quantization interval. In this manner the quantization process introduces a certain
amount of error or distortion into the signal samples. This error known as
quantization noise, is minimised by establishing a large number of small quantization
intervals. Of course, as the number of quantization intervals increases, so must the
number of bits increase to uniquely identify the quantization intervals. For example,
if an analogue voltage level is to be converted to a digital system with 8 discrete
levels or quantization steps, three bits are required. In the ITU-T version there are 256
quantization steps, 128 positive and 128 negative, requiring 8 bits. A positive level is
represented by having bit 8 (MSB) at 0, and for a negative level the MSB is 1.
Quantization
Assume that a signal with power Ps is to be quantized using a quantizer with L = 2^n
levels ranging in voltage from −mp to mp, as shown in Fig. 2.2.

Fig. 2.2: Quantizer input samples x and quantizer output samples xq at the sampling
instants 0, Ts, 2Ts, 3Ts, 4Ts, 5Ts; the L = 2^n levels (n bits) span the range −mp to +mp.

We can define Δv to be the height of each of the L levels of the quantizer as shown
above. This gives a value of Δv equal to

Δv = 2mp / L
Therefore, for a set of quantizers with the same mp, the larger the number of levels of
a quantizer, the smaller the size of each quantization interval; and for a set of
quantizers with the same number of quantization intervals, the larger mp is, the larger
each quantization interval must be to accommodate the whole quantization range.
Now if we look at the input-output characteristic of the quantizer, it will be similar
to the red line in the following figure. Note that as long as the input is within the
quantization range of the quantizer, the output of the quantizer, represented by the red
line, follows the input of the quantizer. When the input of the quantizer exceeds the
range of −mp to mp, the output of the quantizer starts to deviate from the input, and
the quantization error (the difference between an input and the corresponding output
sample) increases significantly.
Fig. 2.3: Input-output characteristic of the quantizer: a staircase of steps of width Δv
and height Δv following the line xq = x between −mp and +mp, so that the output stays
within Δv/2 of the input inside the quantization range.
Now let us define the quantization error, represented by the difference between the
input sample and the corresponding output sample, to be q:

q = x − xq

Plotting this quantization error versus the input signal of the quantizer is seen next.
Notice that the plot of the quantization error is obtained by taking the difference
between the blue and red lines in Fig. 2.3 above.

Fig. 2.4: Quantization error q versus quantizer input x: a sawtooth confined between
−Δv/2 and +Δv/2 across the quantization range.

It is seen from Fig. 2.4 that the quantization error of any sample is restricted
between −Δv/2 and +Δv/2, except when the input signal exceeds the quantization
range of −mp to mp.
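The rule described above (every input inside an interval is mapped to the centre of that interval) can be sketched in a few lines of Python; the mid-rise construction and the parameter values are illustrative assumptions:

import numpy as np

def uniform_quantize(x, n_bits, mp):
    """Uniform quantizer with L = 2**n_bits levels over [-mp, mp];
    each input is mapped to the centre of its quantization interval."""
    L = 2 ** n_bits
    dv = 2 * mp / L                                # step size, dv = 2*mp / L
    x_clipped = np.clip(x, -mp, mp - 1e-12)        # keep index in 0..L-1
    idx = np.floor((x_clipped + mp) / dv)          # interval index
    return -mp + (idx + 0.5) * dv                  # centre of the interval

mp, n_bits = 1.0, 3
x = np.linspace(-1.2, 1.2, 7)
xq = uniform_quantize(x, n_bits, mp)
print("input:  ", np.round(x, 3))
print("output: ", np.round(xq, 3))
print("error q:", np.round(x - xq, 3))   # within +/- dv/2 = 0.125 inside [-mp, mp]

Inputs beyond ±mp are clipped, so their error grows beyond Δv/2, exactly as described above.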
Uniform Quantization
We assume that the amplitude of the signal m(t) is confined to the range (−mp, +mp).
This range (2mp) is divided into L equal steps, each of height Δv = 2mp / L.

Fig. 2.5: Input-output characteristic of a uniform quantizer over the range −mp to +mp.
Companding
- High-amplitude analog signals are compressed prior to transmission and then
expanded in the receiver.
- Compressing the higher-amplitude signals improves the dynamic range.
- Early PCM systems used analog companding, whereas modern systems use digital
companding.
Analog companding
Fig. 2.7: PCM system with analog companding
--In the transmitter, the dynamic range of the analog signal is compressed, and then
converted to a linear PCM code.
--In the receiver, the PCM code is converted to a PAM signal, filtered, and then
expanded back to its original dynamic range.
-- There are two methods of analog companding currently being used that closely
approximate a logarithmic function and are often called log-PCM codes.
1) µ-law
2) A-law

µ-law companding:

Vout = Vmax · ln(1 + µ·Vin / Vmax) / ln(1 + µ)
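A minimal sketch of the µ-law compressor and the matching expander, assuming the standard value µ = 255 used in telephony (the µ constant is left implicit in the formula above):

import numpy as np

MU = 255.0   # standard mu value for North American PCM (an assumption here)

def mu_compress(v_in, v_max=1.0, mu=MU):
    """mu-law compressor: V_out = V_max * ln(1 + mu*|V_in|/V_max) / ln(1 + mu)."""
    return np.sign(v_in) * v_max * np.log1p(mu * np.abs(v_in) / v_max) / np.log1p(mu)

def mu_expand(v_out, v_max=1.0, mu=MU):
    """Inverse of the compressor (the expander in the receiver)."""
    return np.sign(v_out) * (v_max / mu) * np.expm1(np.abs(v_out) / v_max * np.log1p(mu))

x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
y = mu_compress(x)
print(np.round(y, 4))               # small amplitudes are boosted before coding
print(np.round(mu_expand(y), 4))    # the round trip recovers x

Note how the ±0.01 inputs are mapped to much larger coded values: this is the dynamic range improvement referred to above.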
A-law companding

Digital companding
--With digital companding, the analog signal is first sampled and converted to a
linear PCM code, and then the linear code is digitally compressed.
-- In the receiver, the compressed PCM code is expanded and then decoded back to
analog.
-- The most recent digitally compressed PCM systems use a 12-bit linear PCM code
and an 8-bit compressed PCM code.
For a negative sample the sign bit is 1. The remaining 7 bits are used to code the
sample value. The ITU-T defines a look-up table which allocates a particular binary
code to each quantified A-law value.
The line coding which is used assigns opposite polarities to successive 1s. This
eliminates any DC voltage on the line, and reduces the inter-symbol interference if
adjacent bits are 1. If there is silence on the PCM channel then the measured
samples will be 0 Vrms and the coder output will be 1000 0000. A stream of all
zeros is not desirable on an active channel, because the receiver then sees no
transitions from which to recover the bit timing.
This is a bipolar signalling technique (i.e. relies on the transmission of both positive
and negative pulses).
In AMI, positive and negative pulses (of equal amplitude) are used for alternate 1
symbols. No pulse is used for symbol 0. In either case the pulse returns to 0 before
the end of the bit interval. This eliminates any DC on the line.
HDB3 encoding rules follow those for AMI, except that a sequence of four
consecutive 0's is encoded using a special "violation" bit. The 4th 0 bit is given the
same polarity as the last 1-bit which was sent using the AMI encoding rule. This
prevents long runs of 0's in the data stream which may otherwise prevent a receiver
from tracking the centre of each bit. By introducing violations, extra "edges" are
introduced, enabling a Digital PLL to reliably reconstruct the clock signal at the
receiver. The HDB3 is transparent to the sequence of bits being transmitted (i.e.
whatever data is sent, the Digital PLL can reconstruct the data and extract the bits at
the receiver).
To prevent a DC component being introduced by excessive runs of zeros, each run of
four zeros is encoded as 000V or B00V, the latter being used when necessary to keep
successive violation pulses alternating in polarity. The value of B is assigned + or −
alternately throughout the bit stream.
Examples:

1010 1010 = + 0 − 0 + 0 − 0 = B 0 B 0 B 0 B 0
1000 0001 = + 0 0 0 + 0 0 − = B 0 0 0 V 0 0 B
1000 0110 = + 0 0 0 + − + 0 = B 0 0 0 V B B 0
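The rules above can be turned into a small encoder. The Python sketch below is a hedged illustration: it applies AMI and substitutes each run of four zeros with 000V or B00V using one common convention (B00V when an even number of marks has been sent since the last violation, so that violations alternate in polarity), and it assumes the first mark is positive, which reproduces the three examples:

def hdb3_encode(bits):
    # AMI with HDB3 zero substitution: every run of four zeros becomes
    # 000V or B00V so that successive violations alternate in polarity.
    out = []
    last_pulse = -1        # chosen so that the first mark is sent as +
    marks_since_v = 0      # nonzero pulses since the last violation
    zeros = 0
    for b in bits:
        if b == 1:
            last_pulse = -last_pulse          # AMI rule: alternate polarity
            out.append(last_pulse)
            marks_since_v += 1
            zeros = 0
        else:
            out.append(0)
            zeros += 1
            if zeros == 4:
                if marks_since_v % 2 == 0:
                    # B00V: balancing pulse obeys AMI, violation repeats it
                    last_pulse = -last_pulse
                    out[-4:] = [last_pulse, 0, 0, last_pulse]
                else:
                    # 000V: violation repeats the last pulse polarity
                    out[-4:] = [0, 0, 0, last_pulse]
                marks_since_v = 0
                zeros = 0
    return out

symbol = {1: "+", -1: "-", 0: "0"}
for word in ("10101010", "10000001", "10000110"):
    coded = hdb3_encode([int(c) for c in word])
    print(word, "->", " ".join(symbol[s] for s in coded))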
PCM Timing and Synchronisation
The PCM receiver must be able to identify the start and finish of each full sampling
sequence and to identify each bit position. The sampling clock needs to be either sent
to, or regenerated at, the receiving side to determine when each full sequence of
sampling begins and ends. The data clock is also needed to determine exactly when
to read each bit of information.
Fig. 2.9: Frame and data clocks. The data clock runs at 64 kbit/s (bit period
15.625 µs) and the frame clock has a period of 125 µs, spanning the eight bits B1-B8
of one sample (e.g. 1 0 1 0 0 1 1 1).

A PCM channel is sampled at 8,000 Hz, or once every 125 µs. Whether there is one
channel or 30 TDM channels, the sampling period is fixed at 125 µs, and this period
is known as a frame. Therefore the frame clock must have a period of 125 µs. The
rising edge of the frame clock informs the receiver that the next bit will be Bit 1 of a
new sample. The falling edge of the data clock informs the receiver that it must read
the data bit.
When the bit stream is transmitted along a line the pulses become distorted and the
rise and fall times become significant. Ideally, a 1 will be high for 15.625 µs. In
practice the pulse may only be above the high threshold for a few µs, so it is very
important that the bit is read within a certain time limit of the clock pulse.
The simplest way to synchronise a PCM sender to a PCM receiver is to send the
clock signals on circuits separate from the data. This would be done in a self-contained
system such as a private branch exchange (PBX). Telephony is full duplex, so there
is a coder and a decoder at each port, but each would use the same clock.
In delta modulation each sample is encoded using a one-bit coder (Fig. 2.10). If the
sample is greater than the previous sample a 1 is generated; otherwise a 0 is
generated. The advantage of delta modulation over PCM is its simplicity and lower
cost, but the noise performance is not as good as that of PCM.
where: x(t) = input signal, q = step size, T = period between samples, fs = sampling
frequency.

Assume that the input signal has maximum amplitude A and maximum frequency F.
The most rapidly changing input is provided by x(t) = A·sin(2πFt).
For this, dx(t)/dt = 2πFA·cos(2πFt), and this slope has a maximum value of 2πFA.

Overload occurs if 2πFA > q·fs.
To prevent overload we require q·fs > 2πFA.

Example: A = 2 V, F = 3.4 kHz, and the signal sampled 1,000,000 times per second
requires q > 2π × 3,400 × 2 / 1,000,000 V ≈ 42.7 mV.
Granular noise occurs when the signal slope changes more slowly than the step can
represent: the reconstructed signal oscillates by ±1 step in every sample. It can be
reduced by decreasing the step size. This requires that the sample rate be increased. Delta
Modulation requires a sampling rate much higher than twice the bandwidth. It
requires oversampling in order to obtain an accurate prediction of the next input,
since each encoded sample contains a relatively small amount of information. Delta
Modulation requires higher sampling rates than PCM.
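A small simulation (Python) ties the overload criterion to the behaviour of a linear delta modulator; the parameter values are those of the worked example above, and the encoder is the textbook staircase tracker rather than any particular hardware:

import numpy as np

A, F = 2.0, 3400.0        # amplitude (V) and frequency (Hz) from the example
fs = 1_000_000            # sampling rate (samples/s)

# Slope-overload criterion: q * fs must exceed 2*pi*F*A
q_min = 2 * np.pi * F * A / fs
print(f"minimum step size q = {1000 * q_min:.1f} mV")   # about 42.7 mV

def delta_modulate(x, q):
    """Linear delta modulator: emit 1 if the input exceeds the running
    staircase approximation, else 0; the staircase moves by +/- q."""
    approx, bits, staircase = 0.0, [], []
    for sample in x:
        bit = 1 if sample > approx else 0
        approx += q if bit else -q
        bits.append(bit)
        staircase.append(approx)
    return np.array(bits), np.array(staircase)

t = np.arange(0, 0.002, 1 / fs)
x = A * np.sin(2 * np.pi * F * t)
for q in (q_min / 4, q_min):          # too small a step slope-overloads
    _, stairs = delta_modulate(x, q)
    print(f"q = {1000*q:5.1f} mV, max tracking error = {np.max(np.abs(x - stairs)):.3f} V")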
Fig. 2.11: Differential encoder: the analogue input passes through a band-limiting
filter to a summer, which forms the difference between the input and the accumulator
(DAC) feedback; the difference samples are quantised and encoded (ADC) to give the
encoded output.
The principle of ADPCM is to use our knowledge of the signal in the past time to
predict the signal one sample period later, in the future. The predicted signal is then
compared with the actual signal. The difference between these is the signal which is
sent to line - it is the error in the prediction. However this is not done by making
comparisons on the incoming audio signal - the comparisons are done after PCM
coding.
To implement ADPCM the original (audio) signal is sampled as for PCM to produce
a code word. This code word is manipulated to produce the predicted code word for
the next sample. The new predicted code word is compared with the code word of the
second sample. The result of this comparison is sent to line. Therefore we need to
perform PCM before ADPCM.
The ADPCM word represents the prediction error of the signal, and has no
significance itself. Instead the decoder must be able to predict the voltage of the
recovered signal from the previous samples received, and then determine the actual
value of the recovered signal from this prediction and the error signal, and then to
reconstruct the original waveform.
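As an illustration of the predict-compare-transmit idea, here is a deliberately simplified sketch (Python) using a fixed first-order predictor; real ADPCM adapts both the predictor and the quantizer step, so treat this only as a sketch of the loop structure, with all names and values invented for the example:

import numpy as np

def dpcm_encode(samples, q_step, leak=0.95):
    """Predictive coder: quantize the prediction error and keep the
    decoder-side reconstruction inside the loop, as described above."""
    pred, codes = 0.0, []
    for s in samples:
        err = s - pred                              # prediction error
        code = int(np.round(err / q_step))          # this is what goes to line
        codes.append(code)
        pred = leak * (pred + code * q_step)        # decoder-side prediction
    return codes

def dpcm_decode(codes, q_step, leak=0.95):
    pred, recon = 0.0, []
    for code in codes:
        pred = leak * (pred + code * q_step)
        recon.append(pred)
    return np.array(recon)

t = np.arange(200) / 8000.0
speech_like = np.sin(2 * np.pi * 300 * t)      # slowly varying input
codes = dpcm_encode(speech_like, q_step=0.05)
rec = dpcm_decode(codes, q_step=0.05)
print("max |code| sent:", max(abs(c) for c in codes))   # few bits suffice
print("rms error:", np.sqrt(np.mean((speech_like - rec) ** 2)))

Because the error codes are small compared with the raw sample values, fewer bits per sample need to be sent: this is how ADPCM fits two speech channels into one 64 kbit/s link.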
ADPCM is sometimes used by telecom operators to fit two speech channels onto a
single 64 kbit/s link. This was very common for transatlantic phone calls via satellite
up until a few years ago. Now, nearly all calls use fibre optic channels at 64 kbit/s.
Delta modulation, like DPCM is a predictive waveform coding technique and can be
considered as a special case of DPCM. It uses the simplest possible quantizer,
namely a two level (one bit) quantizer. The price paid for achieving the simplicity of
the quantizer is the increased sampling rate (much higher than the Nyquist rate) and
the possibility of slope-overload distortion in the waveform reconstruction, as
explained in greater detail later on in this section.
In DM, the analog signal is highly over-sampled in order to increase the adjacent
sample correlation. The implication of this is that there is very little change in two
adjacent samples, thereby enabling us to use a simple one bit quantizer, which like in
DPCM, acts on the difference (prediction error) signals.
In its original form, the DM coder approximates an input time function by a series of
linear segments of constant slope. Such a coder is therefore referred to as a Linear (or
non-adaptive) Delta Modulator (LDM). Subsequent developments have resulted in
delta modulators where the slope of the approximating function is a variable. Such
coders are generally classified under Adaptive Delta Modulation (ADM) schemes.
We use DM to indicate either of the linear or adaptive variety.
Delta modulation principle of operation

Delta modulation was introduced in the 1940s as a simplified form of pulse code
modulation (PCM), which required a difficult-to-implement analog-to-digital (A/D)
converter.

The output of a delta modulator is a bit stream of samples, at a relatively high rate
(e.g., 100 kbit/s or more for a speech bandwidth of 4 kHz), the value of each bit being
determined according to whether the input message sample amplitude has increased
or decreased relative to the previous sample. It is an example of differential pulse
code modulation (DPCM).
Block diagram

The operation of a delta modulator is to periodically sample the input message, to
make a comparison of the current sample with that preceding it, and to output a single
bit which indicates the sign of the difference between the two samples. This in
principle would require a sample-and-hold type circuit.

De Jager (1952) hit on an idea for dispensing with the need for a sample-and-hold
circuit. He reasoned that if the system was producing the desired output then this
output could be sent back to the input and the two analog signals compared in a
comparator. The output is a delayed version of the input, and so the comparison is in
effect that of the current bit with the previous bit, as required by the delta modulation
principle.
Figure 2.13 illustrates the basic system in block diagram form, and this will be the
modulator you will be modelling.

The system is in the form of a feedback loop. This means that its operation is not
necessarily obvious, and its analysis non-trivial. But you can build it, and confirm
that it does behave in the manner a delta modulator should.

The system is a continuous-time to discrete-time converter. In fact, it is a form of
analog-to-digital converter, and is the starting point from which more sophisticated
delta modulators can be developed.

The sampler block is clocked. The output from the sampler is a bipolar signal, in the
block diagram being either ±V volts. This is the delta modulated signal, the
waveform of which is shown in Figure 2.15. It is fed back, in a feedback loop, via an
integrator, to a summer. The integrator output is a sawtooth-like waveform, also
illustrated in Figure 2.15. It is shown overlaid upon the message, of which it is an
approximation.
The sawtooth waveform is subtracted from the message, also connected to the
summer, and the difference - an error signal - is the signal appearing at the summer
output.

An amplifier is shown in the feedback loop. This controls the loop gain. In practice it
may be a separate amplifier, part of the integrator, or within the summer. It is used to
control the size of the teeth of the sawtooth waveform, in conjunction with the
integrator time constant.

When analysing the block diagram of Figure 2.13 it is convenient to think of the
summer having unity gain between both inputs and the output. The message comes
in at a fixed amplitude. The signal from the integrator, which is a sawtooth
approximation to the message, is adjusted with the amplifier to match it as closely as
possible.
step size calculation

In the delta modulator of Figure 2.13 the output of the integrator is a sawtooth-like
approximation to the input message. The teeth of the saw must be able to rise (or
fall) fast enough to follow the message. Thus the integrator time constant is an
important parameter.

For a given sampling (clock) rate the step slope (volt/s) determines the size (volts) of
the step within the sampling interval.

Suppose the amplitude of the rectangular wave from the sampler is ±V volt. For a
change of input sample to the integrator from (say) negative to positive, the change
of integrator output after a clock period T will be the step slope multiplied by T.
slope overload and granularity

The binary waveform illustrated in Figure 2.15 is the signal transmitted. This is the
delta modulated signal.

The integral of the binary waveform is the sawtooth approximation to the message.
In the experiment entitled Delta demodulation (in this Volume) you will see that this
sawtooth wave is the primary output from the demodulator at the receiver. Lowpass
filtering of the sawtooth (from the demodulator) gives a better approximation to the
message. But there will be accompanying noise and distortion, products of the
approximation process at the modulator.

The unwanted products of the modulation process, observed at the receiver, are of
two kinds. These are due to slope overload, and granularity.
slope overload

This occurs when the sawtooth approximation cannot keep up with the rate-of-change
of the input signal in the regions of greatest slope.

The step size is reasonable for those sections of the sampled waveform of small
slope, but the approximation is poor elsewhere. This is slope overload, due to too
small a step. Slope overload is illustrated in Figure 2.16.

Figure 2.16: slope overload

To reduce the possibility of slope overload the step size can be increased (for the
same sampling rate). This is illustrated in Figure 2.17. The sawtooth is better able to
match the message in the regions of steep slope.

An alternative method of slope overload reduction is to increase the sampling rate.
This is illustrated in Figure 2.18, where the rate has been increased by a factor of 2.4
times, but the step is the same size as in Figure 2.15.
1.4 Granular noise

Refer back to Figure 2.16. The sawtooth follows the message being sampled quite
well in the regions of small slope. To reduce the slope overload the step size is
increased, and now (Figure 2.17) the match over the regions of small slope has been
degraded.

The degradation shows up, at the demodulator, as increased quantizing noise, or
granularity.
1.5 noise and distortion minimization

There is a conflict between the requirements for minimization of slope overload and
the granular noise. The one requires an increased step size, the other a reduced step
size. You should refer to your textbook for more discussion of ways and means of
reaching a compromise. You will meet an example in the experiment entitled
Adaptive delta modulation (in this Volume). An optimum step can be determined by
minimizing the quantizing error at the summer output, or the distortion at the
demodulator output.
From the previous chapter, we know that the disadvantage of delta modulation
appears when the slope of the input audio signal exceeds the limit the delta modulator
can track, i.e. when 2πFA > q·fs. This situation produces slope overload and causes
signal distortion. Adaptive delta modulation (ADM) is a modification of delta
modulation that addresses the occurrence of slope overload.
Figure 2.20 is the block diagram of the ADM modulator. In Figure 2.20, we can see
that the delta modulator comprises a comparator, sampler and integrator; the slope
controller and the level-detect algorithm together form a quantization level adjuster,
which controls the gain of the integrator in the delta modulator. The ADM modulator
is a modification of the delta modulator, needed because the delta modulator suffers
from slope overload: the magnitude of the step Δ(t) of the delta modulator is fixed,
i.e. the increment of +Δ or −Δ is unable to follow the variation of the slope of the
input signal. In ADM, when the variation of the slope of the input signal is large, the
magnitude of Δ(t) can increase to follow the variation, so slope overload does not
occur. On the other hand, there is another technique, known as continuously variable
slope delta (CVSD) modulation, which is commonly used in Bluetooth applications.
The CVSD modulator is also a modification of the delta modulator, used to reduce
the occurrence of slope overload. The difference between the CVSD and ADM
modulators is the quantization level adjuster: in the ADM modulator it takes discrete
values, whereas in the CVSD modulator it is continuous. Simply put, the quantization
step of the ADM modulator varies digitally, taking values such as +1, +2, +3, −2, −3
and so on, while for the CVSD modulator the quantization step varies in an analog
fashion, taking values such as +1, +1.1, +1.2, −1.5, −0.3, −0.9 and so on.
Fig. 2.20 The Operation Theory of ADM Modulation
UNIT - II

S2(t) = 0,  0 ≤ t ≤ Tb    for symbol 0    [ASK]

S2(t) = √(2Eb/Tb) cos(2πf2t),  0 ≤ t ≤ Tb    for symbol 0    [FSK]
3. PSK[Phase Shift Keying]:
In a binary PSK system the pair of signals S1(t) and S2(t) are used to
represent binary symbols 1 and 0 respectively:

S1(t) = √(2Eb/Tb) cos(2πfct),  0 ≤ t ≤ Tb    for symbol 1

S2(t) = −√(2Eb/Tb) cos(2πfct),  0 ≤ t ≤ Tb    for symbol 0
Fig.: Binary PSK transmitter: the binary data sequence is applied to a non-return-to-zero
level encoder, and the resulting wave multiplies the carrier φ1(t) = √(2/Tb) cos(2πfct)
in a product modulator to produce the binary PSK signal.

Fig.: Coherent binary PSK receiver: the received signal x(t) is correlated with φ1(t)
over one bit interval (0 to Tb) to produce x1, which a decision device compares against
the threshold 0: choose 1 if x1 > 0, choose 0 if x1 < 0.
In a coherent binary PSK system the pair of signals S1(t) and S2(t) are used
to represent binary symbols 1 and 0 respectively:

S1(t) = √(2Eb/Tb) cos(2πfct)    for symbol 1
Fig. 3.4: (a) FSK transmitter; (b) FSK receiver
A binary FSK transmitter is as shown in Fig. 3.4(a). The incoming binary data
sequence is applied to an on-off level encoder. The output of the encoder is √Eb volts
for symbol 1 and 0 volts for symbol 0. For symbol 1 the upper channel is switched on
with oscillator frequency f1; for symbol 0, because of the inverter, the lower channel
is switched on with oscillator frequency f2. These two outputs are combined using an
adder circuit and then transmitted. The transmitted signal is the required BFSK signal.
The detector consists of two correlators. The incoming noisy BFSK signal x(t) is
common to both correlators. The coherent reference signals φ1(t) and φ2(t) are
supplied to the upper and lower correlators respectively.
The correlator outputs are then subtracted, one from the other, resulting in a
random variable l (l = x1 − x2). The output l is compared with a threshold of zero volts.
If l > 0, the receiver decides in favour of symbol 1;
if l < 0, the receiver decides in favour of symbol 0.
BINARY ASK SYSTEM:-

Fig. 3.5: BASK transmitter: the binary data in on-off form multiplies the carrier
φ1(t) = √(2/Tb) cos(2πfct) in a product modulator to give the BASK signal.

Fig. 3.6: BASK receiver: the received signal x(t) is correlated with
φ1(t) = √(2/Tb) cos(2πfct), 0 ≤ t ≤ Tb, and the decision device chooses symbol 1 if the
correlator output x exceeds the threshold λ.
The transmitted signals S1(t) and S2(t) are given by S1(t) = √(2Eb/Tb) cos(2πfct) for
symbol 1 and S2(t) = 0 for symbol 0.

Fig. 3.7: Signal space representation of the BASK signal: message point 1 at √Eb and
message point 2 at the origin along the φ1(t) axis.
In the transmitter the binary data sequence is given to an on-off encoder, which gives
an output of √Eb volts for symbol 1 and 0 volts for symbol 0. The resulting binary wave
[in unipolar form] and the sinusoidal carrier φ1(t) are applied to a product modulator. The
desired BASK wave is obtained at the modulator output.

In the demodulator, the received noisy BASK signal x(t) is applied to a correlator with
coherent reference signal φ1(t) as shown in fig. (b). The correlator output x is compared
with the threshold λ.
If x > λ the receiver decides in favour of symbol 1.
If x < λ the receiver decides in favour of symbol 0.
Incoherent detection:
Fig. 3.9 shows the block diagram of an incoherent FSK demodulator. The detector
consists of two band-pass filters, one tuned to each of the two frequencies used to
communicate 0s and 1s. The output of each filter is envelope detected and then
baseband detected using an integrate-and-dump operation. The detector simply
evaluates which of the two possible sinusoids is stronger at the receiver. If we take the
difference of the outputs of the two envelope detectors, the result is a bipolar baseband
signal.
The envelope detector outputs are sampled at t = kTb and their values are
compared with the threshold, and a decision is made in favour of symbol 1 or 0.
dk = b̄k ⊕ dk−1 (equivalently, dk = bk ⊕ d̄k−1)

i.e. dk is the complement of the modulo-2 sum of bk and dk−1: dk is unchanged from
dk−1 if bk = 1, and toggled if bk = 0. Here bk is the input binary digit at time kTb and
dk−1 is the previous value of the differentially encoded digit. The table illustrates the
logical operation involved in the generation of the DPSK signal.
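A minimal sketch of the differential encoding rule and its decoding by comparing adjacent bits (Python; the reference bit d0 = 1 is an arbitrary choice):

def dpsk_encode(bits, d0=1):
    """Differential encoding: d_k = XNOR(b_k, d_{k-1}), so the phase is
    unchanged for b_k = 1 and toggled for b_k = 0."""
    d = [d0]                        # reference bit
    for b in bits:
        d.append(1 - (b ^ d[-1]))   # XNOR
    return d

def dpsk_decode(d):
    """Decode by comparing adjacent encoded bits (no absolute phase needed)."""
    return [1 - (d[k] ^ d[k - 1]) for k in range(1, len(d))]

b = [1, 0, 0, 1, 0, 1, 1]
d = dpsk_encode(b)
print("b :", b)
print("d :", d)
print("b':", dpsk_decode(d))        # recovers the original bits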
φ1(t) = √(2/T) cos(2πfct),  0 ≤ t ≤ T

φ2(t) = √(2/T) sin(2πfct),  0 ≤ t ≤ T

There are four message points, and the associated signal vectors are defined by

si = [ √E cos((2i − 1)π/4),  −√E sin((2i − 1)π/4) ],   i = 1, 2, 3, 4
The table shows the elements of the signal vectors, namely si1 and si2.

Table:

Input dibit | Phase of QPSK signal (radians) | si1      | si2
10          | π/4                            | +√(E/2)  | −√(E/2)
00          | 3π/4                           | −√(E/2)  | −√(E/2)
01          | 5π/4                           | −√(E/2)  | +√(E/2)
11          | 7π/4                           | +√(E/2)  | +√(E/2)
Unit III
BASEBAND:
Pulse Shaping for Optimum Transmissions
Base Band Reception Techniques
Receiving Filter:
Correlative receiver
For an AWGN channel and for the case when the transmitted signals are
equally likely, the optimum receiver consists of two subsystems
MATCHED FILTER
Since each of the orthonormal basis functions φ1(t), φ2(t), …, φM(t) is
assumed to be zero outside the interval 0 ≤ t ≤ T, we can design a linear filter with
impulse response hj(t); with the received signal x(t) the filter output is given by
the convolution integral

yj(t) = ∫ x(τ) hj(t − τ) dτ

MATCHED FILTER
A filter whose impulse response is a time-reversed and delayed version of the input
signal φj(t), i.e. hj(t) = φj(T − t), is said to be matched to φj(t).
PROPERTY 1
The spectrum of the output signal of a matched filter with the matched signal as
input is, except for a time delay factor, proportional to the energy spectral
density of the input signal.
PROPERTY 2
PROPERTY 3
The output Signal to Noise Ratio of a Matched filter depends only on the ratio
of the signal energy to the power spectral density of the white noise at the filter
input.
PROPERTY 4
EYE PATTERN
The quality of digital transmission systems is evaluated using the bit
error rate. Degradation of quality occurs in each process: modulation,
transmission, and detection. The eye pattern is an experimental method that
contains all the information concerning the degradation of quality. Therefore,
careful analysis of the eye pattern is important in analyzing the degradation
mechanism.
The sensitivity of the system to timing error is determined by the rate of closure
of the eye as the sampling time is varied.
INFORMATION THEORY
where 0·log(0) = 0. The base of the logarithm is generally 2; when this is the case, the units
of entropy are bits.
There are other possibilities besides being completely random and completely determined.
Imagine a weighted coin, such that heads occur 75% of the time. The entropy would be:
−(0.75·log2 0.75 + 0.25·log2 0.25) = 0.8113. After 100 trials, I'd only need a message of about
82 bits on average to describe the sample. Shannon showed that there exists a coder that can
construct messages of length H(X) + 1, nearly matching this ideal rate.
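A few lines of Python reproduce these numbers (the function name is an illustrative choice):

import math

def entropy(probs):
    """H(X) = -sum p*log2(p), with 0*log(0) taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # 1.0 bit: a fair coin
print(entropy([0.75, 0.25]))          # 0.8113 bits: the weighted coin above
print(100 * entropy([0.75, 0.25]))    # about 81 bits to describe 100 trials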
Just as with probabilities, we can compute joint and conditional entropies. Joint
entropy is the randomness contained in two variables, while conditional entropy is a measure
of the randomness of one variable given knowledge of another. Joint entropy is defined as:

H(X,Y) = −Σx,y p(x,y) log p(x,y)

while the conditional entropy is:

H(Y|X) = −Σx,y p(x,y) log p(y|x)

There are several interesting facts that follow from these definitions. For example, two
random variables, X and Y, are considered independent if and only if H(Y|X) = H(Y).
There are several facts about discrete entropy, H(·), that do not hold for continuous
or differential entropy, h(·). The most important is that while H(X) ≥ 0, h(·) can actually be
negative. Worse, even a distribution with an entropy of −∞ can still have uncertainty.
Luckily for us, even though differential entropy cannot provide us with an absolute measure
of randomness, it is still the case that if h(X) > h(Y) then X has more randomness than Y.
Mutual Information
Although conditional entropy can tell us when two variables are completely independent, it is
not an adequate measure of dependence. A small value for H(Y|X) may imply that X tells
us a great deal about Y, or that H(Y) is small to begin with. Thus, we measure dependence
using mutual information:

I(X;Y) = H(Y) − H(Y|X)
       = H(X) − H(X|Y)
       = H(X) + H(Y) − H(X,Y)
       = I(Y;X)

KL divergence measures the difference between two distributions. It is sometimes called the
relative entropy. It is always non-negative and zero only when p = q; however, it is not a
distance because it is not symmetric.
In other words, mutual information is a measure of the difference between the joint
probability and the product of the individual probabilities. These two distributions are
equivalent only when X and Y are independent, and diverge as X and Y become more
dependent.
Shannon-Fano Code
Shannon-Fano coding, named after Claude Elwood Shannon and Robert Fano, is a technique
for constructing a prefix code based on a set of symbols and their probabilities. It is
suboptimal in the sense that it does not achieve the lowest possible expected codeword length
like Huffman coding; however, unlike Huffman coding, it does guarantee that all codeword
lengths are within one bit of their theoretical ideal I(x) = −log P(x).
In Shannon-Fano coding, the symbols are arranged in order from most probable to least
probable, and then divided into two sets whose total probabilities are as close as possible to
being equal. All symbols then have the first digits of their codes assigned; symbols in the first
set receive "0" and symbols in the second set receive "1". As long as any sets with more than
one member remain, the same process is repeated on those sets, to determine successive
digits of their codes. When a set has been reduced to one symbol, of course, this means the
symbol's code is complete and will not form the prefix of any other symbol's code.
The algorithm works, and it produces fairly efficient variable-length encodings; when the two
smaller sets produced by a partitioning are in fact of equal probability, the one bit of
information used to distinguish them is used most efficiently. Unfortunately, Shannon-Fano
coding does not always produce optimal prefix codes.
For this reason, Shannon-Fano coding is almost never used; Huffman coding is almost as
computationally simple and produces prefix codes that always achieve the lowest expected
code word length. Shannon-Fano coding is used in the IMPLODE compression method,
which is part of the ZIP file format, where it is desired to apply a simple algorithm with high
performance and minimum requirements for programming.
Shannon-Fano Algorithm:
1. Sort the list of symbols according to frequency, with the most frequently occurring
symbols at the left and the least common at the right.
2. Divide the list into two parts, with the total frequency counts of the left part being as
close to the total of the right as possible.
3. The left part of the list is assigned the binary digit 0, and the right part is assigned the
digit 1. This means that the codes for the symbols in the first part will all start with 0,
and the codes in the second part will all start with 1.
4. Recursively apply steps 2 and 3 to each of the two halves, subdividing groups and
adding bits to the codes until each symbol has become a corresponding code leaf on the tree.
Example:
The source of information A generates the symbols {A0, A1, A2, A3 and A4} with the
corresponding probabilities {0.4, 0.3, 0.15, 0.1 and 0.05}. Encoding the source symbols using
binary encoder and Shannon-Fano encoder gives:
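The recursive procedure above is easy to run; the sketch below (Python; function and variable names are illustrative, and tie-breaking at equal splits can change individual codewords) generates a Shannon-Fano code for this source:

def shannon_fano(symbols):
    """Shannon-Fano coder. symbols: list of (name, prob), sorted descending."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    acc, best_split, best_diff = 0.0, 1, float("inf")
    for i, (_, p) in enumerate(symbols[:-1]):   # split with most equal halves
        acc += p
        diff = abs(acc - (total - acc))
        if diff < best_diff:
            best_diff, best_split = diff, i + 1
    left, right = symbols[:best_split], symbols[best_split:]
    code = {}
    for name, c in shannon_fano(left).items():
        code[name] = "0" + c                    # left half gets prefix 0
    for name, c in shannon_fano(right).items():
        code[name] = "1" + c                    # right half gets prefix 1
    return code

src = [("A0", 0.4), ("A1", 0.3), ("A2", 0.15), ("A3", 0.1), ("A4", 0.05)]
codes = shannon_fano(src)
avg = sum(p * len(codes[s]) for s, p in src)
print(codes)
print("average length:", avg, "bits/symbol")

For this source the first split isolates A0 (0.4 against 0.6), giving codeword lengths 1, 2, 3, 4, 4 and an average length of 2.05 bits/symbol, close to the source entropy of about 2.01 bits, whereas a fixed binary encoder needs 3 bits for each of the five symbols.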
A DMS is a source whose output is a sequence of letters such that each letter is
independently selected from a fixed alphabet consisting of K letters, say a1, a2, …, aK.
The letters in the source output sequence are assumed to be random and statistically
independent of each other. A fixed probability assignment for the occurrence of each
letter is also assumed. Let us, consider a small example to appreciate the importance of
probability assignment of the source letters.
Let us consider a source with four letters a1, a2, a3 and a4 with P(a1)=0.5,
P(a2)=0.25, P(a3)= 0.13, P(a4)=0.12. Let us decide to go for binary coding of these four
source letters. While this can be done in multiple ways, two encoded representations are
shown below:
It is easy to see that in method #1 the probability assignment of a source letter has
not been considered and all letters have been represented by two bits each. However in
the second method only a1 has been encoded in one bit, a2 in two bits and the remaining
two in three bits. It is easy to see that the average number of bits to be used per source
letter for the two methods are not the same. ( a for method #1=2 bits per letter and a for
method #2 < 2 bits per letter). So, if we consider the issue of encoding a long sequence of
letters we have to transmit less number of bits following the second method. This is an
important aspect of source coding operation in general. At this point, let us note the
following:
a) We observe that assignment of small number of bits to more probable
letters and assignment of larger number of bits to less probable letters (or symbols) may
lead to efficient source encoding scheme.
b) However, one has to take additional care while transmitting the encoded
letters. A careful inspection of the binary representation of the symbols in method #2
reveals that it may lead to confusion (at the decoder end) in deciding the end of binary
representation of a letter and beginning of the subsequent letter.
1) The average number of coded bits (or letters in general) required per source letter is as
small as possible and
2) The source letters can be fully retrieved from a received encoded sequence.
In the following we discuss a popular variable-length source-coding scheme
satisfying the above two requirements.
Let us assume that a DMS U has a K-letter alphabet {a1, a2, …, aK} with
probabilities P(a1), P(a2), …, P(aK). Each source letter is to be encoded into a
codeword made of elements (or letters) drawn from a code alphabet containing D
symbols. Often, for ease of implementation, a binary code alphabet (D = 2) is chosen.
As we observed earlier in an example, different codewords may not have the same
number of code symbols. If nk denotes the number of code symbols corresponding to
the source letter ak, the average number of code letters per source letter (n̄) is:

n̄ = Σk P(ak) · nk
Now, a code is said to be uniquely decodable if for each source sequence of finite
length, the sequence of code letters corresponding to that source sequence is different
from the sequence of code letters corresponding to any other possible source sequence.
Example: consider the following table and find out which code(s) out of the four
shown is/are prefix condition codes. Also determine n̄ for each code.
A prefix condition code can be decoded easily and uniquely. Start at the beginning of a
sequence and decode one word at a time. Finding the end of a code word is not a problem
as the present code word is not a prefix to any other codeword.
In Binary Huffman Coding each source letter is converted into a binary code
word. It is a prefix condition code ensuring minimum average length per source letter in
bits.
Let the source letters a1, a2, …, aK have probabilities P(a1), P(a2), …, P(aK),
and let us assume that P(a1) ≥ P(a2) ≥ P(a3) ≥ … ≥ P(aK).
We now consider a simple example to illustrate the steps for Huffman coding.
Arrange the letters in descending order of their probability (here they are
arranged).
Consider the last two probabilities. Tie up the last two probabilities. Assign, say,
0 to the last digit of the representation for the least probable letter (a6) and 1 to the
last digit of the representation for the second least probable letter (a5). That is, assign
1 to the upper arm of the tree and 0 to the lower arm.

P(a5) = 0.12 --1--+
                  +--> 0.2
P(a6) = 0.08 --0--+
(3) Now, add the two probabilities and imagine a new letter, say b1, substituting for
a6 and a5. So P(b1) =0.2. Check whether a4 and b1are the least likely letters. If
not, reorder the letters as per Step#1 and add the probabilities of two least likely
letters. For our example, it leads to:
P(a1)=0.3, P(a2)=0.2, P(b1)=0.2, P(a3)=0.15 and P(a4)=0.15
(4) Now go to Step#2 and start with the reduced ensemble consisting of a1, a2, b1,
a3 and a4:

P(a3) = 0.15 --1--+
                  +--> 0.3
P(a4) = 0.15 --0--+
(5) Continue till the first digits of the most reduced ensemble of two letters are
assigned a 1 and a 0.
Again go back to Step#2: P(a1) = 0.3, P(b2) = 0.3, P(a2) = 0.2 and P(b1) = 0.2.
Now we consider the last two probabilities:

P(a2) = 0.2 --1--+
                 +--> 0.4
P(b1) = 0.2 --0--+
So, P(b3) = 0.4. Following Step#2 again, we get P(b3) = 0.4, P(a1) = 0.3 and
P(b2) = 0.3:

P(a1) = 0.3 --1--+
                 +--> 0.6 = P(b4)
P(b2) = 0.3 --0--+

P(b4) = 0.6 --1--+
                 +--> 1.00
P(b3) = 0.4 --0--+
6. Now, read the code tree inward, starting from the root, and construct the
codewords. The first digit of a codeword appears first while reading the code tree
inward.
Hence, the final representation is: a1=11, a2=01, a3=101, a4=100, a5=001, a6=000.
Note that the entropy of the source is H(X) = 2.465 bits/symbol. The average length
per source letter after Huffman coding (2.5 bits) is a little more than, but close to, the
source entropy. In fact, the following celebrated theorem due to C. E. Shannon sets
the limiting value of the average length of codewords from a DMS.
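The tying procedure above maps directly onto a priority queue. A minimal sketch (Python; tie-breaking may produce a labeling different from the worked example, but the codeword lengths, and hence the average, are the same):

import heapq

def huffman(prob_table):
    """Binary Huffman coding: repeatedly tie the two least probable entries,
    prefixing 0 to the least probable branch and 1 to the other."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(prob_table.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)                 # least probable
        p2, _, c2 = heapq.heappop(heap)                 # second least probable
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next_id, merged))
        next_id += 1
    return heap[0][2]

probs = {"a1": 0.3, "a2": 0.2, "a3": 0.15, "a4": 0.15, "a5": 0.12, "a6": 0.08}
codes = huffman(probs)
avg = sum(probs[s] * len(c) for s, c in codes.items())
print(codes)
print("average length:", avg, "bits/letter")   # 2.5, vs H(X) = 2.465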
CONVOLUTIONAL CODES
Convolutional codes are commonly described using two parameters: the code rate
and the constraint length. The code rate, k/n, is expressed as a ratio of the number of bits
into the convolutional encoder (k) to the number of channel symbols output by the
convolutional encoder (n) in a given encoder cycle. The constraint length parameter, K,
denotes the "length" of the convolutional encoder, i.e. how many k-bit stages are
available to feed the combinatorial logic that produces the output symbols. Closely
related to K is the parameter m, which indicates how many encoder cycles an input bit is
retained and used for encoding after it first appears at the input to the convolutional
encoder. The m parameter can be thought of as the memory length of the encoder.
If the encoder generates a group of n encoded bits per group of k information bits,
the code rate R is commonly defined as R = k/n. In Fig. 7.1, k = 1 and n = 2. The
number, K of elements in the shift register which decides for how many codewords
one information bit will affect the encoder output, is known as the constraint length
of the code. For the present example, K = 3.
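A minimal sketch of such an encoder (Python), assuming the standard rate-1/2, K = 3 code with generator polynomials (7, 5) in octal; this choice reproduces the example in the tree-diagram subsection below, where the input sequence 1011 yields 11 10 00 01:

def conv_encode(bits, K=3, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder with constraint length K."""
    state = 0                                   # the last K-1 information bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state            # newest bit enters on the left
        v1 = bin(reg & g1).count("1") % 2       # parity of the stages tapped by g1
        v2 = bin(reg & g2).count("1") % 2       # parity of the stages tapped by g2
        out.append((v1, v2))
        state = reg >> 1                        # shift for the next input bit
    return out

print(conv_encode([1, 0, 1, 1]))   # [(1, 1), (1, 0), (0, 0), (0, 1)]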
Fig. 7.2 State diagram representation for the encoder in Fig. 7.1
b)Tree Diagram Representation
The tree diagram representation shows all possible information and encoded
sequences for the convolutional encoder. Fig. 7.3 shows the tree diagram for the encoder
in Fig. 7.1. The encoded bits are labeled on the branches of the tree. Given an input
sequence, the encoded sequence can be directly read from the tree. As an example, an
input sequence (1011) results in the encoded sequence (11, 10, 00, 01).
c) Trellis Diagram Representation
The trellis diagram of a convolutional code is obtained from its state diagram. All
state transitions at each time step are explicitly shown in the diagram to retain the time
dimension, as is present in the corresponding tree diagram. Usually, supporting
descriptions on state transitions, corresponding input and output bits etc. are labeled in the
trellis diagram. It is interesting to note that the trellis diagram, which describes the
operation of the encoder, is very convenient for describing the behavior of
the corresponding decoder, especially when the famous Viterbi Algorithm (VA) is
followed. Figure 7.4 shows the trellis diagram for the encoder in Figure 7.1.

Fig. 7.4: Trellis diagram, used in the decoder corresponding to the encoder in Fig. 7.1
Hard-Decision and Soft-Decision Decoding
Hard-decision and soft-decision decoding are based on the type of quantization
used on the received bits. Hard-decision decoding uses 1-bit quantization on the received
samples. Soft-decision decoding uses multi-bit quantization (e.g. 3 bits/sample) on the
received sample values.
The basic operations which are carried out as per the hard-decision Viterbi Algorithm
after receiving one codeword are summarized below:
a) All the branch metrics of all the states are determined;
b) Accumulated metrics of all the paths (two in our example code) leading to a state
are calculated, taking into consideration the accumulated path metrics of the
states from where the most recent branches emerged;
c) Only one of the paths entering a state, the one which has the minimum accumulated
path metric, is chosen as the survivor path for the state (or, equivalently, node);
d) So, at the end of this process, each state has one survivor path. The history of a
survivor path is also maintained by the node appropriately (e.g. by storing the
codewords or the information bits which are associated with the branches making
the path);
e) Steps a) to d) are repeated and the decoding decision is delayed till a sufficient
number of codewords has been received. Typically, the delay in decision making
= L × K codewords, where L is an integer, e.g. 5 or 6. For the code in Fig. 7.1, a
decision delay of 5 × 3 = 15 codewords may be sufficient for most occasions. This
means we decide about the first received codeword after receiving the 16th codeword.
The decision strategy is simple. Upon receiving the 16th codeword and carrying
out steps a) to d), we compare the accumulated path metrics of all the states (four
in our example) and choose the state with the minimum overall accumulated path
metric as the winning node for the first codeword. Then we trace back the
history of the path associated with this winning node to identify the codeword
tagged to the first branch of the path, and declare this codeword as the most likely
transmitted first codeword.
The above procedure is repeated for each received codeword hereafter. Thus, the
decision for a codeword is delayed but once the decision process starts, we decide once
for every received codeword. For most practical applications, including delay-sensitive
digital speech coding and transmission, a decision delay of L × K codewords is acceptable.
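The steps above condense into a short hard-decision decoder. This sketch (Python) assumes the rate-1/2, K = 3 code with generators (7, 5) used earlier; for brevity it decodes the whole block at once instead of using the sliding L × K decision window described above:

def viterbi_decode(received, g1=0b111, g2=0b101, K=3):
    """Hard-decision Viterbi decoding; received is a list of (v1, v2) pairs."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)      # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]
    for r1, r2 in received:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # hypothesised information bit
                reg = (b << (K - 1)) | s
                v1 = bin(reg & g1).count("1") % 2
                v2 = bin(reg & g2).count("1") % 2
                m = metric[s] + (v1 != r1) + (v2 != r2)   # Hamming branch metric
                ns = reg >> 1                  # next state
                if m < new_metric[ns]:         # keep only the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

# Encoded 1011 -> (11, 10, 00, 01); here the second pair carries one bit error
received = [(1, 1), (0, 0), (0, 0), (0, 1)]
print(viterbi_decode(received))   # [1, 0, 1, 1]: the error is corrected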
Spread spectrum was initially developed for military applications during World War II,
because it is less sensitive to intentional interference or jamming by third parties.
It has since blossomed into one of the fundamental building blocks in
current and next-generation wireless systems.
Solution
A spread spectrum system is therefore designed to make these tasks as difficult as possible.
Secondly, the signal should be difficult to disturb with a jamming signal, i.e.,
the transmitted signal should possess an anti-jamming (AJ) property.
Remedy
Spread the narrow band signal into a broad band to protect against interference
In a digital communication system the primary resources are bandwidth and power. The
study of digital communication systems deals with the efficient utilization of these two
resources, but there are situations where it is necessary to sacrifice their efficient
utilization in order to meet certain other design objectives.
For example, to provide a form of secure communication (i.e. the transmitted signal is
not easily detected or recognized by unwanted listeners), the bandwidth of the transmitted
signal is increased in excess of the minimum bandwidth necessary to transmit it. This
requirement is catered for by a technique known as Spread Spectrum Modulation.
One of the basic concepts in data communication is the idea of allowing several
transmitters to send information simultaneously over a single communication channel. This
allows several users to share a bandwidth of frequencies. This concept is called multiplexing.
CDMA employs spread-spectrum technology and a special coding scheme (where each
transmitter is assigned a code) to allow multiple users to be multiplexed over the same
physical channel. By contrast, time division multiple access (TDMA) divides access by time,
while frequency-division multiple access (FDMA) divides it by frequency. CDMA is a form
of "spread-spectrum" signaling, since the modulated coded signal has a much higher data
bandwidth than the data being communicated.
Technical details
CDMA uses Direct Sequence spreading, where the spreading process is done by directly
combining the baseband information with a high-chip-rate binary code. The Spreading
Factor is the ratio of the chip rate (UMTS = 3.84 Mchips/s) to the baseband information
rate. Spreading factors vary from 4 to 512 in FDD UMTS. The spreading process gain
can be expressed in dB (spreading factor 128 ≈ 21 dB gain).
Fig. 8.4
Each user in a CDMA system uses a different code to modulate their signal. Choosing
the codes used to modulate the signal is very important in the performance of CDMA
systems. The best performance will occur when there is good separation between the signal
of a desired user and the signals of other users. The separation of the signals is made by
correlating the received signal with the locally generated code of the desired user. If the
signal matches the desired user's code then the correlation function will be high and the
system can extract that signal. If the desired user's code has nothing in common with the
signal the correlation should be as close to zero as possible (thus eliminating the signal); this
is referred to as cross correlation. If the code is correlated with the signal at any time offset
other than zero, the correlation should be as close to zero as possible. This is referred to as
auto-correlation and is used to reject multi-path interference.
Fig. 8.5
PSEUDO-NOISE SEQUENCE:
Generation of PN sequence:

Fig.: Generation of a PN sequence using a three-stage linear feedback shift register:
shift registers 1, 2 and 3 are driven by a common clock, and a logic circuit forms the
feedback S0 from S1 and S3; the output is taken from the last stage.
A feedback shift register is said to be linear when the feedback logic consists
entirely of mod-2 adders (Ex-OR gates). In such a case, the zero state is not permitted. The
period of a PN sequence produced by a linear feedback shift register with n flip-flops cannot
exceed 2^n − 1. When the period is exactly 2^n − 1, the PN sequence is called a
maximal-length sequence (m-sequence).
Example 1: Consider the linear feedback shift register shown above, involving
three flip-flops. The input S0 is equal to the mod-2 sum of S1 and S3. If the initial state of the
shift register is 100, then the succession of states will be as follows:

100, 110, 111, 011, 101, 010, 001, 100, . . .

The period of the sequence is N = 2^m − 1.
The contents of the required stages are modulo-2 added and fed back to the input.
1 0 0 0
0 1 0 0
0 0 1 0
1 0 0 1
1 1 0 0
0 1 1 0
1 0 1 1
0 1 0 1
1 0 1 0
1 1 0 1
1 1 1 0
1 1 1 1
0 1 1 1
0 0 1 1
0 0 0 1
1 0 0 0
We can see this for a shift register of length m = 4. At each clock the change in state
of the flip-flops is shown above. The output sequence (taken from the last stage) is:

000100110101111
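The m = 4 register tabulated above is easy to emulate; a minimal sketch (Python) assuming feedback from stages 3 and 4, which reproduces both the state table and the output sequence:

def lfsr_m4(seed=(1, 0, 0, 0), n_steps=15):
    """Four-stage LFSR with feedback s0 = s3 XOR s4 (stages numbered 1..4)."""
    s = list(seed)
    out = []
    for _ in range(n_steps):
        out.append(s[3])                 # output taken from the last stage
        fb = s[2] ^ s[3]                 # modulo-2 sum of stages 3 and 4
        s = [fb] + s[:3]                 # shift right, feedback enters stage 1
    return out

seq = lfsr_m4()
print("".join(map(str, seq)))            # 000100110101111
print("ones:", sum(seq), "zeros:", len(seq) - sum(seq))   # 8 vs 7: balanced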
Properties of PN Sequence
Randomness of a PN sequence is tested by the following properties:
1. Balance property
2. Run-length property
3. Autocorrelation property
1. Balance property
In each period of the sequence, the number of binary ones differs from the number of
binary zeros by at most one digit.
Consider the output of the shift register: 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1. Seven
zeros and eight ones: this meets the balance condition.
2. Run-length property
Among the runs of ones and zeros in each period, it is desirable that about one-half the
runs of each type are of length 1, one-fourth are of length 2, one-eighth are of length 3,
and so on.

0 0 0 | 1 | 0 0 | 1 1 | 0 | 1 | 0 | 1 1 1 1   (run lengths: 3, 1, 2, 2, 1, 1, 1, 4)
3. Autocorrelation property
Taking the sequence in its ±1 form, the autocorrelation of a maximal-length sequence is
periodic and two-valued: R(τ) = 1 for τ = 0 and R(τ) = −1/N for all other shifts within
one period.
Range of PN sequence lengths:

m     N = 2^m − 1
7     127
8     255
9     511
10    1023
11    2047
12    4095
13    8191
17    131071
19    524287
A Notion of Spread Spectrum:
An important attribute of spread spectrum modulation is that it can provide
protection against externally generated interfering signals with finite power. Protection
against jamming (interfering) waveforms is provided by purposely making the information-
bearing signal occupy a bandwidth far in excess of the minimum bandwidth necessary to
transmit it. This has the effect of giving the transmitted signal a noise-like appearance so
that it blends into the background. Spread spectrum is therefore a method of camouflaging
the information-bearing signal.
Fig.: Baseband DSSS transmitter and receiver: the data signal b(t) multiplies the PN
signal c(t) to give m(t); at the receiver, r(t) is multiplied by a local replica of c(t) to
give z(t), which is integrated over 0 ≤ t ≤ Tb to produce the sample value V for the
decision device.
{ck} denotes a PN sequence.
The desired modulation is achieved by applying the data signal b(t) and the PN signal
c(t) to a product modulator or multiplier. If the message signal b(t) is narrowband and
the PN sequence signal c(t) is wideband, the product signal m(t) is also wideband; the
PN sequence thus performs the role of a spreading code.
For baseband transmission, the product signal m(t) represents the transmitted signal.
The received signal r(t) consists of the transmitted signal m(t) plus an additive
interference n(t); hence

r(t) = m(t) + n(t) = c(t)·b(t) + n(t)
Fig.: Waveforms of the data signal b(t), the PN signal c(t) and the product signal m(t),
each taking the values +1 and −1.
To recover the original message signal b(t), the received signal r(t) is applied to a
demodulator that consists of a multiplier followed by an integrator and a decision device.
The multiplier is supplied with a locally generated PN sequence that is an exact replica of
that used in the transmitter. The multiplier output is given by

z(t) = r(t)·c(t) = c²(t)·b(t) + c(t)·n(t)

The data signal b(t) is multiplied twice by the PN signal c(t), whereas the unwanted signal
n(t) is multiplied only once. But c²(t) = 1, hence the above equation reduces to

z(t) = b(t) + c(t)·n(t)
Now the data component b(t) is narrowband, whereas the spurious component c(t)·n(t)
is wideband. Hence, by applying the multiplier output to a baseband (lowpass) filter, most of
the power in the spurious component c(t)·n(t) is filtered out. Thus the effect of the
interference n(t) is significantly reduced at the receiver output.
The integration is carried out for the bit interval 0 ≤ t ≤ Tb to provide the sample
value V. Finally, a decision is made by the receiver.
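A compact end-to-end simulation of this spreading and despreading (Python): a random ±1 sequence stands in for the PN code c(t), N = 15 chips per bit plays the role of the processing gain, and the noise level is an arbitrary choice:

import numpy as np

rng = np.random.default_rng(1)
N = 15                                   # chips per data bit
pn = rng.choice([-1, 1], size=N)         # stand-in for the PN signal c(t)

def spread(data_bits):
    """m(t) = c(t) * b(t): each +/-1 data bit multiplies one PN period."""
    return np.concatenate([b * pn for b in data_bits])

def despread(r):
    """z(t) = r(t)*c(t), integrated over each bit interval, then decided."""
    decisions = []
    for i in range(len(r) // N):
        v = np.sum(r[i * N:(i + 1) * N] * pn)   # correlator output V
        decisions.append(1 if v > 0 else -1)
    return decisions

b = [1, -1, -1, 1]
m = spread(b)
r = m + 0.8 * rng.standard_normal(m.size)       # additive interference n(t)
print(despread(r))                              # recovers [1, -1, -1, 1]

The correlator adds the N chips of the wanted term coherently while the noise adds only incoherently, which is why the decision survives interference that would swamp a single chip.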
1. Slow frequency hopping:- In which the symbol rate Rs of the MFSK signal is an
integer multiple of the hop rate Rh. That is several symbols are transmitted on each
frequency hop.
2. Fast frequency hopping:- In which the hop rate Rh is an integer multiple of the
MFSK symbol rate Rs. That is, the carrier frequency will hop several times during
the transmission of one symbol.
Fig. 8.12(a) shows the block diagram of an FH/MFSK transmitter.
The incoming binary data are applied to an M-ary FSK modulator. The resulting
modulated wave and the output from a digital frequency synthesizer are then applied to a
mixer that consists of a multiplier followed by a band-pass filter. The filter is designed
to select the sum-frequency component resulting from the multiplication process as the
transmitted signal. k-bit segments of a PN sequence drive the frequency synthesizer,
which enables the carrier frequency to hop over 2^k distinct values. Since frequency
synthesizers are unable to maintain phase coherence over successive hops, most frequency-
hop spread spectrum communication systems use noncoherent M-ary modulation.
An individual FH/MFSK tone of shortest duration is referred to as a chip. The chip rate
Rc for an FH/MFSK system is defined by

Rc = max(Rh, Rs)
In slow frequency hopping, multiple symbols are transmitted per hop. Hence
each symbol of a slow FH/MFSK signal is a chip. The bit rate Rb of the incoming binary
data, the symbol rate Rs of the MFSK signal, the chip rate Rc and the hop rate Rh are
related by

Rc = Rs = Rb / k ≥ Rh

where k = log2 M.
A fast FH/MFSK system differs from a slow FH/MFSK system in that there
are multiple hops per M-ary symbol. Hence in a fast FH/MFSK system each hop is a chip.
The figure illustrates the variation of the frequency of a slow FH/MFSK signal with time
for one complete period of the PN sequence. The period of the PN sequence is 2^4 − 1 = 15.
The FH/MFSK signal has the following parameters: