Module 3
Syllabus
Data Acquisition & Signal Processing
Signal conditioning.
Classification of signals
Signals essentially convey information.
Signals are often distinguished by the repetition frequencies of periodic events, and
so one of the most fundamental ways of evaluating signals is in terms of their
'frequency spectrum', showing how their constituent components are distributed with
frequency.
Mathematically, this is done with various forms of Fourier analysis, but at this stage it
is sufficient to see how the various signal types manifest themselves in the time and
frequency domains.
The features of a signal can be constant over a period of time or can vary with time. Based on
this, signals are classified as stationary or nonstationary.
Figure 3.1 shows the basic breakdown into different signal types.
Figure 3.1: Types of Signal in Vibration Monitoring
Stationary Signal
Signals whose statistical features do not change with time are known as
stationary signals, i.e. their statistical properties are invariant with time.
Cyclostationary Signal
Figure 3.5: The elements of the modern digital data acquisition system (Source: DEWESOFT)
Sampling Frequency
The data has to be sampled at an adequate sampling frequency. In Figure 3.7, the original
signal, adequately sampled at a sampling frequency f_s, is represented by the dark line.
In the same figure, the dashed line indicates the same signal sampled at a slower sampling rate,
f_s* < f_s.
Due to the slower sampling rate, the original signal appears to be a low-frequency signal. In
other words, the signal has been aliased as a low-frequency signal.
This is a serious error in the data acquisition system and is known as the aliasing error. To
prevent signal aliasing, the signal has to be sampled at a rate at least two times higher than the
maximum frequency present in the signal. This fact is stated in Shannon's sampling theorem.
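As a quick illustration of the aliasing error, the short Python sketch below samples a sine wave at two different rates; the tone frequency and the two sampling rates are assumed values chosen only for this example, not figures from the text.

```python
import numpy as np

# Minimal aliasing sketch (illustrative assumed values).
f_signal = 60.0      # true signal frequency, Hz
fs_good = 1000.0     # well above the Nyquist requirement of 2 x 60 = 120 Hz
fs_bad = 70.0        # below the Nyquist requirement -> aliasing

def apparent_frequency(fs, duration=0.5):
    """Sample the 60 Hz sine at rate fs and return the strongest FFT frequency."""
    t = np.arange(0.0, duration, 1.0 / fs)
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    return freqs[np.argmax(spectrum)]

print(apparent_frequency(fs_good))  # ~60 Hz, the true frequency
print(apparent_frequency(fs_bad))   # ~10 Hz, the 60 Hz tone aliased to |60 - 70| Hz
```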
When data acquisition is done for a signal whose maximum frequency is not known, a low-
pass analog filter with a cut-off frequency at or below half the sampling frequency is used to
prevent signal aliasing, as shown in Figure 3.7.
This is a very important fact to consider when using data acquisition for dynamic signals like
noise and vibration that change very quickly. Data acquisition devices without the low-pass
antialiasing filters are available for acquiring static signals over a period of time, for example,
the temperature signals from thermocouples.
When there are multiple inputs to the ADC, each known as an input channel, the sampling
frequency is expressed as the number of data points per second per channel. A digital switch
known as a multiplexer is used to routinely scan the channels in a sequence for data acquisition
by an ADC.
Amplitude Resolution = Range / 2^n

where n is the bit size of the ADC.
For example, for a 3-bit ADC, there are at most 8 digital values, from 000 to 111, that
can be used to digitally represent an input analog voltage range of 10 V. This corresponds to
an amplitude resolution of 1.25 V as per the equation above.
The problem arises when the analog voltage at a particular instant changes by less than the
amplitude resolution of 1.25 V.
Thus, small voltage variations in the signal, smaller than the amplitude resolution of the ADC,
cannot be captured. This is shown in Figure 3.8 and is known as the digitization error.
To overcome the digitization error in the ADC process, it is advisable to have an ADC of
a higher bit size. For example, for the same input voltage range of 10 V, with a bit size of 12,
the amplitude resolution would be approximately 2.44 mV.
Thus, very small deviations in the analog signal can be captured by the ADC conversion
process.
Figure 3.8: Effect of bit size on digitization
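The resolution formula above is easy to check numerically. The following Python sketch reproduces the 1.25 V and 2.44 mV figures and adds a 16-bit case as an assumed extra example.

```python
# Quantization-resolution sketch reproducing the numbers in the text.
def adc_resolution(voltage_range, n_bits):
    """Smallest voltage step an ideal n-bit ADC can resolve over the given range."""
    return voltage_range / (2 ** n_bits)

print(adc_resolution(10.0, 3))    # 1.25 V      -> the 3-bit example
print(adc_resolution(10.0, 12))   # ~0.00244 V  -> about 2.44 mV, the 12-bit example
print(adc_resolution(10.0, 16))   # ~0.00015 V  -> a common modern bit size (assumed extra case)
```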
Many times during data acquisition, analog amplifiers are used before the ADC process to
amplify the signal so that the ADC device can capture the small changes in the analog signal.
The input analog signal to the data acquisition system can be unipolar, where the signal is
referenced to a ground voltage using a single wire system, or it can be bipolar with reference
to a high and low value of the analog signal.
The noise associated with the data acquisition process reduces with the bipolar input.
Data Storage
The digital data thus obtained by the ADC process need to be stored in the digital memory for
further computations.
Depending upon the number of lines of FFT required, the data size, N, will change. Some ADC
devices have on-board random access memory (RAM) to store the digitized data.
The data acquisition process is controlled by driver software that is resident on the host
computer wherein the triggering of the data acquisition process can be initiated, based on a
certain input voltage level.
The software can also control the rate at which the data is stored in the on-board memory, the
mode and time of data acquisition, and whether it should be in a continuous mode or
intermittent.
Many standard commercial hardware systems are available for data acquisition along with their
driver software. The digital data thus acquired needs to be transferred to the computer system
through a data transfer protocol, based on the architecture of the computer.
Thus the compatibility of the ADC hardware with the computer system must be ensured. Some
of the standard computer architectures over the years that have been used for interfacing with
the ADC are ISA (Industry Standard Architecture), EISA (Extended Industry Standard
Architecture), PCI (Peripheral Component Interconnect), PCMCIA (Personal Computer
Memory Card International Association), and USB (Universal Serial Bus).
Signal Analysis
Figure 3.9: Time signal consisting of sinusoidal components, and how it can be represented in the frequency domain
Development of FFT
Basics of FFT
1. The FFT assumes the time-domain signal continues forever.
2. The number of points in the time domain equals the number of points in the FFT.
3. The alias region is normally hidden.
• Mirror image about f_s/2 ("aliasing"), where f_s is the sampling frequency.
4. Frequency Resolution = Δf = f_s / N
Then,
(i) Maximum Frequency: The maximum frequency that can be adequately represented is f_max = f_s / 2.
The factor of 2 comes from the famous Nyquist criterion (or, more correctly, from the
Nyquist–Shannon sampling theorem), which says that the maximum signal frequency
adequately represented in the digitized waveform is half of the sampling rate.
(ii) Frequency Resolution: This is the minimum change in frequency that the FFT can detect.
The result of FFT is a set of amplitudes of certain frequencies.
The number of amplitudes in the set is given by the Number of Lines parameter of the
FFT.
The Number of Lines parameter is user-selectable, and it determines the resolution of the
FFT.
Line resolution is the change in frequency between two adjacent frequency lines extracted
from the signal, and is calculated with the equation:
Line Resolution = (Sample Rate / 2) / Number of Lines
So the question is: why not always use the maximum number of available frequency lines,
which gives more exact results?
The answer is simple: with more frequency lines, it takes more time to calculate the FFT
spectra.
Time to calculate = (Number of Lines × 2) / Sample Rate
If we combine the above equations, we get:

Line Resolution = 1 / Time to calculate
The number of lines combined with the sample rate also defines the speed of the FFT
when non-stationary signals are applied.
With more lines, the FFT will appear slower, and changes in the signal will not be shown
as rapidly.
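A small Python sketch can tie the line-resolution and time-to-calculate formulas together; the sample rate and number of lines used here are assumed illustrative values.

```python
# Line resolution and acquisition-time sketch, following the formulas above.
sample_rate = 10_000        # Hz (assumed illustrative value)
number_of_lines = 1600      # user-selected FFT lines (assumed)

line_resolution = (sample_rate / 2) / number_of_lines        # Hz between spectral lines
time_to_calculate = (number_of_lines * 2) / sample_rate      # seconds of data per FFT block

print(line_resolution)            # 3.125 Hz
print(time_to_calculate)          # 0.32 s
print(1 / time_to_calculate)      # 3.125 Hz -> equals the line resolution, as expected
```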
Frequency Bin
Frequency bins are the intervals between samples in the frequency domain.
For example, if your sample rate is 100 Hz and your FFT size is 100, then you have 100
points between [0 to 100] Hz.
Therefore, you divide the entire 100 Hz range into 100 intervals, like 0-1 Hz, 1-2 Hz, and
so on.
Each such small interval, say 0-1 Hz, is a frequency bin.
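A sketch of the same 100 Hz / 100-point example in Python; only the bin arithmetic is shown, and the note that only 0-50 Hz is usable follows from the Nyquist limit discussed earlier.

```python
import numpy as np

# Frequency-bin sketch using the example in the text: 100 Hz sample rate, FFT size 100.
fs, n = 100.0, 100
bin_width = fs / n                                # 1 Hz per bin
bin_edges = np.arange(0, fs + bin_width, bin_width)

print(bin_width)            # 1.0 -> intervals 0-1 Hz, 1-2 Hz, ...
print(len(bin_edges) - 1)   # 100 bins covering 0-100 Hz (only 0-50 Hz is usable, per Nyquist)
```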
Windowing
• Let us first understand the concept of spectral leakage.
However, the actual FFT spectrum differs somewhat from the above-mentioned assumptions, as
shown in Figure 3.14.
It happens when the spectral content of the signal does not correspond to an available
spectral line i.e. when analyzed signals contain energy at frequencies not described by
the spectral lines of the FFT spectrum.
For example, consider an FFT analyzer configured with a line spacing of 2 Hz; if the analyzed
signal contains energy at an in-between frequency such as 10.5 Hz, leakage will
occur. (Figure 3.15)
Why does this happen? Because no single spectral line can describe the energy at 10.5
Hz when the line spacing is 2 Hz.
This leakage phenomenon arises since FFT algorithms describe blocks of time data
with periodic sinusoidal components. Such a representation requires that time signals
are periodized into time blocks that are continuous at the ends where the blocks are
effectively joined into a loop.
Remember that the FFT time block length T is defined by the reciprocal of the line
spacing:
T = 1 / Δf
Given a spectral line spacing of 2 Hz, the time block length is 0.5 sec. This causes all
non-even frequency components to have discontinuities at the looped block ends.
Adding to the example two paragraphs ago, 10.5 Hz will have a 90° phase difference
between the block ends.
If time signals are periodized to have a block length T that causes the time blocks to
have discontinuities at the ends, FFT algorithms will try to represent such
discontinuities by leaking a portion of the energy to a broad range of sinusoidal
components.
For most signal types it is hard or impossible to find block lengths with no
discontinuities at the looped time block ends, and therefore time weighting window
functions are used to help with solving this problem.
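The leakage effect, and how a window reduces it, can be reproduced with a short Python sketch. The 1024 Hz sampling rate is an assumption; the 10.5 Hz tone and the 2 Hz line spacing follow the example above, and the Hann (Hanning) window is used as the weighting function.

```python
import numpy as np

# Leakage sketch: a 10.5 Hz sine analysed with 2 Hz line spacing (T = 0.5 s),
# with and without a Hann window. Sampling rate is an assumed value.
fs = 1024.0
T = 0.5                           # block length -> 2 Hz line spacing
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * 10.5 * t)  # energy between two spectral lines

rect = np.abs(np.fft.rfft(x)) / len(x)                    # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)

freqs = np.fft.rfftfreq(len(x), 1 / fs)
# With the rectangular window the energy leaks across many lines;
# the Hann window concentrates it around the 10 and 12 Hz lines.
for f, r, h in zip(freqs[:10], rect[:10], hann[:10]):
    print(f"{f:5.1f} Hz  rect={r:.4f}  hann={h:.4f}")
```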
Main Lobe
The centre of the main lobe of a window occurs at each frequency component of the time-
domain signal.
By convention, the widths of the main lobe at –3 dB and –6 dB below the main lobe peak are
used to characterize its shape. The unit of measure for the main lobe width is FFT bins or
frequency lines. (Refer Figure 3.16)
The width of the main lobe of the window spectrum limits the frequency resolution of the
windowed signal. Therefore, the ability to distinguish two closely spaced frequency
components increases as the main lobe of the smoothing window narrows.
As the main lobe narrows and spectral resolution improves, the window energy spreads into
its side lobes, increasing spectral leakage and decreasing amplitude accuracy. A trade-off
occurs between amplitude accuracy and spectral resolution.
Side Lobes
Side lobes occur on each side of the main lobe and approach zero at multiples of fs/N from the
main lobe.
The side lobe characteristics of the smoothing window directly affect the extent to which
adjacent frequency components leak into adjacent frequency bins. (Refer Figure )
The side lobe response of a strong sinusoidal signal can overpower the main lobe response of
a nearby weak sinusoidal signal.
Maximum side lobe level and side lobe roll-off rate characterize the side lobes of a smoothing
window. The maximum side lobe level is the largest side lobe level in decibels relative to the
main lobe peak gain.
Ideally, we would like a very narrow main lobe and very deep attenuation in the side lobes.
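The main-lobe / side-lobe trade-off can be seen by computing window spectra directly. The Python sketch below estimates the highest side-lobe level for three windows; the window choices (rectangular, Hann, Blackman) and the simple estimation method are assumptions for illustration, not values quoted from any reference.

```python
import numpy as np
from scipy.signal import get_window

# Sketch estimating the highest side-lobe level of some common windows.
N = 1024
pad = 32 * N                      # heavy zero-padding to sample the window spectrum finely
for name in ("boxcar", "hann", "blackman"):
    w = get_window(name, N)
    W_db = 20 * np.log10(np.abs(np.fft.rfft(w, pad)) / np.sum(w) + 1e-16)
    # Walk down the main lobe: the first local minimum marks its edge.
    k = 1
    while W_db[k + 1] < W_db[k]:
        k += 1
    # Everything beyond the main lobe belongs to the side lobes.
    print(f"{name:8s} highest side lobe ~ {W_db[k:].max():6.1f} dB")
```

The rectangular window should show the highest side lobes (around -13 dB) and the Blackman window the lowest, illustrating the trade-off against main-lobe width.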
Effect of Leakage
As a result of the amplitude errors caused by spectral leakage, small frequency peaks will
occur close to larger ones.
If there are two sinusoids, with different frequencies, leakage can interfere with the ability to
distinguish them spectrally.
If their frequencies are dissimilar, then the leakage interferes when one sinusoid is much
smaller in amplitude than the other. That is, its spectral component can be hidden or masked
by the leakage from the larger component.
But when the frequencies are near each other, the leakage can be sufficient to interfere even
when the sinusoids are of equal strength; that is, they become indistinguishable.
In this example, although we are performing the FFT on the block of data with the
black background, the FFT calculation "assumes" that the data continues endlessly
before and after this block of data - as shown with the data with a gray background.
In this example it is true that the single frequency sine-wave begins and ends at zero
amplitude. Four complete cycles live within the time record.
If we are analysing a pure sine wave, i.e. just one frequency, and there is an integer
number of cycles in the time record, then this assumption is correct.
However, it is seldom true that the time record starts and ends at zero. More commonly it
looks similar to Figure 3.18.
When the FFT calculation is performed, the signal appears discontinuous: it seems to have a step
change in level, which looks like an impact to the FFT calculation.
This generates a peak that is spread over a wide frequency band, similar to an impact, as shown
in Figure 3.18. That is not the result we want to see.
The real data in Figure 2 shows that the ends of each sample block do not start and end at
zero amplitude.
Figure 2: Real data example where the ends of the sample blocks do not end at zero amplitude
Figure 3.21: Shapes of some commonly used window functions called Flat top, Hanning, and Rectangular
The figure illustrates the parameters defining the filter selectivity. The frequency axis is linear,
giving a curved shape to the side-lobe fall-off rate.
https://www.youtube.com/watch?v=Q8N8pZ8P3f4
https://www.youtube.com/watch?v=pD7f6X9-_Kg
Averaging
In an ideal world, the data collector would collect a single time record free of noise from a
never changing vibration signal, then produce the FFT and store it.
But the vibration is constantly changing slightly and there is noise in the signal.
Changes occur as rotating elements go through cycles and there is random noise from
inside and outside the machine.
There is a way to minimize the effects of the noise and keep more of the changes due to
cycles inside the machine. The process used to correct this is called Averaging.
Averaging can be performed in the time domain or in the frequency domain. But, in this
section, we will focus mainly on averaging in the frequency domain, which is the primary
type of averaging used with FFT analysers.
Averaging in the frequency domain is sometimes referred to as Spectrum Averaging.
FFT analyzers often have different options for setting up the spectrum averaging process.
The most common averaging modes are described in the following sections.
RMS Averaging
RMS averaging (also referred to as power-spectrum averaging or energy averaging) is
typically the default spectrum averaging mode in FFT analyzers.
RMS averaging is used to reduce the fluctuation of spectral noise levels.
With RMS spectrum averaging, the individual spectral lines are averaged over multiple
instantaneous power spectra or cross power spectra.
In the picture below, a non-averaged instantaneous power spectrum (red) is compared to a
spectrum averaged over 100 power spectra (blue).
Figure 3.25: Comparison of an instantaneous spectrum with no averaging (red) and an RMS averaged spectrum over 100
instantaneous spectra (blue).
When performing RMS averaging, the noise in the signal is averaged in the same way as
the pure/consistent signal. As a result, the noise is not reduced or averaged away by
spectrum averaging, but the spectral noise levels will become more and more steady
(averaged) with increasing numbers of averaged spectra.
The standard deviation of the random noise in RMS-averaged spectra will be reduced by a
factor of 1/√N, where N is the number of averaged instantaneous spectra. This is a
reduction of the standard deviation of 5 dB each time the number of averages is increased
tenfold. Conversely, the measurement time will increase as the number of averages
increases.
RMS averaging calculates the mean power sum which relates to the mean energy. The
square root of the mean power sum is calculated to output the same unit as the input signal:
Root-Mean-Square. As for all spectrum averaging modes, the averaging is done for all
spectral lines individually.
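A minimal Python sketch of RMS (power-spectrum) averaging is given below; the tone frequency, noise level, block size and number of averages are assumed values. It shows that the averaged noise floor becomes steadier, not lower.

```python
import numpy as np

# RMS (power-spectrum) averaging sketch: average the squared magnitudes of many
# instantaneous spectra, line by line, then take the square root.
rng = np.random.default_rng(0)
fs, block, n_avg = 2048, 1024, 100          # assumed values

def rms_average(n_blocks):
    power_sum = np.zeros(block // 2 + 1)
    for _ in range(n_blocks):
        t = np.arange(block) / fs
        x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(block)
        X = np.abs(np.fft.rfft(x)) / block
        power_sum += X ** 2                 # averaging is done on power, per spectral line
    return np.sqrt(power_sum / n_blocks)    # square root returns the input unit (RMS)

single = rms_average(1)
averaged = rms_average(n_avg)
# The 100 Hz tone stays the same; the noise floor becomes steadier, not lower.
print("noise-floor spread, 1 block:  ", np.std(single[300:500]))
print("noise-floor spread, 100 blocks:", np.std(averaged[300:500]))
```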
Overlap Averaging
Within the context of FFT analysis, the parameter Overlap refers to overlapping FFT time
blocks.
Overlap can be used to calculate FFT spectra more consistently when window functions
are applied, and to increase the rate of produced spectra.
Figure 3.27: Overlap – 0 % and 50 % overlap of FFT time blocks, with a window function applied
Because FFT analysers produce a spectrum for every FFT time block, when these blocks
are overlapped, the analysis will produce spectra with an increased rate compared to when
using no overlap (0 % overlap). This increases the update rate of spectral displays, but
conversely, spectra will include overlapping signal content.
Window weighted FFT blocks typically have very small (or zero) values near the block
boundaries, as shown in the Figure 3.27 above.
The reduced values near the boundaries cause a significant portion of the time signal to be
effectively ignored in the analysis process. In measurement situations where data is
gathered at great expense, this situation should be avoided, and hence overlapping FFT
blocks can be used to improve it.
When using rectangular windows, all block values will already be equally weighted, and
overlapping will only help to increase the rate of produced spectra.
Overlapping FFT blocks can be adjusted to obtain equal weighting for all time samples
over multiple overlapping spectra, giving a frequency representation of a flat (equally
weighted) time signal. This is used to obtain results equivalent to a real-time analysis,
where the overall weighting function must be uniform, for example when using Hanning
weighting. The overlap has to be at least ⅔ to obtain this.
As the overlap is increased, FFT spectra will also become more and more correlated with
subsequent spectra. Correlated spectra are in many cases unnecessary, and therefore not
much is gained once the overlap fraction provides nearly equal overall weights for the
time samples. The ideal overlap fraction is therefore often chosen to balance equal total
sample weights against small correlation.
Even though the ideal overlap depends on the window function and the measured signal
type, a reasonable overlap fraction to use is typically ⅔ or ¾.
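The following Python sketch illustrates overlap processing with a Hann window and 2/3 overlap; the signal, sampling rate and block size are assumptions. It mainly shows how many more spectra are produced from the same record compared with 0 % overlap.

```python
import numpy as np

# Overlap sketch: Hann-windowed blocks with 2/3 overlap, RMS-averaged.
fs, block = 2048, 1024                    # assumed values
overlap = 2 / 3
hop = int(block * (1 - overlap))          # samples to advance between blocks

t = np.arange(0, 4.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

window = np.hanning(block)
power = np.zeros(block // 2 + 1)
count = 0
for start in range(0, x.size - block + 1, hop):
    seg = x[start:start + block] * window
    power += np.abs(np.fft.rfft(seg)) ** 2
    count += 1

avg_spectrum = np.sqrt(power / count)
print(f"{count} overlapped blocks from {x.size} samples "
      f"(vs {x.size // block} blocks with 0 % overlap)")
```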
Maximum Hold
Even though maximum hold does not perform averaging, it is sometimes listed under
available averaging modes for FFT analysers.
This might be due to the fact that multiple instantaneous spectra are involved in the process,
as when performing averaging.
Maximum hold keeps the maximum value of individual spectral lines over the specified
averaging time.
As a result, the resulting maximum hold spectrum might have some spectral lines holding
values from some instantaneous spectra, and other lines holding values from other
instantaneous spectra.
Peak hold averaging is normally not used in routine data collection. Instead, it is used for
special tests such as Run-up, Coast Down, and Bump Tests.
Maximum spectral hold can be used for the inspection of worst-case scenarios, by
obtaining a spectrum indicating maximum amplitudes for all frequencies over a determined
test time.
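Since maximum hold is simply a line-by-line maximum over a set of spectra, it reduces to one array operation. In the Python sketch below the spectra are random placeholders used only to show the mechanics.

```python
import numpy as np

# Maximum-hold sketch: keep the per-line maximum over a set of instantaneous
# spectra (no averaging involved). The spectra here are random placeholders.
rng = np.random.default_rng(2)
spectra = np.abs(rng.standard_normal((50, 801)))   # 50 instantaneous spectra, 801 lines

max_hold = spectra.max(axis=0)                     # line-by-line maximum over the whole set
rms_avg = np.sqrt((spectra ** 2).mean(axis=0))     # RMS average, for comparison

# Each line of the max-hold result may come from a different instantaneous spectrum.
print(max_hold[:5])
print(rms_avg[:5])
```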
When time synchronous (time domain) averaging is performed on the vibration signal from a
real machine, the averaged time record gradually accumulates those portions of the signal that
are synchronized with the trigger, while other parts of the signal, such as noise and
components from other rotating parts of the machine, are effectively averaged out.
This is the only type of averaging that actually reduces noise.
Another important application of time synchronous averaging is in the waveform analysis
of machine vibration, especially in the case of gear drives.
In this case, the trigger is derived from a tachometer that provides one pulse per revolution
of a gear in a machine.
This way, the time samples are synchronized in that they all begin at the same exact point
in the angular position of the gear.
After performing a sufficient number of averages, spectral peaks that are harmonics of the
shaft RPM will remain, while non-synchronous peaks will be averaged out of the spectrum.
Consider a gearbox containing a pinion with 13 teeth and a driven gear with 31 teeth as
shown in Figure3.31.
If a tachometer is connected to the pinion shaft, and its output is used to trigger an analyzer
capable of time synchronous averaging, the averaged waveform will gradually exclude
vibration components from everything except the events related to the pinion revolution.
Any vibration caused by the driven gear will be averaged out, and the resulting waveform
will show the vibration caused by each individual tooth on the pinion.
Figure 3.32: Application of TSA in Gearbox monitoring
Note that in the Figure3.32, the lower averaged waveform indicates one damaged tooth on
the pinion.
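A simplified Python sketch of time synchronous averaging is shown below. The shaft speed, sampling rate, noise level and the non-synchronous 37.3 Hz component are assumed values; reshaping by a fixed block length stands in for the once-per-revolution tacho trigger.

```python
import numpy as np

# Time-synchronous-averaging sketch: cut the signal into one-revolution blocks at
# the tacho pulses and average them. Simulated gearbox values are assumptions.
fs = 10_240                           # Hz
pinion_rpm = 600                      # pinion shaft speed (assumed)
rev_len = int(fs * 60 / pinion_rpm)   # samples per pinion revolution (= 1024 here)
n_revs = 200

t = np.arange(n_revs * rev_len) / fs
rng = np.random.default_rng(3)
pinion_mesh = np.sin(2 * np.pi * 13 * (pinion_rpm / 60) * t)   # 13-tooth meshing tone
other = 0.8 * np.sin(2 * np.pi * 37.3 * t)                     # non-synchronous component
x = pinion_mesh + other + 0.5 * rng.standard_normal(t.size)

# With a once-per-rev tacho on the pinion shaft, every block starts at the same
# angular position, so reshaping by rev_len stands in for triggering here.
tsa = x.reshape(n_revs, rev_len).mean(axis=0)

# The 13x meshing pattern survives; noise and the 37.3 Hz component average out.
print("raw RMS:", np.sqrt(np.mean(x[:rev_len] ** 2)))
print("TSA RMS:", np.sqrt(np.mean(tsa ** 2)))
```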
Number of Averages
Selection of number of averages for the analysis depends on a number of factors.
(i) If the frequency of rotation (speed) of the machine changes, averaging FFT spectra will
not work well. In this case, use of an order spectrum will suit the requirement.
(ii) If loads are changing (and the speed is constant, or an order spectrum is being used), a
smaller number of averages is required in order to avoid averaging out the effect of fast load
changes.
(iii) Do a few experiments with different numbers of averages and note when the noise floor
appears stable.
(iv) If faster results are wanted, a smaller number of averages will produce an "average complete"
result more often.
Plots
In order to analyse the data in a simple way, it is represented in the form of plots.
Vibration data can be represented with the help of a number of plot formats.
Trend Plot
A trend plot is simply a number of amplitude values, snapshots of the total vibration
(vibration at all frequencies) – over a period.
The interval between readings will be the time elapsed between those readings.
That time interval could be anything from months to milliseconds depending on the
specifics of the vibration program and system(s) involved.
Trend graphs provide a quick visual view of the changes that are occurring.
A trend plot offers limited analysis tools (there is no identification of specific frequencies,
for instance) but can be an important indicator of developing problems.
There are many different trend plots available in most software packages.
Time Domain Plot
Waterfall
In addition to two-dimensional plots, common display formats include orbit plots, waterfall
plots and spectrographs.
An orbit plot shows one time trace on the x axis and a second time trace on the y axis.
A waterfall is a three dimensional plot made by stacking up consecutive two-dimensional
plots. Waterfall plots show how a signal changes over time, or how a signal measured from
a rotating machine changes with variations in the RPM. They are also useful for Order
Analysis.
Figure 3.38 shows a typical waterfall plot of the spectrum of the vibration measured on a
rotating machine during a run-up and coast down. Often the waterfall display includes an
option to show one slice and one record of the waterfall in separate panes.
Figure 3.35: Time waterfall plot of PSD measured from a rotating machine during run-up with spectrum slice on top.
Spectrograph
Waterfalls can also be presented as a spectrogram, as shown in Figure 3.39, a two-
dimensional format using colour to represent amplitude.
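A spectrogram of a simulated run-up can be produced with a few lines of Python; the chirp and the FFT block settings below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

# Spectrogram sketch: a run-up-like chirp displayed as time/frequency/amplitude.
fs = 4096
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * (10 * t + 5 * t ** 2))   # frequency sweeping 10 -> 110 Hz

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)
print(Sxx.shape)   # (frequency bins, time slices) -> colour-coded amplitude map

# A waterfall is the same data stacked as consecutive spectra instead of colour.
```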
https://www.youtube.com/watch?v=LlyH6YciDhw
Signal Conditioning
The signal from the transducer on a machine may require additional processing, like signal
amplification, noise reduction, filtering, linearization, and so on.
These functions are usually done through standalone analog signal conditioners, and
sometimes some of these functions are done in the digital domain through dedicated digital
signal processing software after the analog-to-digital conversion.
Some of the transducers require an external power supply, which could be provided by the
signal conditioners.
A common requirement is to supply a 4 mA current to many of the integrated charge-type noise
and vibration transducers.
Signal Filtering
During signal processing, a requirement arises to analyse the acquired or measured signals in
a particular frequency band of interest. This is achieved by filtering the signals.
Figure 3.40: Signal Filtering (Low pass filter to remove high frequency 'noise' on signal)
Signal filtering can be done both in the analog domain and the digital domain.
Following are the common analog filters used in signal processing:
(i) high-pass filter: passes frequencies above a cut-on limit
(ii) low-pass filter: passes frequencies below a cut-off limit
(iii) band-pass filter: passes frequencies within a band
(iv) notch filter: rejects frequencies within a narrow band
High Pass Filter
A high-pass filter allows signals with frequencies beyond a cut-on frequency to be passed
through.
Usually in machinery condition monitoring, high-pass filters are used to remove near-mean or
DC values of the signal, and cut-on frequencies of 0.1 Hz or 1 Hz are quite common.
Notch Filter
Many times, due to a ground loop with the electrical supply frequency, the electrical supply
frequency (50 Hz or 60 Hz) shows up in the acquired machinery signals.
This single frequency can be removed by using a notch filter.
The electrical supply frequency in some European and Asian countries is 50 Hz, whereas in
the Americas it is 60 Hz. At the cut-off and cut-on frequencies, the filters are not perfectly
sharp and some roll-off occurs, which depends on the order of the filter.
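As a closing sketch, the Python snippet below applies a digital 50 Hz notch filter and a 1 Hz high-pass filter of the kinds described above; the sampling rate, filter orders and test signal are assumed values.

```python
import numpy as np
from scipy import signal

# Filtering sketch: a 50 Hz notch (to suppress mains pick-up) and a 1 Hz high-pass
# (to remove the DC/near-mean content), applied in the digital domain.
fs = 2048.0
b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
sos_hp = signal.butter(4, 1.0, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 2, 1 / fs)
x = 0.5 + np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)

y = signal.sosfilt(sos_hp, signal.lfilter(b_notch, a_notch, x))

# The DC offset and the 50 Hz mains component are attenuated; 120 Hz passes through.
freqs = np.fft.rfftfreq(x.size, 1 / fs)
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
for f0 in (50, 120):
    k = np.argmin(np.abs(freqs - f0))
    print(f"{f0} Hz: before {X[k]:.1f}, after {Y[k]:.1f}")
```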