TechKnowledge Image Processing U1-6 SPLIT
Fourth Year of Computer Engineering (2019 Course)
Digital Image Processing (Elective V) (410252(B))
Prerequisite Courses : Discrete Mathematics (210241)

Course Objectives :
- To Understand Digital Image Processing Concepts.
- To Study Various Methods for Image Enhancement using Spatial and Frequency Domain.
- To Learn Classification Techniques for Image Segmentation.
- To Understand Image Compression and Object Recognition.
- To Study Various Image Restoration Techniques.
- To Understand various Medical and Satellite Image Processing Applications.

Course Outcomes : On completion of the course, the student will be able to -
CO1 : Apply Relevant Mathematics Required for Digital Image Processing.
CO2 : Apply Spatial and Frequency Domain Methods for Image Enhancement.
CO3 : Apply Algorithmic Approaches for Image Segmentation.
CO4 : Summarize the Concepts of Image Compression and Object Recognition.
CO5 : Explore the Image Restoration Techniques.
CO6 : Explore the Medical and Satellite Image Processing Applications.

Unit I : Introduction to Digital Image Processing (07 Hours)
Introduction, Fundamental Steps in Digital Image Processing, Components, Elements of Visual Perception, Image Sensing and Acquisition, Image Sampling and Quantization, Relationships between Pixels, Different Colour Models, Image Types, Image File Formats, Component Labeling Algorithm.
Introduction to OpenCV tool to Open and Display Images using Python or Eclipse C/C++.
(Refer Chapters 1, 2, 3 and 11)
Unit II : Image Enhancement (08 Hours)
Introduction to Image Enhancement and its Importance, Types of Image Enhancement. Spatial Domain Image Enhancement : Intensity Transformations, Contrast Stretching, Histogram Equalization, Correlation and Convolution, Smoothing Filters, Sharpening Filters, Gradient and Laplacian. Frequency Domain Image Enhancement : Low Pass Filtering in Frequency Domain (Ideal, Butterworth, Gaussian), High Pass Filtering in Frequency Domain (Ideal, Butterworth, Gaussian).
(Refer Chapters 4, 5 and 6)

Unit III : Image Segmentation and Analysis (08 Hours)
Introduction to Image Segmentation and its Need, Classification of Image Segmentation Techniques, Threshold Based Image Segmentation, Edge Based Segmentation, Edge Detection, Edge Linking, Hough Transform, Watershed Transform, Clustering Techniques, Region Approach.
(Refer Chapter 7)

Unit IV : Image Compression and Object Recognition (06 Hours)
Image Compression : Introduction to Image Compression and its Need, Classification of Image Compression Techniques - Run-Length Coding, Shannon-Fano Coding, Huffman Coding, Scalar and Vector Quantization, Compression Standards - JPEG/MPEG, Video Compression.
Object Recognition : Introduction, Computer Vision, Tensor Methods in Computer Vision, Classification Methods and Algorithms, Object Detection and Tracking, Object Recognition.
(Refer Chapters 8 and 9)

Unit V : Image Restoration and Reconstruction (07 Hours)
Introduction, Model of Image Degradation, Noise Models, Classification of Image Restoration Techniques, Blind-Deconvolution Techniques, Lucy-Richardson Filtering, Wiener Filtering.
(Refer Chapter 10)

Unit VI : Medical and Satellite Image Processing (07 Hours)
Medical Image Processing : Introduction, Medical Image Enhancement, Segmentation, Medical Image Analysis (Images of Brain MRI or Cardiac MRI or Breast Cancer).
Satellite Image Processing : Concepts and Foundations of Remote Sensing, GPS, GIS, Elements of Photographic Systems, Basic Principles of Photogrammetry, Multispectral, Thermal, and Hyperspectral Sensing, Earth Resource Satellites Operating in the Optical Spectrum.
(Refer Chapter 12)
Table of Contents

Chapter 4

Contrast Stretching
Thresholding
Grey Level Slicing (Intensity Slicing)
Bit Plane Slicing
Applications of Bit Plane Slicing
Dynamic Range Compression (Log Transformation)
Power Law Transformation
Spatial and Intensity Resolution
Image Subtraction
Neighbourhood Processing
4.6.1 Low Pass Filtering (Smoothing)
4.6.2 Noise
4.6.3 Low Pass Averaging Filter (Smoothing)
4.6.4 Low Pass Median Filtering
4.7 Highpass Filtering
4.8 High-Boost Filtering
4.8.1 Advantages of High-Boost Filtering
4.9 Zooming
4.9.1 Replication
4.9.2 Linear Interpolation
4.10 Solved Examples on Neighbourhood Processing
4.11 Difference between Point Processing and Mask Processing
Chapter 5 : Histogram Modelling 5-1 to 5-21

5.1 Introduction
5.1.1 Mean and Standard Deviation of Histogram
5.2 Linear Stretching
5.3 Histogram Equalization
5.4 Additional Examples on Histogram Modelling
5.5 Difference between Histogram Equalization and Contrast Stretching
Chapter 6 : Image Enhancement in Frequency Domain

6.1 Introduction
6.2 The Fourier Transform
6.3 Discrete Fourier Transform (DFT)
6.3.1 Properties of Discrete Fourier Transform
6.3.1(A) The Separability Property
6.3.1(B) Translation Property (Shifting Property)
6.3.1(C) Periodicity and Conjugate Symmetry Property
6.3.1(D) Rotation Property
6.3.1(E) Distributive and Scaling Property
6.3.1(F) Average Value Property
6.3.1(G) Laplacian Property (Second Derivative)
6.3.1(H) Convolution Property
Chapter 7 : Image Segmentation

Applications/Advantages of Compass Operators
Segmentation using the Second Derivative
Laplacian of Gaussian
Edge Linking
7.6.2 Hough Transform
7.6.2(A) Applications/Advantages of Hough Transform
7.8.4 Split and Merge
7.9 Image Segmentation based on Thresholding
7.9.1 Global Thresholding
7.9.2 Local (Adaptive) Thresholding
7.9.3 Optimum Thresholding
7.9.4 Watershed Algorithm
7.10 Additional Solved Examples
Chapter 8 : Image Compression

8.1 Introduction
8.2 Redundant and Irrelevant Data
8.3 Error Criteria
8.3.1 Objective Error Criteria
8.3.2 Subjective Error Criteria
8.4 Lossless Compression Techniques
8.4.1 Dictionary Based Coding
8.4.2 Run Length Encoding (RLE)
8.4.3 Statistical Coding
8.4.4 Huffman Encoding
8.5 Shannon-Fano Coding
8.6 Lossy Compression
8.6.1 Improved Grey Scale (IGS) Quantization
8.6.2 Transform Coding (JPEG Coding)
8.6.3 Joint Photographic Experts Group (JPEG)
8.7 JPEG 2000
8.8 Comparison of Lossless and Lossy Compression
8.9 Vector Quantization and Scalar Quantization
8.10 Data Redundancies
8.11 Video Compression Standard
8.12 Solved Examples
Chapter 9 : Object Recognition 9-1 to 9-7

9.1 Object Recognition
9.1.1 Pattern and Pattern Classes
9.2 Classifiers
9.2.1 Minimum Distance Classifier
9.2.2 Template Matching Classifier (Correlation based Classifier)
9.2.3 Classifier Performance
9.2.4 Bayes Classifier
9.3 Computer Vision

Chapter 10 : Image Restoration and Reconstruction

10.1 Introduction
10.2 Degradation Model
10.3 Degradation Functions
10.3.1 Noise and Degradation
10.4 Discrete Degradation Model
10.5 Inverse Filtering
10.5.1 Pseudo-Inverse Filtering
10.6 Wiener Filter
10.6.1 Drawback of Wiener Filters
10.7 Power Spectrum Equalisation (PSE)
10.7.1 Blind Deconvolution
Noise Models
Chapter 11 : Colour Image Processing

11.2.1 RGB Colour Model
11.2.2 NTSC Colour Model
11.2.3 YCbCr Colour Model
11.2.4 CMY and CMYK Models
11.2.5 HSI Colour Model
YIQ Colour Model
11.2.6 Comparison of RGB and YIQ
11.3 Pseudo-Colouring

Chapter 12 : Medical and Satellite Image Processing

Applications
12.2 Medical Image Processing
12.3 Satellite Image Processing
12.3.1 Remote Sensing Process
12.3.2 Passive and Active Sensing
12.3.3 Advantages of Remote Sensing
12.3.4 Limitations of Remote Sensing
12.4 Photogrammetric Imaging Devices
12.5 Hyperspectral Sensing

Chapter 1 : Introduction to Image Processing

1.1 Introduction
Human beings are primarily visual creatures who depend on their eyes to gather information around them. Of the five senses that human beings have, sight is what we depend upon the most. Not many animals depend on their visual systems the way human beings do.
Bats use high frequency sound waves. They emit sound waves which reflect back when they encounter some obstruction. Cats have poor vision but an excellent sense of smell. Snakes locate prey by heat emission and fish have organs that sense electrical fields.
1.2 What do we Mean by Image Processing ?

What happens when we look at an object ?
- The eye records the scene and sends signals to the brain. These signals get processed in the brain and some meaningful information is obtained. Let us take a simple example : when we see fire, we immediately identify it as something hot. Two things have happened here.
(1) The scene has been recorded by the eye.
(2) The brain processed this scene and gave out a warning signal.
This is image processing !!!
- We start processing images from the day we are born. Hence image processing is an integral part of us and we continue to process images till the day we die. So even if this subject seems to be new, we have been subconsciously doing it all these years. The human eye-brain mechanism represents the ultimate imaging system.
- Apart from our vision, we have another important trait that is common to all human beings. We like to store information, analyse it, discuss it with others and try to better it. This trait of ours is responsible for the rapid development of the human race.
- Early human beings strove to record their world by carving crude diagrams on stone. All the drawings that we see in old caves are just that : storing images seen, trying to analyse them and discussing them with others in the tribe. Refer Fig. 1.2.1.
— This art developed through the ages by way of materials
and skill. By the mid-nineteenth century, photography
was well established. Image processing that we study
starts from this era.
Fig. 1.2.1
- Though it was stated earlier that the human eye-brain mechanism represents the ultimate imaging system, image processing as a subject involves processing images obtained by a camera. With the advent of computers, image processing as a subject grew rapidly.
- Images from a camera are fed into a computer where algorithms are written to process these images. Here, the camera replaces the human eye and the computer does the processing.
- Hence image processing as an engineering subject is basically manipulation of images by a computer.

1.3 Images are 2-Dimensional

- All family pictures, photographs on identity cards etc. are 2-dimensional. If this statement is not clear, let us take a simple example.
- Consider the voltage signal shown in Fig. 1.3.2. We are all familiar with a signal of this kind. Here the voltage is varying with respect to time. This is a typical 1-dimensional signal. If we want to locate a dot on the signal, all we need to know is its corresponding time.

Fig. 1.3.2

- Let us see why images are 2-dimensional. Consider the image shown in Fig. 1.3.3. In this case, to locate the dot shown, we need its position in two directions (x and y), Fig. 1.3.4. Images that we see are 2-dimensional functions.

Fig. 1.3.3    Fig. 1.3.4

- A typical image is represented as f(x, y), where (x, y) are the spatial coordinates and f is the grey level (colour in the case of a colour image). Hence the grey level f varies with respect to the spatial coordinates.
1.4 The Electromagnetic Spectrum

- The apparatus shown in Fig. 1.2.2 will work only if light is incident on the object. What we call light is actually a very small section of the electromagnetic energy spectrum. The entire spectrum is shown in Fig. 1.4.1.

Fig. 1.4.1 : The electromagnetic spectrum (cosmic rays, gamma rays, X-rays, ultraviolet, the visible band from violet to red, infrared, radio waves and audio frequencies)
- Electromagnetic energy, as the name suggests, exists in the simultaneous form of electricity and magnetism. These two forms of the same energy are transmitted together as electromagnetic radiation. One cannot exist without the other. A flow of electric current always produces magnetism, and magnetism is used to produce electricity. Electromagnetic radiation is propagated outwards from its source at a velocity of 300,000,000 metres per second (3 × 10^8 m/sec).
- Although our natural source of electromagnetic radiation is the sun, there are also a number of man-made sources which, among many others, include tungsten filament lamps, gas discharge lamps and lasers. Light is a band of electromagnetic radiation mediated by the human eye and is limited to a spectrum extending from 380 nm to 760 nm.
- Most of the images that we encounter in our day to day life are taken from cameras which are sensitive to this range of the electromagnetic spectrum (380 - 760 nm). We must not forget though, that there are cameras which are capable of detecting infrared, ultraviolet light, X-rays and radio waves too.
- The electromagnetic spectrum can be expressed in terms of wavelength and frequency. The wavelength (λ) and the frequency (ν) are related by the expression

ν = c / λ    ...(1.4.1)

Here c is the speed of light = 3 × 10^8 m/sec.
Ex. 1.4.1 : Calculate the frequency of oscillation of green light.
Soln. : It has been known that green light has a wavelength of approximately 500 nm (500 × 10^-9 m). Its frequency of oscillation can be calculated using Equation (1.4.1).

ν = c / λ = (3 × 10^8 m/sec) / (500 × 10^-9 m)
ν = 6 × 10^14 Hz

i.e. the frequency of green light is 600,000,000,000,000 cycles/sec !
Hence it is more convenient to discuss electromagnetic radiation in terms of wavelengths (nm) rather than frequencies (Hz).
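The same calculation is easy to script. A minimal Python sketch of Equation (1.4.1) follows; the variable names are ours, not the book's (the book's own listings use MATLAB):

```python
# Frequency of green light from the relation v = c / lambda (Equation 1.4.1).
c = 3e8              # speed of light in m/sec
wavelength = 500e-9  # green light, approximately 500 nm

frequency = c / wavelength
print(frequency)     # about 6e14 Hz, i.e. 600,000,000,000,000 cycles/sec
```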
1.5 Units of Intensity

Computer graphics → Input (Description) - Output (Image)
Computer vision → Input (Image) - Output (Description)
The material provided in this chapter is primarily basic information which would be required in subsequent discussions. Our study of the human visual system, though not exhaustive, provides a basic idea of the capabilities of the eye in perceiving pictorial information.
In this chapter, preliminary concepts of digital image processing are presented. The difference between one-dimensional and two-dimensional signals is explained. Topics such as the electromagnetic spectrum and the inverse square law are discussed with examples. Elements of the human visual system are presented. The basic anatomy of the human eye is explained with a few illustrations. Perceptual characteristics such as brightness adaptation and the logarithmic response to incident intensity in the form of Weber's ratio are also introduced.
The concepts explained here will be found useful in understanding image processing algorithms in subsequent chapters. This chapter forms the fundamental base required to understand image processing.

2.1 Introduction
Basic elements of an image processing system :
Digital image processing is basically modification of images on a computer. The basic components of an image processing system are shown below.
(1) Image Acquisition
(2) Image Storage
(3) Image Processing
(4) Display
(5) Transmission (if required)
We shall discuss each one in detail.

Fig. 2.1.1
(1) Image Acquisition :
- Image acquisition is the first step in any image processing system. The general aim of image acquisition is to transform an optical image (real world data) into an array of numerical data which could be later manipulated on a computer.
- Image acquisition is achieved by suitable cameras. We use different cameras for different applications. If we need an X-ray image, we use a camera that is sensitive to X-rays. If we want an infrared image, we use cameras which are sensitive to infrared radiation. For normal images (family pictures etc.) we use cameras which are sensitive to the visual spectrum. In this book we shall discuss cameras (sensors) which are sensitive only to the visual range.
- Photovoltaic devices : Photovoltaic devices consist of semiconductor junctions. They are solid state arrays composed of discrete silicon imaging elements known as photosites. Photovoltaic devices give a voltage output signal that is proportional to the intensity of the incident light. No external bias is required as was in the case of photoconductive devices.
- The technology used in solid-state imaging sensors is based principally on charge-coupled devices, commonly known as CCDs. Hence the imaging sensors are called CCD sensors.
- The solid state array (CCD) can be arranged in two different configurations :
(b1) Line array CCD  (b2) Area array CCD
(b1) Line Arrays :
- The line array represents the simplest form of CCD imager and has been employed since the early 1970s. Line arrays consist of a one-dimensional array of photosites.

Fig. 2.1.2

- A single line of CCD pixels is clocked out into the parallel output register as shown in Fig. 2.1.2. The amplifier outputs a voltage signal proportional to the contents of the row of photosites. One thing to note is that a line array CCD scans only one line (hence it is one-dimensional).
- In order to produce a two-dimensional image, the line array CCD imager has to be used as a scanning device by moving this array over the object by some mechanical activity.

Fig. 2.1.3 : CCD (RGB) line array

- This technique is used in flat bed scanners (the scanners that you come across in your laboratory or in a cyber cafe). A line array CCD can have anything from a few elements up to 6000 or more.
(b2) Area Arrays :
- The problem with line arrays is that they scan only one line. To get a two-dimensional image, we need to mechanically move the array over the entire image.

Fig. 2.1.4

- Area arrays or matrix arrays consist of a two-dimensional array of photosites. They make it possible to investigate static real world scenes without any mechanical scanning. Thus much more information can be deduced from a single real-time glance than would be possible with line arrays.
- Area arrays can be seen in the digital cameras that we use for video imaging. The area arrays are more versatile than the line arrays, but there is a price to be paid for this. Area arrays are higher on cost and complexity.
- Area sensors come in different ranges, i.e. 256 × 256, 490 × 380, 640 × 480, 780 × 575. CCD arrays are typically packaged as TV cameras. A significant advantage of solid state array sensors is that they can be shuttered at very high speeds (1/10,000 sec). This makes them ideal for applications in which freezing motion is required.
(2) Image Storage :
- All video signals are essentially in analog form, i.e. electrical signals convey luminance and colour with continuously variable voltage. The cameras are interfaced to a computer where the processing algorithms are written.
- This is done by a frame grabber card. Usually a frame grabber card is a printed circuit board (PCB) fitted to the host computer with its analog entrance port matching the impedance of the incoming video signal. The A/D converter translates the video signals into digital values and a digital image is constructed. Frame grabber cards usually have an A/D card with a resolution of 8 - 12 bits (256 to 4096 gray levels). Hence a frame grabber card is an interface between the camera and the computer.
- The frame grabber card has a block of memory, separate from the computer's own memory, large enough to hold any image. This is known as the frame buffer memory.

- In a binary image, 0 represents black while 1 represents white. It is a black and white image in the strictest sense. These images are also called bit mapped images. In such images, we have only black and white pixels and no other shades of grey. Refer Fig. 2.2.3.
- A colour image can be generated by mixing the three primary colours, Red, Green and Blue, in proper proportions. In colour images, each pixel is composed of RGB values and each of these colours requires 8 bits (one byte) for its representation. Hence each pixel is represented by 24 bits : R (8-bits), G (8-bits), B (8-bits).
- A 24-bit colour image supports 16,777,216 different combinations of colours.
- Colour images can be easily converted to grey scale images using the equation

x = 0.30 R + 0.59 G + 0.11 B    ...(2.2.1)

- An easier formula that could achieve similar results is

x = (R + G + B) / 3    ...(2.2.2)

- In a grey scale image, each pixel is usually stored as a byte (8-bits), with values ranging from 0 (black) to 255 (white), so we have black, white and various shades of grey present in the image. Refer Fig. 2.2.4.
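Equations (2.2.1) and (2.2.2) can be tried out directly on a single pixel. A small Python sketch follows; the helper names are ours, not the book's:

```python
# Convert an RGB pixel to grey using the weighted formula (2.2.1)
# and the simple average (2.2.2).
def grey_weighted(r, g, b):
    return 0.30 * r + 0.59 * g + 0.11 * b

def grey_average(r, g, b):
    return (r + g + b) / 3

# A pure red pixel: the weighted formula keeps only 30% of the intensity,
# reflecting the eye's lower sensitivity to red compared to green.
print(grey_weighted(255, 0, 0))   # about 76.5
print(grey_average(255, 0, 0))    # about 85.0
```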
MATLAB code for converting a colour image to a grey scale image :

figure(1)
imshow(uint8(im))
figure(2)
imshow(uint8(new))
figure(3)
imshow(uint8(new1))

- Matlab has an inbuilt command for this conversion : rgb2gray.

(4) Half Toning
- It is obvious that a grey scale image definitely looks better than the monochrome image as it utilizes more grey levels. But there is a problem at hand. Most of the printers that we use (inkjet, laser, dot matrix) are all bi-level devices, i.e. they have only a black cartridge and can only produce two levels (black on a white background). In fact, most of the printing jobs are done using bi-level devices.
- You have all read newspapers at some point of time (hopefully). The images do look like grey level images. But if you look closely, all the images generated are basically using black colour. Refer Fig. 2.2.5.
- Even the images that you see in most of the books (including this one) are generated using black colour on a white background. In spite of this we do get an illusion of seeing grey levels. The technique to achieve an illusion of grey levels from only black and white levels is called half-toning.

Fig. 2.2.5

- The human eye integrates the scene that it sees. Consider a simple example. Consider two squares of say 0.03 × 0.03 sq. inch. One of these squares contains a lot of black dots while the other square contains fewer black dots. When we look at these squares from a distance, the two squares give us a perception of 2 different grey levels.
- This integration property of the eye is the basis for half-toning. In this, we take a matrix of a fixed size and, depending on the grey level required, we fill the matrix with black pixels.
- Let us take an example. Consider a 3 × 3 matrix. This matrix can generate an illusion of 10 different grey levels when viewed from a distance.
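The 3 × 3 idea can be sketched in a few lines of Python: a grey level g between 0 and 9 is rendered as a cell containing g black dots. The fill order below is an illustrative choice of ours, not the book's exact pattern:

```python
# Half-toning with a 3x3 matrix: one cell can show 10 apparent grey
# levels (0 to 9 black dots). The fill order here is an arbitrary example.
FILL_ORDER = [(1, 1), (0, 0), (2, 2), (0, 2), (2, 0),
              (1, 0), (1, 2), (0, 1), (2, 1)]

def halftone_cell(level):
    """Return a 3x3 cell with `level` black dots (1 = black, 0 = white)."""
    cell = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    for r, c in FILL_ORDER[:level]:
        cell[r][c] = 1
    return cell

# Level 0 is all white and level 9 all black; intermediate levels give
# the eye an illusion of grey when viewed from a distance.
for row in halftone_cell(5):
    print(row)
```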
Sampling and Quantization

3.1 Introduction

- We know that an image is basically a 2-dimensional representation of the 3-dimensional world. We have also studied that images can be acquired using a Vidicon or a CCD camera or using scanners. The basic requirement for image processing is that the images obtained be in the digital format. For example, one cannot work with or process photographs on identity cards unless he/she scans them using a scanner.
- The scanner digitizes the photograph and stores it on the hard disk of the computer. Once this is done, one can use image processing techniques to modify the image as per requirement. In a Vidicon too, the output which is in analog form needs to be digitized in order to work with the images. To cut a long story short, to perform image processing, we need to have the images on the computer. This will only be possible when we digitize the analog pictures.
- Now that we have understood the importance of digitization, let us see what this term actually means. The process of digitization involves two steps :

Fig. 3.1.1 : Steps of the process of digitization

In other words, Digitization = Sampling + Quantization.
- We have had exposure to these terms in the lower semesters in subjects like Principles of Communication Engineering, Signals and Systems and Signal Processing which dealt with 1-dimensional signals. Let us take a brief look at these concepts and move ahead to the 2-dimensional domain.
3.2 Sampling and Quantization

- The sampling process converts a continuous time domain signal into a discrete signal which is defined at specific instances of time. Sampling depends on the sampling frequency of the analog to digital converter.
- The values obtained by sampling a continuous function usually comprise an infinite set of real numbers ranging from a minimum to a maximum depending upon the sensor's calibration. These values must be represented by a finite number of bits usually used by a computer to store or process any data. In practice, the sampled signal values are represented by a finite set of integer values. This is known as quantization. Rounding of a number is a simple example of quantization.
- With these concepts of sampling and quantization, we now need to understand what these terms mean when we look at an image on the computer monitor.
- The higher the spatial resolution of the image, the greater is the sampling rate, i.e. the lower is the image area Δx Δy represented by each sampled point. Similarly, the higher the grey level resolution (tonal resolution), the more are the number of quantized levels.
- Hence spatial resolution gives us an indication of the sampling while grey level resolution (tonal resolution) gives us an indication of the quantization, i.e.
Spatial resolution → Sampling
Grey level resolution → Quantization
- We have already stated that an image can be considered as a 2-D array. The image f(x, y) is arranged in the form of an N × M array :

f(x, y) = [ f(0,0)     f(0,1)     ...  f(0, M-1)
            f(1,0)     f(1,1)     ...  f(1, M-1)
            ...
            f(N-1,0)   f(N-1,1)   ...  f(N-1, M-1) ]

- Hence every image that is seen on the computer monitor is actually this matrix. Each element of this matrix is called a pixel. Never forget this.
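The matrix view can be made concrete with a tiny Python example; the pixel values below are invented for illustration:

```python
# A 3 x 4 "image" as an N x M array of grey levels f(x, y).
f = [
    [0,   50,  100, 150],   # row x = 0
    [50,  100, 150, 200],   # row x = 1
    [100, 150, 200, 255],   # row x = 2
]

N = len(f)      # number of rows
M = len(f[0])   # number of columns

# f[x][y] is the pixel (grey level) at spatial coordinates (x, y).
print(N, M)      # 3 4
print(f[1][2])   # 150
```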
- Whenever we see an image on the computer, it is actually a matrix of pixels. The more the pixels, the more the samples, and hence the better the spatial resolution.
- The value of each pixel is known as the grey level. Similarly, the higher the number of bits, the better is the tonal quality. Hence the quality of a picture depends on the tonal and spatial resolution.
- The computer understands only ones and zeros. Hence these grey levels need to be represented in terms of zeros and ones.
- If we have two bits to represent the grey levels, only 4 different grey levels (2^2) can be identified viz. 00, 01, 10, 11, where 00 is black, 11 is white and the other two are different shades of grey.
- Similarly, if we have 8 bits to represent the grey levels, we will have 256 grey levels (2^8). Hence the more the bits, the more are the grey levels and the better is the tonal clarity (quantization). The total size of the image is N × M × m, where m is the number of bits used.
- Consider the image in Fig. 3.2.1(a). We plot the pixel values of only the first row of this image. This is shown in Fig. 3.2.1(b).
- The x-axis is the number of samples or pixels in the first row (sampling) while the y-axis is the grey level or value of each sample (quantization).
- Now comes the obvious question. As we know, the more the samples and the bits, the better is the image. What then should be the ideal values of sampling and quantization ?
Fig. 3.2.1 : Concept of sampling and quantization

- This answer will vary from image to image. Given below is a table of sampling and quantization values. As the sampling and the quantization increase, the number of bits required to store the image increases tremendously.
- The clarity increases, but the storage space required increases too. We hence need to get a trade-off between the two. For simplicity, we consider a square image of size N × N.
Table 3.2.1 : Number of storage bits for various values of N and m

N \ m |     1     |     2     |     3     |     4     |     5     |     6     |     7     |     8
32    |     1,024 |     2,048 |     3,072 |     4,096 |     5,120 |     6,144 |     7,168 |     8,192
64    |     4,096 |     8,192 |    12,288 |    16,384 |    20,480 |    24,576 |    28,672 |    32,768
128   |    16,384 |    32,768 |    49,152 |    65,536 |    81,920 |    98,304 |   114,688 |   131,072
256   |    65,536 |   131,072 |   196,608 |   262,144 |   327,680 |   393,216 |   458,752 |   524,288
512   |   262,144 |   524,288 |   786,432 | 1,048,576 | 1,310,720 | 1,572,864 | 1,835,008 | 2,097,152
1024  | 1,048,576 | 2,097,152 | 3,145,728 | 4,194,304 | 5,242,880 | 6,291,456 | 7,340,032 | 8,388,608
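Each entry of Table 3.2.1 is simply N × N × m, which is easy to verify; a one-line Python helper (the function name is ours):

```python
# Storage in bits for a square N x N image with m bits per pixel.
def storage_bits(n, m):
    return n * n * m

print(storage_bits(32, 1))     # 1,024 bits
print(storage_bits(512, 8))    # 2,097,152 bits
print(storage_bits(1024, 8))   # 8,388,608 bits
```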
Let us see the effects of reducing quantization levels and reducing spatial resolution separately.
MATLAB code for reducing quantization levels

%% Effects of reducing the quantization values %%
clear all
clc
a = imread('zebra.tif');
a = double(a);
b = max(max(a));
i = input('how many bits do you want 1 2 4 8 : ');
j = b/(2^i);   %% since the total number of levels is equal to 2^i %%
F = floor(a/(j+1));
F1 = (F*255)/max(max(F));   % normalizing %
figure(1)
imshow(uint8(a))
figure(2)
imshow(uint8(F1))
(c) Image using 2-bits    (d) Image using 1-bit
Fig. 3.2.2

Comparing the images, we see that "false contouring" takes place as we reduce the number of grey levels.
MATLAB code for reducing spatial resolution

%% Down sampling %%
%% To see the effects of reducing the number of samples %%
clear all;
clc
a = imread('deepa.tif');
[m,n] = size(a);
f = 1;
for i = 1:2:m
    e = 1;   %% This needs to be done else the value of e goes on increasing %%
    for j = 1:2:n
        c(f,e) = a(i,j);
        e = e + 1;
    end
    f = f + 1;
end
figure(1),imshow(a)
figure(2),imshow(c)
figure(3),imagesc(a),colormap(gray)
figure(4),imagesc(c),colormap(gray)
(c) Down sampled image displayed after zooming to match the size of the original image
Fig. 3.2.3

- It is clear from the images that the resolution reduces as the number of samples reduces. To compare and understand the actual effects, we plot them together. To make sure that they appear to be of the same size, we upsample the second image.
- Comparing Fig. 3.2.3(a) and Fig. 3.2.3(c) we see that the second image has a "checkerboard" pattern due to the reduction of samples.
weeines Processing
Isopreference Curves
- We have seen the effects of reducing N (the sampling resolution) and m (the number of bits per pixel) independently on images.
- A study by T. S. Huang in 1965 attempted to quantify experimentally the effects of varying N and m simultaneously. Three different types of images were shown to a group of people. The first image was one that did not have a lot of detail, for example a woman's face.
- The second image was one which had an intermediate amount of information, for example a small group standing together, and the third image was one which had a lot of detail, for example an image of a crowd. For these images, N and m were varied. Observers were then asked to rank them according to their subjective quality. These results were summarised in the form of isopreference curves in the N-m plane.
- Points lying on an isopreference curve correspond to images of equal subjective quality. From the isopreference curves, Huang concluded that images with a large amount of detail require fewer grey levels. Since the isopreference curve of the crowd is nearly vertical, it means that for a fixed value of N, the perceived quality is nearly independent of m.
Fig. 3.3.1 : Isopreference curves
Physical Resolution
- By now we know that a digital image, or image for short, is composed of discrete pixels. These pixels are arranged in a row and column fashion to form a rectangular picture area.
- Clearly, the total number of pixels in an image is a function of the size of the image and the number of pixels per unit area (example : inch) in the horizontal as well as the vertical direction. The number of pixels per unit length is referred to as the resolution of the displaying device (most of the deskjet printers have 670 dots per inch).
- Thus a 3 x 2 inch image at a resolution of 300 pixels per inch would have a total of 540,000 pixels.
- In most books as well as in this book, image size is given as the total number of pixels in the horizontal direction times the total number of pixels in the vertical direction (example : 128 x 128, 512 x 512, 640 x 480).
- Although this convention makes it relatively simple to gauge the total number of pixels in an image, it does not specify the physical size of the image or the resolutions defined in the paragraph above.
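The pixel-count arithmetic above is easy to verify with a couple of lines (3 x 2 inch image at 300 pixels per inch):

```python
# Pixels per direction = size in inches x resolution in pixels per inch.
width_in, height_in, ppi = 3, 2, 300
total_pixels = (width_in * ppi) * (height_in * ppi)
print(total_pixels)  # → 540000
```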
- A 640 x 480 image would measure 6.66 inches by 5 inches when displayed or printed at 96 pixels per inch. On the other hand, it would measure only 1.6 inch by 1.2 inch when displayed or printed at 400 pixels per inch.
Sampling and Quantization
Ex. 3.4.3 : How much storage capacity is required to store an image of size 1024 x 768 with 256 gray levels ?
Soln. :
Storage capacity required = A x B x C
Here A x B = size of the image = 1024 x 768
C = number of bits per pixel; to get 256 gray levels, we must have 8 bits (2^8 = 256)
Storage capacity required = 1024 x 768 x 8 = 6291456 bits
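The same calculation can be checked in Python:

```python
# Storage = rows x columns x bits per pixel; 256 grey levels need 8 bits.
bits = 1024 * 768 * 8
print(bits)       # → 6291456 bits
print(bits // 8)  # → 786432 bytes
```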
Ex. 3.4.4 : A common measure of transmission for digital data is the baud rate, defined as the number of bits transmitted per second. Transmission is accomplished in packets consisting of a start bit, a byte (8 bits) of information and a stop bit.
(a) How many minutes would it take to transmit a 1024 x 1024 image with 256 grey levels if we use a 56 k baud modem ?
(b) What would be the time required if we use a 750 k baud transmission line ?
Soln. :
(a) Since we have 256 grey levels, we need 8 bits for representing each pixel. Along with these 8 bits, we also have a start bit and a stop bit, so each pixel is transmitted as a packet of 10 bits.
Total bits transmitted = 1024 x 1024 x 10 = 10485760 bits
Time taken at 56 k baud = 10485760 / 56000 = 187.24 sec ≈ 3.12 minutes
(b) Time taken at 750 k baud = 10485760 / 750000 = 13.98 sec
Fig. 3.4.2
- Another term that we need to understand is the aspect ratio.
- The ratio of the image's width to its height, measured in unit length or number of pixels, is referred to as its aspect ratio. Both a 3 x 3 inch image and a 128 x 128 image have the same aspect ratio of 1.
Aspect ratio = width / height
Ex. 3.4.1 : Compute the physical size of a 640 x 480 image when printed by a printer at 240 pixels per inch.
Soln. :
Since we have 240 pixels per inch, the physical size of the image is
Width = 640 / 240 = 2.67 inches ; Height = 480 / 240 = 2 inches
Ex. 3.4.2 : If we want to resize a 1024 x 768 image to one that is 600 pixels wide with the same aspect ratio as the original image, what should be the height of the resized image ?
Soln. :
We know,
Aspect ratio = width / height
For the original image, aspect ratio = 1024 / 768 = 1.33
Now for the resized image with the same aspect ratio but a width of 600 pixels,
Height = 600 / 1.33 ≈ 451
Hence the resized image will be 600 x 451.
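The arithmetic of these examples can be checked with a short script. It assumes the packet format stated in the problem (1 start bit + 8 data bits + 1 stop bit = 10 bits per pixel) and, for the resizing example, that the aspect ratio is rounded to 1.33 before dividing:

```python
# Transmission time for a 1024 x 1024, 256-grey-level image.
pixels = 1024 * 1024
bits_on_wire = pixels * 10         # start bit + 8 data bits + stop bit
t_56k = bits_on_wire / 56_000      # seconds at 56 k baud
t_750k = bits_on_wire / 750_000    # seconds at 750 k baud
print(round(t_56k / 60, 2), "minutes")
print(round(t_750k, 2), "seconds")

# Height of a 600-pixel-wide image with a 1024/768 aspect ratio,
# rounding the ratio to two decimals first.
aspect = round(1024 / 768, 2)      # 1.33
height = round(600 / aspect)
print(height)
```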
Ex. : How much time is required to transmit a 512 x 512 image with 10 bits per pixel over a 9600 baud line ?
Soln. :
Image size = 512 x 512 x 10 = 2621440 bits
Time taken = 2621440 / 9600 = 273.05 sec ≈ 4.5 min
Summary :
- This chapter deals with converting a scene that is continuous in time and space into an image that can be processed by a computer. The technique of converting a continuous signal into a discrete one is called digitisation and is explained here. Digitisation comprises of two steps : sampling and quantization.
- We study the effects of reducing sampling and quantization on the image. The importance of spatial resolution and grey level resolution is also explained in this chapter, and these concepts are illustrated using MATLAB code.
Review Questions
Q. 1 Derive the equation for the conversion technique.
Q. 2 Explain sampling and quantization.
Q. 3 Explain the effects of reducing sampling and quantization.
Q. 4 Explain isopreference curves.
Q. 5 Explain non-uniform sampling.
Q. 6 Compute the physical size of a 480 x 201 image when printed by a printer at 320 dpi.
Enhancement in FImage Enhancement
in Spatial Domain
4.1 Introduction
- Image enhancement is one of the first steps in image processing. As the name suggests, in this technique, the original image is processed so that the resultant image is more suitable than the original for specific applications, i.e. the image is enhanced.
- Image enhancement is a purely subjective processing technique. By subjective we mean that the desired result varies from person to person. An image enhancement technique used to process images might be excellent for one person, but the same result might not be good enough for another.
- It is also important to know at the outset that image enhancement is a cosmetic procedure, i.e. it does not add any extra information to the original image. It merely improves the subjective quality of the image by working with the existing data.
- Image enhancement techniques can be divided into two broad categories :
1) The spatial domain techniques
2) The frequency domain techniques
- Let us start with enhancement in the spatial domain.
- The term spatial domain means working in the given space, in this case, the image. It implies working with the pixel values, or in other words, working directly with the raw data.
- Let f(x, y) be the original image, where f is the grey level value and (x, y) are the image coordinates. For an 8-bit image, f can take values from 0 to 255, where 0 represents black, 255 represents white and all the intermediate values represent shades of grey.
- In an image of size 256 x 256, x and y both range from 0 to 255.
- In point processing, we work with single pixels, i.e. T is a 1 x 1 operator. It means that the new value g(x, y) depends on the operator T and the present f(x, y). This statement will become clear as we start giving some examples.
- Some of the common examples of point processing are :
MATLAB program for the identity transformation :
%%% MATLAB code for the identity transformation %%%
clear all
clc
a = imread('sa...');   % filename truncated in the original
b = a;                 % identity : each grey level maps to itself
figure(1)
colormap(gray)
imagesc(a)
figure(2)
colormap(gray)
imagesc(b)
Fig. 4.3.1 : Identity transformation (modified grey level versus original grey level r)
4.3.1 Digital Negative
- Digital negatives are useful in a lot of applications. A common example of a digital negative is the display of an X-ray image. As the name suggests, negative means inverting the grey levels, i.e. black in the original image will now look white and vice versa. Fig. 4.3.2 shows the digital negative transformation for an 8-bit image.
Fig. 4.3.2
- Before we proceed to the following examples, let us revisit the identity transformation given in Fig. 4.3.1(a). In Fig. 4.3.1(a), the solid line is the transformation T. The horizontal axis represents the original grey levels r, while the vertical axis represents the modified grey levels s.
- It is called an identity transformation because it does not modify the input : as seen, the grey level 10 maps to 10, 125 maps to 125, and finally 255 maps to 255. Plotting transformations as in Fig. 4.3.1(a) will help us understand point processing techniques better.
- The digital negative can be obtained by using
s = 255 - r   (r_max = 255)
- Hence when r = 0, s = 255 and when r = 255, s = 0.
In general, s = (L - 1) - r   ...(4.3.1)
Here L is the number of grey levels (256 in this case).
- Equation (4.3.1) can be written in terms of f(x, y) as
g(x, y) = (L - 1) - f(x, y)   ...(4.3.2)
Here f(x, y) is the input image and g(x, y) is the output image.
MATLAB program for finding the digital negative :
%%% MATLAB code to calculate the negative %%%
clear all
clc
aa = imread('[Link]');
a = double(aa);
b = 255 - a;   % for an 8-bit image
figure(1)
colormap(gray)
imagesc(a)
figure(2)
colormap(gray)
imagesc(b)
Fig. 4.3.3(b) : Digital negative
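The same transformation in NumPy, on a synthetic 8-bit array (the book's image file is not available here):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

L = 256                       # number of grey levels in an 8-bit image
g = (L - 1) - f.astype(int)   # s = (L - 1) - r

# Every pixel and its negative sum to 255: black and white are swapped.
print(f + g)
```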
43.2 Contrast Stretching
- Many times we obtain low contrast images due to poor illumination or due to a wrong setting of the lens aperture. The idea behind contrast stretching is to increase the contrast of the images by making the dark portions darker and the bright portions brighter.
Fig. 4.3.4 shows the transformation used to achieve contrast stretching.
In the Fig 4:3.4, the dotted line indicates the identity
transformation and the solid line is the contrast
stretching transformation.
~ Asis evident from the Fig. 4.3.4, we make the dark grey
levels darker by assigning a slope of less than one and
make the bright grey levels brighter by assigning a slope
greater than one,
~ One can assign different slopes depending on the input
Image and the application,
As was mentioned, image enhancement is a subjective
technique and hence there is no one set of slope values
that would yield the desired result
Fig. 4.3.4 : Contrast stretching transformation (modified grey level versus original grey level r; the dotted line is the identity transformation)
- The formulation of the contrast-stretching algorithm is given below :
g(x, y) = l · f(x, y),              0 ≤ f(x, y) < a
        = m · (f(x, y) - a) + v,    a ≤ f(x, y) < b
        = n · (f(x, y) - b) + w,    b ≤ f(x, y) ≤ L - 1
Here f(x, y) is the input image and g(x, y) is the output image; l, m and n are the slopes of the three segments, a and b are the break points, v = l · a and w = m · (b - a) + v.
- A limiting case of contrast stretching is thresholding. As mentioned earlier, image enhancement being a subjective phenomenon, the value of the threshold will vary from image to image and from person to person. The objective is to identify the region that he or she is interested in. An important thing to note is that the thresholded image has the maximum contrast, as it has only black and white grey values.
Fig. 4.3.5(a) : Original image
MATLAB program for thresholding :
%%% Thresholding %%%
clear all
clc
p = imread('[Link]');
[row, col] = size(p);
T = input('Enter the value of the threshold : ');
for i = 1:1:row
    for j = 1:1:col
        if p(i,j) > T
            z(i,j) = 255;
        else
            z(i,j) = 0;
        end
    end
end
figure(1);   % original image
imshow(p);
figure(2);   % thresholded image
imshow(uint8(z))
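The thresholding loop collapses to a single vectorised expression in NumPy (threshold 128 picked arbitrarily for this sketch; the array is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

T = 128  # arbitrary threshold for the sketch
z = np.where(p > T, 255, 0).astype(np.uint8)

# Only black and white survive: the thresholded image has maximum contrast.
print(np.unique(z))
```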
4.3.4 Grey Level Slicing
- What the thresholding operation does is split the image into two parts based on a single grey value. At times we need to highlight a specific range of grey values, for example in an X-ray or similar images; for this we use the grey level slicing transformation.
- The transformation is similar to thresholding, except that we select a band of grey levels. This can be done in two ways : without background and with background.
This can(xy) ax
(436
1 outp
being
ary fron
she
s that
as it hs
W__image Processing
%%% Grey level slicing without background %%%
clear all
clc
p = imread('skull.tif');
p = double(p);
[row, col] = size(p);
for i = 1:1:row
    for j = 1:1:col
        if ((p(i,j) > 50) && (p(i,j) < 150))
            z(i,j) = 255;
        else
            z(i,j) = 0;
        end
    end
end
figure(1);   % original image
imshow(p)
figure(2);   % grey level slicing without background
imshow(uint8(z))
(a) Grey level slicing without background
Fig. 4.3.9
%%% Grey level slicing with background %%%
clear all
clc
p = imread('skull.tif');
p = double(p);
[row, col] = size(p);
for i = 1:1:row
    for j = 1:1:col
        if ((p(i,j) > 50) && (p(i,j) < 150))
            z(i,j) = 255;
        else
            z(i,j) = p(i,j);
        end
    end
end
figure(1);   % original image
imshow(p)
figure(2);   % grey level slicing with background
imshow(uint8(z))
(b) Grey level slicing with background
Fig. 4.3.10
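Both slicing variants reduce to one expression each in NumPy. The band 50-150 matches the listings above; the input array is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

band = (p > 50) & (p < 150)   # the grey levels we want to highlight

# Without background: everything outside the band goes black.
z_without = np.where(band, 255, 0).astype(np.uint8)

# With background: pixels outside the band keep their original grey level.
z_with = np.where(band, 255, p).astype(np.uint8)

print(np.unique(z_without))
```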
4.3.5 Bit Plane Slicing
- In this technique, we find out the contribution made by each bit to the final image. As mentioned earlier, an image is defined as, say, a 256 x 256 x 8 image. In it, 256 x 256 is the number of pixels present in the image and 8 is the number of bits required to represent each pixel. 8 bits simply means 2^8 or 256 grey levels.
- Now each pixel will be represented by 8 bits. For example, black is represented as 00000000 and white is represented as 11111111, and between them, 254 grey levels are accommodated. In bit plane slicing, we see the importance of each bit in the final image. This is done as follows : consider the LSB value of each pixel and plot the image using only the LSBs.
- Continue doing this for each bit till we come to the MSB. Note that we will get 8 different images and all these images will be binary.
Ex. 4.3.1 : Given a 3 x 3 image, plot its bit planes.
Soln. :
- Since 7 is the maximum grey level, we need only 3 bits to represent the grey levels.
- Hence we will have 3 bit planes. Converting the image to binary, we obtain the LSB plane, the middle bit plane and the MSB plane.
Binary image — LSB plane — Middle bit plane — MSB plane
%%% MATLAB code for bit extraction %%%
clear all
clc
a = imread('warnet.tif');
a = double(a);
r = input('which bit image do you want to see 1=MSB 8=LSB : ');
[row, col] = size(a);
for x = 1:1:row
    for y = 1:1:col
        c = dec2bin(a(x,y), 8);   % converts decimal to binary
        d = c(r);
        w(x,y) = double(d);
        %% since w is a char and cannot be plotted
        if w(x,y) == 49
            %% since double of d will be either 49 or 48
            w(x,y) = 255;
        else
            w(x,y) = 0;
        end
    end
end
figure(1)
imshow(uint8(a))
figure(2)
imshow(uint8(w))
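In NumPy the dec2bin string trick becomes a shift-and-mask. This sketch extracts all eight planes of a synthetic image and checks that, taken together, they carry the complete image:

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# Plane 0 is the LSB, plane 7 the MSB; each plane is a binary (0/255) image.
planes = [((a >> bit) & 1) * 255 for bit in range(8)]

# Summing the weighted planes reconstructs the original image exactly.
recon = sum(((planes[bit] // 255).astype(int) << bit) for bit in range(8))
print((recon == a).all())  # → True
```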
Fig. 4.3.11 : Eight images, each representing contribution
of a single bit
- Observing the images, we come to the conclusion that the higher order bits contain the majority of the visually significant data, while the lower bits contain the subtle details in the image.
- Bit plane slicing can hence be used in image compression : we can transmit only the higher order bits and remove the lower order bits. Bit plane slicing is also used in steganography.
Steganography :
- Steganography is the art of hiding information. With the growth of networked multimedia systems, the need for secure data transfer increases. Steganography is a technique in which secret data is hidden in a carrier signal, in this case an image. An intruder on the network sees the carrier image without realising that there is hidden information present in it.
- Bit plane manipulation is the simplest of the various steganography techniques available. We have seen that the bits representing the MSBs carry a lot of information, while the bits representing the LSBs carry information that is visually insignificant. Consider an example wherein we need to hide a text message in the image of a group.
- The image of the group is called a carrier image. We hide a text message (which is stored as an image) into the carrier image. What is done here is that every LSB of the carrier image is replaced by the MSB of the secret data.
- The final image that we obtain is called a stego image. It has the secret image hidden into it but is visually identical to the original image.
- If we replace more than 2 LSBs with 2 MSBs, we get the secret image superimposed on the original image. This is called a watermark.
- Given below is the code for steganography and also for retrieving the hidden data.
(a) Carrier image
Fig. 4.3.13
| Workers of the World,
| Unite
Lenin
(b) Secret data
— This technique of steganography is a very simplistic one. A lot of work has been done by my students Mustensir Lehri,
Samir Adhia and Rahul Chedda from the computer department. You could send me an e-mail to view the paper
“Transform based steganography’, published by them.
(a) Stego image
Workers of the World,
Unite
Lenin
(b) Retrieved data
Fig. 4.3.14
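A minimal sketch of the LSB-replacement scheme described above. Both arrays are synthetic stand-ins; a real carrier and secret image would be read from files:

```python
import numpy as np

rng = np.random.default_rng(5)
carrier = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
secret = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# Embed: replace each LSB of the carrier with the MSB of the secret data.
secret_msb = (secret >> 7) & 1
stego = (carrier & 0xFE) | secret_msb

# Extract: reading the LSBs back recovers a 1-bit version of the secret.
retrieved = (stego & 1) * 255

# Changing only LSBs alters each pixel by at most one grey level.
print(int(np.abs(stego.astype(int) - carrier.astype(int)).max()))
```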
4,3.5(A) Applications of Bit Plane Slicing
- Bit plane slicing is used in steganography and watermarking. Steganography is the art of hiding information. With the growth of networked multimedia systems, the need for secure data transfer increases.
- Steganography is a technique in which secret data is hidden in a carrier signal, in this case an image. An intruder on the network sees the carrier image without realising that there is hidden information present in it.
4.3.6 Dynamic Range Compression (Log Transformation)
- At times, the dynamic range of the image exceeds the capability of the display device. What happens is that some pixel values are so large that the other low value pixels get obscured. A simple day-to-day example of such a phenomenon is that during daytime, we cannot see the stars.
— The reason behind this is that the intensity of the sun is
so large and that of the stars is so low that the eye
cannot adjust to such a large dynamic range.
— In image processing, a classic example of such large
differences in grey levels is the Fourier spectrum (will
‘be discussed in detail in the frequency domain
enhancement technique).
- In the Fourier spectrum, only some of the values are very large while most of the values are too small. The dynamic range of the pixels is of the order of 10^6. Hence, when we plot the Fourier spectrum, we see only small dots, which represent the large values.
- Something needs to be done to be able to see the small values as well. This technique of compressing the dynamic range is known as dynamic range compression.
- We all know that the log operator is an excellent compressing function. Hence dynamic range compression is achieved by using a log operator; c is the normalisation constant :
mage Processing
s = c · log(1 + |r|)
Fig. 4.3.15
- Dynamic range compression can be written in terms of f(x, y) and g(x, y) as
g(x, y) = c · log(1 + |f(x, y)|)
MATLAB program for dynamic range compression :
%%% Dynamic range compression %%%
clear all
clc
aa = imread('saturn.tif');
a = double(aa);
[row, col] = size(a);
for x = 1:1:row
    for y = 1:1:col
        c(x,y) = a(x,y) * ((-1)^(x+y));   %% needed to center the transform
    end
end
d = abs(fft2(c));
d_log = log(1 + d);
%%% Plotting %%%
figure(1)
colormap(gray)
imagesc(d)
figure(2)
colormap(gray)
imagesc(d_log)
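The compressing effect of the log operator is easy to see on a handful of values spanning a large dynamic range, as a Fourier spectrum would. In this sketch c is chosen to map the maximum value to 255:

```python
import numpy as np

# Values spanning six orders of magnitude, like a Fourier spectrum.
d = np.array([1.0, 10.0, 1000.0, 1_000_000.0])

c = 255 / np.log(1 + d.max())   # normalisation constant for 8-bit display
d_log = c * np.log(1 + d)

# The huge ratios are squeezed into the displayable 0-255 range.
print(np.round(d_log).astype(int))
```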
(b)
Fig. 4.3.16 : Dynamic range compression
4.3.7 Power-Law Transformation
- The basic formula for power-law transformation is
s = c · r^γ
- It can also be written as
g(x, y) = c · [f(x, y)]^γ
- Here c and γ are positive constants. The transformation is shown for different values of γ; γ is called the gamma correction factor. By changing the value of gamma, we obtain different transformation curves.
- Nonlinearities encountered during image capture, printing and displaying can be corrected using gamma correction. Hence gamma correction is important if an image needs to be displayed on a computer. The power-law transformation can also be used to compress the dynamic range of an image. Given below is the MATLAB code for the power transformation. The image has been normalised to the 0 - 255 range.
%%% MATLAB code for power-law transformation %%%
clear all
clc
img1 = imread('...');   % filename not legible in the original
[row, col] = size(img1);
gamma = input('Enter the gamma correction factor : ');
img = double(img1);
for i = 1:row
    for j = 1:col
        nuimg(i,j) = img(i,j)^gamma;
    end
end
numax = max(max(nuimg));
numin = min(min(nuimg));
n = 255/(numax - numin);
for i = 1:row
    for j = 1:col
        nuimg1(i,j) = n*(nuimg(i,j) - numin);   % normalisation
    end
end
nuimg2 = uint8(nuimg1);
subplot(2,1,1)
imshow(img1)
title('Original image')
subplot(2,1,2)
imshow(nuimg2)
title('Image after power transformation')
Fig. 4.3.18
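The power-law step plus the 0-255 normalisation from the listing above, in NumPy; γ = 0.5 is just an example value and the image is synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

gamma = 0.5            # example gamma correction factor
nuimg = img ** gamma   # s = c * r^gamma, with c = 1

# Normalise the result back to the full 0-255 range, as the MATLAB code does.
lo, hi = nuimg.min(), nuimg.max()
out = np.uint8(np.round(255 * (nuimg - lo) / (hi - lo)))
print(out.min(), out.max())  # → 0 255
```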
4.3.8 Spatial and Intensity Resolution
Spatial Resolution :
Spatial resolution is indicative of the number of samples that are present in the image. Hence spatial resolution depends on the sampling. Spatial resolution can be defined as the smallest discernible detail in an image.
Intensity Resolution :
Intensity resolution, which is also known as grey level resolution or tonal resolution, is indicative of the number of grey levels that are present in the image. Hence intensity resolution depends on the number of bits used to represent the image and can be defined as the smallest discernible change in grey level.
4.4 Solved Examples on Point
Processing
Ex. 4.4.1 : Obtain the digital negative of the following 8 bits per pixel (8 BPP) image.
| 121 | 205 | 217 | 156 | 151 |
| 139 | 127 | 157 | 117 | 125 |
| 252 | 117 | 236 | 138 | 142 |
| 227 | 182 | 178 | 197 | 242 |
| 201 | 106 | 119 | 251 | 240 |
Soln. :
It is known that it is an 8-bit image. Hence the number of grey levels that this image can hold is 2^8 = 256, i.e. L = 256. Hence the minimum grey level is 0, while the maximum grey level is 255.
g(x, y) = (L - 1) - f(x, y) = (256 - 1) - f(x, y)
g(x, y) = 255 - f(x, y)
We get the digital negative using the above equation :
| 134 | 50 | 38 | 99 | 104 |
| 116 | 128 | 98 | 138 | 130 |
| 3 | 138 | 19 | 117 | 113 |
| 28 | 73 | 77 | 58 | 13 |
| 54 | 149 | 136 | 4 | 15 |
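A worked negative like the one above can be verified in a few lines; the transformation is just s = 255 - r applied elementwise:

```python
import numpy as np

f = np.array([[121, 205, 217, 156, 151],
              [139, 127, 157, 117, 125],
              [252, 117, 236, 138, 142],
              [227, 182, 178, 197, 242],
              [201, 106, 119, 251, 240]])

g = 255 - f   # digital negative of an 8-bit image
print(g)
```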
Ex. 4.4.2 : What would happen to the dynamic range of an image if all the slopes in the contrast stretching algorithm (l, m, n) are less than 1 ? (Answer using an example.)
Soln. :
We know that contrast stretching generally increases the dynamic range. But in this case, since l, m and n are all less than 1, the dynamic range gets reduced.
Let us take an example. Let l = 0.2, m = 0.5, n = 0.2, and let the initial dynamic range of the original image be [0 - 10] with break points a = 4 and b = 7.
We draw this transformation :
s = l · r,            0 ≤ r < a
  = m · (r - a) + v,  a ≤ r < b
  = n · (r - b) + w,  b ≤ r ≤ 10
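The reduction can be checked numerically with the piecewise transformation; break points a = 4 and b = 7 are assumed for this sketch:

```python
def stretch(r, l=0.2, m=0.5, n=0.2, a=4, b=7):
    """Piecewise contrast-stretching transformation with slopes l, m, n."""
    v = l * a             # output value at the first break point
    w = m * (b - a) + v   # output value at the second break point
    if r < a:
        return l * r
    if r < b:
        return m * (r - a) + v
    return n * (r - b) + w

# Input range [0, 10]: with all slopes below 1 the output range shrinks.
print(stretch(0), stretch(10))  # the range collapses to roughly [0, 2.9]
```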