Image Processing

Chapter 1: Introduction to Digital Image Processing

1. The spatial coordinates of a digital image (x,y) are proportional to:

a) Position
b) Brightness
c) Contrast
d) Noise

2. Among the following image processing techniques, which is fast, precise and
flexible?
a) Optical
b) Digital
c) Electronic
d) Photographic

3. An image is considered to be a function a(x, y), where a represents:


a) Height of image
b) Width of image
c) Amplitude of image
d) Resolution of image

4. What is a pixel?
a) Pixel is the element of a digital image
b) Pixel is the element of an analog image
c) Pixel is the cluster of a digital image
d) Pixel is the cluster of an analog image
Explanation: An image is a collection of individual points referred to as pixels; thus a pixel
is the element of a digital image.

5. The range of values spanned by the gray scale is called:


a) Dynamic range
b) Band range
c) Peak range
d) Resolution range
6. Which is a colour attribute that describes a pure colour?
a) Saturation
b) Hue
c) Brightness
d) Intensity

7. Which gives a measure of the degree to which a pure colour is diluted by white light?
a) Saturation
b) Hue
c) Intensity
d) Brightness

8. Which means assigning meaning to a recognized object?


a) Interpretation
b) Recognition
c) Acquisition
d) Segmentation
Explanation: Interpretation is the process of assigning meaning to a recognized object.

9. A typical size comparable in quality to a monochrome TV image is:


a) 256 X 256
b) 512 X 512
c) 1920 X 1080
d) 1080 X 1080

10. The number of grey values is typically an integer power of:


a) 4
b) 2
c) 8
d) 1
Explanation: The number of gray values is an integer power of 2. In a monochromatic
(binary) image the number of values is 2.

Components of Image Processing System


1. While dealing with millions of images, why is image compression a vital standard
to be followed?
a) High quality images are not of any use, so we go for compression
b) The images are compressed to reduce data redundancy
c) Low quality images are preferred for practical use
d) Image compression is used to reduce the storage space

Explanation: An image of size 1024*1024 pixels, in which the intensity of each pixel is an 8-
bit quantity, requires one megabyte of storage, thus dealing with millions of images would
require a large storage space.
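The arithmetic behind this explanation can be checked with a short sketch (plain Python; the helper name is illustrative):

```python
def storage_bytes(M, N, k):
    """Bytes needed to store an M x N image with k bits per pixel."""
    return M * N * k // 8

# A 1024 x 1024 image with 8-bit pixels needs 2**20 bytes, i.e. exactly 1 MB.
print(storage_bytes(1024, 1024, 8) / 2**20)  # 1.0
```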

2. Which of the following is not required with reference to the light source of a
lighting system?
a) Sufficient light to provide a quality image
b) Available at low cost
c) The color of light should be pleasing to the eye
d) Provide spatial and temporal intensity to the sample
Explanation: The light source employed in a lighting system of an image processing system
should be cheap, should provide spatial and temporal intensity to the sample, and should
provide sufficient light for a quality image; the color of the light is immaterial in this
context.

3. Why is incident light illumination used in a lighting system?


a) Other sources of illumination cause introduction of noise
b) Incident light illumination causes light reflection on the lenses
c) Cane and Bagasse fiber is suitable light in color for scattered light illumination and
simple to implement in environmental conditions
d) The setup of incident light illumination is cheap
Explanation: Incident light illumination gives the best response in an image processing
system; it is mostly used for cane and bagasse fiber and is simple to implement.

4. With respect to a digital processing system, which 2 elements are required to
acquire digital images?
a) Physical device and digitizer
b) Digital device and digital camera
c) Digital camera and computer system
d) Transmitter and receiver
Explanation: A digital processing system requires a physical device, which should be
sensitive to the energy radiated by the object we wish to image and a digitizer which would
convert the output of the physical sensing device into a digital form.

5. What kind of physical device is required by a digital image processing system?


a) Sensitive to the noise in the external environment of the object we wish to image
b) Sensitive to the energy radiated by the object we wish to image
c) Sensitive to the pixel information of the object we wish to image
d) Sensitive to the light incident on the object we wish to image
Explanation: The physical device used in an image processing system should be sensitive to
the energy radiated by the object we wish to image: the greater the sensitivity, the greater
the quality of the image.

6. What is a digitizer in an image processing system?


a) A device which can convert an electric signal from the physical device into a digital form
b) A device which can convert the incident light on it device into a digital form
c) A device which can convert the output of a physical sensing device into a digital form
d) A device which can convert a digital signal to a continuous signal
Explanation: A digitizer is a device which converts the output of a physical sensing device
into a digital form. For example, in a digital video camera the electrical output from the
sensors is proportional to light intensity.

7. What is the function of ALU in a digital image processing system?


a) ALU is used in averaging images
b) ALU is used to calculate correlation between adjacent pixels
c) ALU is used to add 2 or more images
d) ALU is used for image correlation
Explanation: The ALU is used for averaging images as they are digitized, which is done to
reduce noise in the images at the very first step.

8. Which of the following provides the most efficient short-term storage in an image
processing system?
a) Cloud
b) Hard-disk
c) CD
d) Frame buffer
Explanation: Frame buffers can store images which can be accessed at a faster rate,
usually at video rates (30 complete images per second).

9. Transmission bandwidth plays a key role in image transmission via the internet to
remote sites. Which of the following is improving this situation to a large extent?
a) Wi-Fi
b) Li-Fi
c) Optical Fibers
d) Satellite Communication

Answer: c
Explanation: Communication with remote sites via the Internet is not always efficient. This
situation is improving quickly with the use of optical fiber and other broadband
technologies.

10. Which of the following is front-end-subsystem in an image processing system?


a) Physical device
b) Digitizer
c) ALU
d) Digitizer and the ALU

Answer: d
Explanation: Image processing system consists of a digitizer and the hardware that
performs other primitive operations such as arithmetic and logical operations (ALU). This is
called a front-end-subsystem.

11. Which of the following storage is used for frequent access in an image processing
system?
a) Archival storage
b) on-line storage
c) short-term storage
d) long-term storage

Answer: b
Explanation: On-line storage usually takes the form of magnetic disks and optical media
storage. The key factor in on-line storage is the frequent access to the stored data.

Steps in Image Processing


1. What is the first and foremost step in Image Processing?
a) Image restoration
b) Image enhancement
c) Image acquisition
d) Segmentation
Explanation: Image acquisition is the first process in image processing. Note that
acquisition could be as simple as being given an image that is already in digital form.
Generally, the image acquisition stage involves preprocessing, such as scaling.
2. In which step of processing, the images are subdivided successively into smaller
regions?
a) Image enhancement
b) Image acquisition
c) Segmentation
d) Wavelets
Explanation: Wavelets are the foundation for representing images in various degrees of
resolution. Wavelets are particularly used for image data compression and for pyramidal
representation, in which images are subdivided successively into smaller regions.

3. What is the next step in image processing after compression?


a) Wavelets
b) Segmentation
c) Representation and description
d) Morphological processing
Explanation: Steps in image processing:
Image acquisition-> Image enhancement-> Image restoration-> Color image processing->
Wavelets and multi resolution processing-> Compression-> Morphological processing->
Segmentation-> Representation & description-> Object recognition.

4. What is the step that is performed before color image processing in image
processing?
a) Wavelets and multi resolution processing
b) Image enhancement
c) Image restoration
d) Image acquisition
Explanation: Steps in image processing:
Image acquisition-> Image enhancement-> Image restoration-> Color image processing->
Wavelets and multi resolution processing-> Compression-> Morphological processing->
Segmentation-> Representation & description-> Object recognition.

5. How many steps are involved in image processing?


a) 10
b) 9
c) 11
d) 12

Answer: a
Explanation: Steps in image processing:
Image acquisition-> Image enhancement-> Image restoration-> Color image processing->
Wavelets and multi resolution processing-> Compression-> Morphological processing->
Segmentation-> Representation & description-> Object recognition.

6. What is the expanded form of JPEG?


a) Joint Photographic Expansion Group
b) Joint Photographic Experts Group
c) Joint Photographs Expansion Group
d) Joint Photographic Expanded Group
Explanation: Image compression is familiar (perhaps inadvertently) to most users of
computers in the form of image file extensions, such as the jpg file extension used in the
JPEG (Joint Photographic Experts Group) image compression standard.

7. Which of the following steps deals with tools for extracting image components
that are useful in the representation and description of shape?
a) Segmentation
b) Representation & description
c) Compression
d) Morphological processing

Answer: d
Explanation: Morphological processing deals with tools for extracting image components
that are useful in the representation and description of shape. The material in this chapter
begins a transition from processes that output images to processes that output image
attributes.

8. In which step of the processing, assigning a label (e.g., “vehicle”) to an object based
on its descriptors is done?
a) Object recognition
b) Morphological processing
c) Segmentation
d) Representation & description
Explanation: Recognition is the process that assigns a label (e.g., “vehicle”) to an object
based on its descriptors. We conclude our coverage of digital image processing with the
development of methods for recognition of individual objects.

9. What role does the segmentation play in image processing?


a) Deals with extracting attributes that result in some quantitative information of interest
b) Deals with techniques for reducing the storage required to save an image, or the
bandwidth required to transmit it
c) Deals with partitioning an image into its constituent parts or objects
d) Deals with the property in which images are subdivided successively into smaller regions

Explanation: Segmentation procedures partition an image into its constituent parts or
objects. In general, autonomous segmentation is one of the most difficult tasks in digital
image processing. A rugged segmentation procedure brings the process a long way toward
successful solution of imaging problems that require objects to be identified individually.

10. What is the correct sequence of steps in image processing?


a) Image acquisition -> Image enhancement -> Image restoration -> Color image processing
-> Compression -> Wavelets and multi resolution processing -> Morphological processing
-> Segmentation -> Representation & description -> Object recognition
b) Image acquisition -> Image enhancement -> Image restoration -> Color image processing
-> Wavelets and multi resolution processing -> Compression -> Morphological processing
-> Segmentation -> Representation & description -> Object recognition
c) Image acquisition -> Image enhancement -> Color image processing -> Image restoration
-> Wavelets and multi resolution processing -> Compression -> Morphological processing
-> Segmentation -> Representation & description -> Object recognition
d) Image acquisition -> Image enhancement -> Image restoration -> Color image processing
-> Wavelets and multi resolution processing -> Compression -> Morphological processing
-> Representation & description -> Segmentation -> Object recognition
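The sequence repeated in the explanations above can be encoded as an ordered list, which also answers questions 3 and 5 of this section (a Python sketch; STEPS and next_step are illustrative names):

```python
# The processing sequence as stated in the answer key above.
STEPS = [
    "Image acquisition", "Image enhancement", "Image restoration",
    "Color image processing", "Wavelets and multi resolution processing",
    "Compression", "Morphological processing", "Segmentation",
    "Representation & description", "Object recognition",
]

def next_step(step):
    """Return the stage that follows `step` in the sequence."""
    return STEPS[STEPS.index(step) + 1]

print(len(STEPS))                # 10 steps (question 5)
print(next_step("Compression"))  # Morphological processing (question 3)
```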

Chapter 2: Digital Image Fundamentals


Basics Of Image Sampling & Quantization
1. To convert a continuous sensed data into Digital form, which of the following is
required?
a) Sampling
b) Quantization
c) Both Sampling and Quantization
d) Neither Sampling nor Quantization

Answer: c
Explanation: The output of most sensors is a continuous waveform, and the
amplitude and spatial behavior of such a waveform are related to the physical
phenomenon being sensed.
2. To convert a continuous image f(x, y) to digital form, we have to sample the
function in __________
a) Coordinates
b) Amplitude
c) All of the mentioned
d) None of the mentioned
Explanation: An image may be continuous in the x- and y-coordinates or in amplitude, or in
both.

3. For a continuous image f(x, y), how is Sampling defined?


a) Digitizing the coordinate values
b) Digitizing the amplitude values
c) All of the mentioned
d) None of the mentioned
Explanation: Sampling is the method of digitizing the coordinate values of the image.

4. For a continuous image f(x, y), Quantization is defined as


a) Digitizing the coordinate values
b) Digitizing the amplitude values
c) All of the mentioned
d) None of the mentioned
Explanation: Quantization is the method of digitizing the amplitude values of the image.

5. Validate the statement:


“For a given image in one-dimension given by function f(x, y), to sample the function we
take equally spaced samples, superimposed on the function, along a horizontal line.
However, the sample values still span (vertically) a continuous range of gray-level values.
So, to convert the given function into a digital function, the gray-level values must be
divided into various discrete levels.”
a) True
b) False
Explanation: Digital function requires both sampling and quantization of the one-
dimensional image function.

6. How is sampling done when an image is generated by a single sensing element
combined with mechanical motion?
a) The number of sensors in the strip defines the sampling limitations in one direction and
Mechanical motion in the other direction.
b) The number of sensors in the sensing array establishes the limits of sampling in both
directions.
c) The number of mechanical increments when the sensor is activated to collect data.
d) None of the mentioned.
Explanation: When an image is generated by a single sensing element along with
mechanical motion, the output data is quantized by dividing the gray-level scale into many
discrete levels, while sampling is done by selecting the number of individual mechanical
increments at which the sensor is activated to collect data.

7. How is sampling accomplished when a sensing strip is used for image
acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image
direction and Mechanical motion in the other direction
b) The number of sensors in the sensing array establishes the limits of sampling in both
directions
c) The number of mechanical increments when the sensor is activated to collect data
d) None of the mentioned
Explanation: When a sensing strip is used the number of sensors in the strip defines the
sampling limitations in one direction and mechanical motion in the other direction.

8. How is sampling accomplished when a sensing array is used for image acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image
direction and Mechanical motion in the other direction
b) The number of sensors in the sensing array defines the limits of sampling in both
directions
c) The number of mechanical increments at which we activate the sensor to collect data
d) None of the mentioned
Explanation: When we use sensing array for image acquisition, there is no motion and so,
only the number of sensors in the array defines the limits of sampling in both directions
and the output of the sensor is quantized by dividing the gray-level scale into many
discrete levels.

9. The quality of a digital image is well determined by ___________


a) The number of samples
b) The discrete gray levels
c) All of the mentioned
d) None of the mentioned
Explanation: The quality of a digital image is determined mostly by the number of samples
and discrete gray levels used in sampling and quantization.
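The two operations covered in this section can be sketched minimally, assuming a hypothetical 1-D continuous "image" f with amplitudes in [0, 1):

```python
import math

def sample(f, n, x_max):
    """Sampling: evaluate f at n equally spaced coordinates in [0, x_max)."""
    return [f(i * x_max / n) for i in range(n)]

def quantize(values, L):
    """Quantization: map each amplitude in [0, 1] to one of L discrete levels."""
    return [min(int(v * L), L - 1) for v in values]

# Illustrative 1-D "image": a sine wave shifted into [0, 1].
samples = sample(lambda x: (math.sin(x) + 1) / 2, 8, 2 * math.pi)
digital = quantize(samples, 16)   # gray levels in [0, L-1] = [0, 15]
print(all(0 <= g <= 15 for g in digital))  # True
```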

Representing Digital Images

1. Assume that an image f(x, y) is sampled so that the result has M rows and N
columns. If the values of the coordinates at the origin are (x, y) = (0, 0), then the
notation (0, 1) is used to signify :
a) Second sample along first row
b) First sample along second row
c) First sample along first row
d) Second sample along second row

Explanation: The values of the coordinates at the origin are (x, y) = (0, 0). Then, the next
coordinate values (second sample) along the first row of the image are represented as (x, y)
= (0, 1).

2. The resulting image of sampling and quantization is considered a matrix of real
numbers. By what name(s) is an element of this matrix array called __________
a) Image element or Picture element
b) Pixel or Pel
c) All of the mentioned
d) None of the mentioned
Explanation: Sampling and Quantization of an image f(x, y) forms a matrix of real numbers
and each element of this matrix array is commonly known as Image element or Picture
element or Pixel or Pel.

3. Let Z be the set of real integers and R the set of real numbers. The sampling
process may be viewed as partitioning the x-y plane into a grid, with the central
coordinates of each grid being from the Cartesian product Z², that is, the set of all
ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is said to be a
digital image if:
a) (x, y) are integers from Z² and f is a function that assigns a gray-level value (from Z) to
each distinct pair of coordinates (x, y)
b) (x, y) are integers from R² and f is a function that assigns a gray-level value (from R) to
each distinct pair of coordinates (x, y)
c) (x, y) are integers from R² and f is a function that assigns a gray-level value (from Z) to
each distinct pair of coordinates (x, y)
d) (x, y) are integers from Z² and f is a function that assigns a gray-level value (from R) to
each distinct pair of coordinates (x, y)
Explanation: In the given condition, f(x, y) is a digital image if (x, y) are integers from Z² and
f is a function that assigns a gray-level value (that is, a real number from the set R) to each
distinct coordinate pair (x, y).

4. Let Z be the set of real integers and R the set of real numbers. The sampling
process may be viewed as partitioning the x-y plane into a grid, with the central
coordinates of each grid being from the Cartesian product Z², that is, the set of all
ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is a digital
image if (x, y) are integers from Z² and f is a function that assigns a gray-level value
(that is, a real number from the set R) to each distinct coordinate pair (x, y). What
happens to the digital image if the gray levels also are integers?
a) The Digital image then becomes a 2-D function whose coordinates and amplitude
values are integers
b) The Digital image then becomes a 1-D function whose coordinates and amplitude values
are integers
c) The gray level can never be integer
d) None of the mentioned
Explanation: In the quantization process, if the gray levels also are integers, the digital
image becomes a 2-D function whose coordinates and amplitude values are integers.

5. The digitization process i.e. the digital image has M rows and N columns, requires
decisions about values for M, N, and for the number, L, of gray levels allowed for
each pixel. The value M and N have to be:
a) M and N have to be positive integer
b) M and N have to be negative integer
c) M have to be negative and N have to be positive integer
d) M have to be positive and N have to be negative integer
Explanation: The digitization process, i.e. producing a digital image with M rows and N
columns, requires decisions about the values for M, N, and for the number, L, of gray levels.
There are no requirements on M and N, other than that M and N have to be positive integers.
6. The digitization process i.e. the digital image has M rows and N columns, requires
decisions about values for M, N, and for the number, L, of max gray levels. There are
no requirements on M and N, other than that M and N have to be positive integer.
However, the number of gray levels typically is
a) An integer power of 2, i.e. L = 2^k
b) A real power of 2, i.e. L = 2^k
c) Two times the integer value, i.e. L = 2k
d) None of the mentioned
Explanation: Due to processing, storage, and sampling hardware considerations, the
number of gray levels typically is an integer power of 2, i.e. L = 2^k.

7. The digitization process, i.e. the digital image with M rows and N columns, requires
decisions about values for M, N, and for the number, L, of max gray levels, an
integer power of 2, i.e. L = 2^k, allowed for each pixel. If we assume that the discrete
levels are equally spaced and that they are integers, then they lie in the interval
__________, and sometimes the range of values spanned by the gray scale is called the
________ of an image.
a) [0, L – 1] and static range respectively
b) [0, L / 2] and dynamic range respectively
c) [0, L / 2] and static range respectively
d) [0, L – 1] and dynamic range respectively
Explanation: In the digitization process M and N have to be positive, and the number, L, of
discrete gray levels is typically an integer power of 2 for each pixel. If we assume that the
discrete levels are equally spaced and that they are integers, then they lie in the interval
[0, L-1], and the range of values spanned by the gray scale is called the dynamic range of
an image.

8. After the digitization process, a digital image has M rows and N columns (both
positive) and L max gray levels, an integer power of 2, for each pixel. Then, the
number b, of bits required to store the digitized image is:
a) b=M*N*k
b) b=M*N*L
c) b=M*L*k
d) b=L*N*k
Explanation: In a digital image of M rows, N columns and L = 2^k max gray levels for each
pixel, the number, b, of bits required to store a digitized image is:
b = M * N * k.
9. An image whose gray-levels span a significant portion of gray scale have __________
dynamic range while an image with dull, washed out gray look have __________
dynamic range.
a) Low and High respectively
b) High and Low respectively
c) Both have High dynamic range, irrespective of gray levels span significance on gray scale
d) Both have Low dynamic range, irrespective of gray levels span significance on gray scale

Explanation: An image whose gray levels span a large portion of the gray scale has a high
dynamic range, while one with a dull, washed-out gray look has a low dynamic range.

10. Validate the statement “When in an Image an appreciable number of pixels


exhibit high dynamic range, the image will have high contrast.”
a) True
b) False
Explanation: In an Image if an appreciable number of pixels exhibit high dynamic range
property, the image will have high contrast.

11. In digital image of M rows and N columns and L discrete gray levels, calculate the
bits required to store a digitized image for M=N=32 and L=16.
a) 16384
b) 4096
c) 8192
d) 512
Explanation: In a digital image of M rows, N columns and L = 2^k max gray levels for each
pixel, the number, b, of bits required to store a digitized image is b = M * N * k.
For L = 16, k = 4,
i.e. b = 32 * 32 * 4 = 4096.
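The calculation can be verified in a few lines of Python (bits_required is an illustrative helper; L is assumed to be a power of 2):

```python
def bits_required(M, N, L):
    """Bits to store an M x N image with L = 2**k gray levels: b = M * N * k."""
    k = L.bit_length() - 1   # recover k from L, assuming L is a power of 2
    return M * N * k

print(bits_required(32, 32, 16))  # 4096
```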

Image Sampling and Quantization

1. A continuous image is digitised at _______ points.


a) random
b) vertex
c) contour
d) sampling

Answer: d
Explanation: The sampling points are ordered in the plane and their relation is called a
Grid.

2. The transition between continuous values of the image function and its digital equivalent
is called ______________
a) Quantisation
b) Sampling
c) Rasterisation
d) None of the Mentioned

Answer: a
Explanation: The transition between continuous values of the image function and its
digital equivalent is called Quantisation.

3. Images quantised with insufficient brightness levels will lead to the occurrence of
____________
a) Pixillation
b) Blurring
c) False Contours
d) None of the Mentioned

Answer: c
Explanation: This effect arises when the number of brightness levels is lower than what the
human eye can distinguish.

4. The smallest discernible change in intensity level is called ____________


a) Intensity Resolution
b) Contour
c) Saturation
d) Contrast
Answer: a
Explanation: Number of bits used to quantise intensity of an image is called intensity
resolution.

5. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a) Sampling
b) Interpolation
c) Filters
d) None of the Mentioned

Answer: b
Explanation: Interpolation is the basic tool used for zooming, shrinking, rotating, etc.

6. The type of Interpolation where for each new location the intensity of the immediate
pixel is assigned is ___________
a) bicubic interpolation
b) cubic interpolation
c) bilinear interpolation
d) nearest neighbour interpolation

Answer: d
Explanation: It is called Nearest Neighbour Interpolation since each new location
is assigned the intensity of its nearest neighbouring pixel.

7. The type of Interpolation where the intensity of the FOUR neighbouring pixels is used to
obtain the intensity at a new location is called ___________
a) cubic interpolation
b) nearest neighbour interpolation
c) bilinear interpolation
d) bicubic interpolation

Answer: c
Explanation: Bilinear interpolation is where the FOUR neighbouring pixels are used to
estimate the intensity at a new location.
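The two schemes from questions 6 and 7 can be sketched for a toy 2x2 gray-level image (pure Python; the values are illustrative):

```python
# A tiny gray-level image stored as a list of rows (illustrative values).
img = [[10, 20],
       [30, 40]]

def nearest(img, x, y):
    """Nearest neighbour: assign the intensity of the closest pixel."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Bilinear: weight the FOUR surrounding pixels by distance to (x, y)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0][x0]         * (1 - dx) * (1 - dy) +
            img[y0][x0 + 1]     * dx       * (1 - dy) +
            img[y0 + 1][x0]     * (1 - dx) * dy +
            img[y0 + 1][x0 + 1] * dx       * dy)

print(nearest(img, 0.4, 0.4))   # 10 (closest to the top-left pixel)
print(bilinear(img, 0.5, 0.5))  # 25.0 (equal weight on all four pixels)
```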
8. Dynamic range of imaging system is a ratio where the upper limit is determined by
a) Saturation
b) Noise
c) Brightness
d) Contrast

Answer: a
Explanation: Saturation is taken as the Numerator.

9. For Dynamic range ratio the lower limit is determined by


a) Saturation
b) Brightness
c) Noise
d) Contrast

Answer: c
Explanation: Noise is taken as the Denominator.

10. Quantitatively, spatial resolution cannot be represented in which of the following ways
a) line pairs
b) pixels
c) dots
d) none of the Mentioned

Answer: d
Explanation: All the options can be used to represent spatial resolution.

Image Sensing and Acquisition

1. The most familiar single sensor used for Image Acquisition is


a) Microdensitometer
b) Photodiode
c) CMOS
d) None of the Mentioned
Answer: b
Explanation: Photodiode is the most commonly used single sensor made up of silicon
materials.

2. A geometry consisting of in-line arrangement of sensors for image acquisition


a) A photodiode
b) Sensor strips
c) Sensor arrays
d) CMOS

Answer: b
Explanation: Sensor strips are the most common arrangement after the single sensor and
use an in-line arrangement.

3. CAT in imaging stands for


a) Computer Aided Telegraphy
b) Computer Aided Tomography
c) Computerised Axial Telegraphy
d) Computerised Axial Tomography

Answer: d
Explanation: Industrial Computerised Axial Tomography is based on image acquisition
using sensor strips.

4. The section of the real plane spanned by the coordinates of an image is called the
_____________
a) Spatial Domain
b) Coordinate Axes
c) Plane of Symmetry
d) None of the Mentioned

Answer: a
Explanation: The section of the real plane spanned by the coordinates of an image is
called the Spatial Domain, with the x and y coordinates referred to as spatial
coordinates.
5. The difference in intensity between the highest and the lowest intensity levels in an
image is ___________
a) Noise
b) Saturation
c) Contrast
d) Brightness

Answer: c
Explanation: Contrast is the measure of the difference in intensity between the highest and
the lowest intensity levels in an image.

6. _____________ is the effect caused by the use of an insufficient number of intensity levels
in smooth areas of a digital image.
a) Gaussian smooth
b) Contouring
c) False Contouring
d) Interpolation

Answer: c
Explanation: It is called so because the ridges resemble the contours of a map.

7. The process of using known data to estimate values at unknown locations is called
a) Acquisition
b) Interpolation
c) Pixelation
d) None of the Mentioned

Answer: b
Explanation: Interpolation is the process used to estimate unknown locations. It is
applied in all image resampling methods.

8. Which of the following is NOT an application of Image Multiplication?


a) Shading Correction
b) Masking
c) Pixelation
d) Region of Interest operations
Answer: c
Explanation: Because Pixelation deals with enlargement of pixels.

9. The procedure done on a digital image to alter the values of its individual pixels is
a) Neighbourhood Operations
b) Image Registration
c) Geometric Spatial Transformation
d) Single Pixel Operation

Answer: d
Explanation: It is expressed as a transformation function T of the form s = T(z), where z is
the intensity.
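A minimal sketch of such a single pixel operation, using the image negative s = T(z) = L - 1 - z as an illustrative choice of T:

```python
# Single pixel operation s = T(z): the same function T is applied to every
# pixel intensity z independently. Image negative with L = 256 levels.
L = 256

def T(z):
    return L - 1 - z     # negative transformation

img = [[0, 100], [200, 255]]           # toy gray-level image
out = [[T(z) for z in row] for row in img]
print(out)  # [[255, 155], [55, 0]]
```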

10. In Geometric Spatial Transformation, the points whose locations are known precisely
in the input and reference images are called:
a) Tie points
b) Réseau points
c) Known points
d) Key-points

Answer: a
Explanation: Tie points, also called Control points are points whose locations are known
precisely in input and reference images.

Light and the Electromagnetic Spectrum

1. Of the following, _________ has the maximum frequency.


a) UV Rays
b) Gamma Rays
c) Microwaves
d) Radio Waves
Answer: b
Explanation: Gamma Rays come first in the electromagnetic spectrum sorted in the
decreasing order of frequency.
2. In the Visible spectrum the ______ colour has the maximum wavelength.
a) Violet
b) Blue
c) Red
d) Yellow

Answer: c
Explanation: Red is towards the right in the electromagnetic spectrum sorted in the
increasing order of wavelength.

3. Wavelength and frequency are related as : (c = speed of light)


a) c = wavelength / frequency
b) frequency = wavelength / c
c) wavelength = c * frequency
d) c = wavelength * frequency
Answer: d
Explanation: It is usually written as wavelength = c / frequency.
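A quick numeric check of the relation (Python; the frequency used for red light is an approximate illustrative value):

```python
# c = wavelength * frequency, hence wavelength = c / frequency.
c = 3.0e8  # speed of light in m/s (approximate)

def wavelength(frequency):
    return c / frequency

# Red light near 4.3e14 Hz has a wavelength of roughly 700 nm.
print(round(wavelength(4.3e14) * 1e9))  # 698
```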

4. Electromagnetic waves can be visualised as a


a) sine wave
b) cosine wave
c) tangential wave
d) None of the mentioned
Answer: a
Explanation: Electromagnetic waves are visualised as sinusoidal waves.

5. How is radiance measured?


a) lumens
b) watts
c) armstrong
d) hertz
Answer: b
Explanation: Radiance is the total amount of energy that flows from the light source and is
measured in Watts.

6. Which of the following is used for chest and dental scans?


a) Hard X-Rays
b) Soft X-Rays
c) Radio waves
d) Infrared Rays
Answer: b
Explanation: Soft X-Rays (low energy) are used for dental and chest scans.

7. Which of the following is impractical to measure?


a) Frequency
b) Radiance
c) Luminance
d) Brightness
Answer: d
Explanation: Brightness is subjective descriptor of light perception that is impossible to
measure.

8. Massless particle containing a certain amount of energy is called


a) Photon
b) Shell
c) Electron
d) None of the mentioned
Answer: a
Explanation: Each bundle of massless energy is called a Photon.

9. What do you mean by achromatic light?


a) Chromatic light
b) Monochromatic light
c) Infrared light
d) Invisible light
Answer: b
Explanation: Achromatic light is also called monochromatic light.(Light void of color)

10. Which of the following embodies the achromatic notion of intensity?


a) Luminance
b) Brightness
c) Frequency
d) Radiance
Answer: b
Explanation: Brightness embodies the achromatic notion of intensity and is a key factor in
describing color sensation.

Mathematical Tools in Digital Image Processing

1. How is array operation carried out involving one or more images?


a) array by array
b) pixel by pixel
c) column by column
d) row by row
Answer: b
Explanation: Any array operation is carried out on a pixel by pixel basis.

2. The property indicating that the output of a linear operation due to the sum of two
inputs is same as performing the operation on the inputs individually and then summing
the results is called ___________
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned

Answer: a
Explanation: This property is called additivity.

3. The property indicating that the output of a linear operation to a constant times as input
is the same as the output of operation due to original input multiplied by that constant is
called _________
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
Answer: c
Explanation: This property is called homogeneity.

4. Enhancement of differences between images is based on the principle of ____________


a) Additivity
b) Homogeneity
c) Subtraction
d) None of the Mentioned
Answer: c
Explanation: A frequent application of image subtraction is in the enhancement of
differences between images.

5. A commercial use of Image Subtraction is ___________


a) Mask mode radiography
b) MRI scan
c) CT scan
d) None of the Mentioned
Answer: a
Explanation: Mask mode radiography is an important medical imaging area based on
Image Subtraction.

6. Region of Interest (ROI) operations is commonly called as ___________


a) Shading correction
b) Masking
c) Dilation
d) None of the Mentioned
Answer: b
Explanation: A common use of image multiplication is Masking, also called ROI operation.
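The two array operations above (image subtraction for difference enhancement, and multiplication by a binary mask for an ROI operation) can be sketched with NumPy; the 2×2 arrays here are made-up examples, not real image data:

```python
# Minimal NumPy sketch of image subtraction and ROI masking.
import numpy as np

mask_img = np.array([[10, 10],
                     [10, 10]], dtype=np.int16)   # e.g. the reference image
live_img = np.array([[10, 40],
                     [10, 10]], dtype=np.int16)   # e.g. the live image

# Image subtraction: nonzero only where the two images differ.
difference = live_img - mask_img

# ROI operation: multiply by a binary mask to keep a region of interest.
roi_mask = np.array([[0, 1],
                     [0, 0]], dtype=np.int16)
roi = live_img * roi_mask          # zero outside the ROI
```

This is the pixel-by-pixel nature of array operations in miniature: each output pixel depends only on the corresponding input pixels.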

7. If every element of a set A is also an element of a set B, then A is said to be a _________ of


set B.
a) Disjoint set
b) Union
c) Subset
d) Complement set
Answer: c
Explanation: A is called the subset of B.

8. Consider two regions A and B composed of foreground pixels. The ________ of these two
sets is the set of elements belonging to set A or set B or both.
a) OR
b) AND
c) NOT
d) XOR
Answer: a
Explanation: This is called an OR operation.

9. Imaging systems having physical artefacts embedded in the imaging sensors produce a
set of points called __________
a) Tie Points
b) Control Points
c) Reseau Marks
d) None of the Mentioned
Answer: c
Explanation: These points are called “known” points or “Reseau marks”.

10. Image processing approaches operating directly on pixels of input image work directly
in ____________
a) Transform domain
b) Spatial domain
c) Inverse transformation
d) None of the Mentioned
Answer: b
Explanation: Operations directly on pixels of input image work directly in Spatial Domain.

Basic Relationships between Pixels

1. A pixel p (x, y) has two vertical neighbors and two horizontal neighbors. The neighbors of
(x, y) are _____________

a) (x+1, y+1), (x-1, y-1), (x+1, y-1), (x-1, y+1)


b) (x+1, y), (x-1, y+1), (x-1, y-1), (x, y-1)
c) (x, y), (x-1, y-1), (x+1, y+1), (x+1, y-1)
d) (x+1, y), (x-1, y), (x, y+1), (x, y-1)
Answer: d
Explanation: p has 2 vertical and 2 horizontal neighbors. Keeping the center at (x, y), each neighbor's coordinates are obtained by changing the abscissa or the ordinate by 1: the vertical neighbors are (x, y+1) and (x, y-1), and the horizontal neighbors are (x+1, y) and (x-1, y).
2. The 4 neighbors of pixel p are denoted by N4(P). Each of them are at what distance?
a) 0.5 units from P
b) 0.707 from P
c) unit distance from P
d) 1.414 units from P

Answer: c
Explanation: The four neighbors of P are denoted by N4(P): {(x+1, y), (x-1, y), (x, y+1), (x, y-1)}, so each pixel is at unit distance from P. This follows from the distance formula for two points (x1, y1) and (x2, y2): d = √[(x2-x1)² + (y2-y1)²].

3. A pixel p (x, y) has 4 diagonal neighbors. The diagonal neighbors of (x, y) are _____________
a) (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
b) (x+1, y), (x-1, y+1), (x-1, y-1), (x, y-1)
c) (x, y), (x-1, y-1), (x+1, y+1), (x+1, y-1)
d) (x+1, y), (x-1, y), (x, y+1), (x, y-1)
Answer: a
Explanation: Since p has 4 diagonal neighbors,(considering a diamond shape with (x,y) as
center, there would be 4 diagonals neighbors on the 4 sides of the diamond) each of x and
y co-ordinates will change by 1 thus ND(P) is given by: (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-
1).

4. The 4 diagonal neighbors of pixel p are denoted by ND(P). Each of them are at what
distance?
a) 0.5 units from P
b) 0.707 from P
c) unit distance from P
d) 1.414 units from P
Answer: d
Explanation: The four diagonal neighbors of P are denoted by ND(P): {(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)}, so each of them is at a distance √2 from P [√(1²+1²) = √2 ≈ 1.414 units], by the distance formula for two points (x1, y1) and (x2, y2): d = √[(x2-x1)² + (y2-y1)²].
5. The union of 2 regions which form a connected set are called _____________
a) Connected
b) Joined
c) Disjoint
d) Adjacent
Answer: d
Explanation: The regions are said to be adjacent if their union forms a connected set. In other words, two pixels a and b are connected if there is a path from a to b on which every pixel is 4-connected to the next pixel. A set of pixels in an image that are all connected to each other is called a connected component.

6. In a binary image, two pixels are connected if they are 4-neighbors and have same value
0 or 1. State whether the statement is true or false.
a) True
b) False
Answer: a
Explanation: Condition for 2 pixels of a binary image to be connected: They should be 4-
neighbors and have same value either 0 or 1 and there should be a connected path
between them.

7. For the diagram below, select the best option.

a) Region R1 and R2 are adjacent


b) Region R1 and R2 are connected
c) Region R1 and R2 are disjoint
d) Region R1 and R2 are joined
Answer: a
Explanation: Two image subsets S1 and S2 are adjacent if some pixels in region S1 are adjacent to some pixels in region S2. In the diagram, the regions close to R1 and R2 form a boundary, and the set of pixels there shows that R1 is adjacent to R2.

8. The subset of pixels is given by s. For the pixels p and q to be connected, which of the
following must be satisfied?
a) There exists a path between p and q, which lies outside of the subset s
b) The pixels are 4-adjacent
c) There exists a path between p and q, which lies inside of the subset s
d) The pixels are 8-adjacent
Answer: c
Explanation: Pixels p and q are said to be connected if there exists a path between p and q that lies entirely inside the subset s, i.e., a path from p to q on which every pixel is 4-connected to the next pixel.

9. Which of the following is not done using neighborhood processing?


a) Smoothing and averaging
b) Noise removal and filtering
c) Image encryption and decryption
d) Edge detection and contrast enhancement
Answer: c
Explanation: An image can be modified by applying a particular function to each pixel
value. Neighbourhood processing is an extension of this, where a function is applied to a
neighbourhood of each pixel. Smoothing and averaging, Noise removal and filtering, Edge
detection and contrast enhancement are techniques done using neighborhood processing.

10. Which of the following is the correct distance measure?


a) D8(p, q) = max [(x-s)² + (y-t)²]
b) D4(p, q) = [(x-s)² + (y-t)²]
c) De (p, q) = |x-s| + |y-t|
d) D8(p, q) = max (|x-s|, |y-t|)
Answer: d
Explanation: For p at (x, y) and q at (s, t), the distance measures are:
Euclidean distance:
De(p, q) = √[(x-s)² + (y-t)²]
City-block distance:
D4(p, q) = |x-s| + |y-t|
Chessboard distance:
D8(p, q) = max (|x-s|, |y-t|)
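The three distance measures can be written directly from their definitions; a sketch where `p` and `q` are (x, y) coordinate tuples:

```python
# Distance measures between pixels p = (x, y) and q = (s, t).
import math

def euclidean(p, q):        # De: straight-line distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def city_block(p, q):       # D4: only horizontal/vertical moves count
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):       # D8: a diagonal move also counts as one step
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

For a diagonal neighbour of p, the three measures give √2, 2, and 1 respectively, matching the N4/ND distances discussed earlier.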

Chapter 3 : Intensity Transformation and Spatial Filtering
Smoothing Spatial Filters
1. Noise reduction is obtained by blurring the image using a smoothing filter.
a) True
b) False

Answer: a
Explanation: Noise reduction is obtained by blurring the image using a smoothing filter. Blurring is used in pre-processing steps, such as removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.

2. What is the output of a smoothing, linear spatial filter?


a) Median of pixels
b) Maximum of pixels
c) Minimum of pixels
d) Average of pixels

Answer: d
Explanation: The output or response of a smoothing, linear spatial filter is simply the
average of the pixels contained in the neighbourhood of the filter mask.

3. Smoothing linear filter is also known as median filter.


a) True
b) False

Answer: b
Explanation: Since the smoothing spatial filter performs the average of the pixels, it is
also called as averaging filter.

4. Which of the following in an image can be removed by using smoothing filter?


a) Smooth transitions of gray levels
b) Smooth transitions of brightness levels
c) Sharp transitions of gray levels
d) Sharp transitions of brightness levels

Answer: c
Explanation: Smoothing filter replaces the value of every pixel in an image by the
average value of the gray levels. So, this helps in removing the sharp transitions in the
gray levels between the pixels. This is done because, random noise typically consists of
sharp transitions in gray levels.

5. Which of the following is the disadvantage of using smoothing filter?


a) Blur edges
b) Blur inner pixels
c) Remove sharp transitions
d) Sharp edges

Answer: a

Explanation: Edges, which almost always are desirable features of an image, also are
characterized by sharp transitions in gray level. So, averaging filters have an
undesirable side effect that they blur these edges.

6. Smoothing spatial filters doesn’t smooth the false contours.

a) True

b) False

Answer: b
Explanation: One application of smoothing spatial filters is that they help in smoothing the false contours that result from using an insufficient number of gray levels.

7. The mask shown in the figure below belongs to which type of filter?

a) Sharpening spatial filter


b) Median filter
c) Sharpening frequency filter
d) Smoothing spatial filter

Answer: d
Explanation: This is a smoothing spatial filter. The mask yields a so-called weighted average, meaning that different pixels are multiplied by different coefficient values, giving more importance to some pixels at the expense of others.

8. The mask shown in the figure below belongs to which type of filter?

a) Sharpening spatial filter


b) Median filter
c) Smoothing spatial filter
d) Sharpening frequency filter

Answer: c
Explanation: The mask shown in the figure represents a 3×3 smoothing filter. Use of
this filter yields the standard average of the pixels under the mask.
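A minimal sketch of the 3×3 averaging (box-style) filter described above; for brevity, border pixels are left unchanged, and no image library is assumed:

```python
import numpy as np

def box_filter_3x3(img):
    """Replace each interior pixel by the plain average of its 3x3
    neighbourhood (standard averaging); borders are left unchanged."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out
```

A sharp transition (a single bright pixel in a dark field) is spread out and reduced by the averaging, while constant areas are left untouched, which is exactly the smoothing/blurring behaviour discussed above.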

9. Box filter is a type of smoothing filter.


a) True
b) False

Answer: a
Explanation: A spatial averaging (smoothing) filter in which all the coefficients are equal is also called a box filter.

10. If the size of the averaging filter used to smooth the original image to first
image is 9, then what would be the size of the averaging filter used in smoothing
the same original picture to second in second image?
a) 3
b) 5
c) 9
d) 15

Answer: d
Explanation: As the size of the averaging filter used to smooth the original image increases, the blurring of the image increases. Since the second image is more blurred than the first image, the window size should be more than 9.

11. Which of the following comes under the application of image blurring?

a) Object detection

b) Gross representation

c) Object motion

d) Image segmentation

Answer: b
Explanation: An important application of spatial averaging is to blur an image for the
purpose of getting a gross representation of interested objects, such that the intensity of
the small objects blends with the background and large objects become easy to detect.

12. Which of the following filters response is based on ranking of pixels?


a) Nonlinear smoothing filters
b) Linear smoothing filters
c) Sharpening filters
d) Geometric mean filter

Answer: a
Explanation: Order static filters are nonlinear smoothing spatial filters whose response is
based on the ordering or ranking the pixels contained in the image area encompassed by
the filter, and then replacing the value of the central pixel with the value determined by the
ranking result.

13. Median filter belongs to which category of filters?

a) Linear spatial filter

b) Frequency domain filter


c) Order-statistic filter

d) Sharpening filter

Answer: c
Explanation: The median filter belongs to the order-statistic filters, which, as the name implies, replace the value of a pixel by the median of the gray levels present in the neighbourhood of that pixel.

14. Median filters are effective in the presence of impulse noise.


a) True
b) False

Answer: a
Explanation: Median filters are used to remove impulse noise, also called salt-and-pepper noise because of its appearance as white and black dots in the image.

15. What is the maximum area of the cluster that can be eliminated by using an n×n
median filter?
a) n²
b) n²/2
c) 2n²
d) n

Answer: b
Explanation: Isolated clusters of pixels that are light or dark with respect to their neighbours, and whose area is less than n²/2 (i.e., half the area of the filter), can be eliminated by using an n×n median filter.
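The order-statistic behaviour is easy to demonstrate with a sketch of a 3×3 median filter; the lone impulse below has area 1, which is less than n²/2 = 4.5, so it is removed completely:

```python
import numpy as np

def median_filter_3x3(img):
    """Order-statistic filter: each interior pixel is replaced by the
    median of its 3x3 neighbourhood; borders are left unchanged."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# A lone "salt" pixel in a flat region of value 10.
noisy = np.full((5, 5), 10)
noisy[2, 2] = 255
cleaned = median_filter_3x3(noisy)
```

Unlike an averaging filter, the median filter removes the impulse without leaving any residual blur in the flat region.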

Basic Intensity Transformation Functions


1. Which of the following expression is used to denote spatial domain process?

a) g(x,y)=T[f(x,y)]
b) f(x+y)=T[g(x+y)]
c) g(xy)=T[f(xy)]
d) g(x-y)=T[f(x-y)]

Answer: a
Explanation: Spatial domain processes will be denoted by the expression g(x,y)=T[f(x,y)],
where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f,
defined over some neighborhood of (x, y). In addition, T can operate on a set of input
images, such as performing the pixel-by-pixel sum of K images for noise reduction.

2. Which of the following shows three basic types of functions used frequently for
image enhancement?
a) Linear, logarithmic and inverse law
b) Power law, logarithmic and inverse law
c) Linear, logarithmic and power law
d) Linear, exponential and inverse law

Answer: c
Explanation: Three basic types of functions are used frequently for image enhancement: linear (negative and identity transformations), logarithmic (log and inverse-log transformations), and power-law (nth power and nth root transformations). The identity function is the trivial case in which output intensities are identical to input intensities; it is included only for completeness.

3. Which expression is obtained by performing the negative transformation on the


negative of an image with gray levels in the range[0,L-1] ?
a) s=L+1-r
b) s=L+1+r
c) s=L-1-r
d) s=L-1+r

Answer: c
Explanation: The negative of an image with gray levels in the range[0,L-1] is obtained by
using the negative transformation, which is given by the expression: s=L-1-r.

4. What is the general form of representation of log transformation?


a) s=clog10(1/r)
b) s=clog10(1+r)
c) s=clog10(1*r)
d) s=clog10(1-r)

Answer: b
Explanation: The general form of the log transformation: s=clog10(1+r), where c is a
constant, and it is assumed that r ≥ 0.

5. What is the general form of representation of power transformation?


a) s = cr^γ
b) c = sr^γ
c) s = r^c
d) s = rc^γ

Answer: a
Explanation: Power-law transformations have the basic form s = cr^γ, where c and γ are positive constants. Sometimes s = cr^γ is written as s = c(r+ε)^γ to account for an offset (that is, a measurable output when the input is zero).
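Under the conventions above (gray levels r in [0, L-1]), the negative, log, and power-law transformations can be sketched as lookup curves. The scaling constants below are chosen so the output also spans [0, L-1]; they are illustrative, not prescribed:

```python
import numpy as np

L = 256
r = np.arange(L, dtype=float)              # input gray levels 0 .. L-1

negative = (L - 1) - r                     # s = L - 1 - r

c = (L - 1) / np.log10(1 + (L - 1))        # scale output to [0, L-1]
log_t = c * np.log10(1 + r)                # s = c * log10(1 + r)

gamma = 0.5                                # gamma < 1 brightens dark regions
power = (L - 1) * (r / (L - 1)) ** gamma   # s = c * r^gamma (normalized form)
```

In practice such a curve is applied to an image by indexing it with the pixel values, i.e. as a lookup table.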

6. What is the name of process used to correct the power-law response phenomena?
a) Beta correction
b) Alpha correction
c) Gamma correction
d) Pie correction

Answer: c
Explanation: The process used to correct the power-law response phenomena is called gamma correction.

7. Which of the following transformation function requires much information to be


specified at the time of input?
a) Log transformation
b) Power transformation
c) Piece-wise transformation
d) Linear transformation

Answer: c
Explanation: The practical implementation of some important transformations can be
formulated only as piecewise functions. The principal disadvantage of piecewise functions
is that their specification requires considerably more user input.

8. In contrast stretching, if r1=s1 and r2=s2 then which of the following is true?
a) The transformation is not a linear function that produces no changes in gray levels
b) The transformation is a linear function that produces no changes in gray levels
c) The transformation is a linear function that produces changes in gray levels
d) The transformation is not a linear function that produces changes in gray levels

Answer: b
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the
transformation function. If r1=s1 and r2=s2 then the transformation is a linear function that
produces no changes in gray levels.

9. In contrast stretching, if r1=r2, s1=0 and s2=L-1 then which of the following is true?
a) The transformation becomes a thresholding function that creates an octal image
b) The transformation becomes a override function that creates an octal image
c) The transformation becomes a thresholding function that creates a binary image
d) The transformation becomes a thresholding function that do not create an octal image
Answer: c
Explanation: If r1=r2, s1=0 and s2=L-1,the transformation becomes a thresholding function
that creates a binary image.

10. In contrast stretching, if r1≤r2 and s1≤s2 then which of the following is true?
a) The transformation function is double valued and exponentially increasing
b) The transformation function is double valued and monotonically increasing
c) The transformation function is single valued and exponentially increasing
d) The transformation function is single valued and monotonically increasing

Answer: d
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the
transformation function. If r1≤r2 and s1≤s2 then the function is single valued and
monotonically increasing
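The piecewise-linear contrast stretching discussed in the questions above can be sketched with `np.interp`, mapping through the control points (r1, s1) and (r2, s2) (assuming 0 < r1 < r2 < L-1, so the function is single valued and monotonically increasing):

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    """Map gray level(s) r through the piecewise-linear curve defined by
    (0, 0), (r1, s1), (r2, s2), (L-1, L-1)."""
    return np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
```

With r1 = s1 and r2 = s2 the curve is the identity (no change in gray levels); pushing s1 down and s2 up stretches the contrast of the mid-range levels.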

11. In which type of slicing, highlighting a specific range of gray levels in an image often is
desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing

Answer: a
Explanation: Highlighting a specific range of gray levels in an image often is desired in gray-
level slicing. Applications include enhancing features such as masses of water in satellite
imagery and enhancing flaws in X-ray images.

12. Which of the following depicts the main functionality of the Bit-plane slicing?
a) Highlighting a specific range of gray levels in an image
b) Highlighting the contribution made to total image appearance by specific bits
c) Highlighting the contribution made to total image appearance by specific byte
d) Highlighting the contribution made to total image appearance by specific pixels

Answer: b
Explanation: Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest-order bits in the bytes comprising the pixels in the image and plane 7 contains all the high-order bits.
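Bit-plane slicing of an 8-bit image is a matter of shifting and masking; a sketch on a made-up 2×2 uint8 array:

```python
import numpy as np

img = np.array([[0b10110100, 0b00000001],
                [0b11111111, 0b01000000]], dtype=np.uint8)

# planes[0] is the least significant bit plane, planes[7] the most significant.
planes = [(img >> k) & 1 for k in range(8)]

# The original image is recovered by weighting each plane by 2^k.
recon = sum(planes[k].astype(np.uint16) << k for k in range(8))
```

The reconstruction step shows why the high-order planes carry most of the visually significant image data: they contribute the largest weights.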
Sharpening Spatial Filters
1. Which of the following is the primary objective of sharpening of an image?
a) Blurring the image
b) Highlight fine details in the image
c) Increase the brightness of the image
d) Decrease the brightness of the image

Answer: b
Explanation: The primary objective of sharpening is to highlight fine details in an image, or to enhance detail that has been blurred.

2. Image sharpening process is used in electronic printing.


a) True
b) False
Answer: a
Explanation: The applications of image sharpening is present in various fields like
electronic printing, autonomous guidance in military systems, medical imaging and
industrial inspection.

3. In spatial domain, which of the following operation is done on the pixels in


sharpening the image?
a) Integration
b) Average
c) Median
d) Differentiation

Answer: d

Explanation: In blurring an image we take the average of pixels, which can be considered as integration. Since sharpening is the opposite process of blurring, we logically perform differentiation on the pixels to sharpen the image.

4. Image differentiation enhances the edges, discontinuities and deemphasizes the


pixels with slow varying gray levels.

a) True

b) False

Answer: a

Explanation: Fundamentally, the strength of the response of a derivative operator is proportional to the degree of discontinuity in the image. So image differentiation enhances edges and discontinuities and deemphasizes pixels with slowly varying gray levels.

5. In which of the following cases would we not worry about the behaviour of the sharpening filter?

a) Flat segments

b) Step discontinuities

c) Ramp discontinuities

d) Slow varying gray values

Answer: d
Explanation: We are interested in the behaviour of derivatives used in sharpening in the
constant gray level areas i.e., flat segments, and at the onset and end of discontinuities,
i.e., step and ramp discontinuities.

6. Which of the following is the valid response when we apply a first derivative?

a) Non-zero at flat segments

b) Zero at the onset of gray level step

c) Zero in flat segments

d) Zero along ramps

Answer: c
Explanation: The derivations of digital functions are defined in terms of differences. The
definition we use for first derivative should be zero in flat segments, nonzero at the
onset of a gray level step or ramp and nonzero along the ramps.

7. Which of the following is not a valid response when we apply a second


derivative?
a) Zero response at onset of gray level step
b) Nonzero response at onset of gray level step
c) Zero response at flat segments
d) Nonzero response along the ramps

Answer: a
Explanation: The derivatives of digital functions are defined in terms of differences. The second derivative must be zero in flat segments, nonzero at the onset and end of a gray-level step or ramp, and zero along ramps of constant slope. A zero response at the onset of a gray-level step is therefore not a valid response.
8. If f(x,y) is an image function of two variables, then the first order derivative of a
one dimensional function, f(x) is:
a) f(x+1)-f(x)
b) f(x)-f(x+1)
c) f(x-1)-f(x+1)
d) f(x)+f(x-1)

Answer: a
Explanation: The first order derivative of a one-dimensional function f(x) is the difference between f(x+1) and f(x).
That is, ∂f/∂x = f(x+1) - f(x).
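These difference definitions are easy to verify on a made-up 1-D scan line containing a flat segment, a ramp of constant slope, and a step:

```python
import numpy as np

# Flat segment, a ramp of constant slope, another flat run, then a step.
f = np.array([5, 5, 5, 4, 3, 2, 1, 1, 1, 8, 8], dtype=float)

first = f[1:] - f[:-1]                  # f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]   # f(x+1) + f(x-1) - 2f(x)
```

`first` is zero on the flat segments and nonzero (-1) along the whole ramp, while `second` is nonzero only at the onset and end of the ramp and zero along it, which is why first order derivatives produce thicker edges and second order derivatives finer ones.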

9. Isolated point is also called as noise point.


a) True
b) False

Answer: a
Explanation: A point that has a very high or very low gray level compared to its neighbours is called an isolated point or noise point. A noise point is of one-pixel size.

10. What is the thickness of the edges produced by first order derivatives when
compared to that of second order derivatives?
a) Finer
b) Equal
c) Thicker
d) Independent

Answer: c
Explanation: The first order derivative is nonzero along the entire ramp, while the second order derivative is zero along the ramp. So the first order derivatives produce thicker edges and the second order derivatives produce much finer edges.

11. First order derivative can enhance the fine detail in the image compared to that
of second order derivative.
a) True
b) False

Answer: b
Explanation: The response at and around a noise point is much stronger for the second order derivative than for the first order derivative. So the second order derivative is better at enhancing fine details in the image (including noise) than the first order derivative.
12. Which of the following derivatives produce a double response at step changes
in gray level?
a) First order derivative
b) Third order derivative
c) Second order derivative
d) First and second order derivatives
Answer: c
Explanation: Second order derivatives produce a double-line response for step changes in the gray level. Note also that, for similar changes in gray-level values in an image, their response is stronger to a line than to a step, and to a point than to a line.

Sharpening Spatial Filters-2


1. The objective of sharpening spatial filters is/are to ___________
a) Highlight fine detail in an image
b) Enhance detail that has been blurred because of some error
c) Enhance detail that has been blurred because of some natural effect of some
method of image acquisition
d) All of the mentioned
Answer: d
Explanation: Highlighting the fine detail in an image, or enhancing detail that has been blurred because of some error or some natural effect of the method of image acquisition, is the principal objective of sharpening spatial filters.

2. Sharpening is analogous to which of the following operations?


a) To spatial integration
b) To spatial differentiation
c) All of the mentioned
d) None of the mentioned
Answer: b
Explanation: Smoothing is analogous to integration, and so sharpening is analogous to spatial differentiation.

3. Which of the following fact(s) is/are true about sharpening spatial filters using
digital differentiation?
a) Sharpening spatial filter response is proportional to the discontinuity of the image at
the point where the derivative operation is applied
b) Sharpening spatial filters enhances edges and discontinuities like noise
c) Sharpening spatial filters deemphasizes areas that have slowly varying gray-level
values
d) All of the mentioned
Answer: d
Explanation: A derivative operator’s response is proportional to the degree of discontinuity of the image at the point where the derivative operation is applied. Image differentiation enhances edges and discontinuities (like noise) and deemphasizes areas that have slowly varying gray-level values. Since sharpening spatial filters are analogous to differentiation, all the mentioned facts are true for them.

4. Which of the facts(s) is/are true for the first order derivative of a digital function?
a) Must be nonzero in the areas of constant grey values
b) Must be zero at the onset of a gray-level step or ramp discontinuities
c) Must be nonzero along the gray-level ramps
d) None of the mentioned

Answer: c
Explanation: The first order derivative of a digital function:
Must be zero in the areas of constant grey values.
Must be nonzero at the onset of a gray-level step or ramp discontinuity.
Must be nonzero along the gray-level ramps.

5. Which of the facts(s) is/are true for the second order derivative of a digital
function?
a) Must be zero in the flat areas
b) Must be nonzero at the onset and end of a gray-level step or ramp discontinuities
c) Must be zero along the ramps of constant slope
d) All of the mentioned
Answer: d
Explanation: The second order derivative of a digital function:
Must be zero in flat areas, i.e. areas of constant grey values.
Must be nonzero at the onset and end of a gray-level step or ramp discontinuity.
Must be zero along gray-level ramps of constant slope.

6. The derivative of digital function is defined in terms of difference. Then, which of


the following defines the first order derivative ∂f/∂x= ___________ of a one-
dimensional function f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt
along two spatial axes
d) None of the mentioned
Answer: a
Explanation: The definition of the first order derivative of a one-dimensional image f(x) is ∂f/∂x = f(x+1) - f(x). Partial-derivative notation is used so the notation stays the same for f(x, y), where derivatives are taken along the two spatial axes.
7. The derivative of digital function is defined in terms of difference. Then, which of
the following defines the second order derivative ∂2 f/∂x2 = ___________ of a one-
dimensional function f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt
along two spatial axes
d) None of the mentioned

Answer: b
Explanation: The definition of the second order derivative of a one-dimensional image f(x) is ∂²f/∂x² = f(x+1) + f(x-1) - 2f(x). Partial-derivative notation is used so the notation stays the same for f(x, y), where derivatives are taken along the two spatial axes.

8. What relation can be drawn between the first order and second order derivatives of an image, on the basis of the edges they produce, for an edge that shows a transition like a ramp of constant slope?
a) First order derivative produces thick edge while second order produces a very
fine edge
b) Second order derivative produces thick edge while first order produces a very fine
edge
c) Both first and second order produces thick edge
d) Both first and second order produces a very fine edge

Answer: a
Explanation: The first order derivative remains nonzero along the entire ramp of constant slope, while the second order derivative is nonzero only at the onset and end of such a ramp. So for an edge that shows a transition like a ramp of constant slope, the first order derivative produces a thick edge and the second order derivative produces a very fine edge.

9. What kind of relation can be obtained between first order derivative and second
order derivative of an image on the response obtained by encountering an isolated
noise point in the image?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both enhances the same and so the response is same for both first and second order
derivative
d) None of the mentioned
Answer: b
Explanation: A second order derivative is more aggressive toward enhancing sharp changes than a first order derivative, so its response to an isolated noise point is stronger.

10. What kind of relation can be obtained between the response of first order
derivative and second order derivative of an image having a transition into gray-
level step from zero?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both first and second order derivative has the same response
d) None of the mentioned

Answer: c
Explanation: A first order derivative has a stronger response to a gray-level step than a second order derivative, but the responses become the same when the transition into the gray-level step is from zero.

11. If in an image there exist similar change in gray-level values in the image, which of
the following shows a stronger response using second order derivative operator for
sharpening?
a) A line
b) A step
c) A point
d) None of the mentioned

Answer: c
Explanation: A second order derivative shows a stronger response to a line than to a step, and to a point than to a line, when there are similar changes in gray-level values in an image.

Sharpening Spatial Filters-3


1. The principal objective of sharpening is to highlight transitions in ________
a) Pixel density
b) Composure
c) Intensity
d) Brightness

Answer: c
Explanation: The principal objective of sharpening is to highlight transitions in intensity.

2. How can Sharpening be achieved?


a) Pixel averaging
b) Slicing
c) Correlation
d) None of the mentioned

Answer: d
Explanation: Sharpening is achieved using spatial differentiation, which is none of the listed options.

3. What does Image Differentiation enhance?


a) Edges
b) Pixel Density
c) Contours
d) None of the mentioned

Answer: a
Explanation: Image differentiation enhances edges and other discontinuities.

4. What does Image Differentiation de-emphasize?


a) Pixel Density
b) Contours
c) Areas with slowly varying intensities
d) None of the mentioned

Answer: c
Explanation: Image differentiation de-emphasizes areas with slowly varying intensities.

5. The requirements of the First Derivative of a digital function:


a) Must be zero in areas of constant intensity
b) Must be non-zero at the onset of an intensity step
c) Must be non-zero along ramps
d) All of the Mentioned

Explanation: All the three conditions must be satisfied.

6. What is the Second Derivative of Image Sharpening called?


a) Gaussian
b) Laplacian
c) Canny
d) None of the mentioned

Explanation: It is also called Laplacian.
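As a small illustration (a sketch with hypothetical pixel values, not tied to any particular image), the standard 4-neighbour Laplacian mask responds strongly at an isolated bright point:

```python
import numpy as np

# Standard 3x3 Laplacian mask (4-neighbour form).
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

# Hypothetical 3x3 patch: a bright point on a flat background.
patch = np.array([[1, 1, 1],
                  [1, 5, 1],
                  [1, 1, 1]], dtype=float)

# Response at the centre pixel: sum of mask * pixel products.
response = np.sum(laplacian * patch)  # strongly non-zero at the point
```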

7. The ability that rotating the image and applying the filter gives the same result,
as applying the filter to the image first, and then rotating it, is called _____________
a) Isotropic filtering
b) Laplacian
c) Rotation Invariant
d) None of the mentioned
Explanation: It is called Rotation Invariant, although the process used is Isotropic
filtering.

8. For a function f(x,y), the gradient of ‘f’ at coordinates (x,y) is defined as a ___________
a) 3-D row vector
b) 3-D column vector
c) 2-D row vector
d) 2-D column vector

Explanation: The gradient is a 2-D column vector.
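A minimal sketch of this definition, using simple forward differences on a hypothetical 3x3 patch:

```python
import numpy as np

# Hypothetical 3x3 patch of gray levels.
f = np.array([[1, 1, 1],
              [1, 5, 1],
              [1, 1, 1]], dtype=float)

# Forward-difference approximations of the partials at the centre (x, y) = (1, 1).
gx = f[2, 1] - f[1, 1]   # df/dx along rows
gy = f[1, 2] - f[1, 1]   # df/dy along columns

# The gradient of f at (x, y) is the 2-D column vector [gx, gy]^T.
grad = np.array([[gx], [gy]])
```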

9. Where do you find frequent use of Gradient?


a) Industrial inspection
b) MRI Imaging
c) PET Scan
d) None of the mentioned

Explanation: Gradient is used in Industrial inspection, to aid humans in the detection of defects.

10. Which of the following occurs in Unsharp Masking?


a) Blurring original image
b) Adding a mask to original image
c) Subtracting blurred image from original
d) All of the mentioned

Explanation: In Unsharp Masking, all of the above occurs in the order: Blurring,
Subtracting the blurred image and then Adding the mask.
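The three steps can be sketched on a 1-D signal (values are hypothetical; the blur here is a simple 3-point average):

```python
import numpy as np

# Hypothetical 1-D "image" row containing an intensity step.
f = np.array([10, 10, 10, 50, 50, 50], dtype=float)

# Step 1: blur with a 3-point moving average (edges replicated).
padded = np.pad(f, 1, mode="edge")
blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

# Step 2: subtract the blurred image from the original to form the mask.
mask = f - blurred

# Step 3: add the mask back to the original (k = 1 is plain unsharp masking).
k = 1.0
sharpened = f + k * mask   # overshoot/undershoot appears around the step
```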

Combining Spatial Enhancements Methods


1. Which of the following make an image difficult to enhance?
a) Narrow range of intensity levels
b) Dynamic range of intensity levels
c) High noise
d) All of the mentioned
Explanation: All the mentioned options make it difficult to enhance an image.

2. Which of the following is a second-order derivative operator?


a) Histogram
b) Laplacian
c) Gaussian
d) None of the mentioned

3. Response of the gradient to noise and fine detail is _____________ the Laplacian’s.
a) equal to
b) lower than
c) greater than
d) has no relation with

Explanation: Response of the gradient to noise and fine detail is lower than the
Laplacian’s and can further be lowered by smoothing.

4. Dark characteristics in an image are better solved using ___________


a) Laplacian Transform
b) Gaussian Transform
c) Histogram Specification
d) Power-law Transformation
Explanation: It can be solved by Histogram Specification but it is better handled by
Power-law Transformation.

5. What is the smallest possible value of a gradient image?


a) e
b) 1
c) 0
d) -e

Explanation: The smallest possible value of a gradient image is 0.

6. Which of the following fails to work on dark intensity distributions?


a) Laplacian Transform
b) Gaussian Transform
c) Histogram Equalization
d) Power-law Transformation

Explanation: Histogram Equalization fails to work on dark intensity distributions.

7. _____________ is used to detect diseases such as bone infection and tumors.


a) MRI Scan
b) PET Scan
c) Nuclear Whole Body Scan
d) X-Ray
Explanation: Nuclear Whole Body Scan is used to detect diseases such as bone infection and tumors.

8. How do you bring out more of the skeletal detail from a Nuclear Whole Body
Bone Scan?
a) Sharpening
b) Enhancing
c) Transformation
d) None of the mentioned

Explanation: Sharpening is used to bring out more of the skeletal detail.

9. An alternate approach to median filtering is ______________


a) Use a mask
b) Gaussian filter
c) Sharpening
d) Laplacian filter

Explanation: A mask formed from the smoothed version of the gradient image can be used as an alternative to median filtering.

10. Final step of enhancement lies in _____________ of the sharpened image.


a) Increase range of contrast
b) Increase range of brightness
c) Increase dynamic range
d) None of the mentioned

Explanation: Increasing the dynamic range of the sharpened image is the final step in
enhancement.

Fundamentals of Spatial Filtering


1. What is accepting or rejecting certain frequency components called as?
a) Filtering
b) Eliminating
c) Slicing
d) None of the Mentioned
Explanation: Filtering is the process of accepting or rejecting certain frequency
components.

2. A filter that passes low frequencies is _____________


a) Band pass filter
b) High pass filter
c) Low pass filter
d) None of the Mentioned

Explanation: Low pass filter passes low frequencies.

3. What is the process of moving a filter mask over the image and computing the
sum of products at each location called as?
a) Convolution
b) Correlation
c) Linear spatial filtering
d) Non linear spatial filtering

Explanation: The process is called as Correlation.

4. The standard deviation controls ___________ of the bell (2-D Gaussian function of
bell shape).
a) Size
b) Curve
c) Tightness
d) None of the Mentioned

Explanation: The standard deviation controls “tightness” of the bell.
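A quick sketch of how the standard deviation controls the tightness of the bell (the kernel radius here is an arbitrary illustrative choice):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=3):
    """1-D Gaussian samples, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return g / g.sum()

tight = gaussian_kernel_1d(sigma=0.5)   # small sigma: tall, narrow bell
loose = gaussian_kernel_1d(sigma=2.0)   # large sigma: flat, wide bell
```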

5. What is required to generate an M X N linear spatial filter?


a) MN mask coefficients
b) M+N coordinates
c) MN spatial coefficients
d) None of the Mentioned

Explanation: To generate an M X N linear spatial filter, MN mask coefficients must be specified.

6. What is the difference between Convolution and Correlation?


a) Image is pre-rotated by 180 degree for Correlation
b) Image is pre-rotated by 180 degree for Convolution
c) Image is pre-rotated by 90 degree for Correlation
d) Image is pre-rotated by 90 degree for Convolution

Explanation: Convolution is the same as Correlation except that the image must be
rotated by 180 degrees initially.
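This difference can be sketched at a single mask position with a deliberately asymmetric (hypothetical) kernel; `np.rot90(w, 2)` performs the 180-degree rotation:

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)

# A deliberately asymmetric 3x3 kernel (hypothetical values).
w = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 0, 0]], dtype=float)

def response_at_center(f, w):
    # Sum of products of the mask and the image region it covers.
    return np.sum(f * w)

corr = response_at_center(img, w)                 # correlation: w used as-is
conv = response_at_center(img, np.rot90(w, 2))    # convolution: w rotated 180 degrees
```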
7. Convolution and Correlation are functions of _____________
a) Distance
b) Time
c) Intensity
d) Displacement

Explanation: Convolution and Correlation are functions of displacement.

8. The function that contains a single 1 with the rest being 0s is called
______________
a) Identity function
b) Inverse function
c) Discrete unit impulse
d) None of the Mentioned

Explanation: It is called Discrete unit impulse.

9. Which of the following involves Correlation?


a) Matching
b) Key-points
c) Blobs
d) None of the Mentioned.

Explanation: Correlation is applied in finding matches.

10. An example of a continuous function of two variables is __________


a) Identity function
b) Intensity function
c) Contrast stretching
d) Gaussian function

Explanation: The Gaussian function has two variables and is an exponential continuous function.

Histogram Processing – 1

1. What is the basis for numerous spatial domain processing techniques?


a) Transformations
b) Scaling
c) Histogram
d) None of the Mentioned
Explanation: Histogram is the basis for numerous spatial domain processing
techniques.

2. In a _______ image we notice that the components of the histogram are concentrated on the low side of the intensity scale.
a) bright
b) dark
c) colourful
d) All of the Mentioned

Explanation: Only in dark images do we notice that the components of the histogram are concentrated on the low side of the intensity scale.

3. What is Histogram Equalisation also called as?


a) Histogram Matching
b) Image Enhancement
c) Histogram linearisation
d) None of the Mentioned

Explanation: Histogram Linearisation is also known as Histogram Equalisation.

4. What is Histogram Matching also called as?


a) Histogram Equalisation
b) Histogram Specification
c) Histogram linearisation
d) None of the Mentioned

Explanation: Histogram Specification is also known as Histogram Matching.

5. Histogram Equalisation is mainly used for ________________


a) Image enhancement
b) Blurring
c) Contrast adjustment
d) None of the Mentioned

Explanation: It is mainly used for Enhancement of usually dark images.

6. If one utilises non-overlapping regions to reduce computation, it usually produces a ______ effect.
a) Dimming
b) Blurred
c) Blocky
d) None of the Mentioned
Explanation: Utilising non-overlapping regions usually produces “Blocky” effect.

7. What does SEM stands for?


a) Scanning Electronic Machine
b) Self Electronic Machine
c) Scanning Electron Microscope
d) Scanning Electric Machine

Explanation: SEM stands for Scanning Electron Microscope.

8. The type of Histogram Processing in which pixels are modified based on the
intensity distribution of the image is called _______________.
a) Intensive
b) Local
c) Global
d) Random

Explanation: It is called Global Histogram Processing.

9. Which type of Histogram Processing is suited for minute detailed enhancements?
a) Intensive
b) Local
c) Global
d) Random

Explanation: Local Histogram Processing is used.

10. In uniform PDF, the expansion of PDF is ________________


a) Portable Document Format
b) Post Derivation Function
c) Previously Derived Function
d) Probability Density Function

Answer: d
Explanation: PDF stands for Probability Density Function.

Histogram Processing – 2
1. The histogram of a digital image with gray levels in the range [0, L-1] is
represented by a discrete function:
a) h(r_k) = n_k
b) h(r_k) = n/n_k
c) p(r_k) = n_k
d) h(r_k) = n_k/n

Explanation: The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k.

2. How is the expression represented for the normalized histogram?


a) p(r_k) = n_k
b) p(r_k) = n_k/n
c) p(r_k) = n·n_k
d) p(r_k) = n/n_k

Explanation: It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(r_k) = n_k/n, for k = 0,1,2,…,L-1. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is equal to 1.
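A minimal sketch of both definitions on a tiny hypothetical image with L = 4 gray levels:

```python
import numpy as np

# A tiny 4-gray-level (L = 4) image, values in [0, L-1].
image = np.array([[0, 1, 1, 2],
                  [1, 2, 2, 3]])
L, n = 4, image.size

n_k = np.bincount(image.ravel(), minlength=L)  # histogram h(r_k) = n_k
p_k = n_k / n                                  # normalized: p(r_k) = n_k / n
```

The components of `p_k` sum to 1, as the explanation above states.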

3. Which of the following conditions does the T(r) must satisfy?


a) T(r) is double-valued and monotonically decreasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
b) T(r) is double-valued and monotonically increasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
c) T(r) is single-valued and monotonically decreasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
d) T(r) is single-valued and monotonically increasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1

Explanation: For any r satisfying the aforementioned conditions, we focus attention on transformations of the form s = T(r), for 0 ≤ r ≤ 1, that produce a level s for every pixel value r in the original image. For reasons that will become obvious shortly, we assume that the transformation function T(r) satisfies the following conditions: T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1; and 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
4. The inverse transformation from s back to r is denoted as:
a) s = T⁻¹(r) for 0 ≤ s ≤ 1
b) r = T⁻¹(s) for 0 ≤ r ≤ 1
c) r = T⁻¹(s) for 0 ≤ s ≤ 1
d) r = T⁻¹(s) for 0 ≥ s ≥ 1

Explanation: The inverse transformation from s back to r is denoted by r = T⁻¹(s) for 0 ≤ s ≤ 1.

5. The probability density function p_s (s) of the transformed variable s can be
obtained by using which of the following formula?
a) p_s (s)=p_r (r)|dr/ds|
b) p_s (s)=p_r (r)|ds/dr|
c) p_r (r)=p_s (s)|dr/ds|
d) p_s (s)=p_r (r)|dr/dr|

Explanation: The probability density function p_s (s) of the transformed variable s can be
obtained using a basic formula: p_s (s)=p_r (r)|dr/ds|
Thus, the probability density function of the transformed variable, s, is determined by
the gray-level PDF of the input image and by the chosen transformation function.

6. A transformation function of particular importance in image processing is represented in which of the following forms?
a) s = T(r) = ∫₀^(2r) p_r(ω)dω
b) s = T(r) = ∫₀^(r-1) p_r(ω)dω
c) s = T(r) = ∫₀^(r/2) p_r(ω)dω
d) s = T(r) = ∫₀^r p_r(ω)dω

Explanation: A transformation function of particular importance in image processing has the form s = T(r) = ∫₀^r p_r(ω)dω, where ω is a dummy variable of integration. The right side is recognized as the cumulative distribution function (CDF) of the random variable r.

7. Histogram equalization or Histogram linearization is represented by which of the following equations:
a) s_k = ∑_(j=1)^k n_j/n, k = 0,1,2,……,L-1
b) s_k = ∑_(j=0)^k n_j/n, k = 0,1,2,……,L-1
c) s_k = ∑_(j=0)^k n/n_j, k = 0,1,2,……,L-1
d) s_k = ∑_(j=n)^k n_j/n, k = 0,1,2,……,L-1

Explanation: A plot of p_r(r_k) versus r_k is called a histogram. The transformation (mapping) s_k = ∑_(j=0)^k n_j/n, k = 0,1,2,……,L-1 is called histogram equalization or histogram linearization.
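The mapping is just a cumulative sum of the normalized histogram; a sketch with hypothetical values:

```python
import numpy as np

# Normalized histogram p(r_k) of a hypothetical 4-level image.
p = np.array([0.5, 0.25, 0.125, 0.125])   # must sum to 1

# s_k = sum_{j=0..k} n_j / n : cumulative sum of the normalized histogram.
s = np.cumsum(p)

# Scale to integer gray levels in [0, L-1] to get the equalization mapping.
L = 4
s_levels = np.round(s * (L - 1)).astype(int)
```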
8. What is the method that is used to generate a processed image that have a
specified histogram?
a) Histogram linearization
b) Histogram equalization
c) Histogram matching
d) Histogram processing

Explanation: In particular, it is useful sometimes to be able to specify the shape of the histogram that we wish the processed image to have. The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.

9. Histograms are the basis for numerous spatial domain processing techniques.
a) True
b) False

Explanation: Histograms are the basis for numerous spatial domain processing
techniques. Histogram manipulation can be used effectively for image enhancement.

10. In a dark image, the components of histogram are concentrated on which side
of the grey scale?
a) High
b) Medium
c) Low
d) Evenly distributed

Explanation: We know that in the dark image, the components of histogram are
concentrated mostly on the low i.e., dark side of the grey scale. Similarly, the
components of histogram of the bright image are biased towards the high side of the
grey scale.

Smoothing Spatial Filters


1. The output of a smoothing, linear spatial filtering is a ____________ of the pixels
contained in the neighbourhood of the filter mask.
a) Sum
b) Product
c) Average
d) Dot Product
Explanation: Smoothing is simply the average of the pixels contained in the
neighbourhood.

2. Averaging filters is also known as ____________ filter.


a) Low pass
b) High pass
c) Band pass
d) None of the Mentioned

Explanation: Averaging filters is also known as Low pass filters.

3. What is the undesirable side effects of Averaging filters?


a) No side effects
b) Blurred image
c) Blurred edges
d) Loss of sharp transitions

Explanation: Blurred edges are the undesirable side effect of Averaging filters.

4. A spatial averaging filter in which all coefficients are equal is called _______________.
a) Square filter
b) Neighbourhood
c) Box filter
d) Zero filter

Explanation: It is called a Box filter.
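A sketch: with all coefficients equal (and normalized), the response at a location is simply the neighbourhood average (the patch values here are hypothetical):

```python
import numpy as np

# A 3x3 box filter: all coefficients equal, normalized so they sum to 1.
box = np.ones((3, 3)) / 9.0

# Applying it at one location is just the average of the 3x3 neighbourhood.
patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
response = np.sum(patch * box)   # equals patch.mean()
```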

5. Which term is used to indicate that pixels are multiplied by different


coefficients?
a) Weighted average
b) Squared average
c) Spatial average
d) None of the Mentioned

Explanation: It is called weighted average since more importance (weight) is given to some pixels.

6. The nonlinear spatial filter whose response is based on ordering of the pixels contained is called _____________.
a) Box filter
b) Square filter
c) Gaussian filter
d) Order-statistic filter

Explanation: It is called Order-statistic filter.

7. Impulse noise in Order-statistic filter is also called as _______________


a) Median noise
b) Bilinear noise
c) Salt and pepper noise
d) None of the Mentioned

Explanation: It is called salt-and-pepper noise because of its appearance as white and black dots superimposed on an image.

8. Best example for a Order-statistic filter is ____________________


a) Impulse filter
b) Averaging filter
c) Median filter
d) None of the Mentioned

Explanation: Median filter is the best known Order-statistic filter.

9. What does “eliminated” refer to in median filter?


a) Force to average intensity of neighbours
b) Force to median intensity of neighbours
c) Eliminate median value of pixels
d) None of the Mentioned

Explanation: It refers to forcing to median intensity of neighbours.

10. Which of the following is best suited for salt-and-pepper noise elimination?
a) Average filter
b) Box filter
c) Max filter
d) Median filter

Explanation: Median filter is better suited than average filter for salt-and-pepper noise
elimination.

Smoothing Linear Spatial Filters


1. Smoothing filter is used for which of the following work(s)?
a) Blurring
b) Noise reduction
c) All of the mentioned
d) None of the mentioned

Explanation: Smoothing filter is used for blurring and noise reduction.

2. The response of the smoothing linear spatial filter is/are __________


a) Sum of image pixel in the neighborhood filter mask
b) Difference of image in the neighborhood filter mask
c) Product of pixel in the neighborhood filter mask
d) Average of pixels in the neighborhood of filter mask

Explanation: The average of pixels in the neighborhood of filter mask is simply the
output of the smoothing linear spatial filter.

3. Which of the following filter(s) results in a value as the average of pixels in the neighborhood of the filter mask?
a) Smoothing linear spatial filter
b) Averaging filter
c) Lowpass filter
d) All of the mentioned

Explanation: The output as an average of pixels in the neighborhood of the filter mask is simply the output of the smoothing linear spatial filter, also known as the averaging filter and lowpass filter.

4. What is/are the resultant image of a smoothing filter?


a) Image with high sharp transitions in gray levels
b) Image with reduced sharp transitions in gray levels
c) All of the mentioned
d) None of the mentioned

Explanation: Random noise has sharp transitions in gray levels, and smoothing filters do noise reduction by reducing such transitions.

5. At which of the following scenarios averaging filters is/are used?


a) In the reduction of irrelevant details in an image
b) For smoothing of false contours
c) For noise reductions
d) All of the mentioned

Explanation: Averaging filter or smoothing linear spatial filter is used: for noise
reduction by reducing the sharp transitions in gray level, for smoothing false contours
that arises because of use of insufficient number of gray values and for reduction of
irrelevant data i.e. the pixels regions that are small in comparison of filter mask.

6. A spatial averaging filter having all the coefficients equal is termed _________
a) A box filter
b) A weighted average filter
c) A standard average filter
d) A median filter

Explanation: An averaging filter is termed as box filter if all the coefficients of spatial
averaging filter are equal.

7. What does using a mask having central coefficient maximum and then the
coefficients reducing as a function of increasing distance from origin results?
a) It results in increasing blurring in smoothing process
b) It results to reduce blurring in smoothing process
c) Nothing with blurring occurs as mask coefficient relation has no effect on smoothing
process
d) None of the mentioned

Explanation: Use of a mask having central coefficient maximum and then the
coefficients reducing as a function of increasing distance from origin is a strategy to
reduce blurring in smoothing process.

8. What is the relation between blurring effect with change in filter size?
a) Blurring increases with decrease of the size of filter size
b) Blurring decrease with decrease of the size of filter size
c) Blurring decrease with increase of the size of filter size
d) Blurring increases with increase of the size of filter size

Explanation: Using a filter of size 3, squares of size 3*3 and 5*5 and other small objects show significant blurring relative to objects of larger size. The blurring gets more pronounced as the filter size increases to 5, 9 and so on.

Smoothing Nonlinear Spatial Filters


1. Which of the following filter(s) has the response in which the central pixel value
is replaced by value defined by ranking the pixel in the image encompassed by
filter?
a) Order-Statistic filters
b) Non-linear spatial filters
c) Median filter
d) All of the mentioned

Explanation: An Order-Statistic filter, also called a non-linear spatial filter, has a response based on ranking the pixels in the image region encompassed by the filter; the ranking result replaces the central pixel value. A Median filter is an example of such filters.

2. Is it true or false that “the original pixel value is included while computing the
median using gray-levels in the neighborhood of the original pixel in median filter
case”?
a) True
b) False

Explanation: In a median filter, the pixel value is replaced by the median of the gray levels in the neighborhood of that pixel, and the original pixel value is included while computing the median.

3. Two filters of similar size are used for smoothing an image having impulse noise. One is a median filter while the other is a linear spatial filter. How would the blurring effects of the two compare?
a) Median filter effects in considerably less blurring than the linear spatial filters
b) Median filter effects in considerably more blurring than the linear spatial filters
c) Both have the same blurring effect
d) All of the mentioned

Explanation: For impulse noise, median filter is much effective for noise reduction and
causes considerably less blurring than the linear spatial filters.

4. An image contains noise having an appearance as black and white dots superimposed on the image. Which of the following noise(s) has the same appearance?
a) Salt-and-pepper noise
b) Gaussian noise
c) All of the mentioned
d) None of the mentioned

Explanation: Impulse noise has an appearance as black and white dots superimposed on the image. This is also known as Salt-and-pepper noise.

5. While performing the median filtering, suppose a 3*3 neighborhood has value
(10, 20, 20, 20, 15, 20, 20, 25, 100), then what is the median value to be given to the
pixel under filter?
a) 15
b) 20
c) 100
d) 25

Explanation: The values are first sorted, giving (10, 15, 20, 20, 20, 20, 20, 25, 100). For a 3*3 neighborhood the 5th value in the sorted list is the median, which is 20.
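The same computation in code, using the neighbourhood values from the question:

```python
import numpy as np

# The 3x3 neighbourhood from the question, flattened.
neighbourhood = np.array([10, 20, 20, 20, 15, 20, 20, 25, 100])

# Sorting and taking the middle (5th of 9) value gives the median.
median = int(np.median(neighbourhood))
```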

6. Which of the following are forced to the median intensity of the neighbors by n*n
median filter?
a) Isolated cluster of pixels that are light or dark in comparison to their neighbors
b) Isolated cluster of pixels whose area is less than one-half the filter area
c) All of the mentioned
d) None of the mentioned

Explanation: An isolated cluster's pixel values do not come out as the median value, and since such pixels are either light or dark compared to their neighbors, they are forced to the median intensity of the neighbors, which may not even be close to their original values; they are therefore sometimes termed “eliminated”.
If the area of such an isolated cluster is less than n²/2, its pixel values again cannot be the median value, and so they are eliminated.
Pixel values of larger clusters are more likely to be the median value, so they are considerably less often forced to the median intensity.

7. Which filter(s) used to find the brightest point in the image?


a) Median filter
b) Max filter
c) Mean filter
d) All of the mentioned

Explanation: A max filter gives the brightest point in an image and so is used.

8. The median filter also represents which of the following ranked set of numbers?
a) 100th percentile
b) 0th percentile
c) 50th percentile
d) None of the mentioned

Explanation: Since the median filter replaces the pixel with the value in the middle of the sorted list of neighborhood values, it represents the 50th percentile of a ranked set of numbers.
9. Which of the following filter represents a 0th percentile set of numbers?
a) Max filter
b) Mean filter
c) Median filter
d) None of the mentioned

Explanation: A min filter provides the minimum value in the image, so it represents the 0th percentile of a ranked set of numbers.

Spatial Filtering
1. In neighborhood operations, work is done with the values of the image pixels in the neighborhood and the corresponding values of a subimage that has the same dimensions as the neighborhood. What is the subimage referred to as?
a) Filter
b) Mask
c) Template
d) All of the mentioned

Explanation: Work in neighborhood operations is done with the values of a subimage having the same dimensions as the neighborhood, corresponding to the values of the image pixels. The subimage is called a filter, mask, template, kernel or window.

2. The response for linear spatial filtering is given by the relationship __________
a) Sum of filter coefficient’s product and corresponding image pixel under filter
mask
b) Difference of filter coefficient’s product and corresponding image pixel under filter
mask
c) Product of filter coefficient’s product and corresponding image pixel under filter
mask
d) None of the mentioned

Explanation: In spatial filtering the mask is moved from point to point and at each point
the response is calculated using a predefined relationship. The relationship in linear
spatial filtering is given by: the Sum of filter coefficient’s product and corresponding
image pixel in area under filter mask.

3. In linear spatial filtering, what is the pixel of the image under mask
corresponding to the mask coefficient w (1, -1), assuming a 3*3 mask?
a) f (x, -y)
b) f (x + 1, y)
c) f (x, y – 1)
d) f (x + 1, y – 1)

Explanation: The pixel corresponding to mask coefficient (a 3*3 mask) w (0, 0) is f (x, y),
and so for w (1, -1) is f (x + 1, y – 1).
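The alignment and the sum-of-products response can be sketched for one mask position (the mask and pixel values below are hypothetical):

```python
import numpy as np

# Hypothetical 3x3 mask w(s, t), s, t in {-1, 0, 1}, and a 3x3 image patch
# centred on pixel (x, y); coefficient w(1, -1) lines up with f(x + 1, y - 1).
w = np.array([[1, 2, 1],
              [0, 0, 0],
              [-1, -2, -1]], dtype=float)
patch = np.arange(9, dtype=float).reshape(3, 3)  # stand-in pixel values

# Linear spatial filtering response: sum of coefficient * pixel products.
response = np.sum(w * patch)
```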

4. Which of the following is/are a nonlinear operation?


a) Computation of variance
b) Computation of median
c) All of the mentioned
d) None of the mentioned

Explanation: Computation of variance as well as median comes under nonlinear operation.

5. Which of the following is/are used as basic function in nonlinear filter for noise
reduction?
a) Computation of variance
b) Computation of median
c) All of the mentioned
d) None of the mentioned

Explanation: Computation of the median gray-level value in the neighborhood is the basic function of a nonlinear filter for noise reduction.

6. In neighborhood operation for spatial filtering if a square mask of size n*n is used
it is restricted that the center of mask must be at a distance ≥ (n – 1)/2 pixels from
border of image, what happens to the resultant image?
a) The resultant image will be of same size as original image
b) The resultant image will be a little larger size than original image
c) The resultant image will be a little smaller size than original image
d) None of the mentioned

Explanation: If the center of the mask must be at a distance ≥ (n – 1)/2 pixels from the border of the image, the border pixels won’t get processed under the mask and so the resultant image would be of smaller size.

7. Which of the following method is/are used for padding the image?
a) Adding rows and column of 0 or other constant gray level
b) Simply replicating the rows or columns
c) All of the mentioned
d) None of the mentioned
Explanation: In neighborhood operations for spatial filtering using a square mask, padding of the original image is done to obtain a filtered image of the same size as the original, by adding rows and columns of 0 or another constant gray level, or by replicating the rows or columns of the original image.
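Both padding methods in a short sketch (shown in 1-D for brevity):

```python
import numpy as np

row = np.array([1, 2, 3])

# Zero padding: add rows/columns of a constant (here 0) gray level.
zero_padded = np.pad(row, 1, mode="constant", constant_values=0)

# Replication padding: repeat the border rows/columns.
replicated = np.pad(row, 1, mode="edge")
```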

8. In neighborhood operation for spatial filtering using square mask of n*n, which
of the following approach is/are used to obtain a perfectly filtered result
irrespective of the size?
a) By padding the image
b) By filtering all the pixels only with the mask section that is fully contained in the
image
c) By ensuring that center of mask must be at a distance ≥ (n – 1)/2 pixels from
border of image
d) None of the mentioned

Explanation: By ensuring that the center of the mask is at a distance ≥ (n – 1)/2 pixels from the border of the image, the resultant image would be of smaller size, but all of its pixels would be the result of the filter processing, and so it is a fully filtered result.
The other approaches are not fully filtered: padding affects the values near the edges, an effect that becomes more prevalent as the mask size increases, while filtering only with the mask section fully contained in the image leaves a band of pixels near the border processed with a partial filter mask.

3.14. Filtering in Frequency Domain


1. Which of the following fact(s) is/are true for the relationship between low
frequency component of Fourier transform and the rate of change of gray levels?
a) Moving away from the origin of transform the low frequency corresponds to smooth
gray level variation
b) Moving away from the origin of transform the low frequencies corresponds to abrupt
change in gray level
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: Low frequencies near the origin of the transform correspond to the slowly varying components of an image. Moving further away from the origin, the higher frequencies correspond to faster gray level changes.

2. Which of the following fact(s) is/are true for the relationship between high
frequency component of Fourier transform and the rate of change of gray levels?
a) Moving away from the origin of transform the high frequency corresponds to smooth
gray level variation
b) Moving away from the origin of transform the higher frequencies corresponds to
abrupt change in gray level
c) All of the mentioned
d) None of the mentioned

Answer: b
Explanation: Low frequencies near the origin of the transform correspond to the slowly varying components of an image. Moving further away from the origin, the higher frequencies correspond to faster gray level changes.

3. What is the name of the filter that multiplies two functions F(u, v) and H(u, v), where F, being the Fourier transform of f(x, y), has complex components, in an order such that each component of H multiplies both the real and the complex part of the corresponding component in F?
a) Unsharp mask filter
b) High-boost filter
c) Zero-phase-shift-filter
d) None of the mentioned

Answer: c
Explanation: A zero-phase-shift filter multiplies F(u, v) and H(u, v) so that each component of H multiplies both the real and the complex part of the corresponding component in F, leaving the phase of the transform unchanged.

4. To set the average value of an image zero, which of the following term would be
set 0 in the frequency domain and the inverse transformation is done, where F(u, v)
is Fourier transformed function of f(x, y)?
a) F(0, 0)
b) F(0, 1)
c) F(1, 0)
d) None of the mentioned
Answer: a
Explanation: For an image f(x, y), the Fourier transform at origin of an image, F(0, 0), is
equal to the average value of the image.
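This can be checked numerically; with NumPy's (unnormalized) DFT convention, F(0, 0) equals M*N times the image mean, so dividing by the pixel count recovers the average gray level:

```python
import numpy as np

# A random test image (reproducible via a fixed seed).
f = np.random.default_rng(0).random((8, 8))
F = np.fft.fft2(f)

# F(0, 0) = M*N * mean(f) under this convention,
# so F(0, 0) / (M*N) is the average value of the image.
average_from_dft = F[0, 0].real / f.size
```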

5. What is the name of the filter that is used to turn the average value of a processed
image zero?
a) Unsharp mask filter
b) Notch filter
c) Zero-phase-shift-filter
d) None of the mentioned

Answer: b
Explanation: Notch filter sets F (0, 0), to zero, hence setting up the average value of image
zero. The filter is named so, because it is a constant function with a notch at origin and so is
able to set F (0, 0) to zero leaving out other values.

6. Which of the following filter(s) attenuates high frequency while passing low
frequencies of an image?
a) Unsharp mask filter
b) Lowpass filter
c) Zero-phase-shift filter
d) All of the mentioned

Answer: b
Explanation: A lowpass filter attenuates high frequency while passing low frequencies.

7. Which of the following filter(s) attenuates low frequency while passing high
frequencies of an image?
a) Unsharp mask filter
b) Highpass filter
c) Zero-phase-shift filter
d) All of the mentioned

Answer: b
Explanation: A highpass filter attenuates low frequency while passing high frequencies.

8. Which of the following filters has a less sharp detail than the original image
because of attenuation of high frequencies?
a) Highpass filter
b) Lowpass filter
c) Zero-phase-shift filter
d) None of the mentioned

Answer: b
Explanation: A lowpass filter attenuates high frequencies, so the image has fewer sharp details.

9. The feature(s) of a highpass filtered image is/are ___________


a) Have less gray-level variation in smooth areas
b) Emphasized transitional gray-level details
c) An overall sharper image
d) All of the mentioned

Answer: d
Explanation: A highpass filter attenuates low frequency so have less gray-level variation in
smooth areas and allows high frequencies so to have emphasized transitional gray-level
details, resulting in a sharper image.

10. A spatial domain filter of the corresponding filter in frequency domain can be
obtained by applying which of the following operation(s) on filter in frequency
domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned

Answer: b
Explanation: Filters in the spatial domain and frequency domain form a Fourier transform
pair. The spatial-domain filter corresponding to a given frequency-domain filter can be
obtained by applying the inverse Fourier transform to the frequency-domain filter.
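The Fourier transform pair relation described here can be checked numerically with NumPy (a minimal sketch; the Gaussian frequency-domain filter, its width, and the 64x64 grid are arbitrary illustration choices):

```python
import numpy as np

# A small Gaussian lowpass filter H(u, v) built directly in the frequency
# domain (FFT index convention, low frequencies near index 0).
N = 64
u = np.fft.fftfreq(N).reshape(-1, 1)   # vertical frequency coordinates
v = np.fft.fftfreq(N).reshape(1, -1)   # horizontal frequency coordinates
H = np.exp(-(u**2 + v**2) / (2 * 0.05**2))   # Gaussian lowpass

# Inverse Fourier transform gives the corresponding spatial-domain kernel.
h = np.fft.ifft2(H)

# Forward Fourier transform of the kernel recovers H: a Fourier transform pair.
H_back = np.fft.fft2(h)
print(np.allclose(H_back, H))   # True
```

Taking the inverse DFT of the frequency-domain filter yields the spatial kernel, and the forward DFT takes it back, which is exactly the relation Q10 and Q11 describe.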

11. A frequency domain filter of the corresponding filter in spatial domain can be
obtained by applying which of the following operation(s) on filter in spatial domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned

Answer: a
Explanation: Filters in the spatial domain and frequency domain form a Fourier transform
pair. The frequency-domain filter corresponding to a given spatial-domain filter can be
obtained by applying the Fourier transform to the spatial-domain filter.
12. Which of the following filtering is done in frequency domain in correspondence to
lowpass filtering in spatial domain?
a) Gaussian filtering
b) Unsharp mask filtering
c) High-boost filtering
d) None of the mentioned

Answer: a
Explanation: A plot of Gaussian filter in frequency domain can be recognized similar to
lowpass filter in spatial domain.

13. Using the feature of reciprocal relationship of filter in spatial domain and
corresponding filter in frequency domain, which of the following fact is true?
a) The narrower the frequency domain filter results in increased blurring
b) The wider the frequency domain filter results in increased blurring
c) The narrower the frequency domain filter results in decreased blurring
d) None of the mentioned

Answer: a
Explanation: The reciprocal relationship implies that the narrower the frequency-domain
filter becomes, the more frequency components it attenuates, and so the greater the
blurring.

3.15. Smoothing Frequency-Domain Filters


1. Smoothing in frequency domain is achieved by attenuating which of the following
components in the transform of a given image?
a) Attenuating a range of high-frequency components
b) Attenuating a range of low-frequency components
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: Since edges and sharp transitions contribute significantly to high-frequency
contents in the gray level of an image. So, smoothing is done by attenuating a range of
high-frequency components.

2. Which of the following is/are considered as type(s) of lowpass filters?


a) Ideal
b) Butterworth
c) Gaussian
d) All of the mentioned

Answer: d
Explanation: Lowpass filters are considered of three types: Ideal, Butterworth, and
Gaussian.

3. Which of the following lowpass filters covers the range of very sharp filter
functions?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned

Answer: a
Explanation: The Ideal lowpass filter covers the very sharp end of the range of lowpass
filter functions.

4. Which of the following lowpass filters covers the range of very smooth filter
functions?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned

Answer: c
Explanation: The Gaussian lowpass filter covers the very smooth end of the range of
lowpass filter functions.

5. Butterworth lowpass filter has a parameter, filter order, determining its


functionality as very sharp or very smooth filter function or an intermediate filter
function. If the parameter value is very high, the filter approaches which of the
following filter(s)?
a) Ideal lowpass filter
b) Gaussian lowpass filter
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal
lowpass filter, while for lower order value it has a smoother form behaving like Gaussian
lowpass filter.

6. Butterworth lowpass filter has a parameter, filter order, determining its


functionality as very sharp or very smooth filter function or an intermediate filter
function. If the parameter value is of lower order, the filter approaches to which of
the following filter(s)?
a) Ideal lowpass filter
b) Gaussian lowpass filter
c) All of the mentioned
d) None of the mentioned

Answer: b
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal
lowpass filter, while for lower order value it has a smoother form behaving like Gaussian
lowpass filter.

7. In a filter, all the frequencies inside a circle of radius D0 are not attenuated while
all frequencies outside circle are completely attenuated. The D0 is the specified
nonnegative distance from origin of the Fourier transform. Which of the following
filter(s) characterizes the same?
a) Ideal filter
b) Butterworth filter
c) Gaussian filter
d) All of the mentioned

Answer: a
Explanation: In ideal filter all the frequencies inside a circle of radius D0 are not attenuated
while all frequencies outside the circle are completely attenuated.

8. In an ideal lowpass filter case, what is the relation between the filter radius and
the blurring effect caused because of the filter?
a) Filter size is directly proportional to blurring caused because of filter
b) Filter size is inversely proportional to blurring caused because of filter
c) There is no relation between filter size and blurring caused because of it
d) None of the mentioned

Answer: b
Explanation: Increasing the filter radius removes less power from the image, so the
blurring is less severe.
9. The characteristics of the lowpass filter h(x, y) is/are_________
a) Has a dominant component at origin
b) Has a concentric, circular components about the center component
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: The lowpass filter has two characteristics: a dominant component at the
origin and concentric, circular components about the center component.

10. What is the relation for the components of ideal lowpass filter and the image
enhancement?
a) The concentric component is primarily responsible for blurring
b) The center component is primarily for the ringing characteristic of ideal filter
c) All of the mentioned
d) None of the mentioned

Answer: d
Explanation: The center component of ideal lowpass filter is primarily responsible for
blurring while, concentric component is primarily for the ringing characteristic of ideal
filter.

11. Using the feature of reciprocal relationship of filter in spatial domain and
corresponding filter in frequency domain along with convolution, which of the
following fact is true?
a) The narrower the frequency domain filter more severe is the ringing
b) The wider the frequency domain filter more severe is the ringing
c) The narrower the frequency domain filter less severe is the ringing

Answer: a
Explanation: The reciprocal relationship implies that the narrower the frequency-domain
filter becomes, the more frequency components it attenuates, so blurring increases and the
ringing becomes more severe.

12. Which of the following defines the expression for BLPF H(u, v) of order n, where
D(u, v) is the distance from point (u, v), D0 is the distance defining cutoff frequency?

a)
b)
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: BLPF is the Butterworth lowpass filter of order n, defined as
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n)).

13. Which of the following defines the expression for ILPF H(u, v) of order n, where
D(u, v) is the distance from point (u, v), D0 is the distance defining cutoff frequency?

a)

b)
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: ILPF is the Ideal lowpass filter, defined as H(u, v) = 1 if D(u, v) ≤ D0, and
H(u, v) = 0 if D(u, v) > D0.
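For reference, the three lowpass transfer functions discussed in this section can be sketched as functions of the distance D(u, v) from the origin (a minimal NumPy sketch; the cutoff D0 and the sample distances are arbitrary illustration values):

```python
import numpy as np

def ideal_lpf(D, D0):
    """ILPF: H = 1 for D <= D0, 0 otherwise (sharp cutoff, causes ringing)."""
    return (D <= D0).astype(float)

def butterworth_lpf(D, D0, n):
    """BLPF of order n: H = 1 / (1 + (D/D0)^(2n)); no sharp discontinuity."""
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

def gaussian_lpf(D, D0):
    """GLPF: H = exp(-D^2 / (2*D0^2)); the smoothest of the three."""
    return np.exp(-(D**2) / (2 * D0**2))

D = np.array([0.0, 50.0, 100.0, 200.0])   # sample distances from the origin
D0 = 100.0
print(ideal_lpf(D, D0))                   # 1 inside the cutoff, 0 outside
print(butterworth_lpf(D, D0, 20))         # high order: approaches the ILPF
print(butterworth_lpf(D, D0, 1))          # low order: smoother, Gaussian-like
```

Evaluating the Butterworth filter at high and low orders illustrates Q5 and Q6: for large n it approaches the ideal filter, for small n a smooth Gaussian-like profile.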

14. State the statement true or false: “BLPF has sharp discontinuity and ILPF doesn’t,
and so ILPF establishes a clear cutoff b/w passed and filtered frequencies”.
a) True
b) False

Answer: b
Explanation: ILPF has sharp discontinuity and BLPF doesn’t, so BLPF establishes a clear
cutoff b/w passed and filtered frequencies.

15. A Butterworth filter of what order has no ringing?


a) 1
b) 2
c) 3
d) 4
Answer: a
Explanation: A Butterworth filter of order 1 has no ringing; ringing exists for order 2,
although it is imperceptible. Butterworth filters of higher order show significant
ringing.

3.16. Unsharp Masking, High-boost Filtering and Emphasis Filtering

1. In frequency domain terminology, which of the following is defined as “obtaining a
highpass filtered image by subtracting from the given image a lowpass filtered
version of itself”?
a) Emphasis filtering
b) Unsharp masking
c) Butterworth filtering
d) None of the mentioned

Answer: b
Explanation: In frequency domain terminology unsharp masking is defined as “obtaining
a highpass filtered image by subtracting from the given image a lowpass filtered version
of itself”.

2. Which of the following is/ are a generalized form of unsharp masking?


a) Lowpass filtering
b) High-boost filtering
c) Emphasis filtering
d) All of the mentioned

Answer: b
Explanation: Unsharp masking is defined as “obtaining a highpass filtered image by
subtracting from the given image a lowpass filtered version of itself” while high-boost
filtering generalizes it by multiplying the input image by a constant, say A≥1.

3. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the
input image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y).
Which of the following facts validates if A=1?
a) High-boost filtering reduces to regular Highpass filtering
b) High-boost filtering reduces to regular Lowpass filtering
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A=1, High-boost filtering reduces to regular Highpass
filtering.
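The identity used in this explanation, A·f − flp = (A−1)·f + fhp, can be verified numerically (a hedged sketch; the 3x3 box average stands in for an arbitrary lowpass filter):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))                 # arbitrary test "image"

# A crude lowpass stand-in: 3x3 box average with edge padding.
p = np.pad(f, 1, mode="edge")
f_lp = sum(p[i:i+8, j:j+8] for i in range(3) for j in range(3)) / 9.0
f_hp = f - f_lp                        # highpass = original minus lowpass

A = 2.5
f_hb = A * f - f_lp                    # high-boost, as defined in the question
f_hb2 = (A - 1) * f + f_hp             # rearranged form from the explanation
print(np.allclose(f_hb, f_hb2))        # True; at A = 1 this is just f_hp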

4. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the
input image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y).
Which of the following fact(s) validates if A increases past 1?
a) The contribution of the image itself becomes more dominant
b) The contribution of the highpass filtered version of image becomes less dominant
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A>1, the contribution of the image itself becomes
more dominant over the highpass filtered version of image.

5. If, Fhp(u, v)=F(u, v) – Flp(u, v) and Flp(u, v) = Hlp(u, v)F(u, v), where F(u, v) is the image in
frequency domain with Fhp(u, v) its highpass filtered version, Flp(u, v) its lowpass
filtered component and Hlp(u, v) the transfer function of a lowpass filter. Then,
unsharp masking can be implemented directly in frequency domain by using a filter.
Which of the following is the required filter?
a) Hhp(u, v) = Hlp(u, v)
b) Hhp(u, v) = 1 + Hlp(u, v)
c) Hhp(u, v) = – Hlp(u, v)
d) Hhp(u, v) = 1 – Hlp(u, v)
Answer: d
Explanation: Unsharp masking can be implemented directly in frequency domain by using
a composite filter: Hhp(u, v) = 1 – Hlp(u, v).

6. Unsharp masking can be implemented directly in frequency domain by using a


filter: Hhp(u, v) = 1 – Hlp(u, v), where Hlp(u, v) the transfer function of a lowpass filter.
What kind of filter is Hhp(u, v)?
a) Composite filter
b) M-derived filter
c) Constant k filter
d) None of the mentioned

Answer: a
Explanation: Unsharp masking can be implemented directly in frequency domain by using
a composite filter: Hhp(u, v) = 1 – Hlp(u, v).

7. If unsharp masking can be implemented directly in frequency domain by using a


composite filter: Hhp(u, v) = 1 – Hlp(u, v), where Hlp(u, v) the transfer function of a
lowpass filter. Then, the composite filter for High-boost filtering is __________
a) Hhb(u, v) = 1 – Hhp(u, v)
b) Hhb(u, v) = 1 + Hhp(u, v)
c) Hhb(u, v) = (A-1) – Hhp(u, v), A is a constant
d) Hhb(u, v) = (A-1) + Hhp(u, v), A is a constant

Answer: d
Explanation: For given composite filter of unsharp masking Hhp(u, v) = 1 – Hlp(u, v), the
composite filter for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v).

8. The frequency domain Laplacian is closer to which of the following mask?


a) Mask that excludes the diagonal neighbors
b) Mask that excludes neighbors in 4-adjacency
c) Mask that excludes neighbors in 8-adjacency
d) None of the mentioned
Answer: a
Explanation: The frequency domain Laplacian is closer to mask that excludes the diagonal
neighbors.

9. To accentuate the contribution to enhancement made by high-frequency components,
which of the following method(s) should be more appropriate to apply?
a) Multiply the highpass filter by a constant
b) Add an offset to the highpass filter to prevent eliminating zero frequency term by filter
c) All of the mentioned combined and applied
d) None of the mentioned

Answer: c
Explanation: To accentuate the contribution to enhancement made by high-frequency
components, we have to multiply the highpass filter by a constant and add an offset to the
highpass filter to prevent eliminating zero frequency term by filter.

10. A process that accentuates the contribution to enhancement made by high-frequency
components, by multiplying the highpass filter by a constant and adding an offset to the
highpass filter to prevent the filter from eliminating the zero-frequency term, is
known as _______
a) Unsharp masking
b) High-boost filtering
c) High frequency emphasis
d) None of the mentioned

Answer: c
Explanation: High frequency emphasis is the method that accentuates the contribution to
enhancement made by high-frequency component. In this we multiply the highpass filter
by a constant and add an offset to the highpass filter to prevent eliminating zero frequency
term by filter.

11. Which of the following is the transfer function of High frequency emphasis, Hhfe(u, v),
for Hhp(u, v) being the highpass filtered version of the image?
a) Hhfe(u, v) = 1 – Hhp(u, v)
b) Hhfe(u, v) = a – Hhp(u, v), a≥0
c) Hhfe(u, v) = 1 – b Hhp(u, v), a≥0 and b>a
d) Hhfe(u, v) = a + b Hhp(u, v), a≥0 and b>a

Answer: d
Explanation: The transfer function of High frequency emphasis is given as:Hhfe(u, v) = a + b
Hhp(u, v), a≥0 and b>a.

12. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image,a≥0 and b>a. for
certain values of a and b it reduces to High-boost filtering. Which of the following is
the required value?
a) a = (A-1) and b = 0,A is some constant
b) a = 0 and b = (A-1),A is some constant
c) a = 1 and b = 1
d) a = (A-1) and b =1,A is some constant

Answer: d
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v) and the transfer function for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v), A
being some constant. So, for a = (A-1) and b =1, Hhfe(u, v) = Hhb(u, v).
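The reduction of high-frequency emphasis to high-boost filtering for a = A−1 and b = 1 can be checked directly (a minimal sketch using an ideal highpass transfer function; the values of A, D0 and the sample distances are arbitrary):

```python
import numpy as np

def ideal_hpf(D, D0):
    """Ideal highpass via the composite relation H_hp = 1 - H_lp."""
    return 1.0 - (D <= D0).astype(float)

D = np.linspace(0.0, 200.0, 5)         # sample distances from the origin
D0 = 100.0
H_hp = ideal_hpf(D, D0)

A = 2.0
H_hb = (A - 1) + H_hp                  # high-boost composite filter
H_hfe = (A - 1) + 1.0 * H_hp           # HFE with a = A - 1 and b = 1
print(np.allclose(H_hb, H_hfe))        # True: HFE reduces to high-boost

# b > 1 emphasizes high frequencies more strongly (emphasized filtering).
H_emph = 0.5 + 2.0 * H_hp
print(H_emph.min(), H_emph.max())      # 0.5 2.5
```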

13. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. What
happens when b increases past 1?
a) The high frequency is emphasized
b) The low frequency is emphasized
c) All frequency is emphasized
d) None of the mentioned

Answer: a
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When b
increases past 1, the high frequency is emphasized.
14. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When
b increases past 1 the filtering process is specifically termed as__________
a) Unsharp masking
b) High-boost filtering
c) Emphasized filtering
d) None of the mentioned

Answer: c
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When b
increases past 1, the high frequency is emphasized and so the filtering process is better
known as Emphasized filtering.

15. Validate the statement “Because of High frequency emphasis the gray-level
tonality due to low frequency components is not lost”.
a) True
b) False

Answer: a
Explanation: Because of High frequency emphasis the gray-level tonality due to low
frequency components is not lost.

3.17. Homomorphic filtering


1. Which of the following facts is true for an image?
a) An image is the addition of illumination and reflectance component
b) An image is the subtraction of illumination component from reflectance component
c) An image is the subtraction of reflectance component from illumination component
d) An image is the multiplication of illumination and reflectance component

Answer: d
Explanation: An image is expressed as the multiplication of illumination and reflectance
component.

2. If an image is expressed as the multiplication of illumination and reflectance


component i.e. f(x, y)= i(x, y) * r(x, y), then Validate the statement “We can directly
use the equation f(x, y)= i(x, y) * r(x, y) to operate separately on the frequency
component of illumination and reflectance” .
a) True
b) False

Answer: b
Explanation: For an image expressed as the multiplication of illumination and reflectance
components, i.e. f(x, y) = i(x, y) * r(x, y), the equation can't be used directly to operate
separately on the frequency components of illumination and reflectance, because the
Fourier transform of the product of two functions is not separable.

3. In Homomorphic filtering which of the following operations is used to convert


input image to discrete Fourier transformed function?
a) Logarithmic operation
b) Exponential operation
c) Negative transformation
d) None of the mentioned

Answer: a
Explanation: For an image expressed as the multiplication of illumination and reflectance
components, i.e. f(x, y) = i(x, y) * r(x, y), the equation can't be used directly to operate
separately on the frequency components of illumination and reflectance, because the
Fourier transform of the product of two functions is not separable. So, a logarithmic
operation is used: ℑ{z(x, y)} = ℑ{ln f(x, y)} = ℑ{ln i(x, y)} + ℑ{ln r(x, y)}.
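The full homomorphic pipeline implied by this explanation (log, Fourier transform, emphasis filter, inverse transform, exponential) can be sketched as follows. This is an illustrative sketch, not a canonical implementation: the Gaussian-shaped emphasis filter and the parameter values gamma_L, gamma_H, and D0 are assumptions.

```python
import numpy as np

def homomorphic(f, gamma_L=0.5, gamma_H=2.0, D0=0.1):
    """Illustrative homomorphic filtering: log -> FFT -> emphasis filter ->
    inverse FFT -> exp. gamma_L < 1 attenuates illumination (low frequencies)
    while gamma_H > 1 amplifies reflectance (high frequencies)."""
    z = np.log1p(f)                            # ln turns i*r into ln i + ln r
    Z = np.fft.fft2(z)
    M, N = f.shape
    u = np.fft.fftfreq(M).reshape(-1, 1)
    v = np.fft.fftfreq(N).reshape(1, -1)
    D2 = u**2 + v**2
    # Gaussian-shaped emphasis filter rising from gamma_L to gamma_H.
    H = (gamma_H - gamma_L) * (1.0 - np.exp(-D2 / (2 * D0**2))) + gamma_L
    g = np.fft.ifft2(H * Z).real
    return np.expm1(g)                         # undo the logarithm

rng = np.random.default_rng(1)
img = rng.random((32, 32)) + 1.0               # arbitrary positive test image
out = homomorphic(img)
print(out.shape)                               # (32, 32)
```

Decreasing gamma_L compresses the illumination's contribution while increasing gamma_H amplifies reflectance, giving the simultaneous dynamic range compression and contrast enhancement described in Q9.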

4. A class of system that achieves the separation of illumination and reflectance


component of an image is termed as __________
a) Base class system
b) Homomorphic system
c) Base separation system
d) All of the mentioned

Answer: b
Explanation: Homomorphic system is a class of system that achieves the separation of
illumination and reflectance component of an image.

5. Which of the following image components is characterized by slow spatial


variation?
a) Illumination component
b) Reflectance component
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation.

6. Which of the following image component varies abruptly particularly at the


junction of dissimilar objects?
a) Illumination component
b) Reflectance component
c) All of the mentioned
d) None of the mentioned

Answer: b
Explanation: The reflectance component of an image varies abruptly particularly at the
junction of dissimilar objects.

7. The reflectance component of an image varies abruptly, particularly at the junction
of dissimilar objects. This characteristic leads to associating reflectance with __________
a) The low frequency of Fourier transform of logarithm of the image
b) The high frequency of Fourier transform of logarithm of the image
c) All of the mentioned
d) None of the mentioned

Answer: b
Explanation: The reflectance component of an image varies abruptly, so, is associated with
the high frequency of Fourier transform of logarithm of the image.

8. The illumination component of an image is characterized by slow spatial
variation. This characteristic leads to associating illumination with __________
a) The low frequency of Fourier transform of logarithm of the image
b) The high frequency of Fourier transform of logarithm of the image
c) All of the mentioned
d) None of the mentioned

Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation, so, is associated with the low frequency of Fourier transform of logarithm of the
image.
9. If the contribution made by illumination component of image is decreased and the
contribution of reflectance component is amplified, what will be the net result?
a) Dynamic range compression
b) Contrast enhancement
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: The illumination component of an image is characterized by a slow spatial
variation and the reflectance component of an image varies abruptly particularly at the
junction of dissimilar objects, so, if the contribution made by illumination component of
image is decreased and the contribution of reflectance component is amplified then there
is simultaneous dynamic range compression and contrast stretching.

3.18. Intensity Transformation Functions


1. How is negative of an image obtained with intensity levels [0, L-1] with “r” and “s”
being pixel values?
a) s = L – 1 + r
b) s = L – 1 – r
c) s = L + 1 + r
d) s = L + 1 + r

Answer: b
Explanation: The negative is obtained using s = L – 1 – r.

2. The general form of log transformations is ____________________


a) s = c.log (1 + r)
b) s = c+log(1 + r)
c) s = c.log (1 – r)
d) s = c-log (1 – r)

Answer: a
Explanation: s = c.log (1 + r) is the log transformation.

3. Power-law transformations have the basic form of ________________ where c and γ are
constants.
a) s = c + r^γ
b) s = c – r^γ
c) s = c * r^γ
d) s = c / r^γ

Answer: c
Explanation: s = c * r^γ is called the Power-law transformation.
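The three gray-level transformations of this section (negative, log, and power-law) can be written in a few lines (a sketch assuming an 8-bit image, L = 256, with the scale constants chosen so the maximum input maps to L − 1):

```python
import numpy as np

L = 256                                   # 8 bpp -> 2^8 gray levels
r = np.arange(L, dtype=float)             # every possible input gray level

s_neg = (L - 1) - r                       # image negative: s = L - 1 - r
c = (L - 1) / np.log(L)                   # scale so the max input maps to L-1
s_log = c * np.log(1 + r)                 # log transformation: s = c*log(1+r)
gamma = 0.4
s_pow = (L - 1) * (r / (L - 1))**gamma    # power-law: s = c * r^gamma

print(s_neg[0], s_neg[255])               # 255.0 0.0
print(int(round(s_log[255])))             # 255
```

With gamma < 1 the power-law curve brightens dark regions; the exponent is the "gamma" that gamma correction compensates for.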

4. For what value of the output must the Power-law transformation account for
offset?
a) No offset needed
b) All values
c) One
d) Zero

Answer: d
Explanation: When the output is Zero, an offset is necessary.

5. What is Gamma Correction?


a) A Power-law response phenomenon
b) Inverted Intensity curve
c) Light brightness variation
d) None of the Mentioned

Answer: a
Explanation: The exponent in Power-law is called gamma and the process used to correct
the response of Power-law transformation is called Gamma Correction.

6. Which process expands the range of intensity levels in an image so that it spans
the full intensity range of the display?
a) Shading correction
b) Contrast stretching
c) Gamma correction
d) None of the Mentioned
Answer: b
Explanation: Contrast stretching is the process used to expand the range of intensity levels
in an image.

7. Highlighting a specific range of intensities of an image is called _______________


a) Intensity Matching
b) Intensity Highlighting
c) Intensity Slicing
d) None of the Mentioned

Answer: c
Explanation: Highlighting a specific range of intensities of an image is called Intensity
Slicing.

8. Highlighting the contribution made to total image by specific bits instead of


highlighting intensity-level changes is called ____________________
a) Intensity Highlighting
b) Byte-Slicing
c) Bit-plane slicing
d) None of the Mentioned

Answer: c
Explanation: It is called Bit-plane slicing.

9. Which of the following involves reversing the intensity levels of an image?


a) Log Transformations
b) Piecewise Linear Transformations
c) Image Negatives
d) None of the Mentioned

Answer: c
Explanation: Image negatives use reversing intensity levels.

10. Piecewise Linear Transformation function involves which of the following?


a) Bit-plane slicing
b) Intensity level slicing
c) Contrast stretching
d) All of the Mentioned

Answer: d
Explanation: Piecewise Linear Transformation function involves all the mentioned
functions.
3.19. Fuzzy Techniques – Transformations and Filtering
1. What is the set generated using infinite-value membership functions, called?
a) Crisp set
b) Boolean set
c) Fuzzy set
d) All of the mentioned

Answer: c
Explanation: It is called fuzzy set.

2. Which is the set whose membership can only be true or false, as in bi-valued Boolean
logic?
a) Boolean set
b) Crisp set
c) Null set
d) None of the mentioned

Answer: b
Explanation: The so-called crisp set is one in which membership can only be true or
false, as in bi-valued Boolean logic.

3. If Z is a set of elements with a generic element z, i.e. Z = {z}, then this set is called
_____________
a) Universe set
b) Universe of discourse
c) Derived set
d) None of the mentioned

Answer: b
Explanation: It is called the universe of discourse.

4. A fuzzy set ‘A’ in Z is characterized by a ____________ that associates with each element
of Z a real number in the interval [0, 1].
a) Grade of membership
b) Generic element
c) Membership function
d) None of the mentioned
Answer: c
Explanation: A fuzzy set is characterized by a membership function.

5. A fuzzy set is ________ if and only if its membership function is identically zero in Z.


a) Empty
b) Subset
c) Complement
d) None of the mentioned

Answer: a
Explanation: It is called an Empty set.

6. Which of the following is a type of Membership function?


a) Triangular
b) Trapezoidal
c) Sigma
d) All of the mentioned

Answer: d
Explanation: All of them are types of Membership functions.
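Triangular and trapezoidal membership functions, two of the types listed above, can be sketched as small functions (the breakpoints a, b, c, d are arbitrary illustration parameters):

```python
def triangular(z, a, b, c):
    """Triangular membership: 0 outside (a, c), rising to 1 at z == b."""
    if z <= a or z >= c:
        return 0.0
    return (z - a) / (b - a) if z <= b else (c - z) / (c - b)

def trapezoidal(z, a, b, c, d):
    """Trapezoidal membership: ramp a->b, flat 1 on [b, c], ramp c->d."""
    if z <= a or z >= d:
        return 0.0
    if z < b:
        return (z - a) / (b - a)
    if z <= c:
        return 1.0
    return (d - z) / (d - c)

print(triangular(5, 0, 5, 10))      # 1.0 (peak of the triangle)
print(trapezoidal(4, 0, 2, 6, 10))  # 1.0 (on the flat top)
print(trapezoidal(8, 0, 2, 6, 10))  # 0.5 (on the falling ramp)
```

Both assign each element z of the universe of discourse a grade of membership in [0, 1], which is what distinguishes a fuzzy set from a crisp set.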

7. Which of the following is not a type of Membership function?


a) S-shape
b) Bell shape
c) Truncated Gaussian
d) None of the mentioned

Answer: d
Explanation: All the mentioned above are types of Membership functions.

8. Using the IF-THEN rule to create the output of a fuzzy system is called _______________
a) Inference
b) Implication
c) Both the mentioned
d) None of the mentioned

Answer: c
Explanation: It is called Inference or Implication.

9. What is the independent variable of fuzzy output?


a) Maturity
b) Membership
c) Generic Element
d) None of the mentioned

Answer: a
Explanation: Maturity is the independent variable of fuzzy output.

10. Which of the following is not a principal step in fuzzy technique?


a) Fuzzify input
b) Apply implication method
c) Defuzzify final output
d) None of the mentioned

Answer: d
Explanation: All the mentioned above are key steps in fuzzy technique.

3.20. Piecewise-Linear Transformation Functions


1. Which gray-level transformation increases the dynamic range of gray-level in the
image?
a) Power-law transformations
b) Negative transformations
c) Contrast stretching
d) None of the mentioned

Answer: c
Explanation: Increasing the dynamic range of gray-levels in the image is the basic idea
behind contrast stretching.

2. When is the contrast stretching transformation a linear function, for r and s as


gray-value of image before and after processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) None of the mentioned

Answer: a
Explanation: If r1 = s1 and r2 = s2 the contrast stretching transformation is a linear function.

3. When is the contrast stretching transformation a thresholding function, for r and s


as gray-value of image before and after processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) None of the mentioned

Answer: b
Explanation: If r1 = r2, s1 = 0 and s2 = L – 1, the contrast stretching transformation is a
thresholding function.

4. What condition prevents the intensity artifacts to be created while processing


with contrast stretching, if r and s are gray-values of image before and after
processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) r1 ≤ r2 and s1 ≤ s2

Answer: d
Explanation: While processing through contrast stretching, if r1 ≤ r2 and s1 ≤ s2 is
maintained, the function remains single-valued and monotonically increasing. This prevents
the creation of intensity artifacts.

5. A contrast stretching result been obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) =
(rmax, L – 1), where, r and s are gray-values of image before and after processing
respectively, L is the max gray value allowed and rmax and rmin are maximum and
minimum gray-values in image respectively. What should we term the
transformation function if r1 = r2 = m, some mean gray-value.
a) Linear function
b) Thresholding function
c) Intermediate function
d) None of the mentioned

Answer: b
Explanation: From (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L – 1), we have s1 = 0 and s2 = L – 1 and if
r1 = r2 = m is set then the result becomes r1 = r2, s1 = 0 and s2 = L – 1, i.e. a thresholding
function.
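The contrast stretching transformation with control points (r1, s1) and (r2, s2) can be sketched as below. This is an illustrative sketch: the guard against division by zero and the specific test values are assumptions.

```python
import numpy as np

def contrast_stretch(r, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretch through (r1, s1) and (r2, s2); requires
    r1 <= r2 and s1 <= s2 so the map stays single-valued and monotonically
    increasing (no intensity artifacts)."""
    r = np.asarray(r, dtype=float)
    out = np.empty_like(r)
    lo, hi = r < r1, r > r2
    mid = ~(lo | hi)
    out[lo] = (s1 / r1) * r[lo] if r1 > 0 else s1
    out[mid] = s1 + (r[mid] - r1) * (s2 - s1) / max(r2 - r1, 1e-12)
    out[hi] = s2 + (r[hi] - r2) * (L - 1 - s2) / max(L - 1 - r2, 1e-12)
    return out

r = np.array([0, 64, 128, 192, 255], dtype=float)
# (r1, s1) = (rmin, 0), (r2, s2) = (rmax, L-1): full-range linear stretch
print(contrast_stretch(r, 0, 0, 255, 255))       # identity map here
# r1 = r2 = m with s1 = 0, s2 = L-1: collapses to a thresholding function
print(contrast_stretch(r, 128, 0, 128, 255))     # 0 below m, 255 above
```

The two calls reproduce the special cases of Q2, Q3 and Q5: equal control slopes give a linear function, while r1 = r2 = m with s1 = 0 and s2 = L − 1 gives a thresholding function.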

6. A specific range of gray-levels highlighting is the basic idea of __________


a) Contrast stretching
b) Bit –plane slicing
c) Thresholding
d) Gray-level slicing

Answer: d
Explanation: Gray-level slicing is done by two approaches: one is to give all gray levels
in a specific range a high value and all other gray levels a low value; the second is to
brighten the gray values of interest and preserve the background. Both highlight a
specific range of gray levels.

7. What is/are the approach(s) of gray-level slicing?


a) To give all gray level of a specific range high value and a low value to all other gray levels
b) To brighten the pixels gray-value of interest and preserve the background
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: There are basically two approaches to gray-level slicing:
One approach is to give all gray levels in a specific range a high value and all other gray
levels a low value.
The second approach is to brighten the gray values of interest and preserve the
background.
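The two gray-level slicing approaches can be sketched directly with NumPy (the slicing range [lo, hi] and the high/low output values are arbitrary illustration choices):

```python
import numpy as np

def slice_binary(img, lo, hi, high=255, low=0):
    """Approach 1: high value inside [lo, hi], low value elsewhere
    (produces a binary image)."""
    return np.where((img >= lo) & (img <= hi), high, low)

def slice_preserve(img, lo, hi, high=255):
    """Approach 2: brighten the range of interest, preserve the background."""
    return np.where((img >= lo) & (img <= hi), high, img)

img = np.array([[10, 120], [180, 240]])         # arbitrary 2x2 test image
print(slice_binary(img, 100, 200))              # binary: 255 in [100, 200]
print(slice_preserve(img, 100, 200))            # background kept as-is
```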

8. Which of the following transforms produces a binary image after processing?


a) Contrast stretching
b) Gray-level slicing
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: The gray-level slicing approach “to give all gray levels in a specific range a
high value and all other gray levels a low value” produces a binary image.
One of the transformations in Contrast stretching darkens the value of r (input image gray-
level) below m (some predefined gray-value) and brightens the value of r above m, giving a
binary image as result.

9. Specific bit contribution in the image highlighting is the basic idea of __________
a) Contrast stretching
b) Bit –plane slicing
c) Thresholding
d) Gray-level slicing

Answer: b
Explanation: Bit-plane slicing highlights the contribution of specific bits made to total
image, instead of highlighting a specific gray-level range.

10. In bit-plane slicing, if an image is represented by 8 bits and is composed of eight
1-bit planes, with plane 0 being the least significant bit and plane 7 the most
significant bit, then which plane(s) contain the majority of the visually significant data?
a) Plane 4, 5, 6, 7
b) Plane 0, 1, 2, 3
c) Plane 0
d) Plane 2, 3, 4, 5

Answer: a
Explanation: In bit-plane slicing, for the given data, the higher-order bits (mostly the top
four) contain most of the visually significant data.

11. Which of the following helps to obtain the number of bits to be used to quantize
each pixel.
a) Gray-level slicing
b) Contrast stretching
c) Contouring
d) Bit-plane slicing

Answer: d
Explanation: Bit-plane slicing reveals the importance of each bit by separating the image
into bit planes, which helps determine the number of bits needed to quantize each pixel.
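Bit-plane slicing of an 8-bit image can be sketched as below; the reconstruction from the top four planes illustrates why they carry most of the visually significant data (the sample pixel values are arbitrary):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into eight 1-bit planes; planes[0] is the least
    significant bit, planes[7] the most significant."""
    img = img.astype(np.uint8)
    return [(img >> k) & 1 for k in range(8)]

img = np.array([[0, 129], [64, 255]], dtype=np.uint8)
planes = bit_planes(img)
print(planes[7])   # MSB plane: 1 wherever the pixel is >= 128

# Reconstructing from the top four planes alone keeps most visual content.
approx = sum(planes[k].astype(int) << k for k in range(4, 8))
print(approx)      # 0, 128, 64, 240: close to the original values
```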

3.21. Noise Reduction by Spatial Filtering


1. Spatial domain methods operate on image pixels given by: g (x, y) = T [f (x, y)]. What
does g, T and f represent?
a) g represents output image, T represents noise matrix, f represents input image
b) g represents noise matrix, T represents input image, f represents output image
c) g represents output image, T represents input image, f represents noise matrix
d) g represents the output image, T represents an operator on f defined over a
neighborhood of point (x,y), f represents the input image
Answer: d
Explanation: Consider a 3X3 matrix A whose first position is (1,1), and let (x,y) be the point
(2,2); then A is the 3X3 neighborhood of (x,y). The smallest possible neighborhood is 1X1. T is
the intensity transformation function that operates on the input image f(x,y) to produce the
output image g(x,y).

2. If the mask filter is given by the following matrix, find T[f(x,y)].


Mask filter M:

M00 M01 M02


M10 M11 M12
M20 M21 M22

a) T[f(x,y)] = f(x,y)×M00 + f(x,y)×M01 + f(x,y)×M02 + f(x,y)×M10 + f(x,y)×M11 + f(x,y)×M12 +
f(x,y)×M20 + f(x,y)×M21 + f(x,y)×M22
b) T[f(x,y)] = f(x+1,y+1)×M00 + f(x-1,y-1)×M01 + f(x,y)×M02 + f(x+1,y)×M10 + f(x,y+1)×M11 +
f(x-1,y)×M12 + f(x,y-1)×M20 + f(x+1,y-1)×M21 + f(x-1,y+1)×M22
c) T[f(x,y)] = f(x-1,y-1)×M00 + f(x-1,y)×M01 + f(x-1,y+1)×M02 + f(x,y-1)×M10 + f(x,y)×M11 +
f(x,y+1)×M12 + f(x+1,y-1)×M20 + f(x+1,y)×M21 + f(x+1,y+1)×M22
d) T[f(x,y)] = f(x+1,y)×M00 + f(x,y+1)×M01 + f(x-1,y-1)×M02 + f(x+1,y+1)×M10 + f(x,y)×M11 +
f(x-1,y+1)×M12 + f(x+1,y+1)×M20 + f(x-1,y-1)×M21 + f(x,y)×M22

Answer: c
Explanation: Since the mask is 3X3 with entries M00 through M22, the centre entry M11 aligns
with the pixel f(x,y); every other entry aligns with the corresponding neighbor, so the mask
response is:
T[f(x,y)] = f(x-1,y-1)×M00 + f(x-1,y)×M01 + f(x-1,y+1)×M02 + f(x,y-1)×M10 + f(x,y)×M11 +
f(x,y+1)×M12 + f(x+1,y-1)×M20 + f(x+1,y)×M21 + f(x+1,y+1)×M22
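The sum above is an ordinary 3x3 weighted-neighborhood operation. A small sketch that evaluates it directly at one interior pixel (NumPy assumed; no border handling, and the function name is illustrative):

```python
import numpy as np

def apply_mask(f, M, x, y):
    """Compute T[f(x, y)]: the 3x3 neighborhood of f centred at (x, y),
    weighted entry-by-entry by the mask M (M[0][0] .. M[2][2])."""
    total = 0.0
    for i in range(3):        # i indexes mask rows 0..2 -> image rows x-1..x+1
        for j in range(3):    # j indexes mask cols 0..2 -> image cols y-1..y+1
            total += f[x - 1 + i, y - 1 + j] * M[i][j]
    return total

f = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

box = [[1 / 9] * 3 for _ in range(3)]  # 3x3 averaging (box) mask
```

With the box mask centred at (1, 1), the response is the mean of 1..9, i.e. 5.0; with an identity mask (M11 = 1, all else 0) the response is just f(x, y).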

3. Which of the following represents the gray level transformation for image
negative?

a) s=(L-1) -r
b) s=(L+1) +r
c) s=(L-1) *r
d) s=(L-1) /r
Answer: a
Explanation: In the negative transformation, each input pixel value is subtracted from (L-1)
and the result is mapped to the output image. For an 8 bpp image there are 2^8 = 256 levels.
Putting L = 256 in (a) gives s = (256 - 1) - r = 255 - r.

4. Which of the following represents the gray level transformation for log
transformation?
a) s=c+log(1+r)
b) s=c-log(1+r)
c) s=c/log(1+r)
d) s=c*log(1+r)

Answer: d
Explanation: In the log transformation, r and s represent the pixel values of the input and
output images and c is an arbitrary constant. Since log(0) is undefined, 1 is added to the input
pixel value so that the smallest argument becomes 1, and log(1) = 0, which keeps the output
finite.

5. Which of the following represents the gray level transformation for power-law
transformation?
a) s=c+rγ
b) s=c+log(rγ)
c) s=c-rγ
d) s=c*rγ

Answer: d
Explanation: This transformation is used to enhance images for different devices. Different
devices have different gamma values: a higher gamma produces a darker image and a lower
gamma a brighter one. The gamma of a CRT lies between 1.8 and 2.5.
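The three point transformations from questions 3 to 5 can be sketched together (NumPy assumed; the constant c and the gamma value are illustrative choices, and r is taken as an 8-bit image):

```python
import numpy as np

L = 256  # number of gray levels for an 8-bit image

def negative(r):
    """Image negative: s = (L - 1) - r."""
    return (L - 1) - r

def log_transform(r, c=1.0):
    """Log transform: s = c * log(1 + r); the +1 keeps log(0) out of range."""
    return c * np.log(1 + r.astype(float))

def power_law(r, c=1.0, gamma=2.2):
    """Power-law (gamma) transform: s = c * r**gamma, with r scaled to [0, 1]."""
    return c * (r / (L - 1)) ** gamma

r = np.array([0, 128, 255], dtype=np.uint8)
```

For example, negative(r) maps 0 to 255 and 255 to 0, log_transform(r) maps 0 to 0, and power_law(r) keeps full white at 1.0 while darkening mid-tones when gamma > 1.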

6. Smoothing filters are used for blurring and noise reduction. (True / False)
a) True
b) False

Answer: a
Explanation: Smoothing filters are used to reduce noise of an image or to produce a less
pixelated image. Most smoothing filters are low pass filters. Smoothing filters are also
known as average and low pass filters.

7. What is contrast stretching?


a) Normalization of the image to improve the contrast.
b) Stretching the image from 50% to 150%.
c) Stretching a part of the image.
d) Stretching the color of an image from point A to point B.

Answer: a
Explanation: Contrast stretching is also called normalization of an image. It is a simple
image enhancement technique to improve the contrast in an image. It is done by stretching
the intensity values to a desired range of values.
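A minimal sketch of such a normalization, linearly mapping the observed intensity range onto [0, 255] (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def contrast_stretch(img, new_min=0, new_max=255):
    """Normalization: linearly map [img.min(), img.max()] onto
    [new_min, new_max] so the image uses the full intensity range."""
    lo, hi = float(img.min()), float(img.max())
    scaled = (img.astype(float) - lo) / (hi - lo)
    return (scaled * (new_max - new_min) + new_min).astype(np.uint8)

# A low-contrast image whose values span only [100, 140].
narrow = np.array([[100, 110], [120, 140]], dtype=np.uint8)
stretched = contrast_stretch(narrow)
```

After stretching, the darkest pixel becomes 0 and the brightest becomes 255, so the narrow band of intensities is spread across the whole dynamic range.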

8. What is the formulation of gray-level slicing?


a) s=L*L
b) s=log(L+1)
c) s=L-1
d) s=L+1

Answer: c
Explanation: Gray-level slicing is also called intensity-level slicing. As the name suggests, it is
used for highlighting a specific range of gray levels in an image. This is done in two ways:
highlighting only the selected range, or highlighting the range while preserving the other
intensities as well. The highlighted value is s = L - 1, where L is the number of levels; for an
8-bit image, L = 256.

9. Which is not a goal of bit-plane slicing?


a) Converting to a binary image from gray level image
b) Represent an image with few bits and convert the image to a small size
c) Dividing images into slices
d) Enhance the image by focusing

Answer: c
Explanation: Bit-plane slicing is a method of representing an image with one or more bits of
the byte used for each pixel. Using only the MSB reduces the original gray-level image to a
binary image. The three main goals of bit-plane slicing are: converting a gray-level image to a
binary image, representing an image with fewer bits and reducing it to a smaller size, and
enhancing the image by focusing on particular bits.

10. Which of the following is not an image enhancement using arithmetic/logical operations?
a) AND, OR
b) NOT
c) SUBTRACTION AND AVERAGING
d) XOR

Answer: d
Explanation: For image enhancement, the techniques used are Arithmetic and Logical
Operations. For Logical operations for image enhancement the operations are: AND, OR,
NOT. Arithmetic operations for image enhancement are Subtraction and Averaging. XOR
operation is not used in image enhancement.

CHAPTER 4: Filtering in Frequency Domain

4.1. Gaussian Lowpass and Sharpening Frequency Domain Filters
1. If the Gaussian filter is expressed as H(u, v) = e^(-D²(u,v)/2D0²), where D(u, v) is the
distance from point (u, v) to the origin of the frequency rectangle and D0 is the distance
defining the cutoff frequency, then for what value of D(u, v) is the filter down to 0.607 of its
maximum value?
a) D(u, v) = D0
b) D(u, v) = D02
c) D(u, v) = D03
d) D(u, v) = 0

Answer: a
Explanation: For the given 2-D Gaussian filter, D(u, v) = D0 brings the filter down to
e^(-1/2) ≈ 0.607 of its maximum value.
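This can be checked numerically by building the GLPF transfer function on a small frequency rectangle (NumPy assumed; centering the rectangle at its midpoint is one common convention):

```python
import numpy as np

def gaussian_lpf(shape, d0):
    """Gaussian lowpass transfer function H(u, v) = exp(-D^2(u, v) / (2*D0^2)),
    where D(u, v) is the distance from the centre of the frequency rectangle."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)       # U varies along rows, V along columns
    D2 = U ** 2 + V ** 2
    return np.exp(-D2 / (2.0 * d0 ** 2))

H = gaussian_lpf((65, 65), d0=16)
```

At the centre H is exactly 1 (its maximum), and at any point where D(u, v) = D0 the value is exp(-1/2) ≈ 0.607, matching the answer above.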

2. State the statement as true or false. “The GLPF did produce as much smoothing as
the BLPF of order 2 for the same value of cutoff frequency”.
a) True
b) False
Answer: b
Explanation: For the same value of cutoff frequency, the GLPF did not produce as much
smoothing as the BLPF of order 2, because the profile of GLPF is not as tight as BLPF of
order 2.

3. In general, which of the following assures of no ringing in the output?


a) Gaussian Lowpass Filter
b) Ideal Lowpass Filter
c) Butterworth Lowpass Filter
d) All of the mentioned

Answer: a
Explanation: The Gaussian lowpass filter guarantees no ringing, whereas the ideal lowpass
filter and Butterworth lowpass filters of order 2 and higher produce significant ringing.

4. The lowpass filtering process can be applied in which of the following area(s)?
a) The field of machine perception, with application of character recognition
b) In field of printing and publishing industry
c) In field of processing satellite and aerial images
d) All of the mentioned

Answer: d
Explanation: LPF is used in systems that recognize broken characters, as a preprocessing
step in the printing and publishing industry, and on remotely sensed images, where it blurs
out as much detail as possible while leaving the large features recognizable.

5. The edges and other abrupt changes in gray-level of an image are associated
with_________
a) High frequency components
b) Low frequency components
c) Edges with high frequency and other abrupt changes in gray-level with low frequency
components
d) Edges with low frequency and other abrupt changes in gray-level with high frequency
components

Answer: a
Explanation: High frequency components are related with the edges and other abrupt
changes in gray-level of an image.
6. A type of Image is called VHRR image. What is the definition of VHRR image?
a) Very High Range Resolution image
b) Very High-Resolution Range image
c) Very High-Resolution Radiometer image
d) Very High Range Radiometer Image

Answer: c
Explanation: A VHRR image is a Very High-Resolution Radiometer Image.

7. Image sharpening in the frequency domain can be achieved by which of the following
method(s)?
a) Attenuating the high frequency components
b) Attenuating the low-frequency components
c) All of the mentioned
d) None of the mentioned

Answer: b
Explanation: The Image sharpening in frequency domain is achieved by attenuating the
low-frequency components without disturbing the high-frequency components.

8. The function of filters in image sharpening in the frequency domain is to perform the
reverse operation of which of the following lowpass filters?
a) Gaussian Lowpass filter
b) Butterworth Lowpass filter
c) Ideal Lowpass filter
d) None of the Mentioned

Answer: c
Explanation: Filters for image sharpening in the frequency domain perform precisely the
reverse operation of the ideal lowpass filter.
The transfer function of a highpass filter is obtained by the relation Hhp(u, v) = 1 - Hlp(u, v),
where Hlp(u, v) is the transfer function of the corresponding lowpass filter.
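The relation Hhp(u, v) = 1 - Hlp(u, v) is easy to verify numerically, taking an ideal lowpass filter as the starting point (a sketch, NumPy assumed; function names are illustrative):

```python
import numpy as np

def ideal_lpf(shape, d0):
    """Ideal lowpass transfer function: 1 for D(u, v) <= D0, else 0,
    with D(u, v) measured from the centre of the frequency rectangle."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U ** 2 + V ** 2)
    return (D <= d0).astype(float)

def highpass_from_lowpass(H_lp):
    """H_hp(u, v) = 1 - H_lp(u, v)."""
    return 1.0 - H_lp

H_lp = ideal_lpf((33, 33), d0=8)
H_hp = highpass_from_lowpass(H_lp)
```

The highpass filter is 0 at the centre (where the lowpass is 1) and 1 far from it, and the two always sum to 1 pointwise.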

9. If D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v)
is the distance from point (u, v). Then what value does an Ideal Highpass filter will
give if D(u, v) ≤ D0 and if D(u, v) >D0?
a) 0 and 1 respectively
b) 1 and 0 respectively
c) 1 in both case
d) 0 in both case
Answer: a
Explanation: Unlike Ideal lowpass filter, an Ideal highpass filter attenuates the low-
frequency components and so gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.

10. What is the relation of the frequencies to a circle of radius D0, where D0 is the
cutoff distance measured from origin of frequency rectangle, for an Ideal Highpass
filter?
a) IHPF sets all frequencies inside circle to zero
b) IHPF allows all frequencies, without attenuating, outside the circle
c) All of the mentioned
d) None of the mentioned

Answer: c
Explanation: An Ideal high pass filter gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.

11. Which of the following is the transfer function of the Butterworth Highpass Filter,
of order n, D0 is the cutoff distance measured from origin of frequency rectangle and
D(u, v) is the distance from point (u, v)?

a)

b)

c)
d) none of the mentioned

Answer: a
Explanation: The transfer function of the Butterworth highpass filter of order n, where D0 is
the cutoff distance measured from the origin of the frequency rectangle and D(u, v) is the
distance from point (u, v), is given by: H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n)).

12. Which of the following is the transfer function of the Ideal Highpass Filter? Given
D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v) is
the distance from point (u, v).

a)

b)
c)
d) none of the mentioned

Answer: b
Explanation: The transfer function of the ideal highpass filter, where D0 is the cutoff distance
measured from the origin of the frequency rectangle and D(u, v) is the distance from point
(u, v), is given by: H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0.

13. Which of the following is the transfer function of the Gaussian Highpass Filter?
Given D0 is the cutoff distance measured from origin of frequency rectangle and D(u,
v) is the distance from point (u, v).

a)

b)

c)
d) none of the mentioned

Answer: c
Explanation: The transfer function of the Gaussian highpass filter, where D0 is the cutoff
distance measured from the origin of the frequency rectangle and D(u, v) is the distance from
point (u, v), is given by: H(u, v) = 1 - e^(-D²(u,v)/2D0²).

14. For a given image having smaller objects, which of the following filter(s), having
D0 as the cutoff distance measured from origin of frequency rectangle, would you
prefer for a comparably smoother result?
a) IHPF with D0 = 15
b) BHPF with D0 = 15 and order 2
c) GHPF with D0 = 15
d) All of the mentioned

Answer: c
Explanation: For the same format as for BHPF, GHPF gives a result comparably smoother
than BHPF. However, BHPF performance for filtering smaller objects is comparable with
IHPF.
15. Which of the following statement(s) is true for the given fact that “Applying
Highpass filters has an effect on the background of the output image”?
a) The average background intensity increases to near white
b) The average background intensity reduces to near black
c) The average background intensity changes to a value average of black and white
d) All of the mentioned

Answer: b
Explanation: A highpass filter eliminates the zero-frequency component of the Fourier
transform of the image it is applied to, so the average background intensity is reduced to
near black.

Chapter 5 : Image Restoration and Reconstruction

Elements of Visual Perception

1. Which of the following is a receptor in the retina of human eye?


a) Rods
b) Cones
c) Rods and Cones
d) Neither Rods nor Cones

Explanation: Rods are long slender receptors while cones are shorter and thicker receptors.

2. How is image formation in the eye different from that in a photographic camera?
a) No difference
b) Variable focal length
c) Varying distance between lens and imaging plane
d) Fixed focal length

Explanation: Fibers in ciliary body vary shape of the lens thereby varying its focal length.
3. Range of light intensity levels to which the human eye can adapt (in Log of Intensity, mL)
a) 10^-6 to 10^-4
b) 10^4 to 10^6
c) 10^-6 to 10^4
d) 10^-5 to 10^5

Explanation: The range of light intensity to which the human eye can adapt is enormous,
about 10 orders of magnitude, from 10^-6 mL to 10^4 mL.

4. What is subjective brightness?


a) Related to intensity
b) Related to brightness
c) Related to image perception
d) Related to image formation

Explanation: It is the intensity as perceived by the human eye.

5. What is brightness adaptation?


a) Changing the eye’s overall sensitivity
b) Changing the eye’s imaging ability
c) Adjusting the focal length
d) Transition from scotopic to photopic vision

Explanation: The human eye achieves a wide dynamic range by changing its overall
sensitivity; this is called brightness adaptation.

6. The innermost membrane of the human eye is


a) Blind Spot
b) Sclera
c) Choroid
d) Retina

Explanation: Retina is the innermost membrane of the human eye.


7. What is the function of Iris?
a) Source of nutrition
b) Detect color
c) Varies focal length
d) Control amount of light

Explanation: Iris is responsible for controlling the amount of light that enters the human
eye.

8. ________ serve to give a general, overall picture of the field of view.


a) Cones
b) Rods
c) Retina
d) All of the Mentioned

Explanation: Rods produce an overall picture of the field of view.

9. Ratio of number of rods to the number of cones is _______


a) 1:20
b) 1:2
c) 1:1
d) 1:5

Explanation: Number of cones: 6 to 7 million; number of rods: 75 to 150 million, i.e. roughly 20
rods for every cone.

10. The absence of receptors is in the retinal area called _____________


a) Lens
b) Ciliary body
c) Blind spot
d) Fovea

Explanation: Except the blind spot, receptors are radially distributed.

Relationships between Pixels


1. In 4-neighbours of a pixel p, how far are each of the neighbours located from p?
a) one pixel apart
b) four pixels apart
c) alternating pixels
d) none of the Mentioned

Explanation: Each pixel is a unit distance apart from the pixel p.

2. If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path
between them consisting of pixels entirely in S.
a) continuous
b) ambiguous
c) connected
d) none of the Mentioned

Explanation: Pixels p and q are said to be connected if there exists a path between them
consisting of pixels entirely in S.

3. If R is a subset of pixels, we call R a _________ of the image if R is a connected set.


a) Disjoint
b) Region
c) Closed
d) Adjacent

Explanation: R is called a Region of the image.

4. Two regions are said to be ___________ if their union forms a connected set.
a) Adjacent
b) Disjoint
c) Closed
d) None of the Mentioned

Explanation: The regions are said to be Adjacent to each other.

5. If an image contains K disjoint regions, what does the union of all the regions represent?
a) Background
b) Foreground
c) Outer Border
d) Inner Border

Explanation: The union of all regions is called Foreground and its complement is called the
Background.

6. For a region R, the set of points that are adjacent to the complement of R is called as
________
a) Boundary
b) Border
c) Contour
d) All of the Mentioned

Explanation: The words boundary, border and contour mean the same set.

7. The distance measure between pixels p and q in which the pixels at distance less than or
equal to some value r from (x,y) form a disk of radius r centred at (x,y) is called:
a) Euclidean distance
b) City-Block distance
c) Chessboard distance
d) None of the Mentioned

Explanation: Euclidean distance is measured using a radius from a defined centre.

8. The distance measure between pixels p and q in which the pixels at distance less than or
equal to some value r from (x,y) form a diamond centred at (x,y) is called:
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned

Explanation: Formation of a diamond is measured as City-Block distance.

9. The distance measure between pixels p and q in which the pixels at distance less than or
equal to some value r from (x,y) form a square centred at (x,y) is called:
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned

Explanation: Distance measured by forming a square around the centre is called


Chessboard distance.
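The three distance measures from questions 7 to 9 can be written out directly (plain Python; the function names are illustrative):

```python
def euclidean(p, q):
    """D_e(p, q) = sqrt((x1-x2)^2 + (y1-y2)^2); points within radius r
    of a centre form a disk."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def city_block(p, q):
    """D_4(p, q) = |x1-x2| + |y1-y2|; points within r form a diamond."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """D_8(p, q) = max(|x1-x2|, |y1-y2|); points within r form a square."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
```

For the pair above, the Euclidean distance is 5, the city-block distance 7, and the chessboard distance 4, illustrating how the three measures rank the same displacement differently.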

10. Which of the following is NOT a type of adjacency?


a) 4-Adjacency
b) 8-Adjacency
c) m-Adjacency
d) None of the Mentioned

Explanation: All the mentioned adjacency types are valid.

Chapter 6: Color Image Processing

Color Fundamentals
1. Into how many categories is color image processing basically divided?
a) 4
b) 2
c) 3
d) 5

Explanation: Color image processing is divided into two major areas: full-color and pseudo-
color processing.

2. What are the names of categories of color image processing?


a) Full-color and pseudo-color processing
b) Half-color and full-color processing
c) Half-color and pseudo-color processing
d) Pseudo-color and Multi-color processing

Explanation: Color image processing is divided into two major areas: full-color and pseudo-
color processing. In the first category, the images are acquired with a full-color sensor like
color TV or color scanner. In the second category, there is a problem of assigning a color to
a particular monochrome intensity or range of intensities.

3. What are the basic quantities that are used to describe the quality of a chromatic light
source?
a) Radiance, brightness and wavelength
b) Brightness and luminance
c) Radiance, brightness and luminance
d) Luminance and radiance

Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness.

4. What is the quantity that is used to measure the total amount of energy flowing from the
light source?
a) Brightness
b) Intensity
c) Luminance
d) Radiance

Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness. Radiance is the total amount of energy that flows from
the light source and is generally measured in watts (W).

5. What are the characteristics that are used to distinguish one color from the other?
a) Brightness, Hue and Saturation
b) Hue, Brightness and Intensity
c) Saturation, Hue
d) Brightness, Saturation and Intensity
Explanation: The characteristics generally used to distinguish one color from another are
brightness, hue and saturation. Brightness embodies the chromatic notion of intensity.
Hue is an attribute associated with dominant wavelength in a mixture of light waves.
Saturation refers to the relative purity or the amount of white light mixed with a hue.

6. What are the characteristics that are taken together in chromaticity?


a) Saturation and Brightness
b) Hue and Saturation
c) Hue and Brightness
d) Saturation, Hue and Brightness

Explanation: Hue and saturation taken together are called chromaticity; therefore, a color
may be characterized by its brightness and chromaticity.

7. Which of the following represent the correct equations for trichromatic coefficients?
a) x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z)
b) x=(Y+Z)/(X+Y+Z), y=(X+Z)/(X+Y+Z), z=(X+Y)/(X+Y+Z)
c) x=X/(X-Y+Z), y=Y/(X-Y+Z), z=Z/(X-Y+Z)
d) x=(-X)/(X+Y+Z), y=(-Y)/(X+Y+Z), z=(-Z)/(X+Y+Z)

Explanation: Tri-stimulus values are the amounts of red, green and blue needed to form any
particular color, denoted X, Y and Z respectively. A color is then specified by its trichromatic
coefficients x, y and z: x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z).

8. What do you mean by tri-stimulus values?


a) It is the amount of red, green and yellow needed to form any particular color
b) It is the amount of red, green and indigo needed to form any particular color
c) It is the amount of red, yellow and blue needed to form any particular color
d) It is the amount of red, green and blue needed to form any particular color

Explanation: The amounts of red, green and blue needed to form any particular color are
called the tri-stimulus values and are denoted by X, Y and Z respectively. A color is then
specified by its trichromatic coefficients, whose equations are formed from tri-stimulus
values.
9. What is the value obtained by the sum of the three trichromatic coefficients?
a) 0
b)-1
c) 1
d) Null

Explanation: From the equations x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z) it is noted that the
sum of the coefficients is x+y+z=1.
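A quick numerical check of the trichromatic coefficients and their sum (plain Python; the tri-stimulus values are made-up numbers for illustration):

```python
def trichromatic(X, Y, Z):
    """Trichromatic coefficients x = X/(X+Y+Z), y = Y/(X+Y+Z),
    z = Z/(X+Y+Z); by construction they always sum to 1."""
    s = X + Y + Z
    return X / s, Y / s, Z / s

# Hypothetical tri-stimulus values, chosen only to make the arithmetic clear.
x, y, z = trichromatic(20.0, 30.0, 50.0)
```

Here x = 0.2, y = 0.3 and z = 0.5, and x + y + z = 1 regardless of the input values.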

10. What is the name of area of the triangle in C.I E chromatic diagram that shows a typical
range of colors produced by RGB monitors?
a) Color gamut
b) Tricolor
c) Color game
d) Chromatic colors

Explanation: The triangle in C.I.E chromatic diagram shows a typical range of colors called
the color gamut produced by RGB monitors. The irregular region inside the triangle is
representative of the color gamut of today’s high-quality color printing devices.

Color Models

1. Color model is also named as (another name):


a) Color space
b) Color gap
c) Color space & color system
d) Color system

Explanation: A color model is also called a color space or color system. Its purpose is to
facilitate the specification of colors in some standard, generally accepted way.

2. What do you mean by the term pixel depth?


a) It is the number of bits used to represent each pixel in RGB space
b) It is the number of bytes used to represent each pixel in RGB space
c) It is the number of units used to represent each pixel in RGB space
d) It is the number of mm used to represent each pixel in RGB space

Explanation: Images are represented in the RGB color model consist of three component
images one for each primary color. When fed into RGB monitor, these three images
combine on the phosphor screen to produce a composite color image. The number of bits
used to represent each pixel in RGB space is called the pixel depth.

3. The term full-color image is used to denote an RGB color image of how many bits?


a) 32-bit RGB color image
b) 24-bit RGB color image
c) 16-bit RGB color image
d) 8-bit RGB color image

Explanation: The term full-color image is often used to denote a 24-bit RGB color image. The
total number of colors in a 24-bit RGB color image is (2^8)^3 = 16,777,216.

4. What is the equation used to obtain S component of each RGB pixel in RGB color format?
a) S=1+3/(R+G+B) [min⁡(R,G,B)].
b) S=1+3/(R+G+B) [max⁡(R,G,B)].
c) S=1-3/(R+G+B) [max⁡(R,G,B)].
d) S=1-3/(R+G+B) [min⁡(R,G,B)].

Explanation: If an image is given in RGB format, the saturation component is obtained by the
equation S = 1 - [3/(R+G+B)] min(R,G,B).

5. What is the equation used to obtain I(Intensity) component of each RGB pixel in RGB
color format?
a) I=1/2(R+G+B)
b) I=1/3(R+G+B)
c) I=1/3(R-G-B)
d) I=1/3(R-G+B)

Explanation: If an image is given in RGB format then the intensity (I) component is obtained
by the equation, I=1/3 (R+G+B).
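The saturation and intensity formulas from questions 4 and 5 can be combined in a small sketch (plain Python; RGB values are assumed normalized to [0, 1], and the function name is illustrative):

```python
def rgb_to_si(R, G, B):
    """Saturation and intensity of a normalized RGB pixel:
    S = 1 - [3/(R+G+B)] * min(R, G, B),  I = (R+G+B)/3.
    For a black pixel (R+G+B = 0) saturation is taken as 0 by convention."""
    I = (R + G + B) / 3.0
    S = 0.0 if R + G + B == 0 else 1.0 - 3.0 * min(R, G, B) / (R + G + B)
    return S, I

S, I = rgb_to_si(1.0, 0.0, 0.0)  # pure red: fully saturated, I = 1/3
```

A pure color such as red gives S = 1, while any gray pixel (R = G = B) gives S = 0, matching the idea that saturation measures how little white light is mixed into the hue.
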
6. What is the equation used for obtaining R value in terms of HSI components?
a) R=I[1-(S cos⁡H)/cos⁡(60°-H) ].
b) R=I[1+(S cos⁡H)/cos(120°-H)].
c) R=I[1+(S cos⁡H)/cos⁡(60°-H) ].
d) R=I[1+(S cos⁡H)/cos(30°-H) ].

Explanation: Given values of HSI in the interval [0, 1], the R value of the RGB components (for
hue in the RG sector) is given by the equation R = I[1 + (S cos H)/cos(60° - H)].

7. What is the equation used for calculating B value in terms of HSI components?
a) B=I(1+S)
b) B=S(1-I)
c) B=S(1+I)
d) B=I(1-S)

Explanation: Given values of HSI in the interval [0, 1], the B value in the RGB components is
given by the equation: B=I(1-S).

8. What is the equation used for calculating G value in terms of HSI components?
a) G=3I-(R+B)
b) G=3I+(R+B)
c) G=3I-(R-B)
d) G=2I-(R+B)

Explanation: Given values of HSI in the interval [0, 1], the G value of the RGB components is
given by the equation G = 3I - (R + B).
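The HSI-to-RGB equations from questions 6 to 8 can be checked for the RG sector, i.e. 0° ≤ H < 120° (plain Python sketch; pure red is a convenient test point, and the function name is illustrative):

```python
import math

def hsi_to_rgb_rg_sector(H_deg, S, I):
    """HSI -> RGB for hue in the RG sector (0 <= H < 120 degrees):
    B = I*(1 - S),  R = I*(1 + S*cos(H)/cos(60 - H)),  G = 3*I - (R + B)."""
    H = math.radians(H_deg)
    B = I * (1.0 - S)
    R = I * (1.0 + S * math.cos(H) / math.cos(math.radians(60.0) - H))
    G = 3.0 * I - (R + B)
    return R, G, B

# Pure red: H = 0 degrees, S = 1, I = 1/3 should give (R, G, B) = (1, 0, 0).
R, G, B = hsi_to_rgb_rg_sector(0.0, 1.0, 1.0 / 3.0)
```

With H = 0, cos(H)/cos(60° - H) = 1/0.5 = 2, so R = (1/3)(1 + 2) = 1, B = 0, and G = 3(1/3) - 1 = 0, recovering pure red as expected.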

9. Which of the following color models are used for color printing?
a) RGB
b) CMY
c) CMYK
d) CMY and CMYK
Explanation: The hardware oriented models which are prominently used in the color
printing process are CMY (cyan, magenta and yellow) and CMYK (cyan, magenta, yellow and
black).
