Image Processing
a) Position
b) Brightness
c) Contrast
d) Noise
2. Among the following image processing techniques, which is fast, precise, and
flexible?
a) Optical
b) Digital
c) Electronic
d) Photographic
Answer: b
4. What is a pixel?
a) Pixel is the elements of a digital image
b) Pixel is the elements of an analog image
c) Pixel is the cluster of a digital image
d) Pixel is the cluster of an analog image
Answer: a
Explanation: An image is a collection of individual points referred to as pixels; thus, a pixel is
the element of a digital image.
7. Which gives a measure of the degree to which a pure colour is diluted by white light?
a) Saturation
b) Hue
c) Intensity
d) Brightness
Answer: a
Explanation: An image of size 1024*1024 pixels, in which the intensity of each pixel is an 8-
bit quantity, requires one megabyte of storage, thus dealing with millions of images would
require a large storage space.
2. Which of the following is not required with reference to the light source of a
lighting system?
a) Sufficient light to provide a quality image
b) Available at low cost
c) The color of light should be pleasing to the eye
d) Provide spatial and temporal intensity to the sample
Answer: c
Explanation: The light source employed in a lighting system of an image processing system
should be cheap in terms of cost, provide spatial and temporal intensity to the sample, and
provide sufficient light for a quality image; the color of the light is immaterial in this
context.
8. Which of the following provides the most efficient short-term storage in an image
processing system?
a) Cloud
b) Hard-disk
c) CD
d) Frame buffer
Answer: d
Explanation: Frame buffers can store images and those can be accessed at a faster rate,
usually at video rates (30 complete images per second).
9. Transmission bandwidth plays a key role in image transmission via the internet to
remote sites, which of the following is improving this situation to a large extent?
a) Wi-Fi
b) Li-Fi
c) Optical Fibers
d) Satellite Communication
Answer: c
Explanation: Communication with remote sites via the Internet is not always efficient. This
situation is improving quickly with the use of optical fiber and other broadband
technologies.
Answer: d
Explanation: Image processing system consists of a digitizer and the hardware that
performs other primitive operations such as arithmetic and logical operations (ALU). This is
called a front-end-subsystem.
11. Which of the following storage is used for frequent access in an image processing
system?
a) Archival storage
b) on-line storage
c) short-term storage
d) long-term storage
Answer: b
Explanation: On-line storage usually takes the form of magnetic disks and optical media
storage. The key factor in on-line storage is the frequent access to the stored data.
4. What is the step that is performed before color image processing in image
processing?
a) Wavelets and multi resolution processing
b) Image enhancement
c) Image restoration
d) Image acquisition
Answer: c
Explanation: Steps in image processing:
Image acquisition-> Image enhancement-> Image restoration-> Color image processing->
Wavelets and multi resolution processing-> Compression-> Morphological processing->
Segmentation-> Representation & description-> Object recognition.
Answer: a
Explanation: Steps in image processing:
Image acquisition-> Image enhancement-> Image restoration-> Color image processing->
Wavelets and multi resolution processing-> Compression-> Morphological processing->
Segmentation-> Representation & description-> Object recognition.
7. Which of the following steps deals with tools for extracting image components
that are useful in the representation and description of shape?
a) Segmentation
b) Representation & description
c) Compression
d) Morphological processing
Answer: d
Explanation: Morphological processing deals with tools for extracting image components
that are useful in the representation and description of shape. The material in this chapter
begins a transition from processes that output images to processes that output image
attributes.
8. In which step of the processing, assigning a label (e.g., “vehicle”) to an object based
on its descriptors is done?
a) Object recognition
b) Morphological processing
c) Segmentation
d) Representation & description
Answer: a
Explanation: Recognition is the process that assigns a label (e.g., “vehicle”) to an object
based on its descriptors. We conclude our coverage of digital image processing with the
development of methods for recognition of individual objects.
Answer: c
Explanation: The output of most sensors is a continuous waveform, and the
amplitude and spatial behavior of such a waveform are related to the physical
phenomenon being sensed.
2. To convert a continuous image f(x, y) to digital form, we have to sample the
function in __________
a) Coordinates
b) Amplitude
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: An image may be continuous in the x- and y-coordinates or in amplitude, or in
both.
7. How is sampling accomplished when a sensing strip is used for image
acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image
direction and Mechanical motion in the other direction
b) The number of sensors in the sensing array establishes the limits of sampling in both
directions
c) The number of mechanical increments when the sensor is activated to collect data
d) None of the mentioned
Answer: a
Explanation: When a sensing strip is used the number of sensors in the strip defines the
sampling limitations in one direction and mechanical motion in the other direction.
8. How is sampling accomplished when a sensing array is used for image acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image
direction and Mechanical motion in the other direction
b) The number of sensors in the sensing array defines the limits of sampling in both
directions
c) The number of mechanical increments at which we activate the sensor to collect data
d) None of the mentioned
Answer: b
Explanation: When we use sensing array for image acquisition, there is no motion and so,
only the number of sensors in the array defines the limits of sampling in both directions
and the output of the sensor is quantized by dividing the gray-level scale into many
discrete levels.
1. Assume that an image f(x, y) is sampled so that the result has M rows and N
columns. If the values of the coordinates at the origin are (x, y) = (0, 0), then the
notation (0, 1) is used to signify :
a) Second sample along first row
b) First sample along second row
c) First sample along first row
d) Second sample along second row
Answer: a
Explanation: The values of the coordinates at the origin are (x, y) = (0, 0). Then, the next
coordinate values (second sample) along the first row of the image are represented as (x, y)
= (0, 1).
3. Let Z be the set of real integers and R the set of real numbers. The sampling
process may be viewed as partitioning the x-y plane into a grid, with the central
coordinates of each grid being from the Cartesian product Z², that is, the set of all
ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is said to be a
digital image if:
a) (x, y) are integers from Z² and f is a function that assigns a gray-level value (from Z) to
each distinct pair of coordinates (x, y)
b) (x, y) are integers from R² and f is a function that assigns a gray-level value (from R) to
each distinct pair of coordinates (x, y)
c) (x, y) are integers from R² and f is a function that assigns a gray-level value (from Z) to
each distinct pair of coordinates (x, y)
d) (x, y) are integers from Z² and f is a function that assigns a gray-level value (from R) to
each distinct pair of coordinates (x, y)
Answer: d
Explanation: In the given condition, f(x, y) is a digital image if (x, y) are integers from Z² and
f is a function that assigns a gray-level value (that is, a real number from the set R) to each
distinct coordinate pair (x, y).
4. Let Z be the set of real integers and R the set of real numbers. The sampling
process may be viewed as partitioning the x-y plane into a grid, with the central
coordinates of each grid being from the Cartesian product Z², that is, the set of all
ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is a digital
image if (x, y) are integers from Z² and f is a function that assigns a gray-level value
(that is, a real number from the set R) to each distinct coordinate pair (x, y). What
happens to the digital image if the gray levels also are integers?
a) The Digital image then becomes a 2-D function whose coordinates and amplitude
values are integers
b) The Digital image then becomes a 1-D function whose coordinates and amplitude values
are integers
c) The gray level can never be integer
d) None of the mentioned
Answer: a
Explanation: In the quantization process, if the gray levels also are integers, the digital
image then becomes a 2-D function whose coordinates and amplitude values are integers.
5. The digitization process i.e. the digital image has M rows and N columns, requires
decisions about values for M, N, and for the number, L, of gray levels allowed for
each pixel. The value M and N have to be:
a) M and N have to be positive integer
b) M and N have to be negative integer
c) M have to be negative and N have to be positive integer
d) M have to be positive and N have to be negative integer
Answer: a
Explanation: The digitization process i.e. the digital image has M rows and N columns,
requires decisions about values for M, N, and for the number, L, of max gray level. There
are no requirements on M and N, other than that M and N have to be positive integers.
6. The digitization process i.e. the digital image has M rows and N columns, requires
decisions about values for M, N, and for the number, L, of max gray levels. There are
no requirements on M and N, other than that M and N have to be positive integer.
However, the number of gray levels typically is
a) An integer power of 2, i.e. L = 2^k
b) A real power of 2, i.e. L = 2^k
c) Two times the integer value, i.e. L = 2k
d) None of the mentioned
Answer: a
Explanation: Due to processing, storage, and sampling-hardware considerations, the
number of gray levels typically is an integer power of 2, i.e. L = 2^k.
7. The digitization process, i.e. the digital image having M rows and N columns, requires
decisions about values for M, N, and for the number, L, of max gray levels, an
integer power of 2, i.e. L = 2^k, allowed for each pixel. If we assume that the discrete
levels are equally spaced and that they are integers, then they are in the interval
__________ and sometimes the range of values spanned by the gray scale is called the
________ of an image.
a) [0, L – 1] and static range respectively
b) [0, L / 2] and dynamic range respectively
c) [0, L / 2] and static range respectively
d) [0, L – 1] and dynamic range respectively
Answer: d
Explanation: In the digitization process, M and N have to be positive, and the number, L, of
discrete gray levels is typically an integer power of 2 for each pixel. If we assume that the
discrete levels are equally spaced and that they are integers, then they lie in the interval
[0, L-1], and the range of values spanned by the gray scale is sometimes called the
dynamic range of an image.
8. After the digitization process, a digital image has M rows and N columns, which have to
be positive, and a number, L, of max gray levels, an integer power of 2 for each
pixel. Then, the number, b, of bits required to store a digitized image is:
a) b=M*N*k
b) b=M*N*L
c) b=M*L*k
d) b=L*N*k
Answer: a
Explanation: In a digital image of M rows and N columns with L max gray levels, an integer
power of 2 (L = 2^k) for each pixel, the number, b, of bits required to store a digitized
image is: b=M*N*k.
9. An image whose gray-levels span a significant portion of gray scale have __________
dynamic range while an image with dull, washed out gray look have __________
dynamic range.
a) Low and High respectively
b) High and Low respectively
c) Both have High dynamic range, irrespective of gray levels span significance on gray scale
d) Both have Low dynamic range, irrespective of gray levels span significance on gray scale
Answer: b
Explanation: An image whose gray levels span a large portion of the gray scale has a high
dynamic range, while one with a dull, washed-out gray look has a low dynamic range.
11. In digital image of M rows and N columns and L discrete gray levels, calculate the
bits required to store a digitized image for M=N=32 and L=16.
a) 16384
b) 4096
c) 8192
d) 512
Answer: b
Explanation: In a digital image of M rows and N columns with L max gray levels, an integer
power of 2 (L = 2^k) for each pixel, the number, b, of bits required to store a digitized
image is: b=M*N*k.
For L=16, k=4,
i.e. b = 32*32*4 = 4096.
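As a quick sanity check, the storage formula can be evaluated directly. This is a minimal Python sketch, not part of the original question set; the function name is my own:

```python
import math

def bits_to_store(M, N, L):
    """Bits needed to store an M x N image with L = 2**k gray levels: b = M * N * k."""
    k = int(math.log2(L))  # bits per pixel
    return M * N * k

# M = N = 32, L = 16 gray levels -> k = 4, so b = 32 * 32 * 4
print(bits_to_store(32, 32, 16))  # 4096
```

The same formula reproduces the earlier claim that a 1024×1024 image with 8-bit pixels needs one megabyte (8,388,608 bits) of storage.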
Answer: d
Explanation: The sampling points are ordered in the plane and their relation is called a
Grid.
2. The transition between continuous values of the image function and its digital equivalent
is called ______________
a) Quantisation
b) Sampling
c) Rasterisation
d) None of the Mentioned
Answer: a
Explanation: The transition between continuous values of the image function and its
digital equivalent is called Quantisation.
3. Images quantised with insufficient brightness levels will lead to the occurrence of
____________
a) Pixillation
b) Blurring
c) False Contours
d) None of the Mentioned
Answer: c
Explanation: This effect arises when the number of brightness levels is lower than what the
human eye can distinguish.
5. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a) Sampling
b) Interpolation
c) Filters
d) None of the Mentioned
Answer: b
Explanation: Interpolation is the basic tool used for zooming, shrinking, rotating, etc.
6. The type of Interpolation where for each new location the intensity of the immediate
pixel is assigned is ___________
a) bicubic interpolation
b) cubic interpolation
c) bilinear interpolation
d) nearest neighbour interpolation
Answer: d
Explanation: It is called Nearest Neighbour Interpolation because each new location is
assigned the intensity of its nearest neighbouring pixel.
7. The type of Interpolation where the intensity of the FOUR neighbouring pixels is used to
obtain the intensity at a new location is called ___________
a) cubic interpolation
b) nearest neighbour interpolation
c) bilinear interpolation
d) bicubic interpolation
Answer: c
Explanation: Bilinear interpolation is where the FOUR neighbouring pixels are used to
estimate the intensity at a new location.
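The idea above can be sketched in a few lines of Python. This is an illustrative implementation of my own (images as lists of rows, no library assumed), not code from the source:

```python
def bilinear(img, x, y):
    """Estimate the intensity at a fractional location (x, y) from the
    four surrounding pixels. img is a list of rows; x indexes columns."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx  # blend along the top row
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx  # blend along the bottom row
    return top * (1 - dy) + bot * dy                 # blend the two rows

img = [[0, 10],
       [20, 30]]
print(bilinear(img, 0.5, 0.5))  # 15.0 -- the average of the four neighbours
```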
8. Dynamic range of imaging system is a ratio where the upper limit is determined by
a) Saturation
b) Noise
c) Brightness
d) Contrast
Answer: a
Explanation: Saturation is taken as the Numerator.
Answer: c
Explanation: Noise is taken as the Denominator.
10. Quantitatively, spatial resolution cannot be represented in which of the following ways?
a) line pairs
b) pixels
c) dots
d) none of the Mentioned
Answer: d
Explanation: All the options can be used to represent spatial resolution.
Answer: b
Explanation: Sensor strips, which use an in-line arrangement, are the most common
imaging geometry after the single sensor.
Answer: d
Explanation: Industrial Computerised Axial Tomography is based on image acquisition
using sensor strips.
4. The section of the real plane spanned by the coordinates of an image is called the
_____________
a) Spatial Domain
b) Coordinate Axes
c) Plane of Symmetry
d) None of the Mentioned
Answer: a
Explanation: The section of the real plane spanned by the coordinates of an image is
called the Spatial Domain, with the x and y coordinates referred to as spatial
coordinates.
5. The difference in intensity between the highest and the lowest intensity levels in an
image is ___________
a) Noise
b) Saturation
c) Contrast
d) Brightness
Answer: c
Explanation: Contrast is the measure of the difference in intensity between the highest and
the lowest intensity levels in an image.
6. _____________ is the effect caused by the use of an insufficient number of intensity levels
in smooth areas of a digital image.
a) Gaussian smooth
b) Contouring
c) False Contouring
d) Interpolation
Answer: c
Explanation: It is called so because the ridges resemble the contours of a map.
7. The process of using known data to estimate values at unknown locations is called
a) Acquisition
b) Interpolation
c) Pixelation
d) None of the Mentioned
Answer: b
Explanation: Interpolation is the process of using known data to estimate values at
unknown locations. It is applied in all image resampling methods.
9. The procedure done on a digital image to alter the values of its individual pixels is
a) Neighbourhood Operations
b) Image Registration
c) Geometric Spatial Transformation
d) Single Pixel Operation
Answer: d
Explanation: It is expressed as a transformation function T of the form s = T(z), where z is
the intensity.
10. In Geometric Spatial Transformation, points whose locations are known precisely in
input and reference images are called _____________
a) Tie points
b) Réseau points
c) Known points
d) Key-points
Answer: a
Explanation: Tie points, also called Control points are points whose locations are known
precisely in input and reference images.
Answer: c
Explanation: Red is towards the right in the electromagnetic spectrum sorted in the
increasing order of wavelength.
2. The property indicating that the output of a linear operation due to the sum of two
inputs is same as performing the operation on the inputs individually and then summing
the results is called ___________
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
Answer: a
Explanation: This property is called additivity.
3. The property indicating that the output of a linear operation to a constant times as input
is the same as the output of operation due to original input multiplied by that constant is
called _________
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
Answer: c
Explanation: This property is called homogeneity.
8. Consider two regions A and B composed of foreground pixels. The ________ of these two
sets is the set of elements belonging to set A or set B or both.
a) OR
b) AND
c) NOT
d) XOR
Answer: a
Explanation: This is called an OR operation.
9. Imaging systems having physical artefacts embedded in the imaging sensors produce a
set of points called __________
a) Tie Points
b) Control Points
c) Reseau Marks
d) None of the Mentioned
Answer: c
Explanation: These points are called “known” points or “Reseau marks”
10. Image processing approaches operating directly on pixels of input image work directly
in ____________
a) Transform domain
b) Spatial domain
c) Inverse transformation
d) None of the Mentioned
Answer: b
Explanation: Operations directly on pixels of input image work directly in Spatial Domain.
1. A pixel p (x, y) has two vertical neighbors and two horizontal neighbors. The neighbors of
(x, y) are _____________
Answer: c
Explanation: The four neighbors of P are denoted by N4(P): {(x+1, y), (x-1, y), (x, y+1), (x, y-1)};
each of these pixels is at unit distance from P. This can be calculated using the distance
formula for 2 points: d = √[(x2-x1)² + (y2-y1)²], where (x1, y1) and (x2, y2)
are the 2 points and d is the distance between them.
3. A pixel p (x, y) has 4 diagonal neighbors. The diagonal neighbors of (x, y) are _____________
a) (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
b) (x+1, y), (x-1, y+1), (x-1, y-1), (x, y-1)
c) (x, y), (x-1, y-1), (x+1, y+1), (x+1, y-1)
d) (x+1, y), (x-1, y), (x, y+1), (x, y-1)
Answer: a
Explanation: Since p has 4 diagonal neighbors (considering a diamond shape with (x, y) as
center, there would be 4 diagonal neighbors on the 4 sides of the diamond), each of the x
and y co-ordinates will change by 1; thus ND(P) is given by: (x+1, y+1), (x+1, y-1), (x-1, y+1),
(x-1, y-1).
4. The 4 diagonal neighbors of pixel p are denoted by ND(P). Each of them are at what
distance?
a) 0.5 units from P
b) 0.707 from P
c) unit distance from P
d) 1.414 units from P
Answer: d
Explanation: The four diagonal neighbors of P are denoted by ND(P): {(x+1, y+1), (x+1, y-1),
(x-1, y+1), (x-1, y-1)}, so each of them is at a distance √2 from P [√(1² + 1²) = √2 = 1.414
units]. The above calculation is done using the distance formula for 2 points:
d = √[(x2-x1)² + (y2-y1)²], where (x1, y1) and (x2, y2) are the 2 points and d is the distance
between them.
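The two neighbourhoods and their distances can be checked with a short Python sketch (my own helper names, not from the source):

```python
import math

def n4(p):
    """Horizontal and vertical neighbours of pixel p = (x, y)."""
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(p):
    """Diagonal neighbours of pixel p = (x, y)."""
    x, y = p
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

p = (5, 5)
print([dist(p, q) for q in n4(p)])               # each N4 neighbour is 1 unit away
print([round(dist(p, q), 3) for q in nd(p)])     # each ND neighbour is sqrt(2) ~ 1.414 away
```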
5. The union of 2 regions which form a connected set are called _____________
a) Connected
b) Joined
c) Disjoint
d) Adjacent
Answer: d
Explanation: The regions are said to be Adjacent if their union forms a connected set. In
other words, two pixels a and b are connected if there is a path from a and b on which
every pixel is 4-connected to the next pixel. A set of pixels in an image where all are
connected to each other is called a connected component.
6. In a binary image, two pixels are connected if they are 4-neighbors and have same value
0 or 1. State whether the statement is true or false.
a) True
b) False
Answer: a
Explanation: Condition for 2 pixels of a binary image to be connected: They should be 4-
neighbors and have same value either 0 or 1 and there should be a connected path
between them.
8. The subset of pixels is given by s. For the pixels p and q to be connected, which of the
following must be satisfied?
a) There exists a path between p and q, which lies outside of the subset s
b) The pixels are 4-adjacent
c) There exists a path between p and q, which lies inside of the subset s
d) The pixels are 8-adjacent
Answer: c
Explanation: Pixels p and q are said to be connected if there exists a path between p and q,
which lies inside of the subset s. Two pixels p and q are connected if there is a path from p
and q on which every pixel is 4-connected to the next pixel.
Answer: a
Explanation: Noise reduction is obtained by blurring the image using a smoothing filter.
Blurring is used in pre-processing steps, such as removal of small details from an
image prior to object extraction and bridging of small gaps in lines or curves.
Answer: d
Explanation: The output or response of a smoothing, linear spatial filter is simply the
average of the pixels contained in the neighbourhood of the filter mask.
Answer: b
Explanation: Since the smoothing spatial filter performs the average of the pixels, it is
also called an averaging filter.
Answer: c
Explanation: Smoothing filter replaces the value of every pixel in an image by the
average value of the gray levels. So, this helps in removing the sharp transitions in the
gray levels between the pixels. This is done because, random noise typically consists of
sharp transitions in gray levels.
Answer: a
Explanation: Edges, which almost always are desirable features of an image, also are
characterized by sharp transitions in gray level. So, averaging filters have an
undesirable side effect that they blur these edges.
a) True
b) False
Answer: b
Explanation: One of the application of smoothing spatial filters is that, they help in
smoothing the false contours that result from using an insufficient number of gray
levels.
7. The mask shown in the figure below belongs to which type of filter?
Answer: d
Explanation: This is a smoothing spatial filter. This mask yields a so-called weighted
average, which means that different pixels are multiplied by different coefficient
values. This gives more importance to some pixels at the expense of others.
8. The mask shown in the figure below belongs to which type of filter?
Answer: c
Explanation: The mask shown in the figure represents a 3×3 smoothing filter. Use of
this filter yields the standard average of the pixels under the mask.
Answer: a
Explanation: A spatial averaging filter or spatial smoothing filter in which all the
coefficients are equal is also called a box filter.
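A 3×3 box filter can be sketched directly in Python. This is an illustrative version of my own (borders left unchanged for simplicity), not code from the source:

```python
def box_filter_3x3(img):
    """Replace each interior pixel by the average of its 3x3 neighbourhood
    (all nine coefficients equal, i.e. a box filter). Borders are unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9
    return out

# a sharp transition (0 among 9s) is smoothed toward its neighbours
img = [[9, 9, 9],
       [9, 0, 9],
       [9, 9, 9]]
print(box_filter_3x3(img)[1][1])  # 8.0
```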
10. If the size of the averaging filter used to smooth the original image into the first
image is 9, then what would be the size of the averaging filter used in smoothing
the same original picture into the second image?
a) 3
b) 5
c) 9
d) 15
Answer: d
Explanation: We know that, as the size of the averaging filter used in smoothing the
original image increases, the blurring of the image also increases. Since the
second image is more blurred than the first image, the window size should be more
than 9.
11. Which of the following comes under the application of image blurring?
a) Object detection
b) Gross representation
c) Object motion
d) Image segmentation
Answer: b
Explanation: An important application of spatial averaging is to blur an image for the
purpose of getting a gross representation of objects of interest, such that the intensity of
small objects blends with the background and larger objects become easy to detect.
Answer: a
Explanation: Order-statistic filters are nonlinear smoothing spatial filters whose response is
based on the ordering or ranking the pixels contained in the image area encompassed by
the filter, and then replacing the value of the central pixel with the value determined by the
ranking result.
d) Sharpening filter
Answer: c
Explanation: The median filter belongs to order-statistic filters, which, as the name implies,
replaces the value of the pixel by the median of the gray levels that are present in the
neighbourhood of the pixels.
Answer: a
Explanation: Median filters are used to remove impulse noises, also called as salt-and-
pepper noise because of its appearance as white and black dots in the image.
15. What is the maximum area of the cluster that can be eliminated by using an n×n
median filter?
a) n2
b) n2/2
c) 2*n2
d) n
Answer: b
Explanation: Isolated clusters of pixels that are light or dark with respect to their
neighbours, and whose area is less than n2/2, i.e., half the area of the filter, can be
eliminated by using an n×n median filter.
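A 3×3 median filter that removes a single salt-and-pepper pixel can be sketched as follows. This is my own illustrative code (borders unchanged), not from the source:

```python
def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out

# a single "salt" pixel (255) in a dark region is removed entirely,
# not merely averaged down as a box filter would do
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_3x3(img)[1][1])  # 10
```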
a) g(x,y)=T[f(x,y)]
b) f(x+y)=T[g(x+y)]
c) g(xy)=T[f(xy)]
d) g(x-y)=T[f(x-y)]
Answer: a
Explanation: Spatial domain processes will be denoted by the expression g(x,y)=T[f(x,y)],
where f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f,
defined over some neighborhood of (x, y). In addition, T can operate on a set of input
images, such as performing the pixel-by-pixel sum of K images for noise reduction.
2. Which of the following shows three basic types of functions used frequently for
image enhancement?
a) Linear, logarithmic and inverse law
b) Power law, logarithmic and inverse law
c) Linear, logarithmic and power law
d) Linear, exponential and inverse law
Answer: c
Explanation: The introduction to gray-level transformations shows three basic types
of functions used frequently for image enhancement: linear (negative and identity
transformations), logarithmic (log and inverse-log transformations), and power-law (nth
power and nth root transformations). The identity function is the trivial case in which output
intensities are identical to input intensities. It is included in the graph only for
completeness.
Answer: c
Explanation: The negative of an image with gray levels in the range[0,L-1] is obtained by
using the negative transformation, which is given by the expression: s=L-1-r.
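The negative transformation s = L-1-r is one line of Python. A minimal sketch of my own, assuming an 8-bit image (L = 256):

```python
def negative(img, L=256):
    """Image negative: s = L - 1 - r for every gray level r."""
    return [[L - 1 - r for r in row] for row in img]

print(negative([[0, 128, 255]]))  # [[255, 127, 0]]
```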
Answer: b
Explanation: The general form of the log transformation is s = c log10(1 + r), where c is a
constant, and it is assumed that r ≥ 0.
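The log transformation above can be demonstrated with a short sketch (my own illustrative code, using the base-10 form stated in the explanation):

```python
import math

def log_transform(img, c=1.0):
    """Log transformation s = c * log10(1 + r): expands dark values,
    compresses the range of bright values."""
    return [[c * math.log10(1 + r) for r in row] for row in img]

row = log_transform([[0, 9, 99]])[0]
print([round(s, 1) for s in row])  # [0.0, 1.0, 2.0]
```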
Answer: a
Explanation: Power-law transformations have the basic form s = c·r^γ, where c and γ are
positive constants. Sometimes s = c·r^γ is written as s = c·(r + ε)^γ to account for an offset
(that is, a measurable output when the input is zero).
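The power-law form s = c·r^γ can be sketched in Python. This is my own example, assuming intensities normalised to [0, 1] so no offset ε is needed:

```python
def power_law(img, c=1.0, gamma=0.5):
    """Power-law (gamma) transformation s = c * r**gamma on normalised intensities."""
    return [[c * (r ** gamma) for r in row] for row in img]

# gamma < 1 brightens the mid-tones of a normalised image
print(power_law([[0.0, 0.25, 1.0]], gamma=0.5))  # [[0.0, 0.5, 1.0]]
```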
6. What is the name of process used to correct the power-law response phenomena?
a) Beta correction
b) Alpha correction
c) Gamma correction
d) Pie correction
Answer: c
Explanation: The exponent in the power-law equation is referred to as gamma, and the
process used to correct this power-law response phenomenon is called gamma
correction.
8. In contrast stretching, if r1=s1 and r2=s2 then which of the following is true?
a) The transformation is not a linear function that produces no changes in gray levels
b) The transformation is a linear function that produces no changes in gray levels
c) The transformation is a linear function that produces changes in gray levels
d) The transformation is not a linear function that produces changes in gray levels
Answer: b
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the
transformation function. If r1=s1 and r2=s2 then the transformation is a linear function that
produces no changes in gray levels.
9. In contrast stretching, if r1=r2, s1=0 and s2=L-1 then which of the following is true?
a) The transformation becomes a thresholding function that creates an octal image
b) The transformation becomes a override function that creates an octal image
c) The transformation becomes a thresholding function that creates a binary image
d) The transformation becomes a thresholding function that do not create an octal image
Answer: c
Explanation: If r1=r2, s1=0 and s2=L-1,the transformation becomes a thresholding function
that creates a binary image.
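The limiting case r1 = r2, s1 = 0, s2 = L-1 is exactly a thresholding function; a minimal Python sketch of my own:

```python
def threshold(img, t, L=256):
    """Contrast stretching with r1 = r2 = t, s1 = 0, s2 = L - 1:
    every gray level above t maps to L - 1, the rest to 0 (a binary image)."""
    return [[L - 1 if r > t else 0 for r in row] for row in img]

print(threshold([[10, 100, 200]], t=128))  # [[0, 0, 255]]
```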
10. In contrast stretching, if r1≤r2 and s1≤s2 then which of the following is true?
a) The transformation function is double valued and exponentially increasing
b) The transformation function is double valued and monotonically increasing
c) The transformation function is single valued and exponentially increasing
d) The transformation function is single valued and monotonically increasing
Answer: d
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the
transformation function. If r1≤r2 and s1≤s2 then the function is single valued and
monotonically increasing.
11. In which type of slicing, highlighting a specific range of gray levels in an image often is
desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing
Answer: a
Explanation: Highlighting a specific range of gray levels in an image often is desired in gray-
level slicing. Applications include enhancing features such as masses of water in satellite
imagery and enhancing flaws in X-ray images.
12. Which of the following depicts the main functionality of the Bit-plane slicing?
a) Highlighting a specific range of gray levels in an image
b) Highlighting the contribution made to total image appearance by specific bits
c) Highlighting the contribution made to total image appearance by specific byte
d) Highlighting the contribution made to total image appearance by specific pixels
Answer: b
Explanation: Instead of highlighting gray-level ranges, highlighting the contribution made to
total image appearance by specific bits might be desired. Suppose each pixel in an image
is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging
from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In
terms of 8-bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the
pixels in the image and plane 7 contains all the high-order bits.
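Extracting a single bit plane from an 8-bit image is a one-line shift-and-mask; a minimal sketch of my own:

```python
def bit_plane(img, plane):
    """Extract one bit plane (0 = least significant bit, 7 = most significant)
    from an 8-bit image as a binary image."""
    return [[(r >> plane) & 1 for r in row] for row in img]

# 130 = 10000010 in binary: only bit planes 7 and 1 are set
img = [[130]]
print(bit_plane(img, 7), bit_plane(img, 1), bit_plane(img, 0))  # [[1]] [[1]] [[0]]
```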
Sharpening Spatial Filters
1.Which of the following is the primary objective of sharpening of an image?
a) Blurring the image
b) Highlight fine details in the image
c) Increase the brightness of the image
d) Decrease the brightness of the image
Answer: b
Explanation: The principal objective of sharpening is to highlight fine detail in the
image. In blurring, we perform the average of pixels, which can be considered as
integration. As sharpening is the opposite process of blurring, we perform
differentiation on the pixels to sharpen the image.
a) True
b) False
Answer: a
5. In which of the following cases, we wouldn’t worry about the behaviour of sharpening
filter?
a) Flat segments
b) Step discontinuities
c) Ramp discontinuities
d) None of the mentioned
Answer: d
Explanation: We are interested in the behaviour of derivatives used in sharpening in the
constant gray level areas i.e., flat segments, and at the onset and end of discontinuities,
i.e., step and ramp discontinuities.
6. Which of the following is the valid response when we apply a first derivative?
Answer: c
Explanation: The derivatives of digital functions are defined in terms of differences. The
definition we use for a first derivative must be zero in flat segments, nonzero at the
onset of a gray-level step or ramp, and nonzero along ramps.
Explanation: The derivatives of digital functions are defined in terms of differences. The
definition we use for a second derivative must be zero in flat segments, nonzero at the
onset and end of a gray-level step or ramp, and zero along ramps of constant slope.
8. If f(x,y) is an image function of two variables, then the first order derivative of a
one dimensional function, f(x) is:
a) f(x+1)-f(x)
b) f(x)-f(x+1)
c) f(x-1)-f(x+1)
d) f(x)+f(x-1)
Answer: a
Explanation: The first order derivative of a single dimensional function f(x) is the
difference between f(x+1) and f(x).
That is, ∂f/∂x = f(x+1) - f(x).
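The difference ∂f/∂x = f(x+1) - f(x) can be checked on a small 1-D profile. A minimal sketch, with a made-up profile containing a flat segment, a ramp, and a step:

```python
import numpy as np

# Hypothetical 1-D profile: flat (5,5,5), ramp (6,7,8), flat, step to 15.
f = np.array([5, 5, 5, 6, 7, 8, 8, 8, 15, 15])
first = f[1:] - f[:-1]   # the first difference f(x+1) - f(x)
# It is zero in flat segments, nonzero (1) along the ramp,
# and nonzero (7) at the step.
```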
Explanation: A point that has a very high or very low gray-level value compared to its
neighbours is called an isolated point or noise point. A noise point is of one pixel size.
10. What is the thickness of the edges produced by first order derivatives when
compared to that of second order derivatives?
a) Finer
b) Equal
c) Thicker
d) Independent
Answer: c
Explanation: We know that the first order derivative is nonzero along the entire ramp,
while the second order derivative is zero along the ramp. So the first order derivatives
produce thicker edges and the second order derivatives produce much finer edges.
11. First order derivative can enhance the fine detail in the image compared to that
of second order derivative.
a) True
b) False
Answer: b
Explanation: The response at and around a noise point is much stronger for the second
order derivative than for the first order derivative. So the second order derivative is
better at enhancing fine details in the image (including noise) than the first order
derivative.
12. Which of the following derivatives produce a double response at step changes
in gray level?
a) First order derivative
b) Third order derivative
c) Second order derivative
d) First and second order derivatives
Answer: c
Explanation: Second order derivatives produce a double-line response at step changes in
gray level. Also, for similar changes in gray-level values in an image, their response is
stronger to a line than to a step, and to a point than to a line.
3. Which of the following fact(s) is/are true about sharpening spatial filters using
digital differentiation?
a) Sharpening spatial filter response is proportional to the discontinuity of the image at
the point where the derivative operation is applied
b) Sharpening spatial filters enhances edges and discontinuities like noise
c) Sharpening spatial filters deemphasizes areas that have slowly varying gray-level
values
d) All of the mentioned
Answer: d
Explanation: A derivative operator's response is proportional to the discontinuity of the
image at the point where the derivative operation is applied.
Image differentiation enhances edges and discontinuities (like noise) and deemphasizes
areas that have slowly varying gray-level values.
Since sharpening spatial filters are analogous to differentiation, all the mentioned facts
are true for sharpening spatial filters.
4. Which of the facts(s) is/are true for the first order derivative of a digital function?
a) Must be nonzero in the areas of constant grey values
b) Must be zero at the onset of a gray-level step or ramp discontinuities
c) Must be nonzero along the gray-level ramps
d) None of the mentioned
Answer: c
Explanation: The first order derivative must be zero in areas of constant gray values,
nonzero at the onset of a gray-level step or ramp, and nonzero along ramps; only option
(c) states a true fact.
5. Which of the facts(s) is/are true for the second order derivative of a digital
function?
a) Must be zero in the flat areas
b) Must be nonzero at the onset and end of a gray-level step or ramp discontinuities
c) Must be zero along the ramps of constant slope
d) All of the mentioned
Answer: d
Explanation: The second order derivative of a digital function must be:
zero in the flat areas, i.e. areas of constant gray values;
nonzero at the onset and end of a gray-level step or ramp discontinuity;
zero along gray-level ramps of constant slope.
Explanation: The second order derivative of a one dimensional image f(x) is defined as:
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x),
where partial-derivative notation is used so the same notation carries over to f(x, y)
when derivatives along the two spatial axes are taken.
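The second difference f(x+1) + f(x-1) - 2f(x) can likewise be checked on a made-up ramp profile; as the surrounding questions state, it responds only at the onset and end of the ramp:

```python
import numpy as np

# Hypothetical profile: flat, then a ramp of constant slope, then flat.
f = np.array([5, 5, 6, 7, 8, 9, 9, 9])
# Second difference at interior points x = 1 .. len(f)-2:
second = f[2:] + f[:-2] - 2 * f[1:-1]
# Nonzero (+1 / -1) only at the onset and end of the ramp,
# zero along the ramp itself and in the flat segments.
```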
8. On the basis of the edges produced, what relation holds between the first order
derivative and the second order derivative of an image whose edge shows a transition
like a ramp of constant slope?
a) First order derivative produces thick edge while second order produces a very
fine edge
b) Second order derivative produces thick edge while first order produces a very fine
edge
c) Both first and second order produces thick edge
d) Both first and second order produces a very fine edge
Answer: a
Explanation: The first order derivative remains nonzero along the entire ramp of
constant slope, while the second order derivative is nonzero only at the onset and end
of such ramps.
So, if an edge in an image shows a transition like a ramp of constant slope, the first
order and second order derivatives produce a thick edge and a finer edge respectively.
9. What kind of relation can be obtained between first order derivative and second
order derivative of an image on the response obtained by encountering an isolated
noise point in the image?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both enhances the same and so the response is same for both first and second order
derivative
d) None of the mentioned
Answer: b
Explanation: This is because a second order derivative is more aggressive toward
enhancing sharp changes than a first order derivative.
10. What kind of relation can be obtained between the response of first order
derivative and second order derivative of an image having a transition into gray-
level step from zero?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both first and second order derivative has the same response
d) None of the mentioned
Answer: c
Explanation: In general a first order derivative has a stronger response to a gray-level
step than a second order derivative, but the responses become the same if the transition
into the gray-level step is from zero.
11. If in an image there exist similar change in gray-level values in the image, which of
the following shows a stronger response using second order derivative operator for
sharpening?
a) A line
b) A step
c) A point
d) None of the mentioned
Answer: c
Explanation: The second order derivative shows a stronger response to a line than to a
step, and to a point than to a line, when there are similar changes in gray-level values in
an image.
7. The ability that rotating the image and applying the filter gives the same result,
as applying the filter to the image first, and then rotating it, is called _____________
a) Isotropic filtering
b) Laplacian
c) Rotation Invariant
d) None of the mentioned
Explanation: It is called Rotation Invariant, although the process used is Isotropic
filtering.
Explanation: In unsharp masking, the steps occur in the order: blurring the image,
subtracting the blurred image from the original to form the mask, and then adding the
mask to the original.
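The blur-subtract-add sequence above can be sketched in 1-D. This is a minimal illustration with a made-up step profile and a simple 3-point average as the blur:

```python
import numpy as np

# Unsharp masking sketch: blur, subtract to form the mask, add back.
f = np.array([10.0, 10.0, 10.0, 50.0, 50.0, 50.0])  # made-up edge profile
kernel = np.ones(3) / 3.0
blurred = np.convolve(f, kernel, mode="same")  # step 1: blurring
mask = f - blurred                             # step 2: subtract blurred image
sharpened = f + mask                           # step 3: add the mask
# The edge is overshot on both sides (classic unsharp-masking effect),
# while flat interior samples are left essentially unchanged.
```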
3. Response of the gradient to noise and fine detail is _____________ the Laplacian’s.
a) equal to
b) lower than
c) greater than
d) has no relation with
Answer: b
Explanation: The response of the gradient to noise and fine detail is lower than the
Laplacian's, and it can be lowered further by smoothing.
8. How do you bring out more of the skeletal detail from a Nuclear Whole Body
Bone Scan?
a) Sharpening
b) Enhancing
c) Transformation
d) None of the mentioned
Answer: a
Explanation: The skeletal detail is brought out by sharpening; a mask formed from a
smoothed version of the gradient image is used in the process.
Explanation: Increasing the dynamic range of the sharpened image is the final step in
enhancement.
3. What is the process of moving a filter mask over the image and computing the
sum of products at each location called as?
a) Convolution
b) Correlation
c) Linear spatial filtering
d) Non linear spatial filtering
4. The standard deviation controls ___________ of the bell (2-D Gaussian function of
bell shape).
a) Size
b) Curve
c) Tightness
d) None of the Mentioned
Explanation: Convolution is the same as Correlation except that the image must be
rotated by 180 degrees initially.
7. Convolution and Correlation are functions of _____________
a) Distance
b) Time
c) Intensity
d) Displacement
8. The function that contains a single 1 with the rest being 0s is called
______________
a) Identity function
b) Inverse function
c) Discrete unit impulse
d) None of the Mentioned
Explanation: Only in dark images do we notice that the components of the histogram are
concentrated on the low side of the intensity scale.
8. The type of Histogram Processing in which pixels are modified based on the
intensity distribution of the image is called _______________.
a) Intensive
b) Local
c) Global
d) Random
Answer: c
Explanation: PDF stands for Probability Density Function.
Histogram Processing – 2
1. The histogram of a digital image with gray levels in the range [0, L-1] is
represented by a discrete function:
a) h(r_k)=n_k
b) h(r_k )=n/n_k
c) p(r_k )=n_k
d) h(r_k )=n_k/n
Answer: a
Explanation: The histogram of a digital image with gray levels in the range [0, L-1] is a
discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of
pixels in the image having gray level r_k.
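The definition h(r_k) = n_k is easy to verify on a tiny example. A sketch with a made-up 3*3 image having L = 8 gray levels:

```python
import numpy as np

# Histogram h(r_k) = n_k of a small hypothetical 8-level image.
img = np.array([[0, 1, 1],
                [2, 2, 2],
                [7, 7, 0]])
L = 8
h = np.bincount(img.ravel(), minlength=L)  # n_k for each gray level r_k
# Normalizing by the total pixel count n gives the estimate p(r_k) = n_k / n.
p = h / img.size
```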
5. The probability density function p_s (s) of the transformed variable s can be
obtained by using which of the following formula?
a) p_s (s)=p_r (r)|dr/ds|
b) p_s (s)=p_r (r)|ds/dr|
c) p_r (r)=p_s (s)|dr/ds|
d) p_s (s)=p_r (r)|dr/dr|
Answer: a
Explanation: The probability density function p_s(s) of the transformed variable s can be
obtained using the basic formula: p_s(s) = p_r(r)|dr/ds|.
Thus, the probability density function of the transformed variable, s, is determined by
the gray-level PDF of the input image and by the chosen transformation function.
Explanation: A plot of p_r(r_k) versus r_k is called a histogram. The transformation
(mapping) s_k = Σ_{j=0}^{k} n_j/n, k = 0, 1, 2, …, L-1 is called histogram equalization or
histogram linearization.
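The equalization mapping s_k = Σ_{j=0}^{k} n_j/n is just a cumulative sum of the normalized histogram. A sketch using hypothetical counts for an 8-level image:

```python
import numpy as np

# Histogram-equalization mapping s_k = cumulative sum of n_j / n.
n_k = np.array([790, 1023, 850, 656, 329, 245, 122, 81])  # made-up counts
n = n_k.sum()                                             # total pixels
s_k = np.cumsum(n_k) / n        # the mapping, monotone in [0, 1]
# Scaling by L-1 = 7 and rounding yields the new (equalized) gray levels.
levels = np.round(s_k * 7).astype(int)
```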
8. What is the method that is used to generate a processed image that have a
specified histogram?
a) Histogram linearization
b) Histogram equalization
c) Histogram matching
d) Histogram processing
Answer: c
Explanation: Histogram matching (also called histogram specification) is used to generate
a processed image that has a specified histogram.
9. Histograms are the basis for numerous spatial domain processing techniques.
a) True
b) False
Answer: a
Explanation: Histograms are the basis for numerous spatial domain processing
techniques. Histogram manipulation can be used effectively for image enhancement.
10. In a dark image, the components of histogram are concentrated on which side
of the grey scale?
a) High
b) Medium
c) Low
d) Evenly distributed
Answer: c
Explanation: In a dark image, the components of the histogram are concentrated mostly
on the low (dark) side of the grey scale. Similarly, the components of the histogram of a
bright image are biased towards the high side of the grey scale.
6. The nonlinear spatial filters whose response is based on the ordering (ranking) of the
pixels contained in the image area encompassed by the filter are called _____________.
a) Box filter
b) Square filter
c) Gaussian filter
d) Order-statistic filter
Answer: d
10. Which of the following is best suited for salt-and-pepper noise elimination?
a) Average filter
b) Box filter
c) Max filter
d) Median filter
Answer: d
Explanation: The median filter is better suited than the average filter for salt-and-pepper
noise elimination.
Explanation: The average of pixels in the neighborhood of filter mask is simply the
output of the smoothing linear spatial filter.
Explanation: Random noise has sharp transitions in gray levels, and smoothing filters are
used for noise reduction.
Explanation: An averaging filter (smoothing linear spatial filter) is used for: noise
reduction by reducing sharp transitions in gray level, smoothing of false contours that
arise from the use of an insufficient number of gray levels, and reduction of irrelevant
detail, i.e. pixel regions that are small in comparison to the filter mask.
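The averaging idea can be sketched directly. A minimal 3*3 box filter, illustrative only (border pixels are skipped here rather than padded, so the output is smaller than the input):

```python
import numpy as np

# 3*3 box filter sketch: each output pixel is the mean of its neighborhood.
def box_filter_3x3(img):
    img = np.asarray(img, dtype=float)
    out = np.empty((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + 3, j:j + 3].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0                    # a single bright pixel (small detail)
smoothed = box_filter_3x3(img)     # its energy is spread over the 3*3 area
```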
6. A spatial averaging filter having all the coefficients equal is termed _________
a) A box filter
b) A weighted average filter
c) A standard average filter
d) A median filter
Answer: a
Explanation: An averaging filter is termed a box filter when all the coefficients of the
spatial averaging filter are equal.
7. What results from using a mask whose central coefficient is the maximum, with the
coefficients reducing as a function of increasing distance from the origin?
a) It results in increasing blurring in smoothing process
b) It results to reduce blurring in smoothing process
c) Nothing with blurring occurs as mask coefficient relation has no effect on smoothing
process
d) None of the mentioned
Answer: b
Explanation: Using a mask whose central coefficient is the maximum, with coefficients
reducing as a function of increasing distance from the origin (a weighted average), is a
strategy to reduce blurring in the smoothing process.
8. What is the relation between blurring effect with change in filter size?
a) Blurring increases with decrease of the size of filter size
b) Blurring decrease with decrease of the size of filter size
c) Blurring decrease with increase of the size of filter size
d) Blurring increases with increase of the size of filter size
Answer: d
Explanation: Using a 3*3 filter on 3*3 and 5*5 squares and other objects shows
significant blurring relative to objects of larger size.
The blurring gets more pronounced as the filter size increases to 5, 9 and so on.
2. Is it true or false that “the original pixel value is included while computing the
median using gray-levels in the neighborhood of the original pixel in median filter
case”?
a) True
b) False
Answer: a
Explanation: In a median filter, the pixel value is replaced by the median of the gray
levels in the neighborhood of that pixel, and the original pixel value is included while
computing the median.
3. Two filters of similar size are used for smoothing an image having impulse noise.
One is a median filter while the other is a linear spatial filter. What would be the
blurring effect of each?
a) Median filter effects in considerably less blurring than the linear spatial filters
b) Median filter effects in considerably more blurring than the linear spatial filters
c) Both have the same blurring effect
d) All of the mentioned
Answer: a
Explanation: For impulse noise, a median filter is much more effective for noise reduction
and causes considerably less blurring than linear spatial filters.
5. While performing the median filtering, suppose a 3*3 neighborhood has value
(10, 20, 20, 20, 15, 20, 20, 25, 100), then what is the median value to be given to the
pixel under filter?
a) 15
b) 20
c) 100
d) 25
Answer: b
Explanation: The values are first sorted: (10, 15, 20, 20, 20, 20, 20, 25, 100). For a 3*3
neighborhood the 5th value in the sorted list is the median, which is 20.
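The worked example above is directly checkable; note how the outlier (100) does not drag the median the way it drags the mean:

```python
import numpy as np

# The 3*3 neighborhood from the question above.
values = [10, 20, 20, 20, 15, 20, 20, 25, 100]
median = int(np.median(values))   # sort, take the 5th of the 9 values
mean = np.mean(values)            # pulled upward by the outlier 100
```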
6. Which of the following are forced to the median intensity of the neighbors by n*n
median filter?
a) Isolated cluster of pixels that are light or dark in comparison to their neighbors
b) Isolated cluster of pixels whose area is less than one-half the filter area
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: An isolated cluster's pixel value does not come out as the median value;
since such clusters are either light or dark compared to their neighbors, they are forced
to the median intensity of the neighbors, which is not even close to their original value,
and so they are sometimes said to be "eliminated".
If the area of such an isolated cluster is less than n²/2, its pixel value again won't be the
median value, and so it is eliminated.
Larger clusters' pixel values are more likely to be the median value, and so are
considerably less often forced to the median intensity.
Explanation: A max filter gives the brightest point in an image and so is used.
8. The median filter also represents which of the following ranked set of numbers?
a) 100th percentile
b) 0th percentile
c) 50th percentile
d) None of the mentioned
Answer: c
Explanation: The median filter assigns to the pixel the value in the middle of the ranked
list of neighborhood values, so it represents the 50th percentile of a ranked set of
numbers.
9. Which of the following filter represents a 0th percentile set of numbers?
a) Max filter
b) Mean filter
c) Median filter
d) None of the mentioned
Answer: d
Explanation: A min filter provides the minimum value in the image and so represents a
0th percentile set of numbers; it is not among the listed options.
Spatial Filtering
1. In neighborhood operations working is being done with the value of image pixel
in the neighborhood and the corresponding value of a subimage that has same
dimension as neighborhood. The subimage is referred as _________
a) Filter
b) Mask
c) Template
d) All of the mentioned
Answer: d
Explanation: The subimage used in neighborhood operations is variously referred to as a
filter, mask, kernel, template, or window.
2. The response for linear spatial filtering is given by the relationship __________
a) Sum of filter coefficient’s product and corresponding image pixel under filter
mask
b) Difference of filter coefficient’s product and corresponding image pixel under filter
mask
c) Product of filter coefficient’s product and corresponding image pixel under filter
mask
d) None of the mentioned
Answer: a
Explanation: In spatial filtering the mask is moved from point to point and at each point
the response is calculated using a predefined relationship. In linear spatial filtering, the
response is given by the sum of the products of the filter coefficients and the
corresponding image pixels in the area under the filter mask.
3. In linear spatial filtering, what is the pixel of the image under mask
corresponding to the mask coefficient w (1, -1), assuming a 3*3 mask?
a) f (x, -y)
b) f (x + 1, y)
c) f (x, y – 1)
d) f (x + 1, y – 1)
Answer: d
Explanation: For a 3*3 mask, the pixel corresponding to mask coefficient w(0, 0) is
f(x, y), so the pixel corresponding to w(1, -1) is f(x + 1, y - 1).
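The coefficient-to-pixel correspondence can be made concrete as the sum of products over s, t in {-1, 0, 1}. A sketch with a made-up 3*3 image and a mask that is 1 only at w(1, -1), so the response picks out exactly f(x+1, y-1):

```python
# Linear spatial filtering response: sum over s, t of w(s, t) * f(x+s, y+t).
def response(f, w, x, y):
    total = 0.0
    for s in (-1, 0, 1):
        for t in (-1, 0, 1):
            total += w[s + 1][t + 1] * f[x + s][y + t]
    return total

f = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
# Mask that is 1 only at coefficient w(1, -1).
w = [[0, 0, 0],
     [0, 0, 0],
     [1, 0, 0]]
r = response(f, w, 1, 1)   # picks out f(2, 0) = 7
```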
5. Which of the following is/are used as basic function in nonlinear filter for noise
reduction?
a) Computation of variance
b) Computation of median
c) All of the mentioned
d) None of the mentioned
Answer: b
Explanation: Computation of the median gray-level value in the neighborhood is the basic
function of the nonlinear (order-statistic) filters used for noise reduction.
6. In neighborhood operation for spatial filtering if a square mask of size n*n is used
it is restricted that the center of mask must be at a distance ≥ (n – 1)/2 pixels from
border of image, what happens to the resultant image?
a) The resultant image will be of same size as original image
b) The resultant image will be a little larger size than original image
c) The resultant image will be a little smaller size than original image
d) None of the mentioned
Answer: c
Explanation: If the center of the mask must be at a distance ≥ (n - 1)/2 pixels from the
border of the image, the border pixels won't get processed under the mask, and so the
resultant image will be of smaller size.
7. Which of the following method is/are used for padding the image?
a) Adding rows and column of 0 or other constant gray level
b) Simply replicating the rows or columns
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: In neighborhood operations for spatial filtering using a square mask, the
original image is padded so that the filtered image has the same size as the original.
Padding is done by adding rows and columns of 0 (or another constant gray level), or by
replicating the rows and columns of the original image.
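Both padding strategies described above map directly onto `np.pad`. A sketch with a made-up 2*2 image, padding one row/column on each side (as needed for a 3*3 mask):

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])
# Strategy 1: add rows/columns of a constant gray level (here 0).
zero_pad = np.pad(img, 1, mode="constant", constant_values=0)
# Strategy 2: replicate the border rows/columns.
replicate = np.pad(img, 1, mode="edge")
```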
8. In neighborhood operation for spatial filtering using square mask of n*n, which
of the following approach is/are used to obtain a perfectly filtered result
irrespective of the size?
a) By padding the image
b) By filtering all the pixels only with the mask section that is fully contained in the
image
c) By ensuring that center of mask must be at a distance ≥ (n – 1)/2 pixels from
border of image
d) None of the mentioned
Answer: c
Explanation: Near the origin of the transform, the low frequencies correspond to the
slowly varying components of an image. Moving further away from the origin, the higher
frequencies correspond to faster gray-level changes.
2. Which of the following fact(s) is/are true for the relationship between high
frequency component of Fourier transform and the rate of change of gray levels?
a) Moving away from the origin of transform the high frequency corresponds to smooth
gray level variation
b) Moving away from the origin of transform the higher frequencies corresponds to
abrupt change in gray level
c) All of the mentioned
d) None of the mentioned
Answer: b
Explanation: Near the origin of the transform, the low frequencies correspond to the
slowly varying components of an image. Moving further away from the origin, the higher
frequencies correspond to faster gray-level changes, i.e. abrupt changes such as edges.
3. What is the name of the filter that multiplies two functions F(u, v) and H(u, v), where
F has complex components (since it is the Fourier transform of f(x, y)), such that each
component of H multiplies both the real and imaginary parts of the corresponding
component of F?
a) Unsharp mask filter
b) High-boost filter
c) Zero-phase-shift-filter
d) None of the mentioned
Answer: c
Explanation: A zero-phase-shift filter multiplies two functions F(u, v) and H(u, v), where
F has complex components (since it is the Fourier transform of f(x, y)), such that each
component of H multiplies both the real and imaginary parts of the corresponding
component of F.
4. To set the average value of an image to zero, which of the following terms would be
set to 0 in the frequency domain before the inverse transformation is done, where
F(u, v) is the Fourier transform of f(x, y)?
a) F(0, 0)
b) F(0, 1)
c) F(1, 0)
d) None of the mentioned
Answer: a
Explanation: For an image f(x, y), the Fourier transform at origin of an image, F(0, 0), is
equal to the average value of the image.
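The relation between F(0, 0) and the average can be verified numerically. Note that with the unnormalized DFT convention used by numpy, F(0, 0) equals the sum of all pixel values, i.e. M*N times the average (some texts put a 1/MN factor in the transform, making F(0, 0) the average itself):

```python
import numpy as np

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])          # made-up 2*2 image
F = np.fft.fft2(img)
# With numpy's convention, F(0, 0) is the sum of all pixels (here 10),
# so dividing by M*N recovers the average gray level.
avg = F[0, 0].real / img.size
```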
5. What is the name of the filter that is used to turn the average value of a processed
image zero?
a) Unsharp mask filter
b) Notch filter
c) Zero-phase-shift-filter
d) None of the mentioned
Answer: b
Explanation: A notch filter sets F(0, 0) to zero, hence setting the average value of the
image to zero. The filter is named so because it is a constant function with a notch at the
origin, which lets it zero out F(0, 0) while leaving the other values untouched.
6. Which of the following filter(s) attenuates high frequency while passing low
frequencies of an image?
a) Unsharp mask filter
b) Lowpass filter
c) Zero-phase-shift filter
d) All of the mentioned
Answer: b
Explanation: A lowpass filter attenuates high frequency while passing low frequencies.
7. Which of the following filter(s) attenuates low frequency while passing high
frequencies of an image?
a) Unsharp mask filter
b) Highpass filter
c) Zero-phase-shift filter
d) All of the mentioned
Answer: b
Explanation: A highpass filter attenuates low frequency while passing high frequencies.
8. Which of the following filters has a less sharp detail than the original image
because of attenuation of high frequencies?
a) Highpass filter
b) Lowpass filter
c) Zero-phase-shift filter
d) None of the mentioned
Answer: b
Explanation: A lowpass filter attenuates high frequencies, so the filtered image has fewer
sharp details than the original.
Answer: d
Explanation: A highpass filter attenuates low frequencies, so there is less gray-level
variation in smooth areas, and it passes high frequencies, emphasizing transitional
gray-level details and resulting in a sharper image.
10. A spatial domain filter of the corresponding filter in frequency domain can be
obtained by applying which of the following operation(s) on filter in frequency
domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned
Answer: b
Explanation: Filters in the spatial domain and frequency domain form a Fourier
transform pair. The spatial domain filter corresponding to a frequency domain filter can
be obtained by applying the inverse Fourier transform to the frequency domain filter.
11. A frequency domain filter of the corresponding filter in spatial domain can be
obtained by applying which of the following operation(s) on filter in spatial domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned
Answer: a
Explanation: Filters in the spatial domain and frequency domain form a Fourier
transform pair. The frequency domain filter corresponding to a spatial domain filter can
be obtained by applying the (forward) Fourier transform to the spatial domain filter.
12. Which of the following filtering is done in frequency domain in correspondence to
lowpass filtering in spatial domain?
a) Gaussian filtering
b) Unsharp mask filtering
c) High-boost filtering
d) None of the mentioned
Answer: a
Explanation: The Fourier transform of a Gaussian is itself a Gaussian, so a Gaussian filter
in the frequency domain corresponds to Gaussian lowpass (smoothing) filtering in the
spatial domain.
13. Using the feature of reciprocal relationship of filter in spatial domain and
corresponding filter in frequency domain, which of the following fact is true?
a) The narrower the frequency domain filter results in increased blurring
b) The wider the frequency domain filter results in increased blurring
c) The narrower the frequency domain filter results in decreased blurring
d) None of the mentioned
Answer: a
Explanation: The reciprocal relationship says that the narrower the frequency domain
(lowpass) filter becomes, the more frequency content it attenuates, leaving only the very
lowest frequencies and so increasing blurring.
Answer: a
Explanation: Edges and sharp transitions contribute significantly to the high-frequency
content of the gray levels of an image, so smoothing is done by attenuating a specified
range of high-frequency components.
Answer: d
Explanation: Lowpass filters are considered of three types: Ideal, Butterworth, and
Gaussian.
3. Which of the following lowpass filters covers the very sharp end of the range of filter
functions?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned
Answer: a
Explanation: Ideal lowpass filter covers the range of very sharp filter functioning of lowpass
filters.
4. Which of the following lowpass filters covers the very smooth end of the range of
filter functions?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned
Answer: c
Explanation: The Gaussian lowpass filter covers the very smooth end of the range of
lowpass filter functions.
Answer: a
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal
lowpass filter, while for lower order value it has a smoother form behaving like Gaussian
lowpass filter.
Answer: b
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal
lowpass filter, while for lower order value it has a smoother form behaving like Gaussian
lowpass filter.
7. In a filter, all the frequencies inside a circle of radius D0 are not attenuated while
all frequencies outside circle are completely attenuated. The D0 is the specified
nonnegative distance from origin of the Fourier transform. Which of the following
filter(s) characterizes the same?
a) Ideal filter
b) Butterworth filter
c) Gaussian filter
d) All of the mentioned
Answer: a
Explanation: In ideal filter all the frequencies inside a circle of radius D0 are not attenuated
while all frequencies outside the circle are completely attenuated.
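The ideal lowpass behavior described above (pass inside a circle of radius D0, cut everything outside) can be sketched directly; this is an illustrative construction with made-up sizes, assuming a centred (shifted) transform:

```python
import numpy as np

# Ideal lowpass filter H(u, v): 1 inside a circle of radius d0 centred on
# the (shifted) origin of the Fourier transform, 0 outside.
def ideal_lowpass(shape, d0):
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from centre
    return (D <= d0).astype(float)

H = ideal_lowpass((5, 5), 1.0)
# Only the centre frequency and its four direct neighbours pass.
```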
8. In an ideal lowpass filter case, what is the relation between the filter radius and
the blurring effect caused because of the filter?
a) Filter size is directly proportional to blurring caused because of filter
b) Filter size is inversely proportional to blurring caused because of filter
c) There is no relation between filter size and blurring caused because of it
d) None of the mentioned
Answer: b
Explanation: As the filter radius increases, less power is removed from the image, and so
less severe blurring occurs.
9. The characteristics of the lowpass filter h(x, y) is/are_________
a) Has a dominant component at origin
b) Has a concentric, circular components about the center component
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: The spatial representation h(x, y) of the lowpass filter has two distinct
characteristics: a dominant component at the origin, and concentric, circular
components about the center component.
10. What is the relation for the components of ideal lowpass filter and the image
enhancement?
a) The concentric component is primarily responsible for blurring
b) The center component is primarily for the ringing characteristic of ideal filter
c) All of the mentioned
d) None of the mentioned
Answer: d
Explanation: The center component of ideal lowpass filter is primarily responsible for
blurring while, concentric component is primarily for the ringing characteristic of ideal
filter.
11. Using the feature of reciprocal relationship of filter in spatial domain and
corresponding filter in frequency domain along with convolution, which of the
following fact is true?
a) The narrower the frequency domain filter more severe is the ringing
b) The wider the frequency domain filter more severe is the ringing
c) The narrower the frequency domain filter less severe is the ringing
d) None of the mentioned
Answer: a
Explanation: By the reciprocal relationship, the narrower the frequency domain filter
becomes, the wider its spatial counterpart, which increases blurring and makes the
ringing more severe.
12. Which of the following defines the expression for BLPF H(u, v) of order n, where
D(u, v) is the distance from point (u, v), D0 is the distance defining cutoff frequency?
a)
b)
c) All of the mentioned
d) None of the mentioned
Answer: a
Explanation: BLPF is the Butterworth lowpass filter of order n and is defined as:
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
13. Which of the following defines the expression for ILPF H(u, v) of order n, where
D(u, v) is the distance from point (u, v), D0 is the distance defining cutoff frequency?
a)
b)
c) All of the mentioned
d) None of the mentioned
Answer: a
Explanation: ILPF is the Ideal lowpass filter and is defined as:
H(u, v) = 1 if D(u, v) ≤ D0, and H(u, v) = 0 if D(u, v) > D0
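The Butterworth transfer function H(u, v) = 1 / (1 + [D/D0]^(2n)) can be evaluated at a few distances to see the behavior discussed around these questions: H = 0.5 exactly at the cutoff for any order, and a larger order n gives a sharper (more ILPF-like) transition. The distances and cutoff below are made up:

```python
# Butterworth lowpass transfer function of order n.
def blpf(D, d0, n):
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

d0 = 10.0
h_at_cutoff = blpf(10.0, d0, n=2)  # at D = D0, H drops to 0.5 for every n
h_far_n1 = blpf(20.0, d0, n=1)    # gentle roll-off: 1 / (1 + 2^2) = 0.2
h_far_n4 = blpf(20.0, d0, n=4)    # sharp roll-off, much closer to 0
```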
14. State the statement true or false: “BLPF has sharp discontinuity and ILPF doesn’t,
and so ILPF establishes a clear cutoff b/w passed and filtered frequencies”.
a) True
b) False
Answer: b
Explanation: ILPF has sharp discontinuity and BLPF doesn’t, so BLPF establishes a clear
cutoff b/w passed and filtered frequencies.
Answer: b
Explanation: In frequency domain terminology unsharp masking is defined as “obtaining
a highpass filtered image by subtracting from the given image a lowpass filtered version
of itself”.
Answer: b
Explanation: Unsharp masking is defined as “obtaining a highpass filtered image by
subtracting from the given image a lowpass filtered version of itself” while high-boost
filtering generalizes it by multiplying the input image by a constant, say A≥1.
3. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the
input image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y).
Which of the following facts validates if A=1?
a) High-boost filtering reduces to regular Highpass filtering
b) High-boost filtering reduces to regular Lowpass filtering
c) All of the mentioned
d) None of the mentioned
Answer: a
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A=1, High-boost filtering reduces to regular Highpass
filtering.
4. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the
input image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y).
Which of the following fact(s) validates if A increases past 1?
a) The contribution of the image itself becomes more dominant
b) The contribution of the highpass filtered version of image becomes less dominant
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A>1, the contribution of the image itself becomes
more dominant over the highpass filtered version of image.
5. Let Fhp(u, v) = F(u, v) – Flp(u, v) and Flp(u, v) = Hlp(u, v) F(u, v), where F(u, v) is the image in the
frequency domain, Fhp(u, v) its highpass filtered version, Flp(u, v) its lowpass
filtered component and Hlp(u, v) the transfer function of a lowpass filter. Then
unsharp masking can be implemented directly in the frequency domain by using a filter.
Which of the following is the required filter?
a) Hhp(u, v) = Hlp(u, v)
b) Hhp(u, v) = 1 + Hlp(u, v)
c) Hhp(u, v) = – Hlp(u, v)
d) Hhp(u, v) = 1 – Hlp(u, v)
Answer: d
Explanation: Unsharp masking can be implemented directly in frequency domain by using
a composite filter: Hhp(u, v) = 1 – Hlp(u, v).
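The composite filter above can be sketched directly with the FFT; the ideal lowpass transfer function and the normalized cutoff D0 below are illustrative assumptions.

```python
import numpy as np

# Sketch of frequency-domain highpass filtering via Hhp(u,v) = 1 - Hlp(u,v),
# built from an ideal lowpass transfer function. D0 is illustrative.
def ideal_lowpass(shape, D0):
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    return (np.hypot(u, v) <= D0).astype(float)

def highpass(f, D0=0.1):
    H_hp = 1.0 - ideal_lowpass(f.shape, D0)   # Hhp = 1 - Hlp
    return np.fft.ifft2(np.fft.fft2(f) * H_hp).real
```

Because the zero-frequency term falls inside the lowpass region, a constant image maps to (approximately) zero under this filter.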
Answer: a
Explanation: Unsharp masking can be implemented directly in frequency domain by using
a composite filter: Hhp(u, v) = 1 – Hlp(u, v).
Answer: d
Explanation: For given composite filter of unsharp masking Hhp(u, v) = 1 – Hlp(u, v), the
composite filter for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v).
Answer: c
Explanation: To accentuate the contribution to enhancement made by high-frequency
components, we have to multiply the highpass filter by a constant and add an offset to the
highpass filter to prevent eliminating zero frequency term by filter.
Answer: c
Explanation: High frequency emphasis is the method that accentuates the contribution to
enhancement made by high-frequency component. In this we multiply the highpass filter
by a constant and add an offset to the highpass filter to prevent eliminating zero frequency
term by filter.
11. Which of the following is the transfer function of High frequency emphasis, Hhfe(u, v),
for Hhp(u, v) being the highpass filtered version of the image?
a) Hhfe(u, v) = 1 – Hhp(u, v)
b) Hhfe(u, v) = a – Hhp(u, v), a≥0
c) Hhfe(u, v) = 1 – b Hhp(u, v), a≥0 and b>a
d) Hhfe(u, v) = a + b Hhp(u, v), a≥0 and b>a
Answer: d
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), a≥0 and b>a.
12. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of the image, a≥0 and b>a. For
certain values of a and b it reduces to High-boost filtering. Which of the following are
the required values?
a) a = (A-1) and b = 0, A is some constant
b) a = 0 and b = (A-1), A is some constant
c) a = 1 and b = 1
d) a = (A-1) and b = 1, A is some constant
Answer: d
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v) and the transfer function for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v), A
being some constant. So, for a = (A-1) and b =1, Hhfe(u, v) = Hhb(u, v).
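The reduction above can be checked numerically; the sample transfer values and the constant A below are illustrative.

```python
import numpy as np

# Numeric check: with a = A-1 and b = 1, the high-frequency-emphasis filter
# equals the high-boost composite filter. Sample values are illustrative.
H_hp = np.array([0.0, 0.25, 0.5, 1.0])   # sample Hhp(u,v) values
A = 2.5
H_hfe = (A - 1) + 1.0 * H_hp             # Hhfe = a + b*Hhp with a = A-1, b = 1
H_hb = (A - 1) + H_hp                    # Hhb  = (A-1) + Hhp
```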
13. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. What
happens when b increases past 1?
a) The high frequency is emphasized
b) The low frequency is emphasized
c) All frequency is emphasized
d) None of the mentioned
Answer: a
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When b
increases past 1, the high frequency is emphasized.
14. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When
b increases past 1 the filtering process is specifically termed as__________
a) Unsharp masking
b) High-boost filtering
c) Emphasized filtering
d) None of the mentioned
Answer: c
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of image, a≥0 and b>a. When b
increases past 1, the high frequency is emphasized and so the filtering process is better
known as Emphasized filtering.
15. Validate the statement “Because of High frequency emphasis the gray-level
tonality due to low frequency components is not lost”.
a) True
b) False
Answer: a
Explanation: Because of High frequency emphasis the gray-level tonality due to low
frequency components is not lost.
Answer: d
Explanation: An image is expressed as the product of its illumination and reflectance
components.
Answer: b
Explanation: An image is expressed as the product of its illumination and reflectance
components, i.e. f(x, y) = i(x, y)·r(x, y). The equation can’t be used directly to operate
separately on the frequency components of illumination and reflectance because the
Fourier transform of the product of two functions is not separable.
Answer: a
Explanation: For an image is expressed as the multiplication of illumination and reflectance
component i.e. f(x, y) = i(x, y) * r(x, y), the equation can’t be used directly to operate
separately on the frequency component of illumination and reflectance because the
Fourier transform of the product of two function is not separable. So, logarithmic operation
is used. I{z(x,y)} =I{ln(f(x,y))} =I{ln(i(x,y))} +I{ln(r(x,y))}.
Answer: b
Explanation: Homomorphic system is a class of system that achieves the separation of
illumination and reflectance component of an image.
Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation.
Answer: b
Explanation: The reflectance component of an image varies abruptly particularly at the
junction of dissimilar objects.
Answer: b
Explanation: The reflectance component of an image varies abruptly, so, is associated with
the high frequency of Fourier transform of logarithm of the image.
Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation, so, is associated with the low frequency of Fourier transform of logarithm of the
image.
9. If the contribution made by illumination component of image is decreased and the
contribution of reflectance component is amplified, what will be the net result?
a) Dynamic range compression
b) Contrast enhancement
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: The illumination component of an image is characterized by a slow spatial
variation and the reflectance component of an image varies abruptly particularly at the
junction of dissimilar objects, so, if the contribution made by illumination component of
image is decreased and the contribution of reflectance component is amplified then there
is simultaneous dynamic range compression and contrast stretching.
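The homomorphic idea above rests on the logarithm turning the multiplicative illumination/reflectance model into an additive one, so the two parts can then be weighted separately in the frequency domain. A minimal sketch, with illustrative sample arrays:

```python
import numpy as np

# Multiplicative image model made additive by the logarithm.
i = np.full((4, 4), 2.0)                 # slowly varying illumination
r = np.ones((4, 4)); r[:, 2:] = 3.0      # reflectance with an abrupt edge
f = i * r                                # f(x,y) = i(x,y) * r(x,y)
z = np.log(f)                            # ln f = ln i + ln r
```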
Answer: b
Explanation: The negative is obtained using s = (L – 1) – r.
Answer: a
Explanation: s = c·log(1 + r) is the log transformation.
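The two point transformations above can be sketched for an 8-bit image; the scaling constant c chosen below is an illustrative choice that maps the maximum input 255 back to 255.

```python
import numpy as np

# Negative and log transformations for an 8-bit image (L = 256).
L = 256
r = np.array([0.0, 100.0, 255.0])       # sample input gray levels
negative = (L - 1) - r                  # s = (L-1) - r
c = (L - 1) / np.log(L)                 # illustrative scaling constant
log_t = c * np.log(1 + r)               # s = c*log(1 + r)
```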
3. Power-law transformations have the basic form of ________________ where c and γ are
constants.
a) s = c + r^γ
b) s = c – r^γ
c) s = c·r^γ
d) s = c / r^γ
Answer: c
Explanation: s = c·r^γ is called the power-law transformation.
4. For what value of the output must the Power-law transformation account for
offset?
a) No offset needed
b) All values
c) One
d) Zero
Answer: d
Explanation: When the output is Zero, an offset is necessary.
Answer: a
Explanation: The exponent in Power-law is called gamma and the process used to correct
the response of Power-law transformation is called Gamma Correction.
6. Which process expands the range of intensity levels in an image so that it spans
the full intensity range of the display?
a) Shading correction
b) Contrast stretching
c) Gamma correction
d) None of the Mentioned
Answer: b
Explanation: Contrast stretching is the process used to expand the range of intensity levels in an image.
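A minimal sketch of the stretching operation: linearly map the image range [r_min, r_max] onto the full display range [0, L-1]. The function name and sample values are illustrative.

```python
import numpy as np

# Contrast stretching (normalization) to the full [0, L-1] range.
def contrast_stretch(f, L=256):
    f = f.astype(float)
    r_min, r_max = f.min(), f.max()
    return (f - r_min) * (L - 1) / (r_max - r_min)

g = contrast_stretch(np.array([50, 100, 150]))
```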
Answer: c
Explanation: Highlighting a specific range of intensities of an image is called Intensity
Slicing.
Answer: c
Explanation: It is called Bit-plane slicing.
Answer: c
Explanation: Image negatives use reversing intensity levels.
Answer: d
Explanation: Piecewise Linear Transformation function involves all the mentioned
functions.
3.19. Fuzzy Techniques – Transformations and
Filtering
1. What is the set generated using infinite-value membership functions, called?
a) Crisp set
b) Boolean set
c) Fuzzy set
d) All of the mentioned
Answer: c
Explanation: It is called fuzzy set.
2. Which is the set, whose membership only can be true or false, in bi-values Boolean
logic?
a) Boolean set
b) Crisp set
c) Null set
d) None of the mentioned
Answer: b
Explanation: The so-called Crisp set is the one in which membership only can be true or
false, in bi-values Boolean logic.
3. If Z is a set of elements with a generic element z, i.e. Z = {z}, then this set is called
_____________
a) Universe set
b) Universe of discourse
c) Derived set
d) None of the mentioned
Answer: b
Explanation: It is called the universe of discourse.
Answer: a
Explanation: It is called an Empty set.
Answer: d
Explanation: All of them are types of Membership functions.
Answer: d
Explanation: All the mentioned above are types of Membership functions.
8. Using the IF-THEN rule to create the output of fuzzy system is called _______________.
a) Inference
b) Implication
c) Both the mentioned
d) None of the mentioned
Answer: c
Explanation: It is called Inference or Implication.
Answer: a
Explanation: Maturity is the independent variable of fuzzy output.
Answer: d
Explanation: All the mentioned above are key steps in fuzzy technique.
Answer: c
Explanation: Increasing the dynamic range of gray-levels in the image is the basic idea
behind contrast stretching.
Answer: a
Explanation: If r1 = s1 and r2 = s2 the contrast stretching transformation is a linear function.
Answer: b
Explanation: If r1 = r2, s1 = 0 and s2 = L – 1, the contrast stretching transformation is a
thresholding function.
Answer: d
Explanation: While processing through contrast stretching, if r1 ≤ r2 and s1 ≤ s2 is maintained,
the function remains single-valued and monotonically increasing. This prevents the
creation of intensity artifacts.
5. A contrast stretching result been obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) =
(rmax, L – 1), where, r and s are gray-values of image before and after processing
respectively, L is the max gray value allowed and rmax and rmin are maximum and
minimum gray-values in the image respectively. What should we term the
transformation function if r1 = r2 = m, for some mean gray-value m?
a) Linear function
b) Thresholding function
c) Intermediate function
d) None of the mentioned
Answer: b
Explanation: From (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L – 1), we have s1 = 0 and s2 = L – 1 and if
r1 = r2 = m is set then the result becomes r1 = r2, s1 = 0 and s2 = L – 1, i.e. a thresholding
function.
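The limiting case above can be sketched directly: with s1 = 0, s2 = L – 1 and r1 = r2 = m, the piecewise transformation collapses to a thresholding function at m. The value of m below is illustrative.

```python
import numpy as np

# Thresholding as the limiting case of contrast stretching.
def threshold(f, m, L=256):
    return np.where(f > m, L - 1, 0)   # gray levels above m -> L-1, else 0

out = threshold(np.array([10, 128, 200]), m=128)
```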
Answer: d
Explanation: Gray-level slicing is done by two approaches: one approach is to give all
gray levels of a specific range a high value and a low value to all other gray levels.
The second approach is to brighten the gray values of interest and preserve the
background.
In both approaches, a specific range of gray levels is highlighted.
Answer: c
Explanation: There are basically two approaches to gray-level slicing:
One approach is to give all gray levels of a specific range a high value and a low value to all
other gray levels.
The second approach is to brighten the gray values of interest and preserve the
background.
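Both approaches above can be sketched for an 8-bit image; the range of interest [lo, hi] is an illustrative parameter.

```python
import numpy as np

# The two gray-level slicing approaches for an 8-bit image.
def slice_binary(f, lo, hi, L=256):
    # approach 1: high value inside the range, low value elsewhere
    return np.where((f >= lo) & (f <= hi), L - 1, 0)

def slice_preserve(f, lo, hi, L=256):
    # approach 2: brighten the range of interest, preserve the background
    return np.where((f >= lo) & (f <= hi), L - 1, f)

f = np.array([10, 120, 250])
```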
Answer: c
Explanation: The approach of gray-level slicing “to give all gray levels of a specific range a high
value and a low value to all other gray levels” produces a binary image.
One of the transformations in contrast stretching darkens the value of r (input image gray-
level) below m (some predefined gray-value) and brightens the value of r above m, also giving a
binary image as result.
9. Specific bit contribution in the image highlighting is the basic idea of __________
a) Contrast stretching
b) Bit –plane slicing
c) Thresholding
d) Gray-level slicing
Answer: b
Explanation: Bit-plane slicing highlights the contribution of specific bits made to total
image, instead of highlighting a specific gray-level range.
Answer: a
Explanation: In bit-plane slicing, for the given data, the higher-order bits (typically the top four)
contain the majority of the visually significant data.
11. Which of the following helps to obtain the number of bits to be used to quantize
each pixel.
a) Gray-level slicing
b) Contrast stretching
c) Contouring
d) Bit-plane slicing
Answer: d
Explanation: Bit-plane slicing helps in determining the importance of each bit in the
image by separating the image into its bit-planes.
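Bit-plane separation can be sketched in a few lines; the sample 8-bit values are illustrative. Each plane is a binary image, and summing the weighted planes reconstructs the original exactly.

```python
import numpy as np

# Bit-plane slicing for an 8-bit image.
def bit_plane(f, k):
    return (f >> k) & 1          # binary image of bit k

f = np.array([0, 7, 128, 255], dtype=np.uint8)
planes = [bit_plane(f, k) for k in range(8)]
recon = sum(p.astype(int) << k for k, p in enumerate(planes))
```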
Answer: c
Explanation: Since the mask is 3×3, the image value f(x,y) aligns with the centre coefficient M11;
every other pixel in the neighborhood aligns with its corresponding coefficient, and the
response of the mask M at (x,y) is T[f(x,y)]:
T[f(x,y)] = f(x-1,y-1)xM00 + f(x-1,y)xM01 + f(x-1,y+1)xM02 + f(x,y-1)xM10 + f(x,y)xM11 + f(x,y+1)xM12 +
f(x+1,y-1)xM20 + f(x+1,y)xM21 + f(x+1,y+1)xM22
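The neighborhood sum can be sketched as a direct 3×3 mask response at one pixel (zero-based coefficient indices, border handling omitted); the averaging mask below is illustrative.

```python
import numpy as np

# Response of a 3x3 mask M at pixel (x, y).
def apply_mask(f, M, x, y):
    return sum(f[x - 1 + i, y - 1 + j] * M[i, j]
               for i in range(3) for j in range(3))

f = np.arange(25.0).reshape(5, 5)
M = np.full((3, 3), 1.0 / 9.0)    # 3x3 averaging mask
val = apply_mask(f, M, 2, 2)
```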
3. Which of the following represents the gray level transformation for image
negative?
a) s=(L-1) -r
b) s=(L+1) +r
c) s=(L-1) *r
d) s=(L-1) /r
Answer: a
Explanation: In the negative image transformation, each value of the input image is subtracted
from (L – 1) and the result is mapped onto the output image. For an 8 bpp image there are
2^8 = 256 levels. Putting L = 256 in (a) we get s = (256 – 1) – r = 255 – r.
4. Which of the following represents the gray level transformation for log
transformation?
a) s=c+log(1+r)
b) s=c-log(1+r)
c) s=c/log(1+r)
d) s=c*log(1+r)
Answer: d
Explanation: In the log transformation, r and s represent the pixel values of the input and
output images and c is an arbitrary constant. Since log(0) is undefined, the value 1 is added
to each input pixel value so that a zero input maps to log(1) = 0, keeping the output finite.
5. Which of the following represents the gray level transformation for power-law
transformation?
a) s=c+r^γ
b) s=c+log(r^γ)
c) s=c-r^γ
d) s=c*r^γ
Answer: d
Explanation: This transformation is used to adapt an image for different devices, whose
gamma values differ. A higher value of gamma corresponds to a darker image and a lower
value of gamma corresponds to a brighter image. The gamma of a CRT lies between 1.8 and 2.5.
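The power-law transformation can be sketched for intensities normalized to [0, 1]; gamma = 2.2 is an illustrative CRT-like value, not from the text.

```python
import numpy as np

# Power-law (gamma) transformation sketch.
def power_law(r, c=1.0, gamma=2.2):
    return c * np.power(r, gamma)     # s = c * r**gamma

s = power_law(np.array([0.0, 0.5, 1.0]))
```

With gamma > 1 the mid-tones are pushed down, i.e. the image darkens, matching the explanation above.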
6. Smoothing filters are used for blurring and noise reduction. (True / False)
a) True
b) False
Answer: a
Explanation: Smoothing filters are used to reduce noise of an image or to produce a less
pixelated image. Most smoothing filters are low pass filters. Smoothing filters are also
known as average and low pass filters.
Answer: a
Explanation: Contrast stretching is also called normalization of an image. It is a simple
image enhancement technique to improve the contrast in an image. It is done by stretching
the intensity values to a desired range of values.
Answer: c
Explanation: Gray level slicing is also called intensity level slicing. As the name suggests,
gray level slicing is used for highlighting the different parts of the image. This is done in two
types: just highlighting the part of an image and highlighting and preserving the other
intensities as well. Thus L-1 gives the gray level slicing where L is the number of levels, for
8-bit L=256.
Answer: c
Explanation: Bit-plane slicing is a method of representing an image with one or more bits of the
byte used for each pixel. Using only the MSB to represent a pixel reduces the original gray-level
image to a binary image. The three main goals of bit-plane slicing are: converting a gray-level
image to a binary image, representing an image with fewer bits to reduce its size, and
enhancing the image by focusing on the significant bit-planes.
Answer: d
Explanation: For image enhancement, the techniques used are Arithmetic and Logical
Operations. For Logical operations for image enhancement the operations are: AND, OR,
NOT. Arithmetic operations for image enhancement are Subtraction and Averaging. XOR
operation is not used in image enhancement.
from point(u, v), D0 is the distance defining cutoff frequency, then for what value of
D(u, v) the filter is down to 0.607 of its maximum value?
a) D(u, v) = D0
b) D(u, v) = D02
c) D(u, v) = D03
d) D(u, v) = 0
Answer: a
Explanation: For the given Gaussian filter of a 2-D image, at D(u, v) = D0 the filter is down
to 0.607 of its maximum value.
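This value follows directly from the Gaussian lowpass transfer function H(u,v) = exp(-D²/(2·D0²)): at D(u,v) = D0 it equals exp(-1/2) ≈ 0.607 of its maximum of 1. A quick check, with an illustrative D0:

```python
import numpy as np

# GLPF transfer function value at the cutoff distance D0.
def glpf(D, D0):
    return np.exp(-(D ** 2) / (2.0 * D0 ** 2))

value = glpf(10.0, 10.0)   # D(u,v) = D0
```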
2. State the statement as true or false. “The GLPF did produce as much smoothing as
the BLPF of order 2 for the same value of cutoff frequency”.
a) True
b) False
Answer: b
Explanation: For the same value of cutoff frequency, the GLPF did not produce as much
smoothing as the BLPF of order 2, because the profile of GLPF is not as tight as BLPF of
order 2.
Answer: a
Explanation: Using the Gaussian Lowpass Filter no ringing is assured, whereas the Ideal Lowpass Filter
and Butterworth Lowpass Filters of order 2 and higher produce significant ringing.
4. The lowpass filtering process can be applied in which of the following area(s)?
a) The field of machine perception, with application of character recognition
b) In field of printing and publishing industry
c) In field of processing satellite and aerial images
d) All of the mentioned
Answer: d
Explanation: In case of broken characters recognition system, LPF is used. LPF is used as
preprocessing system in printing and publishing industry, and in case of remote sensed
images LPF is used to blur out as much detail as possible leaving the large feature
recognizable.
5. The edges and other abrupt changes in gray-level of an image are associated
with_________
a) High frequency components
b) Low frequency components
c) Edges with high frequency and other abrupt changes in gray-level with low frequency
components
d) Edges with low frequency and other abrupt changes in gray-level with high frequency
components
Answer: a
Explanation: High frequency components are related with the edges and other abrupt
changes in gray-level of an image.
6. A type of Image is called VHRR image. What is the definition of VHRR image?
a) Very High Range Resolution image
b) Very High-Resolution Range image
c) Very High-Resolution Radiometer image
d) Very High Range Radiometer Image
Answer: c
Explanation: A VHRR image is a Very High-Resolution Radiometer Image.
Answer: b
Explanation: The Image sharpening in frequency domain is achieved by attenuating the
low-frequency components without disturbing the high-frequency components.
Answer: c
Explanation: The function of filters in Image sharpening in frequency domain is to perform
precisely reverse operation of Ideal Lowpass filter.
The transfer function of Highpass filter is obtained by relation: Hhp(u, v) = 1 – Hlp(u, v), where
Hlp(u, v) is transfer function of corresponding lowpass filter.
9. If D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v)
is the distance from point (u, v). Then what value does an Ideal Highpass filter will
give if D(u, v) ≤ D0 and if D(u, v) >D0?
a) 0 and 1 respectively
b) 1 and 0 respectively
c) 1 in both case
d) 0 in both case
Answer: a
Explanation: Unlike Ideal lowpass filter, an Ideal highpass filter attenuates the low-
frequency components and so gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.
10. What is the relation of the frequencies to a circle of radius D0, where D0 is the
cutoff distance measured from origin of frequency rectangle, for an Ideal Highpass
filter?
a) IHPF sets all frequencies inside circle to zero
b) IHPF allows all frequencies, without attenuating, outside the circle
c) All of the mentioned
d) None of the mentioned
Answer: c
Explanation: An Ideal high pass filter gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.
11. Which of the following is the transfer function of the Butterworth Highpass Filter,
of order n, D0 is the cutoff distance measured from origin of frequency rectangle and
D(u, v) is the distance from point (u, v)?
a)
b)
c)
d) none of the mentioned
Answer: a
Explanation: The transfer function of the Butterworth highpass filter of order n, where D0 is the cutoff
distance measured from the origin of the frequency rectangle and D(u, v) is the distance from point
(u, v), is H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n)).
12. Which of the following is the transfer function of the Ideal Highpass Filter? Given
D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v) is
the distance from point (u, v).
a)
b)
c)
d) none of the mentioned
Answer: b
Explanation: The transfer function of the Ideal highpass filter, where D0 is the cutoff distance
measured from the origin of the frequency rectangle and D(u, v) is the distance from point (u, v), is
given by: H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0.
13. Which of the following is the transfer function of the Gaussian Highpass Filter?
Given D0 is the cutoff distance measured from origin of frequency rectangle and D(u,
v) is the distance from point (u, v).
a)
b)
c)
d) none of the mentioned
Answer: c
Explanation: The transfer function of the Gaussian highpass filter, where D0 is the cutoff
distance measured from the origin of the frequency rectangle and D(u, v) is the distance from point
(u, v), is H(u, v) = 1 – e^(–D²(u, v)/2D0²).
14. For a given image having smaller objects, which of the following filter(s), having
D0 as the cutoff distance measured from origin of frequency rectangle, would you
prefer for a comparably smoother result?
a) IHPF with D0 15
b) BHPF with D0 15 and order 2
c) GHPF with D0 15 and order 2
d) All of the mentioned
Answer: c
Explanation: For the same format as for BHPF, GHPF gives a result comparably smoother
than BHPF. However, BHPF performance for filtering smaller objects is comparable with
IHPF.
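The three highpass transfer functions compared above can be sketched as functions of the distance D(u, v) and cutoff D0; the sample distances below are illustrative.

```python
import numpy as np

# Standard highpass transfer functions: ideal, Butterworth (order n), Gaussian.
def ihpf(D, D0):
    return (D > D0).astype(float)                  # 0 if D<=D0, else 1

def bhpf(D, D0, n=2):
    D = np.maximum(D, 1e-12)                       # avoid division by zero
    return 1.0 / (1.0 + (D0 / D) ** (2 * n))

def ghpf(D, D0):
    return 1.0 - np.exp(-(D ** 2) / (2.0 * D0 ** 2))

D = np.array([0.0, 15.0, 1500.0])
```

The GHPF rises smoothly with no discontinuity, which is why it gives the comparably smoother result noted above; the BHPF passes exactly half the amplitude at D(u, v) = D0.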
15. Which of the following statement(s) is true for the given fact that “Applying
Highpass filters has an effect on the background of the output image”?
a) The average background intensity increases to near white
b) The average background intensity reduces to near black
c) The average background intensity changes to a value average of black and white
d) All of the mentioned
Answer: b
Explanation: A highpass filter eliminates the zero-frequency component of the Fourier
transform of the image it is applied to. So, the average background intensity reduces to
near black.
Explanation: Rods are long slender receptors while cones are shorter and thicker receptors.
2. How is image formation in the eye different from that in a photographic camera
a) No difference
b) Variable focal length
c) Varying distance between lens and imaging plane
d) Fixed focal length
Explanation: Fibers in ciliary body vary shape of the lens thereby varying its focal length.
3. Range of light intensity levels to which the human eye can adapt (in Log of Intensity-mL)
a) 10^-6 to 10^-4
b) 10^4 to 10^6
c) 10^-6 to 10^4
d) 10^-5 to 10^5
Explanation: The range of light intensity to which the human eye can adapt is enormous,
on the order of 10^10, from 10^-6 to 10^4 mL.
Explanation: The human eye achieves a wide dynamic range by changing the eye’s overall sensitivity,
and this is called brightness adaptation.
Explanation: Iris is responsible for controlling the amount of light that enters the human
eye.
2. If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path
between them consisting of pixels entirely in S.
a) continuous
b) ambiguous
c) connected
d) none of the Mentioned
Explanation: Pixels p and q are said to be connected if there exists a path between them
consisting of pixels entirely in S.
4. Two regions are said to be ___________ if their union forms a connected set.
a) Adjacent
b) Disjoint
c) Closed
d) None of the Mentioned
5. If an image contains K disjoint regions, what does the union of all the regions represent?
a) Background
b) Foreground
c) Outer Border
d) Inner Border
Explanation: The union of all regions is called Foreground and its complement is called the
Background.
6. For a region R, the set of points that are adjacent to the complement of R is called as
________
a) Boundary
b) Border
c) Contour
d) All of the Mentioned
Explanation: The words boundary, border and contour mean the same set.
7. The distance measure between pixels p and q under which the pixels at a distance less than or
equal to some value of radius r form a disk centred at (x,y) is called:
a) Euclidean distance
b) City-Block distance
c) Chessboard distance
d) None of the Mentioned
8. The distance between pixels p and q, the pixels have a distance less than or equal to
some value of radius r, form a diamond centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
9. The distance between pixels p and q, the pixels have a distance less than or equal to
some value of radius r, form a square centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
Color Fundamentals
1. Into how many basic categories is color image processing divided?
a) 4
b) 2
c) 3
d) 5
Explanation: Color image processing is divided into two major areas: full-color and pseudo-
color processing.
Explanation: Color image processing is divided into two major areas: full-color and pseudo-
color processing. In the first category, the images are acquired with a full-color sensor like
color TV or color scanner. In the second category, there is a problem of assigning a color to
a particular monochrome intensity or range of intensities.
3. What are the basic quantities that are used to describe the quality of a chromatic light
source?
a) Radiance, brightness and wavelength
b) Brightness and luminance
c) Radiance, brightness and luminance
d) Luminance and radiance
Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness.
4. What is the quantity that is used to measure the total amount of energy flowing from the
light source?
a) Brightness
b) Intensity
c) Luminance
d) Radiance
Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness. Radiance measures the total amount of
energy flowing from the light source and is generally measured in watts (W).
5. What are the characteristics that are used to distinguish one color from the other?
a) Brightness, Hue and Saturation
b) Hue, Brightness and Intensity
c) Saturation, Hue
d) Brightness, Saturation and Intensity
Explanation: The characteristics generally used to distinguish one color from another are
brightness, hue and saturation. Brightness embodies the achromatic notion of intensity.
Hue is an attribute associated with the dominant wavelength in a mixture of light waves.
Saturation refers to the relative purity, or the amount of white light mixed with a hue.
Explanation: Hue and saturation taken together are called chromaticity; therefore, a
color may be characterized by its brightness and chromaticity.
7. Which of the following represent the correct equations for trichromatic coefficients?
a) x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z)
b) x=(Y+Z)/(X+Y+Z), y=(X+Z)/(X+Y+Z), z=(X+Y)/(X+Y+Z)
c) x=X/(X-Y+Z), y=Y/(X-Y+Z), z=Z/(X-Y+Z)
d) x=(-X)/(X+Y+Z), y=(-Y)/(X+Y+Z), z=(-Z)/(X+Y+Z)
Explanation: Tri-stimulus values are the amounts of red, green and blue needed to form
any particular color, and they are denoted X, Y and Z respectively. A color is then specified
by its trichromatic coefficients x, y and z: x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z).
Explanation: The amounts of red, green and blue needed to form any particular color are
called the tri-stimulus values and are denoted by X, Y and Z respectively. A color is then
specified by its trichromatic coefficients, whose equations are formed from tri-stimulus
values.
9. What is the value obtained by the sum of the three trichromatic coefficients?
a) 0
b) -1
c) 1
d) Null
Explanation: From the equations x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z) it is noted that the
sum of the coefficients is x+y+z=1.
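A quick numeric check of the coefficients, using illustrative tri-stimulus values: by construction the three coefficients sum to exactly 1.

```python
# Trichromatic coefficients for illustrative tri-stimulus values X, Y, Z.
X, Y, Z = 40.0, 35.0, 25.0
total = X + Y + Z
x, y, z = X / total, Y / total, Z / total
```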
10. What is the name of area of the triangle in C.I E chromatic diagram that shows a typical
range of colors produced by RGB monitors?
a) Color gamut
b) Tricolor
c) Color game
d) Chromatic colors
Explanation: The triangle in C.I.E chromatic diagram shows a typical range of colors called
the color gamut produced by RGB monitors. The irregular region inside the triangle is
representative of the color gamut of today’s high-quality color printing devices.
Color Models
Explanation: A color model is also called a color space or color system. Its purpose is to
facilitate the specification of colors in some standard, generally accepted way.
Explanation: Images represented in the RGB color model consist of three component
images, one for each primary color. When fed into an RGB monitor, these three images
combine on the phosphor screen to produce a composite color image. The number of bits
used to represent each pixel in RGB space is called the pixel depth.
Explanation: The term full-color image is often used to denote a 24-bit RGB color image.
The total number of colors in a 24-bit RGB color image is (2^8)^3 = 16,777,216.
4. What is the equation used to obtain S component of each RGB pixel in RGB color format?
a) S=1+3/(R+G+B) [min(R,G,B)].
b) S=1+3/(R+G+B) [max(R,G,B)].
c) S=1-3/(R+G+B) [max(R,G,B)].
d) S=1-3/(R+G+B) [min(R,G,B)].
Explanation: If an image is given in RGB format then the saturation component is obtained
by the equation S=1-3/(R+G+B) [min(R,G,B)].
5. What is the equation used to obtain I(Intensity) component of each RGB pixel in RGB
color format?
a) I=1/2(R+G+B)
b) I=1/3(R+G+B)
c) I=1/3(R-G-B)
d) I=1/3(R-G+B)
Explanation: If an image is given in RGB format then the intensity (I) component is obtained
by the equation, I=1/3 (R+G+B).
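The saturation and intensity components above can be sketched for one RGB pixel with values in [0, 1] (the hue formula is omitted for brevity); the sample values are illustrative.

```python
# S and I components of an RGB pixel (values in [0, 1]).
def saturation(R, G, B):
    return 1.0 - 3.0 / (R + G + B) * min(R, G, B)   # S = 1 - 3*min/(R+G+B)

def intensity(R, G, B):
    return (R + G + B) / 3.0                        # I = (R+G+B)/3

S = saturation(0.5, 0.25, 0.25)
I = intensity(0.5, 0.25, 0.25)
```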
6. What is the equation used for obtaining R value in terms of HSI components?
a) R=I[1-(S cosH)/cos(60°-H) ].
b) R=I[1+(S cosH)/cos(120°-H)].
c) R=I[1+(S cosH)/cos(60°-H) ].
d) R=I[1+(S cosH)/cos(30°-H) ].
Explanation: Given values of HSI in the interval [0, 1], the R value in the RGB components is
given by the equation R=I[1+(S cosH)/cos(60°-H)].
7. What is the equation used for calculating B value in terms of HSI components?
a) B=I(1+S)
b) B=S(1-I)
c) B=S(1+I)
d) B=I(1-S)
Explanation: Given values of HSI in the interval [0, 1], the B value in the RGB components is
given by the equation: B=I(1-S).
8. What is the equation used for calculating G value in terms of HSI components?
a) G=3I-(R+B)
b) G=3I+(R+B)
c) G=3I-(R-B)
d) G=2I-(R+B)
Explanation: Given values of HSI in the interval [0, 1], the G value in the RGB components is
given by the equation: G=3I-(R+B).
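The HSI-to-RGB relations above can be sketched for the RG sector (0° ≤ H < 120°); the sample H, S, I values below are illustrative.

```python
import math

# HSI -> RGB for the RG sector (H in degrees, H < 120).
H, S, I = 30.0, 0.5, 0.4
B = I * (1 - S)                                      # B = I(1-S)
R = I * (1 + S * math.cos(math.radians(H))
         / math.cos(math.radians(60.0 - H)))         # R = I[1+(S cosH)/cos(60-H)]
G = 3 * I - (R + B)                                  # G = 3I-(R+B)
```

Note that G = 3I - (R + B) simply enforces the intensity relation I = (R + G + B)/3.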
9. Which of the following color models are used for color printing?
a) RGB
b) CMY
c) CMYK
d) CMY and CMYK
Explanation: The hardware oriented models which are prominently used in the color
printing process are CMY (cyan, magenta and yellow) and CMYK (cyan, magenta, yellow and
black).