11. Basic concept of sampling and quantization in the generation of a digital image. Discuss the effect of increasing the sampling rate and the number of quantization levels in an image.
Advantages and Disadvantages
Image Sampling:
Advantages:
• Data Reduction: Converts a continuous signal into a finite set of points, making
storage and processing more manageable.
• Compatibility: Sampled images are easily processed by digital systems and
algorithms.
• Resolution Control: Allows for control over image resolution by adjusting the
sampling rate.
Disadvantages:
• Information Loss: Inevitably loses some information by approximating a continuous
signal.
• Aliasing: Can cause distortions and artifacts if the sampling rate is too low.
• Computationally Intensive: High-resolution sampling demands significant
computational resources and storage space.
Image Quantization:
Advantages:
• Data Compression: Reduces the amount of data by limiting the number of possible
values for each pixel.
• Simplified Processing: Makes image processing operations simpler and faster with
fewer distinct values.
• Noise Reduction: Helps reduce the impact of noise by mapping small variations in
intensity to the same value.
Disadvantages:
• Loss of Detail: Reduces the range of colors or intensity levels, leading to a loss of
fine detail and potential color banding.
• Quantization Error: Introduces differences between the original and quantized
values, which can become noticeable.
• Reduced Image Quality: Overly aggressive quantization can significantly degrade
image quality, making the image appear blocky or posterized.
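To make the two operations concrete, here is a minimal sketch (assuming NumPy and an 8-bit grayscale array img; the function name and parameter values are illustrative):

```python
import numpy as np

def sample_and_quantize(img, step=4, levels=16):
    """Keep every `step`-th pixel (spatial sampling), then map the
    256 input gray levels into `levels` bins (quantization)."""
    sampled = img[::step, ::step]              # coarser sampling -> lower resolution
    bins = np.floor(sampled / 256.0 * levels)  # 0..255 -> bin indices 0..levels-1
    return (bins * (255.0 / (levels - 1))).astype(np.uint8)  # rescale for display
```

Decreasing step (denser sampling) preserves finer spatial detail, while decreasing levels produces the false contours described above.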
Conclusion:
Understanding the differences and interplay between image sampling and quantization is
crucial for anyone working with digital images. Sampling determines how finely the image
is divided spatially, while quantization determines how precisely the intensity values are
represented. Increasing the sampling rate yields higher spatial resolution and less aliasing
at the cost of more storage, and increasing the number of quantization levels gives smoother
tonal gradations with less false contouring at the cost of more bits per pixel. Together,
these processes enable the creation of digital images that can be stored, manipulated, and
displayed effectively.
12. (i) How are the various filters in the spatial domain used for image enhancement? (ii) Explain the process of color image enhancement. What are the challenges associated with enhancing color images?
(I)
• In the spatial domain, image enhancement is achieved by directly manipulating pixel values
using different filters, also known as masks, kernels, or templates. These filters enhance the
image by either smoothing (blurring) or sharpening its features.
Types of Spatial Filters Used:
1. Smoothing Spatial Filters (Low-pass Filters):
o Used for noise reduction and minor detail removal.
o Averaging Filter: Each output pixel is the average of the neighboring pixels.
o Box Filter: A type of averaging filter where all coefficients are equal.
o Weighted Average Filter: Different weights are assigned to different pixels.
2. Order-Statistics Filters (Nonlinear Filters):
o Median Filter: Replaces each pixel value with the median of the
neighborhood. Effective against impulse (salt-and-pepper) noise.
o Max/Min Filters: Max filter highlights bright spots; Min filter highlights dark
spots.
3. Sharpening Spatial Filters (High-pass Filters):
o Enhance fine details and edges.
o Based on spatial differentiation (first or second derivative).
o Laplacian Filter: Uses second-order derivatives to highlight regions of rapid
intensity change.
o Gradient Operators (e.g., Sobel, Prewitt): Based on first derivatives to detect
edges.
These filters are applied by convolving the image with a kernel, which is systematically
moved across the image to compute new pixel values.
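As a brief sketch of these filters in practice (assuming OpenCV is available as cv2; the file name is illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

smoothed  = cv2.blur(img, (3, 3))           # box/averaging filter (low-pass)
denoised  = cv2.medianBlur(img, 3)          # median filter for salt-and-pepper noise
lap       = cv2.Laplacian(img, cv2.CV_64F)  # second-derivative sharpening mask
sharpened = np.clip(img - lap, 0, 255).astype(np.uint8)  # subtract Laplacian to sharpen
sobel_x   = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)    # first-derivative edge detector
```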
(II)
1. Independent Channel Processing
• In this method, the Red (R), Green (G), and Blue (B) channels of a color image are
treated separately.
• Standard grayscale enhancement techniques (e.g., histogram equalization, contrast
stretching, smoothing, sharpening) are applied independently to each channel.
• After processing, the channels are recombined to form the enhanced color image.
• Issue: Since enhancement is independent, the relationship between channels might
break, causing color imbalance.
2. Color Coding Based on Frequency Content
• The Fourier Transform is applied separately to each R, G, B channel.
• High-frequency components (representing edges and fine details) and low-frequency
components (representing smooth areas) are treated differently.
• Filters are applied:
o To enhance textures (sharpening)
o To suppress noise (smoothing)
• After enhancement, the Inverse Fourier Transform is performed to return to the
spatial domain.
• This method improves color images by frequency manipulation rather than direct
pixel manipulation.
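A minimal sketch of this idea for a single channel (assuming NumPy; an ideal low-pass mask is used purely for illustration, and the same function would be applied to each of R, G, and B):

```python
import numpy as np

def lowpass_channel(ch, cutoff=30):
    """Smooth one color channel by zeroing frequencies outside a radius."""
    F = np.fft.fftshift(np.fft.fft2(ch))   # forward transform, DC moved to center
    h, w = ch.shape
    Y, X = np.ogrid[:h, :w]
    mask = (Y - h // 2) ** 2 + (X - w // 2) ** 2 <= cutoff ** 2
    out = np.fft.ifft2(np.fft.ifftshift(F * mask)).real  # back to spatial domain
    return np.clip(out, 0, 255).astype(np.uint8)
```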
3. Histogram Equalization
• Histogram Equalization is used to improve the contrast of color images.
• It can be applied:
o Directly to each RGB channel separately (can distort color if not careful)
o Or better: Convert to a color space like YCbCr, HSV, or HSI:
▪ Only the luminance (brightness) channel is enhanced.
▪ The chrominance (color information) is kept unchanged, preserving
color fidelity.
• This avoids unrealistic color changes while improving contrast.
4. Transform-Based Enhancement
• The image is converted from the RGB color model to another color space:
o HSI (Hue, Saturation, Intensity),
o YUV,
o YCbCr, etc.
• Only the intensity or luminance component is enhanced (e.g., using histogram
equalization, gamma correction).
• After enhancement, the image is converted back to the RGB color space.
• This method ensures that the color tone (hue) remains natural while only the
brightness or contrast is improved.
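A minimal sketch of this luminance-only approach, covering methods 3 and 4 (assuming OpenCV, which loads images in BGR order; the file name is illustrative):

```python
import cv2

img = cv2.imread("color.png")                    # BGR color image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)   # separate luminance from chrominance
y, cr, cb = cv2.split(ycrcb)
y = cv2.equalizeHist(y)                          # enhance brightness/contrast only
enhanced = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```

Because the Cr and Cb channels are untouched, the hue stays natural while contrast improves.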
Challenges in Enhancing Color Images
1. Color Distortion
• When enhancing RGB channels separately, there is a risk of disturbing the natural
relationship among R, G, and B values.
• This can lead to unnatural colors, color shifts, or visually implausible results.
2. Channel Interdependency
• Human vision perceives luminance (brightness) more sensitively than chrominance
(color).
• If processing is not done carefully, enhancing one channel too much may degrade the
overall perceived image quality.
• Improper adjustment affects visual realism.
3. Artifacts Introduction
• Aggressive enhancement methods can introduce visual artifacts such as:
o Halos around edges
o Noise amplification in smooth regions
o False contours (banding effects) where smooth gradients should exist
• These artifacts reduce the quality instead of improving it.
4. Computational Complexity
• Color images have three channels, and if each requires separate processing
(especially in frequency domain), computational load increases.
• Processing in transformed color spaces (like YUV, HSI) also adds overhead due to
conversion steps.
5. Illumination Variance
• Real-world images often suffer from non-uniform lighting (bright spots, shadows).
• Simple enhancement may amplify lighting inconsistencies instead of correcting
them.
• Advanced methods like Homomorphic Filtering are used to separate illumination
and reflectance, but they are complex and computationally expensive.
13. Various noise models.
Noise in images can be defined as unwanted random variations in pixel intensities that
degrade image quality.
Noise can come from many sources like sensor defects, transmission errors,
environmental conditions, etc.
A Noise Model is a mathematical description that tells us:
• How noise behaves in an image,
• What statistical properties it has (like mean, variance),
• How it affects image pixels.
Noise models help in analyzing and designing filters to remove or reduce noise.
Common noise models, their appearance, sources, and typical removal methods:
• Gaussian: smooth, grainy noise; caused by thermal noise in sensor electronics; best removed with a Gaussian smoothing filter.
• Salt and Pepper: random black/white dots; caused by bit errors or faulty sensors; best removed with a median filter.
• Poisson: signal-dependent photon (shot) noise; arises in low-light photography; handled with a variance-stabilizing transform.
• Speckle: grainy, multiplicative noise; arises in radar and ultrasound imaging; removed with adaptive filters such as the Lee filter.
• Uniform: flat random noise; usually artificial (used for testing); removed with an averaging filter.
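As a hedged sketch, the first two models can be simulated as follows (assuming NumPy and an 8-bit grayscale array img; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=15.0):
    """Additive zero-mean Gaussian noise with standard deviation sigma."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper(img, amount=0.02):
    """Flip a fraction `amount` of pixels to pure black or pure white."""
    noisy = img.copy()
    u = rng.random(img.shape)
    noisy[u < amount / 2] = 0          # pepper (black dots)
    noisy[u > 1 - amount / 2] = 255    # salt (white dots)
    return noisy
```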
14. Region-based segmentation procedures such as region growing and splitting and merging. Compare their effectiveness.
A. Region Growing
Procedure:
1. Select seed points (user-defined or automatic).
2. Check neighboring pixels for each seed point using 4- or 8-connectivity.
3. If the neighboring pixel satisfies the homogeneity criterion (e.g., intensity difference
below a threshold), add it to the region.
4. Repeat the process iteratively until no more pixels can be added.
5. The process stops when the region cannot grow further or a predefined
size/variance condition is met.
Homogeneity Criteria Examples:
• Intensity difference
• Local or global variance
• Gradient magnitude
• Texture consistency
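A minimal region-growing sketch (assuming NumPy; img is a 2-D uint8 array, seed a (row, col) tuple, and the intensity-difference threshold is illustrative):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity differs from the seed value by less than `thresh`."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(int(img[nr, nc]) - seed_val) < thresh:
                    region[nr, nc] = True        # pixel satisfies homogeneity criterion
                    queue.append((nr, nc))
    return region
```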
B. Splitting and Merging
Procedure:
1. Splitting:
o Start with the entire image as one region.
o If the region does not meet the uniformity criterion, split it into four quadrants
(quad-tree).
o Apply the test recursively to each sub-region.
2. Merging:
o Adjacent regions that satisfy the uniformity predicate are merged.
o Merge continues until no further merge is possible.
3. Termination:
o When no region can be split or merged any further.
Predicate Example P(R):
• Standard deviation below a threshold
• Region intensity homogeneity
Comparison of Effectiveness:
• Region growing yields accurate, connected regions when good seed points are available, but its results depend heavily on seed selection, the homogeneity threshold, and noise.
• Splitting and merging needs no seeds and covers the whole image systematically, but the quad-tree decomposition tends to produce blocky region boundaries and carries more computational overhead.
• In practice, region growing suits images with a few well-defined homogeneous objects, while splitting and merging is preferred when no reliable seed information exists. (A code sketch of the splitting step appears below.)
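A minimal sketch of the splitting step (assuming NumPy and a square image whose side is a power of two; the merging pass is omitted for brevity, and the thresholds are illustrative):

```python
import numpy as np

def split(img, r, c, size, std_thresh=12.0, min_size=8, regions=None):
    """Recursively split the square block at (r, c) into quadrants until
    its standard deviation satisfies the uniformity predicate P(R)."""
    if regions is None:
        regions = []
    block = img[r:r + size, c:c + size]
    if size <= min_size or block.std() < std_thresh:
        regions.append((r, c, size))             # homogeneous: keep as one region
    else:
        half = size // 2
        for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
            split(img, r + dr, c + dc, half, std_thresh, min_size, regions)
    return regions
```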
15. Calculate the average word length, entropy, and efficiency for the given symbols with probabilities {0.3, 0.3, 0.2, 0.1, 0.1} (for Huffman and regional length).
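Since the source notes defer this answer elsewhere, here is a worked sketch for the Huffman part (the values follow directly from the given probabilities; the second part is not reproduced here):
• Huffman tree construction: merge 0.1 + 0.1 = 0.2; then 0.2 + 0.2 = 0.4; then 0.3 + 0.3 = 0.6; finally 0.4 + 0.6 = 1.0. This gives code lengths {2, 2, 2, 3, 3}.
• Average word length: L = 0.3(2) + 0.3(2) + 0.2(2) + 0.1(3) + 0.1(3) = 2.2 bits/symbol.
• Entropy: H = −Σ p log₂ p ≈ 2.171 bits/symbol.
• Efficiency: η = H / L ≈ 2.171 / 2.2 ≈ 98.7%.
A quick numeric check in Python:

```python
import math

p = [0.3, 0.3, 0.2, 0.1, 0.1]
lengths = [2, 2, 2, 3, 3]                        # Huffman code lengths from the tree above
H = -sum(pi * math.log2(pi) for pi in p)         # entropy ~ 2.171 bits/symbol
L = sum(pi * li for pi, li in zip(p, lengths))   # average word length = 2.2 bits/symbol
print(round(H, 3), L, round(H / L, 4))           # -> 2.171 2.2 0.9868
```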
16. Explain the watershed algorithm, including dam construction.
Image segmentation is a fundamental computer vision task that involves
partitioning an image into meaningful and semantically homogeneous
regions. The goal is to simplify the representation of an image or make it
more meaningful for further analysis. These segments typically
correspond to objects or regions of interest within the image.
Watershed Algorithm
The Watershed Algorithm is a classical image segmentation technique
based on the watershed transformation. The segmentation process uses
similarity between adjacent pixels as its main criterion, connecting
pixels that are close in both spatial position and gray value.
When do I use the watershed algorithm?
The Watershed Algorithm is used when segmenting images with
touching or overlapping objects. It excels in scenarios with irregular
object shapes, gradient-based segmentation requirements, and when
marker-guided segmentation is feasible.
Working of Watershed Algorithm
The watershed algorithm divides an image into segments using
topographic information. It treats the image as a topographic surface,
identifying catchment basins based on pixel intensity. Local minima are
marked as starting points, and flooding with colors fills catchment basins
until object boundaries are reached. The resulting segmentation assigns
unique colors to regions, aiding object recognition and image analysis.
The whole process of the watershed algorithm can be summarized in the
following steps:
• Marker placement: The first step is to place markers on the
local minima, or the lowest points, in the image. These markers
serve as the starting points for the flooding process.
• Flooding: The algorithm then floods the image with different
colors, starting from the markers. As the color spreads, it fills up
the catchment basins until it reaches the boundaries of the
objects or regions in the image.
• Catchment basin formation: As the color spreads, the
catchment basins are gradually filled, creating a segmentation
of the image. The resulting segments or regions are assigned
unique colors, which can then be used to identify different
objects or features in the image.
• Boundary identification: The watershed algorithm uses the
boundaries between the different colored regions to identify the
objects or regions in the image. The resulting segmentation can
be used for object recognition, image analysis, and feature
extraction tasks.
As described above, the watershed algorithm is particularly useful when objects in an
image touch or overlap. It is inspired by geography, where the term "watershed" refers
to a ridge that separates waters flowing to different rivers.
Basic Concept
Imagine the grayscale image as a topographic surface:
• Intensity values represent elevation.
• Low-intensity areas (valleys or minima) correspond to basins.
• Water is allowed to "flood" the image starting from these minima.
• As water levels rise, basins expand.
• When water from two basins is about to merge, a dam is constructed to prevent
merging.
• The process continues until the entire image is flooded.
These dams become the boundaries of the segmented regions.
Steps Involved in the Watershed Algorithm
1. Convert Image to Grayscale: Simplifies the image by reducing it to a single
intensity channel.
2. Compute Gradient or Use Marker-based Approach:
o A gradient image highlights edges where intensity changes sharply.
o Or, marker-based watershed uses predefined markers (foreground and
background).
3. Initial Region Marking: Local minima are identified (or external markers are
provided) to serve as flooding seeds.
4. Flooding Process and Dam Construction:
o Water begins to flood from each minimum.
o When two flooding fronts meet, a dam (boundary) is constructed.
o These dams prevent water from different basins from mixing.
5. Segmentation Result: The final set of dams marks the segmented regions in the
image.
Dam Construction in the Watershed Algorithm
• Dams are constructed at locations of ambiguity, where two or more basins compete
to flood a pixel.
• These are typically set as boundary markers in the final segmentation map.
• In practice, these are marked with a unique value (e.g., -1) and may be visualized as a
red line or highlighted contour.
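A minimal marker-based watershed sketch (assuming OpenCV; the file name and thresholds are illustrative, and this follows the common distance-transform marker recipe rather than any single canonical implementation):

```python
import cv2
import numpy as np

img = cv2.imread("cells.png")                                # BGR input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure background by dilation; sure foreground from the distance transform.
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label each sure-foreground blob; reserve 0 for the ambiguous region.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0

markers = cv2.watershed(img, markers)    # flooding + dam construction
img[markers == -1] = (0, 0, 255)         # dams (region boundaries) drawn in red
```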
Applications
• Medical image segmentation (e.g., separating overlapping cells).
• Object detection and boundary marking.
• Image preprocessing for object recognition.
Advantages
• Produces accurate and closed boundaries.
• Works well with marker-based control to avoid oversegmentation.
Disadvantages
• Highly sensitive to noise.
• Can lead to oversegmentation without proper preprocessing.