COE 458 – ROBOTICS AND COMPUTER VISION
Instructor: James Okae (Ph.D.)
Image Preprocessing and Feature Extraction
Learning Objectives
By the end of this lecture, students will be able to:
• Understand and explain fundamental concepts of image preprocessing
• Describe and apply key image enhancement techniques to improve image
quality
• Explain the importance of feature extraction in computer vision
• Identify different types of image features
• Explain and implement popular traditional image feature extraction
algorithms
• Understand and implement CNN-based feature extraction algorithms
• Analyze challenges in feature extraction
Image Preprocessing
Definition: A set of techniques applied to raw images to improve their
quality or to transform them into a form more suitable for further analysis,
such as feature extraction, segmentation or classification.
Why preprocess images?
• Remove noise and distortions
• Normalize lighting and contrast
• Enhance relevant structures
• Reduce computational complexity
• Improve performance of downstream tasks (e.g., segmentation,
recognition)
Common Image Preprocessing Techniques
Noise Reduction or Filtering
• Remove unwanted noise using filters such as the Gaussian blur or the median filter.
Contrast Enhancement
• Improve image contrast using histogram equalization
Normalization
• Scale pixel values to a common range, often [0, 1] or [-1, 1], for better
algorithm performance (see the sketch after this list).
Geometric Transformation
• Resize, rotate, or crop images to a consistent size or orientation.
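A minimal sketch of the normalization step above, assuming an 8-bit grayscale image held in a NumPy array (the random array stands in for a real image):

import numpy as np

img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in image
x01 = img.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
x11 = x01 * 2.0 - 1.0                  # scale pixel values to [-1, 1]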
Common Image Preprocessing Techniques
Color Space Conversion
• Convert images from RGB to grayscale, HSV or other color spaces for
specific tasks.
Thresholding / Binarization
• Convert grayscale images to binary images by applying an intensity threshold.
Sharpening
• Enhance edges to highlight important details
Histogram Equalization
Improves contrast in images
Redistributes pixel intensities
Use cases: low-contrast or poorly lit images
How Histogram Equalization Works
1. Calculate the histogram of the image
• Count the number of pixels for each intensity level.
2. Compute Probability Distribution Function (PDF) of the histogram.
3. Compute Cumulative Distribution Function (CDF) of the histogram.
How Histogram Equalization Works
4. Normalize the CDF to scale pixel intensities across the full range [0, 255].
5. Map original pixel values to new values using the normalized CDF as
a lookup table.
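A minimal NumPy sketch of the five steps above, assuming an 8-bit grayscale image img as a NumPy array:

import numpy as np

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image."""
    # 1. Histogram: count of pixels at each of the 256 intensity levels
    hist = np.bincount(img.ravel(), minlength=256)
    # 2. PDF: normalize counts by the total number of pixels
    pdf = hist / img.size
    # 3. CDF: running sum of the PDF
    cdf = np.cumsum(pdf)
    # 4. Scale the CDF to the full intensity range [0, 255]
    lut = np.round(cdf * 255).astype(np.uint8)
    # 5. Map each original pixel through the lookup table
    return lut[img]

OpenCV offers the same operation as cv2.equalizeHist(img).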
Limitations of Histogram Equalization
May over-amplify noise.
Can lose brightness consistency
Can introduce unnatural effects and artefacts
Not suitable for color images directly
Applications of Histogram Equalization
Medical imaging (e.g., enhancing X-rays)
Satellite and aerial imagery
Preprocessing step for object detection or OCR
General image enhancement to improve visual quality.
Image Filtering – Median Filter
Median Filtering
A filtering technique that replaces each pixel value with the median
value of the neighboring pixels in each defined window (kernel).
Commonly used to remove salt-and-pepper noise from digital images
Image Filtering – Median Filter
How does it work?
• Slide a small window (e.g., 3×3) over every pixel of the image.
• At each position, sort the pixel values inside the window.
• Replace the center pixel with the median of the sorted values.
Image Filtering – Median Filter
Example:
Original window: [12, 80, 13,
14, 15, 255,
16, 17, 18]
Sorted: [12, 13, 14, 15, 16, 17, 18, 80, 255]
Median = 16
The center pixel (15) is replaced with 16.
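A minimal sketch of the same operation in Python, assuming a 2-D uint8 NumPy array img (OpenCV's built-in equivalent is cv2.medianBlur(img, 3)):

import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D grayscale image."""
    pad = k // 2
    # Replicate border pixels so the window fits at the image edges
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            out[i, j] = np.median(window)  # center pixel <- median of window
    return out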
Feature Extraction
Definition: Process of identifying and describing relevant patterns in
images.
Converts raw image data into compact, informative representations
Helps to improve the accuracy and efficiency of computer vision
algorithms
Features can be:
• Low-level: edges, corners, textures
• High-level: shapes, objects, semantics
Types of Image Features
Edge Features:
• Detect boundaries where image intensity changes sharply.
• Common techniques: Sobel, Prewitt, Canny Edge Detector.
Corner Features:
• Points with two dominant and different edge directions.
• Examples: Harris Corner Detector, Shi-Tomasi.
Blob Features:
• Regions in an image that are either brighter or darker than their
surroundings.
• Examples: Difference of Gaussians (DoG), Laplacian of Gaussian (LoG).
Desirable Feature Properties
Invariance to scale, rotation, lighting
Robustness to noise and occlusion
Distinctiveness: Differentiates between regions or objects
Efficiency: low computational and memory cost
Edge Detection
A fundamental technique used to identify points in a digital image
where the brightness changes sharply.
More formally, an edge is a region with a discontinuity in intensity.
Why Edge Detection?
• Helps simplify image data by reducing it to important structural
information.
• Crucial for object detection, recognition, segmentation and image
analysis.
• Provides a way to extract meaningful shapes and features from raw
images.
Edge Detection
Common algorithms:
• Sobel Operator
• Prewitt Operator
• Canny Edge Detector
Use cases
• Boundary detection, shape recognition
• Medical imaging
• Industrial inspection
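The Sobel operator from the list above can be sketched in a few lines of OpenCV (the input filename is illustrative):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
# Horizontal and vertical gradients (64-bit float keeps negative values)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
# Gradient magnitude, rescaled to 8-bit for display
mag = np.sqrt(gx**2 + gy**2)
edges = np.uint8(255 * mag / mag.max())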
Canny Edge Detection
Detects true edges while minimizing:
• Noise
• False positives (non-edges)
• False negatives (missing real edges)
Outperforms simpler operators (e.g., Sobel, Prewitt) in accuracy and
noise reduction.
Canny Edge Detection
Steps in Canny Edge Detection:
1. Noise Reduction:
• Smooth the image using a Gaussian filter.
2. Gradient Calculation:
• Compute intensity gradients using Sobel filters. Compute gradient
magnitude and direction.
Canny Edge Detection
3. Non-Maximum Suppression:
• Thin edges by keeping only local maxima in the gradient direction
4. Double Thresholding:
• Use two thresholds (low and high) to classify edge pixels.
• Strong edges: above the high threshold; weak edges: between the low and high
thresholds; non-edges: below the low threshold
5. Edge Tracking by Hysteresis:
• Suppress weak edges not connected to strong edges.
• Connect valid weak edges to strong ones; discard isolated noise.
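A minimal OpenCV sketch wrapping all five steps (the filename and threshold values are illustrative):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
# Step 1: Gaussian smoothing (Canny also smooths internally, but an
# explicit blur gives control over the kernel size)
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
# Steps 2-5: gradients, non-maximum suppression, double thresholding,
# and hysteresis are all performed inside cv2.Canny
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)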
Corner Detection
Corner detection is a technique used to identify points in the image where
the intensity changes sharply in multiple directions.
A corner is a point where edges meet or a point with large intensity
variations in all directions around it.
Why detect corners?
• Corners provide distinctive, repeatable and stable features in images
• Corners are important because they often correspond to key features
such as object boundaries, junctions or interest points that are robust
for matching, recognition, 3D reconstruction and motion tracking
Harris Corner Detector
Algorithm steps:
1. Compute image gradients Ix and Iy (e.g., with Sobel filters).
2. At each pixel, form the structure tensor from the gradient products inside a
Gaussian-weighted window w(x, y):
M = Σ w(x, y) [Ix²  Ix·Iy; Ix·Iy  Iy²]
3. Compute the corner response R = det(M) − k·(trace(M))², with k typically
0.04 – 0.06.
4. Threshold R and apply non-maximum suppression: large positive R marks a
corner, negative R an edge, and small |R| a flat region.
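A minimal OpenCV sketch (cv2.cornerHarris performs steps 1–3; the filename and the 1% threshold are illustrative):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
gray = np.float32(img)  # cornerHarris expects float32 input
# blockSize: window size, ksize: Sobel aperture, k: response parameter
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
# Mark pixels whose response exceeds 1% of the maximum as corners
corners = response > 0.01 * response.max()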
Feature Descriptors
SIFT (Scale-Invariant Feature Transform):
• Detects and describes local features that are invariant to scale and
rotation.
SURF (Speeded-Up Robust Features):
• A faster approximation of SIFT; uses Haar wavelets.
ORB (Oriented FAST and Rotated BRIEF):
• Efficient and robust binary descriptor for real-time applications.
HOG (Histogram of Oriented Gradients):
• Used to describe shape and object appearance
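A minimal ORB sketch with OpenCV (SIFT is analogous via cv2.SIFT_create(); the filename is illustrative):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)  # detector + binary descriptor
# keypoints: locations/orientations; descriptors: 32-byte binary vectors
keypoints, descriptors = orb.detectAndCompute(img, None)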
Feature Matching
Purpose: Find corresponding features between two or more images.
Techniques:
Brute Force Matcher
FLANN (Fast Library for Approximate Nearest Neighbors)
Distance Metrics:
Euclidean distance (for SIFT/SURF)
Hamming distance (for ORB/BRIEF)
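A minimal brute-force matching sketch; the random arrays stand in for binary descriptors (e.g., ORB output) from two images:

import cv2
import numpy as np

# Two sets of 32-byte binary descriptors standing in for ORB output
des1 = np.random.randint(0, 256, (100, 32), dtype=np.uint8)
des2 = np.random.randint(0, 256, (100, 32), dtype=np.uint8)
# Hamming distance suits binary descriptors; crossCheck keeps mutual best matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)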
Feature Extraction Pipelines
1. Image Acquisition
2. Preprocessing (grayscale, smoothing, normalization)
3. Feature Detection (edges, corners, blobs)
4. Feature Description
5. Matching or Classification
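A compact end-to-end sketch of this pipeline, using ORB and brute-force matching as one possible choice of detector, descriptor, and matcher:

import cv2

def pipeline(path1, path2):
    """End-to-end feature pipeline: acquire, preprocess, detect/describe, match."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)      # 1. acquisition
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    img1 = cv2.GaussianBlur(img1, (3, 3), 0)            # 2. preprocessing
    img2 = cv2.GaussianBlur(img2, (3, 3), 0)
    orb = cv2.ORB_create()                              # 3-4. detection + description
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # 5. matching
    return sorted(bf.match(des1, des2), key=lambda m: m.distance)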
Evaluation of Feature Detectors and
Descriptors
Repeatability
Distinctiveness
Invariance to scale, rotation, and illumination
Computational efficiency
Deep Learning-based Feature Extraction
CNN-based features: the convolutional layers of a trained network learn a
hierarchy of features, from edges and textures in early layers to object
parts and semantics in deeper layers.
Deep learning models (e.g., pretrained CNNs such as ResNet or VGG) can serve
as generic feature extractors: the activations of an intermediate or
penultimate layer are used as the image descriptor.
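A minimal sketch using a pretrained ResNet-18 from torchvision as a feature extractor (assumes PyTorch with torchvision ≥ 0.13; the random tensor stands in for a preprocessed image batch):

import torch
import torchvision.models as models

# Pretrained ResNet-18; drop the final classification layer so the
# network outputs a 512-dimensional feature vector per image
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

with torch.no_grad():
    x = torch.rand(1, 3, 224, 224)      # stand-in for a preprocessed RGB image
    features = extractor(x).flatten(1)  # shape: (1, 512)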
THANK YOU