Point Operations and Spatial Filtering
Ranga Rodrigo
Outline
1 Point Operations
Histogram Processing
2 Spatial Filtering
Smoothing Spatial Filters
Sharpening Spatial Filters
3 Edge Detection
Line Detection Using the Hough Transform
We can process images either in the spatial domain or in transform domains. The spatial domain refers to the image plane itself; in spatial domain processing, we directly manipulate pixels. Spatial domain processing is common. The frequency domain and wavelets are examples of transform domains.
Domains
Image enhancement
Spatial Domain Processing
Point Operations
g(x_0, y_0) = T[f(x_0, y_0)].
These are known as gray-level transformations.
Examples:
Gamma correction.
Window-center correction.
Histogram equalization.
Neighborhood Operations
The enhanced value of a pixel depends on the values of the input image in a neighborhood of (x_0, y_0).
Examples:
Filtering: mean, Gaussian, median etc.
Image gradients etc.
Domain Operations
g(x_0, y_0) = f(T_x(x_0, y_0), T_y(x_0, y_0)).
Examples:
Warping.
Flipping, rotating, etc.
Image registration.
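As a small illustrative sketch (not from the original slides), horizontal flipping can be written directly as a domain operation with T_x(i, j) = i and T_y(i, j) = w − j + 1; the file name is a placeholder:

im = rgb2gray(imread('image.jpg'));   % placeholder file name
[h, w] = size(im);
g = zeros(h, w, class(im));
for i = 1:h
    for j = 1:w
        g(i, j) = im(i, w - j + 1);   % g(i, j) = f(T_x(i, j), T_y(i, j))
    end
end
imshow(g)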
Point Operations: Recapitulation
g(x_0, y_0) = T[f(x_0, y_0)].
These are known as gray level transformations.
The enhanced value of a pixel depends only on the original value
of the pixel.
If we denote the value of the pixel before and after the
transformation as r and s , respectively, the above expression
reduces to
s = T (r ). (1)
For example, s = 255 − r gives the negative of the image.
[Figure: a gray-level transformation s = T(r); both axes span the intensity range [0, L − 1].]
Example
Write a program to generate the negative of an image.
im = imread('image.jpg');
imneg = 255 - im;
subplot(1, 2, 1)
imshow(im)
title('Original')
subplot(1, 2, 2)
imshow(imneg)
title('Negative')
Power-Law Transformations
s = c r^γ,  (2)
where c and γ are positive constants.
Values of γ in the range 0 < γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input levels.
Values of γ > 1 have the opposite behavior.
c = γ = 1 gives the identity transformation.
Gamma correction is an application.
[Figure: power-law curves s = c r^γ for γ = 0.1, 0.5, and 2, over the intensity range [0, L − 1].]
Example
Write a program to carry out power-law transformation on an image.
r = im2double(imread('image.jpg'));   % work in [0, 1] so the power law behaves sensibly
c = 1;
gamma = 0.9;
s1 = c * r.^gamma;
gamma = 1.2;
s2 = c * r.^gamma;
subplot(1, 3, 1)
imshow(r)
title('Original')
subplot(1, 3, 2)
imshow(s1)
title('Gamma = 0.9')
subplot(1, 3, 3)
imshow(s2)
title('Gamma = 1.2')
Piecewise-Linear Transformation Functions
Contrast stretching.
Window-center correction to enhance a portion of levels.
Gray-level slicing.
[Figure: a piecewise-linear transformation function T(r) over the range [0, L − 1].]
Here is an example of a piecewise linear transformation.
im = imread('Images/airplane.jpg');
r = rgb2gray(im);
line1 = 0:0.5:100;                          % maps inputs 0-200 to 0-100
line2 = 155/55 * ([201:1:255] - 200) + 100; % maps inputs 201-255 to 100-255
t = [line1, line2];
plot(t)
s = t(double(r) + 1);                       % double() avoids uint8 saturation at 255
subplot(1, 2, 1)
imshow(r)
title('Original')
subplot(1, 2, 2)
imshow(mat2gray(s))
title('Output')
Example
Write a program to carry out contrast stretching.
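A possible solution is sketched below; it stretches the full intensity range of the input linearly onto [0, 1] (the file name is a placeholder):

im = imread('image.jpg');          % placeholder file name
r = im2double(rgb2gray(im));
rmin = min(r(:));
rmax = max(r(:));
s = (r - rmin) / (rmax - rmin);    % map [rmin, rmax] linearly onto [0, 1]
subplot(1, 2, 1)
imshow(r)
title('Original')
subplot(1, 2, 2)
imshow(s)
title('Stretched')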
Histogram Processing
Histogram
Number of Bins in a Histogram
Example
Write a program to compute the histogram with a given number of bins.
close all
im = imread('fruits.jpg');
img = rgb2gray(im);
imshow(img)
numbins = 10;
binbounds = linspace(0, 255, numbins + 1);
cumhist = zeros(numbins + 1, 1);
for i = 2:numbins + 1
    cumhist(i) = sum(sum(img <= binbounds(i)));   % cumulative counts
endfor
counts = cumhist(2:end) - cumhist(1:end - 1);     % per-bin counts
bincenters = (binbounds(2:end) + binbounds(1:end - 1)) / 2;
bar(bincenters', counts, 0.2)
Histogram Equalization
Consider, for now, that continuous intensity values of an image are to be processed. We assume that r ∈ [0, L − 1]. Let's consider the intensity transformation
s = T(r),  0 ≤ r ≤ L − 1,  (3)
that produces an output intensity level s for every pixel in the input image having intensity r. We assume that
T(r) is monotonically increasing in the interval 0 ≤ r ≤ L − 1, and
0 ≤ T(r) ≤ L − 1 for 0 ≤ r ≤ L − 1.
The intensity levels in the image may be viewed as random variables in the interval [0, L − 1]. Let p_r(r) and p_s(s) denote the probability density functions (PDFs) of r and s. A fundamental result from basic probability theory is that if p_r(r) and T(r) are known, and T(r) is continuous and differentiable over the range of values of interest, then the PDF of the transformed variable s can be obtained using the simple formula
p_s(s) = p_r(r) |dr/ds|.  (4)
Now let's consider the following transformation function:
s = T(r) = (L − 1) ∫₀^r p_r(w) dw.  (5)
To find p_s(s) corresponding to this transformation, we use Equation 4:
ds/dr = dT(r)/dr = (L − 1) d/dr [∫₀^r p_r(w) dw] = (L − 1) p_r(r).  (6)
Substituting dr/ds = 1/((L − 1) p_r(r)) into Equation 4 gives
p_s(s) = p_r(r) · 1/((L − 1) p_r(r)) = 1/(L − 1),  0 ≤ s ≤ L − 1,
that is, a uniform PDF. For discrete intensities, we use probabilities and sums in place of PDFs and integrals, with p_r(r_k) = n_k/MN, so the discrete transformation is
s_k = T(r_k) = (L − 1) Σ_{j=0}^{k} p_r(r_j),  k = 0, 1, . . . , L − 1.  (9)
Thus the output image is obtained by mapping each pixel in the input image with intensity r_k into a corresponding pixel with level s_k in the output image, using Equation 9.
Example
Suppose that a 3-bit image (L = 8) of size 64 × 64 pixels (M N = 4096)
has the intensity distribution shown in Table 1, where the intensity
levels are in the range [0, L − 1] = [0, 7]. Carry out histogram equalization.
r_k        n_k     p_r(r_k) = n_k/MN
r_0 = 0    790     0.19
r_1 = 1    1023    0.25
r_2 = 2    850     0.21
r_3 = 3    656     0.16
r_4 = 4    329     0.08
r_5 = 5    245     0.06
r_6 = 6    122     0.03
r_7 = 7    81      0.02
Table 1: Table for Example 5.
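As a quick check of Equation 9, the equalized levels can be computed directly from the table (a short sketch; rounding to the nearest integer level is the usual convention):

p = [0.19 0.25 0.21 0.16 0.08 0.06 0.03 0.02];  % p_r(r_k) from Table 1
L = 8;
s = round((L - 1) * cumsum(p))                  % gives s_k = 1 3 5 6 6 7 7 7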
Example
Write a program to carry out histogram equalization.
img = imread('Images/Fig3.15(a)1top.jpg');
L = 256;
s = zeros(L, 1);
for k = 0:L - 1
    s(k + 1) = sum(sum(img <= k));   % cumulative histogram
endfor
n = size(img, 1) * size(img, 2);
s = s / n;                           % cumulative distribution, in [0, 1]
imeq = mat2gray(s(double(img) + 1)); % double() avoids uint8 saturation at 255
subplot(2, 2, 1)
imshow(img)
subplot(2, 2, 2)
imshow(imeq)
subplot(2, 2, 3)
imhist(im2double(img))
subplot(2, 2, 4)
imhist(imeq)
Spatial Filtering
Spatial filtering is one of the main tools that we use in image
processing.
There are many applications including image enhancement, e.g.,
smoothing.
We can accomplish effects such as smoothing by applying a
spatial filter directly on the image.
Spatial filters are also called spatial masks, kernels, templates,
and windows.
Applying a Spatial Filter
[Figure: image coordinate convention, with i running down (0 to h − 1) and j running across (0 to w − 1), and a kernel with coefficients w(s, t).]
Consider the 3 × 3 kernel shown in Figure 6.
At any pixel (i, j) in the image, the response g(i, j) of the filter is the sum of products of the filter coefficients and the image pixels encompassed by the filter. Observe that the center coefficient of the filter, w(0, 0), aligns with the pixel at location (i, j). For a mask of size m × n, we assume that m = 2a + 1 and n = 2b + 1, where a and b are positive integers. This means that we always choose filters of odd dimensions for convenience. In general, linear spatial filtering of an image of size M × N with a filter of size m × n is given by the expression
g(i, j) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(i + s, j + t).  (11)
Instead of using equal 1/9 entries, we can have a 3 × 3 kernel of all
ones and then divide the filter output by 9.
Example
Write a program to average filter an image using a 3 × 3 kernel.
w = 3;
h = 1/9 * ones(w, w);
hw = floor(w/2);
imrgb = imread('Images/airplane.jpg');
im = im2double(rgb2gray(imrgb));
[row, col] = size(im);
result = zeros(row, col);
for i = hw + 1:row - hw
    for j = hw + 1:col - hw
        result(i, j) = sum(sum(h .* im(i - hw:i + hw, j - hw:j + hw)));
    end
end
figure;
imshow(result);
A faster and more convenient implementation of the above loops is:
result = conv2(im, h, 'same');
Smoothing Spatial Filters
Smoothing filters are used for blurring and noise reduction. Blurring is used as a preprocessing operation to remove small noise-like objects before large-object extraction, and to bridge small gaps in lines and curves. Noise reduction can be achieved by blurring with a linear filter or by nonlinear filtering.
Examples of Smoothing Filters
The averaging (box) kernel:

1/9 ×  1 1 1
       1 1 1
       1 1 1
The weighted averaging kernel shown in Figure 8 gives more
importance to pixels close to the center.
1/16 ×  1 2 1
        2 4 2
        1 2 1
Example
Write a program to carry out weighted averaging using the kernel in
Figure 8.
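A sketch of one solution follows; since the kernel is symmetric, conv2 gives the same result as correlation (the file path matches the earlier examples):

h = 1/16 * [1 2 1; 2 4 2; 1 2 1];     % weighted-averaging kernel of Figure 8
imrgb = imread('Images/airplane.jpg');
im = im2double(rgb2gray(imrgb));
result = conv2(im, h, 'same');
imshow(result)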
Order-Statistics (Non-Linear) Filters
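An order-statistics filter replaces each pixel by a value determined by ranking (ordering) the pixels in its neighborhood; the median filter is the best-known example and is particularly effective against salt-and-pepper noise. A minimal sketch, assuming im is a grayscale double image:

% 3 x 3 median filtering (border pixels left unchanged for simplicity)
[rows, cols] = size(im);
result = im;
for i = 2:rows - 1
    for j = 2:cols - 1
        block = im(i - 1:i + 1, j - 1:j + 1);
        result(i, j) = median(block(:));    % the order statistic: the median
    end
end
imshow(result)

With the image processing toolbox or package loaded, medfilt2(im, [3 3]) performs the same operation.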
Sharpening Spatial Filters
We were able to achieve image blurring by using pixel averaging,
an operation similar to integration.
Sharpening, the opposite of blurring, can be achieved by spatial
differentiation.
The strength of the response of the derivative operator is
proportional to the degree of intensity discontinuity in the image at
the point of interest.
Thus, image differentiation enhances the edges and other
discontinuities, while deemphasizing areas of low intensity
variations.
Using the Second Derivative for Sharpening—The
Laplacian
Laplacian
∇²f = ∂²f/∂x² + ∂²f/∂y².  (12)
Its discrete second derivatives are
∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y),  (13)
∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y).  (14)
Therefore, the discrete version of the Laplacian is
∇²f(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y).
Figure 9: Laplacian kernels.

 0  1  0     1  1  1
 1 -4  1     1 -8  1
 0  1  0     1  1  1

 0 -1  0    -1 -1 -1
-1  4 -1    -1  8 -1
 0 -1  0    -1 -1 -1
Because the Laplacian is a derivative operator, its use highlights the intensity discontinuities in an image and deemphasizes regions with slowly varying intensity levels. We can produce a sharpened image by combining the Laplacian with the original image. If g(x, y) is the sharpened image,
g(x, y) = f(x, y) + c ∇²f(x, y),
where c = −1 for the Figure 9 kernels with a negative center coefficient, and c = 1 for those with a positive center.
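A sketch of this sharpening step, assuming f is a grayscale double image (conv2 suffices since the kernel is symmetric):

w = [0 1 0; 1 -4 1; 0 1 0];     % first Laplacian kernel in Figure 9
lap = conv2(f, w, 'same');      % Laplacian of the image
g = f - lap;                    % c = -1 because the kernel center is negative
imshow(mat2gray(g))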
Unsharp Masking
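Unsharp masking sharpens an image by subtracting a blurred (unsharp) version of the image from the image itself to form a mask, then adding a weighted portion of the mask back to the image. A minimal sketch, assuming f is a grayscale double image:

h = 1/16 * [1 2 1; 2 4 2; 1 2 1];   % smoothing kernel
fblur = conv2(f, h, 'same');        % blurred version of f
mask = f - fblur;                   % the unsharp mask
k = 1;                              % k = 1: unsharp masking; k > 1: highboost filtering
g = f + k * mask;
imshow(mat2gray(g))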
Using the First-Order Derivatives
∇f = grad(f) = [f_x, f_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ.  (17)
This vector has the important property that it points in the direction of
the greatest rate of change of f at location (x, y).
The magnitude (length) of the vector ∇f, denoted M(x, y), is
M(x, y) = mag(∇f) = √(f_x² + f_y²),  (18)
the value at (x, y) of the rate of change in the direction of the gradient vector. Note that M(x, y) is an image of the same size as the original, created when x and y are allowed to vary over all pixel locations in f. It is commonly referred to as the gradient image, or simply the gradient.
Sobel operators are discrete approximations to the gradient. Figure 10
shows the Sobel operators.
-1  0  1     1  2  1
-2  0  2     0  0  0
-1  0  1    -1 -2 -1
[Figure: (a) input image; (b) the horizontal derivative f_x.]
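A sketch computing f_x, f_y, and M(x, y) with the Sobel operators of Figure 10, assuming f is a grayscale double image:

wx = [-1 0 1; -2 0 2; -1 0 1];   % Sobel kernel for f_x (Figure 10, left)
wy = [1 2 1; 0 0 0; -1 -2 -1];   % Sobel kernel for f_y (Figure 10, right)
fx = imfilter(f, wx);            % imfilter correlates, so no kernel flip is needed
fy = imfilter(f, wy);
M = sqrt(fx.^2 + fy.^2);         % gradient magnitude, Equation 18
imshow(mat2gray(M))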
Edge Detection
Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved. Segmentation of nontrivial images is one of the
most difficult tasks in image processing. Segmentation accuracy
determines the eventual success or failure of computerized analysis
procedures. For this reason, considerable care should be taken to
improve the probability of rugged segmentation. In some situations,
such as industrial inspection applications, at least some measure of
control over the environment is possible at times. In others, as in
remote sensing, user control over image acquisition is limited
principally to the choice of image sensors.
Segmentation algorithms for monochrome images generally are based
on one of two basic properties of image intensity values: discontinuity
and similarity. In the first category, the approach is to partition an
image based on abrupt changes in intensity, such as edges in an
image. The principal approaches in the second category are based on
partitioning an image into regions that are similar according to a set of
predefined criteria.
There are three basic types of intensity discontinuities in a digital
image:
1 points,
2 lines, and
3 edges.
The most common way to look for discontinuities is to run a mask
through the image.
For a 3 × 3 mask, this procedure involves computing the sum of products of the coefficients with the intensity levels contained in the region encompassed by the mask. That is, the response R of the mask at any point in the image is given by
R = Σ_{i=1}^{9} w_i z_i,
where the z_i are the intensity values under the mask. The point-detection mask is:
-1 -1 -1
-1 8 -1
-1 -1 -1
Point Detection
A point has been detected at the location on which the mask is centered if |R| ≥ T, where T is a nonnegative threshold.
If T is given, the following command implements the point-detection approach just discussed:
g = abs(imfilter(double(f), w)) >= T;
where f is the input image, w is an appropriate point-detection mask, and g is the resulting image.
Line Detection
Horizontal     +45°          Vertical      −45°
-1 -1 -1      -1 -1  2      -1  2 -1       2 -1 -1
 2  2  2      -1  2 -1      -1  2 -1      -1  2 -1
-1 -1 -1       2 -1 -1      -1  2 -1      -1 -1  2
If the first mask were moved around an image, it would respond more strongly to lines (one pixel thick) oriented horizontally. With a constant background, the maximum response would result when the line passed through the middle row of the mask. Similarly, the second mask in Figure 13 responds best to lines oriented at +45°; the third mask to vertical lines; and the fourth mask to lines in the −45° direction. Note that the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than other possible directions. The coefficients of each mask sum to zero, indicating a zero response from the mask in areas of constant intensity.
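A sketch that applies all four masks of Figure 13 and keeps the strongest absolute response at each pixel, assuming f is a grayscale double image:

w1 = [-1 -1 -1;  2  2  2; -1 -1 -1];   % horizontal
w2 = [-1 -1  2; -1  2 -1;  2 -1 -1];   % +45 degrees
w3 = [-1  2 -1; -1  2 -1; -1  2 -1];   % vertical
w4 = [ 2 -1 -1; -1  2 -1; -1 -1  2];   % -45 degrees
r1 = abs(imfilter(f, w1));
r2 = abs(imfilter(f, w2));
r3 = abs(imfilter(f, w3));
r4 = abs(imfilter(f, w4));
R = max(max(r1, r2), max(r3, r4));     % strongest line response at each pixel
imshow(mat2gray(R))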
Edge Detection Using Function edge
The magnitude of this vector is
|∇f| = [G_x² + G_y²]^{1/2}.
To simplify computation, this quantity is sometimes approximated by omitting the square root,
|∇f| ≈ G_x² + G_y²,
or by using absolute values,
|∇f| ≈ |G_x| + |G_y|.
These approximations still behave as derivatives; that is, they are zero
in areas of constant intensity and their values are proportional to the
degree of intensity change in areas whose pixel values are variable. It
is common practice to refer to the magnitude of the gradient or its approximations simply as "the gradient." A fundamental property of the gradient vector is that it points in the direction of the maximum rate of change of f at coordinates (x, y). The angle at which this maximum rate of change occurs is
α(x, y) = tan⁻¹(G_y/G_x).
One of the key issues is how to estimate the derivatives G_x and G_y digitally. The various approaches used by function edge are discussed later in this section.
Second-order derivatives in image processing are generally computed using the Laplacian. That is, the Laplacian of a 2-D function f(x, y) is formed from second-order derivatives as follows:
∇²f(x, y) = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y².
With the preceding discussion as background, the basic idea behind
edge detection is to find places in an image where the intensity
changes rapidly, using one of two general criteria:
1 find places where the first derivative of the intensity is greater in magnitude than a specified threshold, or
2 find places where the second derivative of the intensity has a zero
crossing.
IPT's function edge provides several derivative estimators based on the criteria just discussed. For some of these estimators, it is possible to specify whether the edge detector is sensitive to horizontal edges, to vertical edges, or to both. The general syntax for this function is
[g, t] = edge(f, 'method', parameters)
where f is the input image, method is one of the approaches listed in its help, and parameters are additional parameters explained in the following discussion. In the output, g is a logical array with 1s at the locations where edge points were detected in f and 0s elsewhere. Parameter t is optional; it gives the threshold used by edge to determine which gradient values are strong enough to be called edge points.
Sobel Edge Detector
Figure 14: A 3 × 3 image neighborhood and the Sobel and Prewitt masks.

z1 z2 z3
z4 z5 z6
z7 z8 z9

Sobel:
-1 -2 -1    -1  0  1
 0  0  0    -2  0  2
 1  2  1    -1  0  1

Prewitt:
-1 -1 -1    -1  0  1
 0  0  0    -1  0  1
 1  1  1    -1  0  1
Sobel edge detection can be implemented by filtering an image f (using imfilter) with the left mask in Figure 14, filtering f again with the other mask, squaring the pixel values of each filtered image, adding the two results, and computing the square root. Similar comments apply to the second and third entries in Figure 14. Function edge simply packages the preceding operations into a function call and adds other features, such as accepting a threshold value or determining a threshold automatically. In addition, edge contains edge detection techniques that are not implementable directly with imfilter.
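A sketch of the procedure just described, with the thresholding step omitted:

wy = [-1 -2 -1; 0 0 0; 1 2 1];   % left Sobel mask in Figure 14
wx = wy';                        % the companion mask
gy = imfilter(double(f), wy);
gx = imfilter(double(f), wx);
g = sqrt(gx.^2 + gy.^2);         % gradient magnitude image
imshow(g, [])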
The general calling syntax for the Sobel detector is
[g, t] = edge(f, 'sobel', T, dir)
Prewitt Edge Detector
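The Prewitt detector uses the masks in the bottom row of Figure 14. It is computationally simpler than the Sobel detector, but tends to produce somewhat noisier results. Its calling syntax mirrors that of the Sobel detector:
[g, t] = edge(f, 'prewitt', T, dir)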
Laplacian of a Gaussian (LoG) Detector
Consider the Gaussian function h(r) = e^{−r²/(2σ²)}, where r² = x² + y² and σ is the standard deviation. The Laplacian of this function is
∇²h(r) = −[(r² − σ²)/σ⁴] e^{−r²/(2σ²)}.
This function is called the Laplacian of Gaussian (LoG). Because the
second derivative is a linear operation, convolving (filtering) an image
with ∇2 h(r ) is the same as convolving the image with the smoothing
function first and then computing the Laplacian of the result. This is the
key concept underlying the LoG detector. We convolve the image with
∇2 h(r ) knowing that it has two effects: It smoothes the image (thus
reducing noise), and it computes the Laplacian, which yields a
double-edge image. Locating edges then consists of finding the zero
crossings between the double edges.
The general calling syntax for the LoG detector is
[g, t] = edge(f, 'log', T, sigma)
where sigma is the standard deviation and the other parameters are as
explained previously. The default value of sigma is 2. As before, edge
ignores any edges that are not stronger than T. If T is not provided, or
it is empty, [ ], edge chooses the value automatically. Setting T to 0
produces edges that are closed contours, a familiar characteristic of
the LoG method.
Zero-Crossings Detector
This detector is based on the same concept as the LoG method, but
the convolution is carried out using a specified filter function, H. The
calling syntax is
[g, t] = edge(f, 'zerocross', T, H)
The other parameters are as explained for the LoG detector.
Canny Edge Detector
The Canny detector [1] is the most powerful edge detector provided by function edge. The method can be summarized as follows:
1 The image is smoothed using a Gaussian filter with a specified standard deviation, σ, to reduce noise.
2 The local gradient, g(x, y) = [G_x² + G_y²]^{1/2}, and edge direction, α(x, y) = tan⁻¹(G_y/G_x), are computed at each point, using an operator such as Sobel or Prewitt. An edge point is defined to be a point whose strength is locally maximum in the direction of the gradient.
3 The edge points determined in (2) give rise to ridges in the gradient magnitude image. The algorithm then tracks along the top of these ridges and sets to zero all pixels that are not actually on the ridge top, so as to give a thin line in the output, a process known as nonmaximal suppression. The ridge pixels are then thresholded using two thresholds, T1 and T2, with T1 < T2. Ridge pixels with values greater than T2 are said to be "strong" edge pixels, and ridge pixels with values between T1 and T2 are said to be "weak" edge pixels. Finally, the algorithm performs edge linking by incorporating the weak pixels that are 8-connected to the strong pixels.
The syntax for the Canny edge detector is
[g, t] = edge(f, 'canny', T, sigma)
where T is a vector, T = [T1 , T2 ], containing the two thresholds
explained in step 3 of the preceding procedure, and sigma is the
standard deviation of the smoothing filter. If t is included in the output
argument, it is a two-element vector containing the two threshold
values used by the algorithm. The rest of the syntax is as explained for
the other methods, including the automatic computation of thresholds if
T is not supplied. The default value for sigma is 1.
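For instance, with illustrative parameter values (the thresholds and sigma below are assumptions, not prescribed values):
[g, t] = edge(f, 'canny', [0.04 0.10], 1.5);
imshow(g)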
We can extract and display the vertical edges in the image f as follows:
[gv, t] = edge(f, 'sobel', 'vertical');
imshow(gv)
t
t = 0.0516
Line Detection Using the Hough Transform
Ideally, the methods discussed in the previous section should yield pixels lying only on edges. In practice, the resulting pixels seldom characterize an edge completely because of noise, breaks in the edge from nonuniform illumination, and other effects that introduce spurious intensity discontinuities. Thus, edge-detection algorithms typically are followed by linking procedures to assemble edge pixels into meaningful edges. One approach that can be used to find and link line segments in an image is the Hough transform.
Given a set of points in an image (typically a binary image), suppose that we want to find subsets of these points that lie on straight lines. One possible solution is to first find all lines determined by every pair of points and then find all subsets of points that are close to particular lines. The problem with this procedure is that it involves finding n(n − 1)/2 ∼ n² lines and then performing n · n(n − 1)/2 ∼ n³ comparisons of every point to all lines. This approach is computationally prohibitive in all but the most trivial applications.
With the Hough transform, on the other hand, we consider a point (x_i, y_i) and all the lines that pass through it. Infinitely many lines pass through (x_i, y_i), all of which satisfy the slope-intercept equation y_i = a x_i + b for some values of a and b. Writing this equation as b = −a x_i + y_i and considering the ab-plane (also called parameter space) yields the equation of a single line for a fixed pair (x_i, y_i). Furthermore, a second point (x_j, y_j) also has a line in parameter space associated with it, and this line intersects the line associated with (x_i, y_i) at (a′, b′), where a′ is the slope and b′ the intercept of the line containing both (x_i, y_i) and (x_j, y_j) in the xy-plane. In fact, all points contained on this line have lines in parameter space that intersect at (a′, b′).
In principle, the parameter-space lines corresponding to all image
points (x i , y i ) could be plotted, and then the image lines could be
identified where large numbers of parameter lines intersect. A practical
difficulty with this approach, however, is that a (the slope of the line)
approaches infinity as the line approaches the vertical direction. One
way around this difficulty is to use the normal representation of a line:
x cos θ + y sin θ = ρ.
The computational attractiveness of the Hough transform arises from subdividing the ρθ parameter space into so-called accumulator cells. Usually the maximum range of values is −90° ≤ θ ≤ 90° and −D ≤ ρ ≤ D, where D is the distance between corners in the image. The cell at coordinates (i, j), with accumulator value A(i, j), corresponds to the square associated with parameter-space coordinates (ρ_i, θ_j). Initially, these cells are set to zero. Then, for every non-background point (x_k, y_k) in the image plane, we let θ equal each of the allowed subdivision values on the θ-axis and solve for the corresponding ρ using the equation ρ_k = x_k cos θ + y_k sin θ. The resulting ρ-values are then rounded off to the nearest allowed cell value along the ρ-axis, and the corresponding accumulator cell is incremented. At the end of this procedure, a value of Q in A(i, j) means that Q points in the xy-plane lie on the line x cos θ_j + y sin θ_j = ρ_i. The number of subdivisions in the ρθ-plane determines the accuracy of the collinearity of these points.
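A sketch of this accumulation procedure, assuming bw is a binary image:

[rows, cols] = size(bw);
theta = -90:89;                         % allowed theta subdivisions, in degrees
D = round(sqrt(rows^2 + cols^2));       % maximum possible rho
rho = -D:D;
A = zeros(length(rho), length(theta));  % accumulator cells A(i, j), initially zero
[yk, xk] = find(bw);                    % non-background points (x_k, y_k)
for k = 1:length(xk)
    for j = 1:length(theta)
        r = xk(k) * cosd(theta(j)) + yk(k) * sind(theta(j));
        i = round(r) + D + 1;           % round to the nearest allowed rho cell
        A(i, j) = A(i, j) + 1;          % increment the accumulator
    end
end

Peaks in A then indicate the dominant lines; the Image Processing Toolbox functions hough, houghpeaks, and houghlines package these steps.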
Using houghlines to Find and Link Line Segments
lines = houghlines(f, theta, rho, r, c)
figure, imshow(f), hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:, 2), xy(:, 1), 'LineWidth', 4, 'Color', [.6 .6 .6]);
end
J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.