
sensors

Article
Adaptive Weighted Data Fusion for Line Structured Light and
Photometric Stereo Measurement System
Jianxin Shi, Yuehua Li *, Ziheng Zhang, Tiejun Li and Jingbo Zhou *

School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China;
[email protected] (J.S.); [email protected] (Z.Z.); [email protected] (T.L.)
* Correspondence: [email protected] (Y.L.); [email protected] (J.Z.);
Tel.: +86-155-3369-2068 (Y.L.); +86-183-3319-3615 (J.Z.)

Abstract: Line structured light (LSL) measurement systems can obtain high accuracy profiles, but
the overall clarity relies greatly on the sampling interval of the scanning process. Photometric stereo
(PS), on the other hand, is sensitive to tiny features but has poor geometrical accuracy. Cooperative
measurement with these two methods is an effective way to obtain results with both high precision and high clarity. In
this paper, an LSL-PS cooperative measurement system is presented. The calibration methods
used in the LSL and PS measurement system are given. Then, a data fusion algorithm with adaptive
weights is proposed, where an error function that contains the 3D point cloud matching error and
normal vector error is established. The weights, which are based on the angles of adjacent normal
vectors, are also added to the error function. Afterward, the fusion results can be obtained by solving
linear equations. From the experimental results, it can be seen that the proposed method has the
advantages of both the LSL and PS methods. The 3D reconstruction results have the merits of high
accuracy and high clarity.

Keywords: line structured light; photometric stereo; cooperative measurement; adaptive weighted
data fusion

1. Introduction
Line structured light (LSL) sensors have the advantages of simple structure, high
accuracy, and low cost. A typical LSL sensor consists of a camera, a laser line projector,
and a frame that connects them together [1,2]. Currently, they are widely used in quality
evaluation [3], geometric measurement [4,5], visual tracking [6], railway inspection [7], etc.
In the measuring process, a laser line is projected onto the object, and the camera captures
the perturbed stripe image that carries the profile information. Camera coordinates of each
point on this profile can be solved with camera intrinsic parameters and the laser plane
equation [8].

Photometric stereo (PS) measurement has the advantages of fast measurement speed,
simple structure, and high clarity. The classical PS system consists of a camera and several
light spots [9]. It has been applied in defect detection [10–14], face recognition [15,16], and
cultural heritage digitization [17]. The measurement process of PS is achieved by taking
images of the object under different light spots. The surface normal vector of the object can
be calculated according to the light and dark changes. The 3D result can be achieved by
gradient integration [18,19].

LSL and PS are two measurement techniques that have the advantages of low cost, high
degree of automation, and simple operation. Although LSL can provide 3D geometrical
information with high accuracy, its clarity is highly affected by the noises introduced in
center extraction of the laser stripe and the sampling interval of the scanning. On the
contrary, PS is sensitive to the details on the object. The measurement accuracy is low
due to the noise accumulation in gradient integration. Therefore, how to achieve high
precision and high clarity results efficiently is a key issue in the research of 3D measurement.
Cooperative measurement with LSL and PS may be a solution.
Based on the above considerations, Nehab et al. [20] fused the position information
obtained from a depth scanner with the normal vectors computed by PS, combining the
advantages of both measurements. Haque et al. [21] added Laplace smoothing terms to
the optimized surface equations, aiming to make the result smoother at the edges, but
the reconstructed surfaces had holes. Zhang et al. [22] constructed an optimized surface
equation for data fusion where a Gaussian filter was designed by considering both the
neighborhood and depth values, while it required a complex iterative process and was
time consuming.
Okatani et al. [23] solved the optimization problem efficiently by using recurrent
belief propagation. It has the limitation that accurate results can only be obtained when an
appropriate confidence level is selected. Bruno et al. [24] proposed a method combining
coded structured light and PS for the 3D reconstruction of underwater objects, but the image
acquisition time is very long and further improvement is needed for practical applications.
Massot [25] and Li [26] also combined structured light and PS for the 3D reconstruction
of underwater objects. Riegler et al. [27] combined photometric loss and geometric loss to
train a model in a self-supervised way, but the accuracy of their reconstruction results was
not high. Lu et al. [28] proposed a multiresolution surface reconstruction scheme, which
combines low-resolution geometric images with PS data, but the iterative process in their
algorithm takes a long time. Li et al. [29] proposed a novel local feature descriptor to fuse
neighborhood point cloud coordinates and normal vectors. The accuracy of the results is
improved, but the computation time is long, especially when the number of point clouds
is large. Antensteiner et al. [30] proposed a fusion method based on the total generalized
variance to improve the accuracy, but its computational speed still needs to be improved.
Hao et al. [31] corrected the deviation of the PS by fitting an error surface using a 3D point
cloud of structured light. The depth of PS is achieved by the integration process, and the
noises are also accumulated.
In this paper, we propose an adaptive weighted fusion algorithm based on the angle
of the adjacent normal vectors. Firstly, the PS method is used to calculate the surface normal
vectors, and the weights are computed from the normal vector angles of the neighboring
points. Next, the error function of the fused surface is established, which consists of the
error in the 3D point cloud and the normal vector. The fusion result can be obtained by
building a sparse matrix and solving a linear system. Our algorithm has the
advantages of both the LSL and PS methods, and can achieve a high accuracy and high
clarity result.

2. Measurement Principle
The LSL-PS system measurement principle is shown in Figure 1. The LSL sensor
consists of a camera and a laser line projector. The laser plane is emitted by the laser
projector and intersects with the part to be measured. A perturbed laser stripe that carries
the geometrical information of the profile can be captured by the camera. Since the relative
position between the camera and the laser line projector is fixed, the coordinates of the
points on the intersecting profile can be solved using pre-calibrated sensor parameters.
As the part moves, the laser plane intersects the part at different positions and a series of
intersecting profiles can be calculated. By combining these profiles with the translation
distances, 3D point cloud of the part can be obtained.
The PS sensor uses the same camera and twelve spot light sources (LEDs). The light
sources are arranged at equal intervals on a circular plate. Each LED is switched on/off in
turn. The camera captures one image under the corresponding spot light to complete the PS
measurement. The surface normal vector can be achieved according to the pre-calibrated
sensor parameters, and then the depth value is calculated from the normal vector.
LSL and PS measurements are carried out sequentially. 3D measurement results from
LSL are translated into the pixel coordinate system and matched with the PS results. Data
interpolation of the LSL is carried out according to the pixel coordinates of the PS results
so as to make the number of the two data sets consistent. The final step is to fuse the 3D
point cloud of the LSL with the normal vector of the PS to achieve high precision and high
clarity results.

Figure 1. Illustration of the cooperative measurement system.
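
To make the two data sets comparable pixel by pixel, the scattered LSL depth samples can be resampled at the PS pixel grid before fusion. The following Python sketch illustrates this resampling step with SciPy's griddata; the array names (lsl_uv, lsl_depth, ps_u, ps_v) are placeholders of ours, not identifiers from the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_lsl_to_ps_grid(lsl_uv, lsl_depth, ps_u, ps_v):
    """Interpolate scattered LSL depth samples onto the PS pixel grid.

    lsl_uv     : (K, 2) pixel coordinates (u, v) of the projected LSL points
    lsl_depth  : (K,)   depth value of each LSL point
    ps_u, ps_v : (M, N) pixel coordinate grids of the PS image
    Returns an (M, N) depth map aligned with the PS normal map.
    """
    depth = griddata(lsl_uv, lsl_depth, (ps_u, ps_v), method="linear")
    # Fill pixels outside the convex hull of the LSL samples with nearest values
    nearest = griddata(lsl_uv, lsl_depth, (ps_u, ps_v), method="nearest")
    return np.where(np.isnan(depth), nearest, depth)
```

One linear pass plus one nearest-neighbour pass keeps the resampled depth map dense near the boundary of the scanned region, which is convenient for the later per-pixel fusion.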

3. Line Structured Light Measurement


Suppose that P has the camera coordinates of (x_c, y_c, z_c) and the corresponding world
coordinates of (X_w, Y_w, Z_w), then

[x_c  y_c  z_c]^T = R [X_w  Y_w  Z_w]^T + T,    (1)

where R is the rotation matrix and T is the translation vector. Let p(x, y) be the projection
point of P on the normalized image plane with coordinates of

[x, y] = [ f_x · x_c / z_c,  f_y · y_c / z_c ],    (2)

The projected coordinates after considering radial and tangential distortions are

x' = x(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y' = y(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y^2) + 2 p_2 x y,    (3)

where k_1, k_2, p_1, and p_2 are the distortion coefficients and r^2 = x^2 + y^2. The pixel
coordinates of P can be derived from Equation (4):

[u  v  1]^T = A [x'  y'  1]^T,    (4)

A = [ f_x  0  u_0 ;  0  f_y  v_0 ;  0  0  1 ],    (5)

where A is the internal matrix, f_x and f_y are the focal lengths, and u_0 and v_0 are the
coordinates of the camera principal point.
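
As a concrete illustration of the projection chain in Equations (1)–(5), the short Python sketch below maps a camera-frame point to distorted pixel coordinates. It uses the common pinhole convention (normalize by depth, distort, then apply the intrinsic matrix); the function and variable names are ours and the snippet is not the authors' implementation.

```python
import numpy as np

def project_point(p_cam, fx, fy, u0, v0, k1, k2, p1, p2):
    """Project a camera-frame point to distorted pixel coordinates."""
    xc, yc, zc = p_cam
    # Normalized image plane coordinates
    x, y = xc / zc, yc / zc
    r2 = x * x + y * y
    # Radial and tangential distortion, Equation (3)
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # Pixel coordinates via the intrinsic matrix, Equations (4) and (5)
    u = fx * x_d + u0
    v = fy * y_d + v0
    return np.array([u, v])
```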
Camera coordinates of the points on the laser stripe can be obtained by taking images of
the planar target and the corresponding laser stripe in different positions [2]. The point
cloud of the laser stripe is fitted by the random sample consensus (RANSAC) algorithm [32]
to obtain a more accurate laser plane equation, as shown in Equation (6).

B_1 x_c + B_2 y_c + B_3 z_c + B_4 = 0,    (6)

where B_1, B_2, B_3, and B_4 are the coefficients of the laser plane equation. For any laser stripe
image, the pixel coordinates of the stripe center are extracted using the improved gray
gravity method [33]. The normalized image plane coordinates after aberration correction
can be computed from Equations (3) and (4) in turn. Then, the camera coordinates of the
cross-section profile are obtained by Equations (2) and (6). Motional direction is achieved by
taking two images of the target at different translation positions [2].

4. Photometric Stereo Measurements

A ceramic ball is used to successively calibrate the direction of each spot light. Let P
be the highlight point on the sphere captured by the camera, and H be the surface normal
vector at P, as shown in Figure 2a. The image of point P and the corresponding cross-section
are shown in Figure 2b. O_1 is the pixel coordinate of the sphere center, and the radius of this
cross-section is r = ||O_1 P||. The surface normal vector at P is

H = ( p_u − p_uc,  p_v − p_vc,  sqrt(R^2 − r^2) ),    (7)

where R is the radius of the ceramic sphere. The camera view direction is V, and then the
light source direction can be obtained by Figure 2c:

L = 2 (H · V) H − V,    (8)

Figure 2. Light source direction calibration: (a) computing the spherical normal direction, (b) circular
section where P locates, and (c) calibration of light source direction.
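
A minimal sketch of this mirror-reflection calibration (Equations (7) and (8)) is given below. It assumes the sphere center pixel, the highlight pixel, the cross-section radius, and the sphere radius are already known, and it treats the viewing direction as the camera axis; all names and defaults are illustrative, not values from the paper.

```python
import numpy as np

def light_direction_from_sphere(p_highlight, p_center, r_pix, R_sphere,
                                view_dir=np.array([0.0, 0.0, 1.0])):
    """Estimate one light source direction from the highlight on a ceramic sphere.

    p_highlight : (u, v) pixel of the specular highlight P
    p_center    : (u, v) pixel of the sphere center O1
    r_pix       : radius of the circular cross-section through P
    R_sphere    : sphere radius in the same units as r_pix
    view_dir    : camera viewing direction V (unit vector)
    """
    du, dv = p_highlight[0] - p_center[0], p_highlight[1] - p_center[1]
    # Surface normal at the highlight, Equation (7)
    H = np.array([du, dv, np.sqrt(max(R_sphere ** 2 - r_pix ** 2, 0.0))])
    H = H / np.linalg.norm(H)
    # Mirror the view direction about the normal, Equation (8)
    L = 2.0 * np.dot(H, view_dir) * H - view_dir
    return L / np.linalg.norm(L)
```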
From the Lambert reflection model, the luminance value I at any point on the surface
can be expressed as

I = ρ N · L,    (9)

where ρ is the reflectivity. N is the surface normal vector and can be expressed by

N = (L^T L)^{−1} L^T I / ρ,    (10)

Based on the normal vectors, the gradients q_x and q_y in the x and y directions can be
calculated. The depth Z is obtained by use of the Fourier basis function method, as shown
in Equation (11).

Z = F^{−1} [ −j ( (2πu/N) F{q_x} + (2πv/M) F{q_y} ) / ( (2πu/N)^2 + (2πv/M)^2 ) ],    (11)

where F and F^{−1} are the two-dimensional fast Fourier transform and its inverse transform,
u and v represent the frequency indexes in the row and column directions, and M and N
are the number of rows and columns of the image, respectively.
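
The least-squares normal estimate of Equation (10) and the Fourier-basis integration of Equation (11) can be prototyped in a few lines of Python. The sketch below is an illustrative implementation (essentially a Frankot–Chellappa style integrator), not the authors' code; array names and conventions are our assumptions.

```python
import numpy as np

def normals_from_images(I, L):
    """Least-squares photometric stereo, Equation (10).

    I : (K, M, N) image stack, one image per light source
    L : (K, 3)    calibrated light source directions
    Returns unit normals of shape (M, N, 3).
    """
    K, M, N = I.shape
    G = np.linalg.lstsq(L, I.reshape(K, -1), rcond=None)[0]  # rho * N, shape (3, M*N)
    G = G.T.reshape(M, N, 3)
    rho = np.linalg.norm(G, axis=2, keepdims=True)
    return G / np.maximum(rho, 1e-12)

def integrate_depth(qx, qy):
    """Fourier-basis integration of the gradient field, Equation (11)."""
    M, N = qx.shape
    u = np.fft.fftfreq(N) * 2.0 * np.pi          # frequencies along columns
    v = np.fft.fftfreq(M) * 2.0 * np.pi          # frequencies along rows
    wu, wv = np.meshgrid(u, v)
    denom = wu ** 2 + wv ** 2
    denom[0, 0] = 1.0                            # avoid division by zero at DC
    Zf = -1j * (wu * np.fft.fft2(qx) + wv * np.fft.fft2(qy)) / denom
    Zf[0, 0] = 0.0                               # zero-mean depth
    return np.real(np.fft.ifft2(Zf))
```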

5. Adaptive Weighted Fusion

A flowchart showing the adaptive weighted fusion method is shown in Figure 3.
Firstly, the LSL and PS sensors are calibrated. Next, the 3D point cloud obtained from the
LSL is fused with the normal vector obtained from the PS. The fusion is performed by
minimizing an error function to obtain the optimized surface, which consists of a depth
error and a surface normal vector error. Adaptive weights are calculated from the angles
between adjacent normal vectors. With the method, the depth value will no longer need to
be calculated from the surface normal vectors.

Figure 3. Flow chart showing the adaptive weighted fusion method.

The fusion principle is shown in Figure 4. Z^GT is the true depth, and Z^PS is the PS
value. N^GT_{i,j} and N^PS_{i,j} are the corresponding normal vectors. Z^LSLS is the profile from LSL,
Z^OPT is the optimized depth, and P_{i,j} represents the points at pixel position (u, v) above it;
d_i is the distance from P_{i,j} to the corresponding point of the Z^LSLS profile in the vertical
direction; T^x_{i,j} and T^y_{i,j} are the tangent vectors of Z^OPT in the x and y directions at the pixel
(u, v). With the fusion, the optimal depth value can be calculated for each pixel (u, v).

Figure 4. Illustration of the fusion principle.

The 3D coordinates of P_{i,j} can then be expressed as

P_{i,j}(u, v) = [ −(u/f_x) Z^OPT_{i,j}(u, v),  −(v/f_y) Z^OPT_{i,j}(u, v),  Z^OPT_{i,j}(u, v) ]^T,    (12)

where Z^OPT_{i,j}(u, v) is the depth of the surface point at (u, v), and f_x and f_y are the camera
focal lengths. Based on the error between the LSL measured profile and the optimized
profile in the depth direction, the depth error function is constructed as

E_p = (1/(M·N)) Σ_{i=1}^{M·N} ( μ_{i,j} Z^OPT_{i,j} − μ_{i,j} Z^LSLS_{i,j} )^2,    (13)

where μ_{i,j} = [−u/f_x, −v/f_y, 1]^T and Z^LSLS_{i,j} are the depth values obtained from the LSL
measurements.

Normal vectors would change dramatically within the detail-rich region and slightly
in the flat region. Thus, the weights of the pixel points can be assigned according to
the normal vector angles between the current pixel and its neighbors. The computation
principle for weights is shown in Figure 5, where Figure 5a is the neighborhood of normal
vectors and Figure 5b is the angle change of the adjacent normal vectors.

Figure 5. Weights computation using normal vectors: (a) normal vector neighborhood and (b) angle
between adjacent normal vectors.

Normal vector N^PS_{i,j} at point P_{i,j} can be expressed as

N^PS_{i,j} = [ N_x, N_y, N_z ]^T,    (14)

Assuming that the number of rows and columns of points P_{i,j} in the image are i and j,
respectively, then the normal vectors at points P_{i,j} can be represented by Equation (15).

V_{i,j} = [ N_x(i, j), N_y(i, j), N_z(i, j) ]^T,    (15)

At this time, the normal vectors of the neighboring points on the left and right of
point P_{i,j} are V_{i,j−1} and V_{i,j+1}, and the normal vectors of the neighboring points on the top
and bottom sides of point P_{i,j} are V_{i−1,j} and V_{i+1,j}. The angle between point P_{i,j} and its
neighboring points in the X-direction and the angle in the Y-direction can be calculated.

θ^x_{i,j} = arccos( V_{i,j}·V_{i,j−1} / (|V_{i,j}| |V_{i,j−1}|) ) + arccos( V_{i,j}·V_{i,j+1} / (|V_{i,j}| |V_{i,j+1}|) )
θ^y_{i,j} = arccos( V_{i,j}·V_{i−1,j} / (|V_{i,j}| |V_{i−1,j}|) ) + arccos( V_{i,j}·V_{i+1,j} / (|V_{i,j}| |V_{i+1,j}|) ),    (16)

After calculating the angles between the normal vectors at points P_{i,j} and their neighbors,
a weight function on the magnitude of the angle between the normal vectors is obtained as

W_{i,j} = θ^x_{i,j} + θ^y_{i,j},    (17)

At this point, the normal vectors N^APS_{i,j} after adding the weighting function are

N^APS_{i,j} = N^PS_{i,j} · W_{i,j},    (18)
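
The neighbour-angle weighting of Equations (16)–(18) is straightforward to vectorize. The sketch below is our reading of that step as a plain NumPy prototype with illustrative names: it computes the angles to the four neighbours and scales the PS normals by the resulting weight.

```python
import numpy as np

def weighted_normals(N_ps):
    """Weight PS normals by the angles to their 4-neighbours, Eqs. (16)-(18).

    N_ps : (M, N, 3) unit normal map from photometric stereo
    Returns (weights, weighted_normals) with shapes (M, N) and (M, N, 3).
    """
    def angle(a, b):
        # Angle between two unit-normal maps, clipped for numerical safety
        dot = np.clip(np.sum(a * b, axis=2), -1.0, 1.0)
        return np.arccos(dot)

    # Neighbours obtained by shifting; border pixels reuse themselves (angle 0)
    left  = np.roll(N_ps,  1, axis=1); left[:, 0]   = N_ps[:, 0]
    right = np.roll(N_ps, -1, axis=1); right[:, -1] = N_ps[:, -1]
    up    = np.roll(N_ps,  1, axis=0); up[0, :]     = N_ps[0, :]
    down  = np.roll(N_ps, -1, axis=0); down[-1, :]  = N_ps[-1, :]

    theta_x = angle(N_ps, left) + angle(N_ps, right)   # Equation (16), X direction
    theta_y = angle(N_ps, up) + angle(N_ps, down)      # Equation (16), Y direction
    W = theta_x + theta_y                              # Equation (17)
    return W, N_ps * W[..., None]                      # Equation (18)
```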
From Equation (12) the tangent vectors at P_{i,j} are

T^x_{i,j} = ∂P_{i,j}/∂u = [ −(1/f_x)(u ∂Z^OPT_{i,j}/∂u + Z^OPT_{i,j}),  −(1/f_y) v ∂Z^OPT_{i,j}/∂u,  ∂Z^OPT_{i,j}/∂u ]^T
T^y_{i,j} = ∂P_{i,j}/∂v = [ −(1/f_x) u ∂Z^OPT_{i,j}/∂v,  −(1/f_y)(v ∂Z^OPT_{i,j}/∂v + Z^OPT_{i,j}),  ∂Z^OPT_{i,j}/∂v ]^T,    (19)

In the ideal case, the tangent vector is perpendicular to the normal vector and its projection
along the direction of the normal vector is zero. Based on the relationship between the
normal vector of the PS and the tangent vector of the ideal result, the normal vector error
function is constructed as

E_n = (1/(M·N)) Σ_{i=1}^{M·N} { [ T^x_{i,j}(P_{i,j}) · N^APS_{i,j} ]^2 + [ T^y_{i,j}(P_{i,j}) · N^APS_{i,j} ]^2 },    (20)

Finally, by combining the depth error function of the LSL and the normal vector error
function of the PS, the fusion can be achieved by minimization of the error function, and is
expressed by

Z^OPT = argmin_{Z^OPT} { λ E_p + (1 − λ) E_n },    (21)

where λ ∈ [0, 1] is used to control the degree of influence of the point cloud values and
normal vectors on the fused result; the smaller λ is, the more the fusion result is influenced
by the normal vectors; the larger λ is, the more it is influenced by the 3D point cloud
of the LSL.
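
Because both error terms in Equation (21) are quadratic in the unknown depths, the minimizer can be obtained from one sparse linear least-squares system. The Python/SciPy sketch below shows that structure under simplifying assumptions (forward differences for the tangent vectors of Equation (19) and an orthographic-like form of the normal constraint); it illustrates the idea and is not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def fuse_depth_with_normals(Z_lsl, N_aps, lam=0.7):
    """Solve a simplified version of Equation (21) as sparse linear least squares.

    Z_lsl : (M, N) depth map resampled from the LSL point cloud
    N_aps : (M, N, 3) weighted PS normals, Equation (18)
    lam   : trade-off between the depth term E_p and the normal term E_n
    """
    M, N = Z_lsl.shape
    idx = np.arange(M * N).reshape(M, N)
    rows, cols, vals, rhs = [], [], [], []

    def add(r, c, v):
        rows.append(r); cols.append(c); vals.append(v)

    eq = 0
    wd, wn = np.sqrt(lam), np.sqrt(1.0 - lam)
    # Depth (data) term: sqrt(lam) * Z = sqrt(lam) * Z_lsl
    for p in range(M * N):
        add(eq, p, wd); rhs.append(wd * Z_lsl.flat[p]); eq += 1
    # Normal term: nz * (Z[neighbour] - Z[p]) = -nx (or -ny)
    nx, ny, nz = N_aps[..., 0], N_aps[..., 1], N_aps[..., 2]
    for i in range(M):
        for j in range(N):
            if j + 1 < N:   # x direction
                add(eq, idx[i, j + 1], wn * nz[i, j]); add(eq, idx[i, j], -wn * nz[i, j])
                rhs.append(-wn * nx[i, j]); eq += 1
            if i + 1 < M:   # y direction
                add(eq, idx[i + 1, j], wn * nz[i, j]); add(eq, idx[i, j], -wn * nz[i, j])
                rhs.append(-wn * ny[i, j]); eq += 1

    A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, M * N)).tocsr()
    z = lsqr(A, np.asarray(rhs))[0]
    return z.reshape(M, N)
```

In the paper's full formulation the perspective terms of Equation (19) and the 1/(M·N) normalization are retained, but the structure of the linear system stays the same.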

6. Measurement Results and Discussions


The cooperative measurement system is shown in Figure 6. It consists of a laser
line projector (Shengzuan Laser, Shenzhen, China), a camera (MV-UB500M, MindVision,
Shenzhen, China), 12 LED light sources, a linear stage, and the components connecting
them together. The laser line projector has a wavelength of 650 nm and a power of 5 mW.
The minimum line width can reach 0.4 mm at the projection distance of 300 mm. The
resolution of the camera is 800 × 600 pixels, and the focal length of the lens is 4–12 mm,
which can be adjusted manually. The angle between the camera optical axis and the laser
plane is about 60◦ and the scanning is 10 mm/s for the following experiments. The LED
light sources are mounted around the camera on equally spaced circular panels. The
luminance of each source is the same, and the tilt angle of the light sources and the camera’s
optical axis is about 45◦ . The image plane of the camera is parallel to the circular plane
where the light source is located. The radius of the circular plane where the light source is
located is 600 mm. About 1.2 s is needed to obtain the part images under different light
spots for the PS measurement. The computer has an Intel i5-8300 CPU and 4 GB RAM.

Figure 6. The LSL-PS cooperative measurement system.

6.1. Measurement and Evaluation of Stairs

To verify the effectiveness of the system, measurement was carried out for precision-milled
stairs, as shown in Figure 7a. The topmost step serves as the reference plane, and the
remaining steps are named S1, S2, and S3. The heights between the steps and the reference
plane are denoted by H1, H2, and H3. The diffused laser stripes can be seen from the steps,
which are fine and bright to ensure the accuracy. Figure 7b shows the point cloud, and
the boundary points on the steps are excluded before evaluation. The reference plane
was first calculated by plane fitting and then the average distance from each step to the
reference plane was calculated. Similarly, the heights of the steps were measured on a
CMM (Hexagon GLOBAL 7107, Qingdao, China) using the topmost step as the reference
plane with the measurement error less than 3 µm. Measurement results and errors are
shown in Table 1. The mean absolute error (MAE) of the deviation in H3 is 0.0735 mm and
the relative error (RE) is 0.41%, which indicates high measurement accuracy of the LSL
sensor.

Figure 7. Measurement stairs using the LSL sensor: (a) stairs and (b) measured point cloud.

Table 1. Measurement results for the stairs (unit: mm).

No. CMM 1 2 3 4 5 MAE RE

H1 7.9992 7.9664 7.9703 7.9895 7.9734 7.9741 0.0245 0.31%
H2 13.9982 13.9340 13.9270 13.9491 13.9377 13.9331 0.0620 0.44%
H3 17.9987 17.9987 17.9206 17.9167 17.9421 17.9208 0.0735 0.41%

When the laser plane is calibrated with the RANSAC algorithm, the measurement
accuracy can be further improved, as shown in Table 2. The MAE of H3 is 0.0328 mm
and the relative error is reduced from 0.41% to 0.18%. The REs of H1 and H2 are also
reduced significantly.

Table 2. Measured results for aluminum stairs using the RANSAC (unit: mm).

No. CMM 1 2 3 4 5 MAE RE


H1 7.9992 8.0009 7.9846 8.0018 7.9862 8.0023 0.0040 0.05%
H2 13.9982 13.9694 13.9664 13.9797 13.9594 13.9809 0.0270 0.19%
H3 17.9987 17.9680 17.9607 17.9702 17.9499 17.9809 0.0328 0.18%
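
The laser-plane calibration behind Table 2 fits Equation (6) to the stripe point cloud with RANSAC [32]. A compact NumPy sketch of such a plane fit is shown below; the threshold and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.05, rng=np.random.default_rng(0)):
    """Fit B1*x + B2*y + B3*z + B4 = 0 to a 3D point cloud with RANSAC.

    points    : (K, 3) laser-stripe points in camera coordinates
    threshold : inlier distance to the candidate plane (same units as points)
    Returns the plane coefficients (B1, B2, B3, B4) refit on the inliers.
    """
    best_inliers = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        dist = np.abs(points @ n + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the inliers via SVD of the centered points
    P = points[best_inliers]
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]       # normal = direction of least variance
    return np.array([n[0], n[1], n[2], -np.dot(n, c)])
```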

Afterward, the fusion was performed and the results are shown in Figure 8. Figure 8a–c
are the LSL, the PS, and the fusion results. The LSL measurement results and the fusion
results are close to each other, while the PS result has a larger error because the light source
in the photometric stereo measurement is not a uniform parallel light source, which leads to
an error in the normal vector, and then the error accumulates in calculating the depth value,
which leads to a larger overall bias in the PS measurement results.
Measurement errors of the LSL, the PS, and the fused results are evaluated by compar-
ing with the CMM results, as shown in Table 3. It can be seen that the absolute error (AE)
of the LSL measurement result of H3 is 0.0349 mm, the error of the PS measurement result
is 0.9620 mm, and the error of the fused result is 0.0293 mm. The error of fused result is
reduced by 16.0% compared to the LSL measurement result, and 97.0% compared to that of
the PS measurement result. Therefore, the fusion method can further improve the accuracy.

Figure 8. Measurement results using the different methods: (a) LSL, (b) PS, and (c) fused results.

Table 3. Comparison of measured results using different methods (unit: mm).

No. CMM LSL AE PS AE Fusion AE
H1 7.9992 7.9791 0.0201 6.1526 1.8466 8.0095 0.0103
H2 13.9982 13.9919 0.0063 11.3644 2.6338 14.0121 0.0139
H3 17.9987 18.0336 0.0349 17.0367 0.9620 18.0280 0.0293

6.2. Effect of Different Values of λ

Different values of λ were analyzed to show their impact on the fusion results, as shown
in Figure 9. The sum of the MAEs of the steps (H1, H2, and H3) varies along with λ. When
λ is 0.1, the error is the largest. With a gradual increase in λ, the overall trend of the error
value is decreasing, and when λ is 0.7, the error is the smallest.
H3 17.9987 18.0336 0.0349 17.0367 0.9620 18.0280 0.0293

Figure9.9.Mean
Figure Meanabsolute
absoluteerror
errorfor
fordifferent
differentλ.
λ.

The
Theeffect
effecton
onthe
theclarity ofof
clarity thethe
fusion result
fusion when
result taking
when different
taking λ was
different λ also
was analyzed.
also ana-
Measurement results of an aluminum part are fused, and the results are
lyzed. Measurement results of an aluminum part are fused, and the results are shown in Figure
shown10.
in
For10.
Figure Figure 10a–f, the same position is analyzed that is at the outermost edge of the
petal indicated by the arrow. When λ = 0.1 and 0.2, it can be seen that the details in this
region are relatively blurred. When λ = 0.3, the details become somewhat clearer. When
λ = 0.4, the undulations at edge of the petals in this region are further increased, which is
Sensors 2024, 24, 4187 10 of 14

more closely
Figure 9. Meanmatched withfor
absolute error thedifferent
actual λ.
object. In addition, a small bump starts to appear
at the upper left of the arrow. In Figure 10f–h, the small bumps are no longer changing
compared to Figure
The effect on the10e. Therefore,
clarity when result
of the fusion λ is greater
when than 0.5,different
taking the clarity of the
λ was alsofusion
ana-
result has stabilized. Combining the results of the accuracy at different values
lyzed. Measurement results of an aluminum part are fused, and the results are shown of λ, λ in
is
taken as
Figure 10.0.7 when fusion is performed.

Figure 10. Fusion results of an aluminum part for different values of λ: (a) λ = 0.1, (b) λ = 0.2,
(c) λ = 0.3, (d) λ = 0.4, (e) λ = 0.5, (f) λ = 0.6, (g) λ = 0.7, and (h) λ = 0.8.

6.3. Measurement of Complex Parts

The purpose of this measurement system is to obtain 3D geometric information of
complex parts, which can be used for quality inspection and reverse engineering. Firstly,
six letters “HEBUST” were milled by a precision machine. Figure 11a shows the machined
parts. The measurement result using the LSL sensor is shown in Figure 11b. The normal
vector calculated using PS is shown in Figure 11c, where each letter can be seen. The angle
of the normal vector in the X and Y directions is calculated using the proposed method,
as shown in Figure 11d and e, respectively. The letters can only be clearly observed in the
corresponding directions. Figure 11f is the fused result where each letter becomes very
clear, the same as that of Figure 11c. The running time for the fusion is about 8 s.

Figure 11. Measurement of a machined part with letters: (a) the part, (b) LSL measurement results,
(c) PS normal vectors, (d) angle of the adjacent normal vector in the X direction, (e) angle of the
adjacent normal vector in the Y direction, and (f) fused results.

The fusion results of “HEBUST” are shown in Figure 12. The fused result of “HEBUST”
by the Nehab method [20] is shown in Figure 12a, where the six letters can be seen, but the
lateral part of the letters is insufficiently clear. In contrast, with the proposed method all of
the letters can be seen clearly, as shown in Figure 12c. Figure 12b,d are the enlargements
denoted in Figure 12a,c, respectively. Note that the features of the two letters “BU” in the
transverse direction are very fuzzy in Figure 12b. When using our method, these letters
become very clear and the transverse features can be seen.

Figure 12. Fusion results of “HEBUST” and its details: (a) Nehab method, (b) enlargement of Nehab
method, (c) our method, and (d) enlargement of our method.

To further verify the effectiveness of the proposed method, a coin with rich texture
information was measured. These textures include portraits, letters, and numbers. Figure 13b
is the measurement result of the LSL where the approximate outline can be seen, but the
details are not clear. Figure 13c shows the normal vector calculated from the PS, which
clearly shows its detailed features. The angles of the normal vector in the X and Y directions
are calculated using the proposed method, as shown in Figure 13d,e, respectively. The coin
can only be clearly characterized in the corresponding directions. Figure 13f is the fused
result where detailed features such as the characters, letters, and numbers on the coin
become clear.

Figure 13. Measurement of coin parts: (a) coin, (b) LSL sensor measurement results, (c) PS normal
vectors, (d) angle of adjacent normal vector in the X direction, (e) angle of adjacent normal vector in
the Y direction, and (f) fused results.

The fusion result of the coin is shown in Figure 14. Figure 14a shows the detail achieved
by the Nehab method. The fusion result using the proposed method is shown in Figure 14c.
Computing time required for data fusion was about 6 s. The details of the result in the
middle position are clearer compared to the Nehab method. Enlargement of the fusion
result is also shown. In Figure 14b, the approximate features of the hair can be observed,
but it is insufficiently clear. In Figure 14d, it becomes very clear with our method.

Figure 14. Fusion result of the coin: (a) Nehab method, (b) enlargement of (a), (c) our method, and
(d) enlargement of (c).

A cross-section profile of the coin is selected for comparative analysis, as shown in
Figure 15. This profile was obtained by use of the Nehab method, the proposed method,
and a chromatic confocal (CC) sensor, respectively. The CC sensor (Liyi D35A18R8S25,
Shenzhen, China) is shown in Figure 15a, with a resolution of 40 nm and a linear accuracy
of up to ±2 µm. Measurement accuracy of the CC sensor is very high, so it can be used as
the reference for accuracy evaluation of the fused results.

Figure 15. Comparison of fusion results for coins: (a) chromatic confocal sensor and (b) cross-section
profile obtained using the different methods.

Figure 15b shows the measurement result of the center profile; it can be seen that the
peak to valley value of the profile is D1 = 0.2598 mm using the Nehab method. With our
method D2 = 0.2334 mm, and the reference value D3 is 0.1901 mm. The deviation between
the Nehab method and the CC sensor is 0.0697 mm. With the proposed method, the
deviation is reduced to 0.0433 mm, a reduction of 37.9%. Therefore, the proposed method
not only improves the clarity, but also improves the accuracy.

7. Conclusions

A LSL-PS cooperative measurement system is designed, and an adaptive weighted
data fusion method is proposed. The adaptive fusion is based on the normal vector that
is computed with the PS method. The 3D point cloud obtained from the LSL can be
directly fused with the normal vector from the PS. Therefore, the integration process can
be eliminated in the PS measurement, which avoids the error accumulation. The weight
function based on the angle of the normal vector is added to the normal vector error
function, which makes the features of the fusion result clearer. More experiments will be
carried out in the future for complex surfaces with fine features.

Author Contributions: Conceptualization, Y.L. and J.Z.; data curation, J.S.; formal analysis, J.S., Y.L.
and J.Z.; funding acquisition, Y.L.; methodology, Y.L. and T.L.; project administration, T.L.; resources,
Z.Z.; software, J.S. and Z.Z.; supervision, T.L.; validation, Z.Z. and J.Z.; visualization, J.S., Z.Z. and
J.Z.; writing—original draft, J.S.; writing—review and editing, J.Z. All authors have read and agreed
to the published version of the manuscript.
Funding: This research was funded by the Natural Science Foundation of Hebei Province, grant
number E2022208020, the S&T Program of Hebei, grant number 22311701D, and by the Hebei
Provincial Department of Education, grant number JZX2024014, CXZZSS2023092.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The original contributions presented in the study are included in the
article, further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Wei, Z.; Shao, M.; Wang, Y.; Hu, M. A Sphere-Based Calibration Method for Line Structured Light Vision Sensor. Adv. Mech. Eng.
2013, 5, 580417. [CrossRef]
2. Li, Y.; Zhou, J.; Mao, Q.; Jin, J.; Huang, F. Line Structured Light 3D Sensing with Synchronous Color Mapping. IEEE Sens. J. 2020,
20, 9796–9805. [CrossRef]
3. Zhao, J.; Cheng, Y.; Cai, G.; Feng, C.; Liao, L.; Xu, B. Correction model of linear structured light sensor in underwater environment.
Opt. Lasers Eng. 2022, 153, 107013. [CrossRef]
4. Xue, Q.; Ji, W.; Meng, H.; Sun, X.; Ye, H.; Yang, X. Estimating the quality of stripe in structured light 3D measurement. Optoelectron.
Lett. 2022, 18, 103–108. [CrossRef]
5. Deng, Z.; Ruan, Y.; Hao, F.; Liu, T. Hand-eye calibration of line structured-light sensor by scanning and reconstruction of a
free-placed standard cylindrical target. Measurement 2024, 229, 114487. [CrossRef]
6. Yang, L.; Fan, J.; Huo, B.; Li, E.; Liu, Y. Image denoising of seam images with deep learning for laser vision seam tracking. IEEE
Sens. J. 2022, 22, 6098–6107. [CrossRef]
7. Mao, Q.; Cui, H.; Hu, Q.; Ren, X. A rigorous fastener inspection approach for high-speed railway from structured light sensors.
ISPRS J. Photogramm. Remote Sens. 2018, 143, 249–267. [CrossRef]
8. Li, Y.; Zhou, J.; Liu, L. Research progress of the line structured light measurement technique. J. Hebei Univ. Sci. Technol. 2018,
39, 116–124.
9. Fan, H.; Qi, L.; Wang, N.; Dong, J.; Chen, Y.; Yu, H. Deviation correction method for close-range photometric stereo with
nonuniform illumination. Opt. Eng. 2017, 56, 103102. [CrossRef]
10. Ma, L.; Liu, Y.; Liu, J.; Pei, X.; Sun, F.; Shi, L.; Fang, S. A multi-scale methodology of turbine blade surface recovery based on
photometric stereo through fast calibrations. Opt. Lasers Eng. 2022, 150, 106837. [CrossRef]
11. Liu, H.; Wu, X.; Yan, N.; Yuan, S.; Zhang, X. A novel image registration-based dynamic photometric stereo method for online
defect detection in aluminum alloy castings. Digit. Signal Process. 2023, 141, 104165. [CrossRef]
12. Wang, S.; Xu, K.; Li, B.; Cao, X. Online micro defects detection for ductile cast iron pipes based on twin light photometric stereo.
Case Stud. Constr. Mater. 2023, 19, e02561. [CrossRef]
13. Gould, J.; Clement, S.; Crouch, B.; King, R.S. Evaluation of photometric stereo and elastomeric sensor imaging for the non-
destructive 3D analysis of questioned documents—A pilot study. Sci. Justice 2023, 63, 456–467. [CrossRef] [PubMed]
14. Blair, J.; Stephen, B.; Brown, B.; McArthur, S.; Gorman, D.; Forbes, A.; Pottier, C.; McAlorum, J.; Dow, H.; Perry, M. Photometric
stereo data for the validation of a structural health monitoring test rig. Data Brief 2024, 53, 110164. [CrossRef] [PubMed]
15. Pattnaik, I.; Dev, A.; Mohapatra, A.K. A face recognition taxonomy and review framework towards dimensionality, modality and
feature quality. Eng. Appl. Artif. Intell. 2023, 126, 107056. [CrossRef]
16. Sikander, G.; Anwar, S.; Husnain, G.; Thinakaran, R.; Lim, S. An Adaptive Snake Based Shadow Segmentation for Robust Driver
Fatigue Detection: A 3D Facial Feature based Photometric Stereo Perspective. IEEE Access 2023, 11, 99178–99188. [CrossRef]
17. Bornstein, D.; Keep, T.J. New Dimensions in Conservation Imaging: Combining Photogrammetry and Photometric Stereo for 3D
Documentation of Heritage Artefacts. AICCM Bull. 2023, 1–15. [CrossRef]

18. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 191139.
[CrossRef]
19. Zhou, J.; Shi, J.; Li, Y.; Liu, X.; Yao, W. Data fusion of line structured light and photometric stereo point clouds based on wavelet
transformation. In Proceedings of the Third International Computing Imaging Conference (CITA 2023), Sydney, Australia, 1–3
June 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12921, pp. 960–964.
20. Nehab, D.; Rusinkiewicz, S.; Davis, J.; Ramamoorthi, R. Efficiently combining positions and normals for precise 3D geometry.
ACM Trans. Graph. (TOG) 2005, 24, 536–543. [CrossRef]
21. Haque, M.; Chatterjee, A.; Madhav Govindu, V. High quality photometric reconstruction using a depth camera. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2275–2282.
22. Zhang, Q.; Ye, M.; Yang, R.; Matsushita, Y.; Wilburn, B.; Yu, H. Edge-preserving photometric stereo via depth fusion. In
Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012;
IEEE: Piscataway, NJ, USA, 2012; pp. 2472–2479.
23. Okatani, T.; Deguchi, K. Optimal integration of photometric and geometric surface measurements using inaccurate re-
flectance/illumination knowledge. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition,
Providence, RI, USA, 16–21 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 254–261.
24. Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A.V. Experimentation of structured light and stereo vision for
underwater 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, 508–518. [CrossRef]
25. Massot-Campos, M.; Oliver-Codina, G.; Kemal, H.; Petillot, Y.; Bonin-Font, F. Structured light and stereo vision for underwater
3D reconstruction. In Proceedings of the OCEANS 2015-Genova, Genova, Italy, 18–21 May 2015; IEEE: Piscataway, NJ, USA, 2015;
pp. 1–6.
26. Li, X.; Fan, H.; Qi, L.; Chen, Y.; Dong, J.; Dong, X. Combining encoded structured light and photometric stereo for underwater
3D reconstruction. In Proceedings of the 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted
Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation
(SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), San Francisco, CA, USA, 4–8 August 2017; IEEE: Piscataway, NJ,
USA, 2017; pp. 1–6.
27. Riegler, G.; Liao, Y.; Donne, S.; Koltun, V.; Geiger, A. Connecting the dots: Learning representations for active monocular depth
estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA,
15–20 June 2019; pp. 7624–7633.
28. Lu, Z.; Tai, Y.W.; Ben-Ezra, M.; Brown, M.S. A framework for ultra high resolution 3D imaging. In Proceedings of the 2010 IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE:
Piscataway, NJ, USA, 2010; pp. 1205–1212.
29. Li, Q.; Ren, J.; Pei, X.; Ren, M.; Zhu, L.; Zhang, X. High-accuracy point cloud matching algorithm for weak-texture surface based
on multi-modal data cooperation. Acta Opt. Sin. 2022, 42, 0810001.
30. Antensteiner, D.; Stolc, S.; Pock, T. A review of depth and normal fusion algorithms. Sensors 2018, 18, 431. [CrossRef] [PubMed]
31. Fan, H.; Rao, Y.; Rigall, E.; Qi, L.; Wang, Z.; Dong, J. Near-field photometric stereo using a ring-light imaging device. Signal
Process. Image Commun. 2022, 102, 116605. [CrossRef]
32. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and
automated cartography. Commun. ACM 1981, 24, 381–395. [CrossRef]
33. Li, Y.; Zhou, J.; Huang, F.; Liu, L. Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method. Sensors
2017, 17, 814. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
