Automated Visual Inspection in Aerospace
DOI 10.1007/s00138-017-0839-1
ORIGINAL PAPER
Automatic inspection of aeronautic components
M. S. Biagio et al.
Fig. 4 Part-based convex hull. The gearbox is divided into parts: the convex hull of each part is computed, and the hulls are fused together to create a unified model
should be analysed, it is possible to turn on/off the modules via software. The modules are controlled by the computer using the DMX512 protocol.

This subsection is divided into two main parts. First, we illustrate the developed algorithms used for camera placement and system registration. Second, we illustrate the proposed method for model checking and visual inspection.

3.2.1 Camera placement

In order to obtain full coverage of the AvioAero gearbox, it is necessary to select the best positions for the cameras inside the inspection cage. In [2], the authors propose a method to maximize the visual coverage and to minimize the number of cameras used for inspection. Starting from [2], we implement a slightly modified version of their algorithm that also takes into account the area of each surface triangle of the CAD model viewed by a camera.

Due to the complexity of the AvioAero gearbox, we first have to obtain a simplified version of the original CAD model. To do so, we implemented a so-called part-based convex hull method. This is a semi-automatic algorithm in which we divide the gearbox CAD model into subparts. Then, we compute the convex hull of each of these parts and fuse them together to create a unified model. The process is illustrated in Fig. 4.

To generate the viewpoints we follow a procedure similar to [2]. We first find the centroid of the t-th triangle of the mesh and its corresponding normal vector. Each of these normal vectors is multiplied by the given distance to compute a camera viewpoint vector with origin in the centroid. Finally, we obtain the viewpoint vector by means of a summation of the centroid vector and the local viewpoint vector (with origin on the centroid). These operations are formalized as

v_i = c_i + dist · n_i,  (1)

where v_i refers to the final viewpoint vector (in the object coordinate frame), c_i is the centroid vector in object coordinates (the centre of each triangle defining the mesh), n_i is the resulting vector defined by the surface normal vector, and dist is a standoff distance from the surface decided by the experimenter.

Once the viewpoints are computed, we analyse the relation between them and the surface points. Our goal is to decide which surface points are "correctly" observed. The term "correctly" refers to a group of criteria that define the observability of the surface point. These criteria are:

– Frustum: a truncated pyramid that geometrically models the space seen by the camera. We use this geometrical entity as one of the criteria to select the surface points observed by the camera in optimal conditions. We do so by transforming the frustum into a triangulated mesh and testing the inclusion of all the points of the AvioAero gearbox CAD model.
– Angle of incidence: to further discriminate valid surface points, we consider the angle of incidence between each viewpoint and the surface point. According to this criterion, we consider a point valid if the angle of incidence of the ray between the viewpoint and the surface point is less than 60°.
– Occlusions: to handle occlusions, we have developed a z-buffer algorithm that detects occlusions between the viewpoint and the surface point using ray-tracing techniques. In such a case, the surface point is considered not valid because it is simply not observable from the viewpoint due to self-occlusions.

According to these criteria, we can now build the so-called measurability matrix [29]. This matrix is defined as M(p_i, v_j) with i, j ∈ n. In this case, v_j stands for viewpoints and p_i for surface points.

For the coverage of a single viewpoint C(v_j), we can define the coverage strength provided by that viewpoint for all scene points as [2]:

C(v_j) = (1/n) Σ_{i=0}^{n−1} M(p_i, v_j),  (2)

C_max = max_j C(v_j).  (3)

Now, we can calculate the degree of coverage overlap between two viewpoints, i.e. v_j and v_k. This can be expressed as the dot product of the two column vectors M(:, v_j) and M(:, v_k) normalized by the maximum coverage [2]:

o(v_j, v_k) = [(1/n) Σ_{i=0}^{n−1} M(p_i, v_j) · M(p_i, v_k)] / C_max.  (4)

The next step is to build a weighted undirected graph O = (V, E, w) with the set of vertices V = {v_1, . . . , v_{n−1}} and edges E ⊆ V × V. We introduce an edge in the graph if the corresponding viewpoints share some of the surface points. The weight assigned to this edge is defined as w = o(v_j, v_k).

Using the graph O and the row vector of coverage, it is possible to compute the so-called compounded degree for each vertex of the graph. The compounded degree for a given vertex v_j is given by [2]:

d_j^C = a(v_j) C(v_j) Σ_{k=0}^{|h_j|−1} w_{jk}.  (5)

Thus, d_j^C is the product between the (area-weighted) coverage and the sum of the weights of all the edges connected to the viewpoint's neighbourhood. It is worth noticing that we have slightly improved the original equation in [2] by including the term a(v_j). This term introduces a geometric criterion taking into consideration the size of the triangle area that contains the surface point and the angle of the surface normal with respect to the camera optical axis. In other words, by introducing this term we are prioritizing viewpoints with good visibility over large areas.

The last step is to compute the compounded degrees for all the vertices in O and put them in a descending-order list, namely D. Then, we select the highest degree, called the pivot, and we save it in a separate list, namely S. We iterate the entire procedure, eliminating from the list D all the other viewpoints that are direct neighbours in the overlap graph. Once the list D is empty, the procedure stops. As a final result, the list S has all the viewpoints with the highest coverage of the object.

Figure 5 depicts the output of the camera placement algorithm on the original gearbox model. As a final remark, we want to underline the fact that the algorithm has no real knowledge of the existence of fixture parts (that may create occlusions) or other geometrical constraints. Consequently, after the automatic placement it could be necessary to run a manual prune of the camera position list taking into account these real-world constraints.

Fig. 5 Camera placement for the AVIOAERO gearbox. In blue the first 9 cameras selected by the algorithm (colour figure online)

3.2.2 System registration

In this section, we will describe the system registration process. This is a fundamental step to determine the position of the 3D object with respect to the image and to locate the subparts/surfaces in the cameras' image planes and in the 3D model reference system.

A key component of such a process is a set of fiducial markers that are rigidly connected to the gearbox's fixture, maintaining a rigid configuration with respect to the gearbox. We decided to use these fiducial markers since they are easy to detect and allow the system to achieve high speed and sufficient precision. Among the several fiducial marker systems proposed in the literature, those based on square markers have gained popularity. They provide a 6D camera pose provided that the camera is properly calibrated (i.e., intrinsic parameters).

In many approaches, markers encode a unique identification by a binary code that may include error detection and correction bits. In our system, we decided to use a method based on [14], where the authors presented a fiducial marker system based on square markers particularly appropriate for camera pose estimation in real applications. The library implemented is called ArUco [24], which has the capability of automatically generating fiducial markers and, in the case of boards (more than one fiducial marker in the same plane), it also generates the corresponding board configuration file. This board configuration file contains further geometric information, and it can be used in the detection phase to improve detection performance. The result is higher accuracy when using boards with respect to the use of a single marker. For our registration process, we created two fiducial marker physical supports mounted on both sides of the gearbox (Fig. 6). In this way, each camera always sees at least one valid marker board.

Once the marker boards are properly installed, the annotated subparts/surfaces can be transferred into their corresponding 3D locations in the model reference system. To do so, we assume a rigid roto-translation between the ArUco boards and the gearbox.

In practice, when a new gearbox gets inside the cage, the ArUco markers are used to register the new location and transfer the 3D subparts/surfaces into their new 2D locations in the image planes of the cameras. The process allows some tolerance in the position of the gearbox inside the cage (see Fig. 14), making each camera independent so that the transfer

– camera-model registration using ArUco markers
– 3D-2D subparts/surfaces transfer.

We will describe in detail each of these phases involved in the calibration stage.

Single camera intrinsics calibration This step involves a classic intrinsic parameters camera calibration based on a black-white chessboard [30,38,39]. We used the well-known
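The camera-placement pipeline of Sect. 3.2.1, viewpoint generation (Eq. 1), coverage and overlap (Eqs. 2–4), compounded degree (Eq. 5) and the greedy pruning of list D, can be sketched as follows. This is a minimal NumPy sketch under our own naming, not the authors' code: the measurability matrix M is assumed to be already built from the frustum, incidence and occlusion tests, and the area/angle term a(v_j) is passed in precomputed.

```python
import numpy as np

def generate_viewpoints(centroids, normals, dist):
    """Eq. (1): v_i = c_i + dist * n_i (offset each triangle centroid
    along its unit surface normal by the standoff distance)."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return centroids + dist * n

def greedy_viewpoint_selection(M, a=None):
    """M: binary measurability matrix, rows = surface points, cols = viewpoints.
    Returns the list S of selected viewpoint indices."""
    n_points, n_views = M.shape
    C = M.sum(axis=0) / n_points                 # Eq. (2): per-viewpoint coverage
    Cmax = C.max()                               # Eq. (3)
    O = (M.T @ M) / (n_points * Cmax)            # Eq. (4): overlap o(v_j, v_k)
    np.fill_diagonal(O, 0.0)                     # no self-edges in the graph
    a = np.ones(n_views) if a is None else a
    d = a * C * O.sum(axis=1)                    # Eq. (5): compounded degree
    D = list(np.argsort(-d))                     # descending-order list D
    S, removed = [], np.zeros(n_views, dtype=bool)
    while D:                                     # greedy pivot selection
        pivot = D.pop(0)
        if removed[pivot]:
            continue
        S.append(int(pivot))
        removed[O[pivot] > 0] = True             # drop direct neighbours of the pivot
    return S
```

With a(v_j) left uniform, the sketch reduces to the original criterion of [2]; passing triangle areas and incidence angles recovers the area-weighted variant described above.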
T_{c1}^{c2} = T_{c1}^{b4} T_{b4}^{c2}.  (8)
T_{bn}^{m} = T_{bn}^{c1} T_{c1}^{m},  (11)

T_{cn}^{m} = T_{cn}^{c1} T_{c1}^{m},  (12)

where n stands for the board ID, with n = 1, . . . , 6. These transformations are assumed to be kept rigid during the operation of the system so that they can be used in the online phase to re-project the features on the image planes of the cameras.

2D–3D features transfer The transformation T_{c1}^{m} gives the extrinsics between the cameras and the gearbox model as

3.2.3 Inspection system

In this section, we will describe the full inspection system that handles two different problems. The first one, namely model checking, can be described as follows: given a CAD model, the gearbox in our case, it checks exhaustively and automatically whether this model meets given specifications or rules.

1 We select camera 1 as master, but any other camera could be used.
2 This can also be done by modelling the rays in the reference system of the camera and then roto-translating the rays into the reference system of the model.
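The roto-translation chaining used in Eqs. (8), (11) and (12) is plain composition of 4×4 homogeneous transforms. A small sketch with our own helper names (T_cn_c1 stands for the transform from the camera-1 frame to the camera-n frame, following the paper's T notation only loosely):

```python
import numpy as np

def make_T(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Eq. (12)-style chaining: the model pose in camera n is the composition of
# the camera-1-to-camera-n transform with the model pose in camera 1.
def chain(T_cn_c1, T_c1_m):
    return T_cn_c1 @ T_c1_m
```

The same composition yields Eq. (8), where the bridge frame is a marker board (b4) seen by both cameras.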
colourization of the surface, defect inspection or scratch recognition. The automation problem for defect inspection falls into two general categories based on the types of materials. The first category is associated with uniform materials such as metals, film, and paper. Defect detection in these materials normally relies upon identification of regions that differ from a uniform background. The second category, instead, is associated with textured materials such as textile, ceramics, plastics and others.

Figure 9 shows the pipeline of the proposed system. The system receives an acquired image as input of the algorithm. After some processing stages, it gives as output the missing subparts detected and the defects recognized. The entire system is based on the following steps:

Model checking (subparts) From a computer vision point of view, the problem of identifying subparts in the AvioAero gearbox (screws, bolts, pins, etc.) can be considered as an object detection problem that can be solved through a learning stage. Therefore, an algorithm should discriminate between what is and what is not a particular object. Once the registration process is completed, the next step is related to the identification of a set of regions of interest (ROI) where the object should be found. According to the set of subparts that should be recognized, we re-project each subpart CAD model onto the image acquired by the camera and we automatically select the ground truth and a ROI around the object. Inside each ROI, a sliding-window strategy is applied to extract features from each subwindow. As feature descriptor, we evaluated several, with the last being the local binary pattern (LBP) [25]. This is a very efficient texture descriptor which labels the pixels of an image according to the differences between the values of the pixel itself and the surrounding ones. Given a pixel in the image, the LBP code is computed by comparing it with its neighbours:

LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c) 2^p,  (15)

where

s(x) = { 1, x ≥ 0
         0, x < 0,  (16)

and g_c is the grey value of the central pixel, g_p is the value of its neighbours, P is the total number of involved neighbours, and R is the radius of the neighbourhood. Suppose the coordinate of g_c is (0, 0); then the coordinates of g_p are (R cos(2πp/P), R sin(2πp/P)). The grey values of neighbours that are not on the image grid can be estimated by interpolation. Suppose that the image is of size M × N; after the LBP pattern of each pixel is identified, a histogram is built to represent the texture image:

H(k) = Σ_{m=1}^{M} Σ_{n=1}^{N} f(LBP_{P,R}(n, m), k),  k ∈ [0, K],  (17)

where

f(x, y) = { 1, x = y
            0, otherwise,  (18)

and K is the maximal LBP pattern value. An extension of the original LBP method is the uniform LBP. The U value of an LBP pattern is defined as the number of spatial transitions (bitwise 0/1 changes) in that pattern:

U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|.  (19)
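For the common P = 8, R = 1 case, Eqs. (15)–(17) reduce to thresholding the 3×3 neighbourhood and packing the sign bits into a byte. A minimal sketch (interpolation for non-integer neighbour positions is omitted, and the neighbour ordering is one of several valid conventions; function names are ours):

```python
import numpy as np

def lbp8(img):
    """Eqs. (15)-(16) with P=8, R=1: for each interior pixel, compare the 8
    neighbours g_p with the centre g_c and pack s(g_p - g_c) into bit p."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    c = img[1:-1, 1:-1].astype(np.int64)
    out = np.zeros((h - 2, w - 2), dtype=np.int64)
    for p, (dy, dx) in enumerate(offs):
        g = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int64)
        out |= (g >= c).astype(np.int64) << p    # s(x) = 1 if x >= 0 else 0
    return out

def lbp_histogram(codes, K=255):
    """Eq. (17): histogram of the LBP codes over a window, k in [0, K]."""
    return np.bincount(codes.ravel(), minlength=K + 1)
```

The histogram of each subwindow is what the sliding-window stage feeds to the classifier.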
The uniform LBP patterns refer to the patterns which have limited transitions or discontinuities (U ≤ 2) in the circular binary presentation [26].

Classification In all supervised learning methods, it is important to choose the correct classifier. Popular classifiers are support vector machines (SVMs) [4,11] or boosting-based algorithms assembled in rejection cascades (AdaBoost [36] or LogitBoost [13]). In our case, we decided to use a linear SVM classifier. The feature vector extracted from each subwindow of the sliding-window approach is fed into the classifier. Each window labelled as positive contains the object; differently, those labelled negative do not contain any instance of the object. Each set of overlapping windows could contain all (or a substantial fraction of) the object of interest. This means that each of them might be labelled positive by the classifier, meaning we would count the same object multiple times. The usual strategy for managing this problem is non-maximum suppression [31]. In this strategy, windows with a local maximum of the classifier response suppress nearby windows, thus reducing the number of contiguous positive samples.

Validation In the validation step, the positive samples found by the classifier are compared with the original ground truth extracted from the CAD registration step, and a set of detections (true positive, false positive and missed) is given as result. Evaluation on the list of detected bounding boxes is done using the PASCAL criterion [12], which counts a detection as correct if the overlap, with respect to the union, between the detected and ground truth bounding boxes is greater than 0.5.

Bootstrap The final step is the possibility to run a bootstrap technique over the detections found by the algorithm. Bootstrap methods are designed to improve the stability and accuracy of supervised machine learning algorithms used in statistical classification and regression. This also reduces variance and helps to avoid over-fitting. Basically, it consists in the re-training of the SVM classifier adding all the false positive windows as negative samples. This technique can be iterated by the human operator during the inspection, to improve the detection algorithm.

Visual inspection (surfaces) In our context, considering the lack of a dataset of defect images and the homogeneity characteristics, in terms of texture, of the AvioAero gearbox, we decided to use an unsupervised technique. Specifically, the phase only transform (PHOT) method proposed by [1] is an efficient technique for detecting surface defects. This method assumes that a defect can be defined as an abrupt change in an otherwise homogeneous area. The method essentially segments defects by removing any regularity from the image. This is done at various scales and patterns at once; to do so, the method basically normalizes the Fourier transform of the input image by its magnitude [9].

We implemented the PHOT steps proposed by Aiger and Talbot [1]; their algorithm is summarized in Algorithm 1; the resulting O(u, v) represents the image output of the algorithm. To further enhance this output, we apply some other image processing stages. Figure 10 depicts the organization of the complete analysis with the heart of PHOT highlighted in yellow. A median filter is applied both in the input and output of the PHOT to smooth out the image and remove
[Fig. 10 pipeline: surface image → serialization → fast Fourier transform → normalization by magnitude (phase information) → inverse FFT → median filter (smoothing) → Mahalanobis distance → adaptive thresholding → binary image → morphological filters (erosion, dilation, hole filling) → area size filter → boundaries filter → PHOT result]
Fig. 10 Extended PHOT modules. The original PHOT module is highlighted in yellow (colour figure online)
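The core PHOT step of the pipeline above, normalizing the Fourier transform by its magnitude and inverting, is only a few lines. A minimal sketch (the eps guard is our addition to avoid division by zero; the surrounding median-filter and thresholding stages of Fig. 10 are not shown):

```python
import numpy as np

def phot(img, eps=1e-8):
    """Phase only transform (Aiger & Talbot): keep the Fourier phase,
    discard the magnitude, and invert. Regular texture is flattened,
    so irregularities (defect candidates) dominate the output O(u, v)."""
    F = np.fft.fft2(img.astype(float))
    return np.real(np.fft.ifft2(F / (np.abs(F) + eps)))
```

Thresholding this output, after the smoothing and Mahalanobis-distance stages of Fig. 10, yields the binary defect mask.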
D_M = sqrt( (x − μ)^T Σ^{−1} (x − μ) )  (20)
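Eq. (20) is the standard Mahalanobis distance, used in the extended pipeline of Fig. 10 to score responses against the background statistics (mean μ, covariance Σ) before adaptive thresholding. A one-line sketch (function name ours):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Eq. (20): D_M = sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    d = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

With Σ = I the distance reduces to the Euclidean norm of x − μ.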
Fig. 13 PHOT Results on the scotch tape defects. Defects are high-
lighted in red (colour figure online)
Fig. 14 Overlay images. This image represents 11 acquisition images
at different object poses
In this section, we illustrate the results obtained by our system with the AvioAero gearbox for both tasks: model checking and visual inspection. In these experiments, we tried to simulate the real inspection task of an AvioAero operator. For this reason, we simulated 11 different poses of the object (i.e., 11 different acquisitions), considering also different rotation angles (from −10 to +10 degrees). The overlay acquisition images are shown in Fig. 14.

Model checking The first acquisition is used to create the training set of the model checking task. Actually, the training used only 25% of the number of each subpart, and the rest was used for testing. For example, having 40 screws, we used 10 of them for training and then we tested on the remaining ones. Figure 15 depicts the results of the model checking task. As described in Sect. 3.2.3, the proposed method allows the use of a bootstrap technique to improve the subpart detection. In the results, we decided to test two possible configurations: Bootstrap Level 0 represents the training done on the first acquisition without bootstrap; Bootstrap Level 1 represents the first round of bootstrap, where false positives and missed elements are fed again into the SVM classifier, re-training the model. Bootstrap Level 1 improves on average the classification accuracy by 3.5%, reaching an accuracy of 95.22% with a standard deviation of ±1.09, against 91.78% with a standard deviation of ±4.01 for the Bootstrap Level 0 method (last bars of the figure). Furthermore, it is worth noticing that the number of false positives per image (FPPI) decreases. In fact, considering more than 5500 sliding windows for each acquisition, Bootstrap Level 1 has an average of 5 FPPI with respect to 8 FPPI for Bootstrap Level 0.

These results indicate the effectiveness of the proposed system, showing excellent classification accuracy, stability and robustness against multiple rotations of the object and different subpart viewpoints.
Fig. 16 Results on the first pose images. Analysis of different PHOT thresholds

Fig. 17 The ROC curve showing the performance of the detector at varying thresholds [plot of true positive rate (TPR) vs. false positive rate (FPR) for thresholds 6–11]

Fig. 18 Results obtained over ten different poses of the object. The PHOT threshold value in all poses is equal to 9

Visual inspection To evaluate the visual inspection algorithm, in collaboration with AvioAero operators, we created a ground truth dataset by manually annotating the defects present on the gearbox. Then, we used the PHOT results on the images of the first pose to select an initial threshold for the algorithm. This analysis is shown in Fig. 16. The x-axis shows the threshold choices. The y-axis shows the accuracy and the false positive area (FPArea), both in percentage. The accuracy represents the percentage of defects correctly detected, and the FPArea shows the percentage of the total inspected area that has been incorrectly classified as defect (false positive). We chose a PHOT threshold of 9, which provides a reasonable balance between accuracy (67%) and false positives (3.5%).

A more detailed analysis is presented in Fig. 17, showing the receiver operating characteristic (ROC) curve for the interval of thresholds which provides the best results according to Fig. 16. The ROC curve provides a plot of the true positive rate (TPR) against the false positive rate (FPR) at varying values of the PHOT threshold. The TPR is given by how many positive results (i.e., correctly classified area with a defect) are detected among all positive samples (i.e., all the labelled area with a defect). On the other hand, the FPR defines how many incorrect detections occur in the areas that do not contain a defect. Considering Fig. 17, with a false positive rate equal to 0.5, the best performance is reached with the PHOT threshold equal to 9. This threshold value can be modified by the human operator during inspection to adjust the performance of the PHOT descriptor.

It is worth noticing that the criterion used to decide if a defect has been detected is based on an overlap of 25% between the ground truth area and the area classified by PHOT as a defect. This criterion is reasonable considering that the tasks at hand ultimately involve reporting to human operators. Even if the system does not completely detect the defect, a partial detection can be sufficient to attract the human attention.

Figure 18 shows the results obtained over all the poses (see Fig. 14), with the threshold value equal to 9 selected from the analysis presented in Fig. 16. As already pointed out, this threshold value can be modified by the human operator during the inspection. However, a small threshold value means a much higher number of false positives, losing the advantage of the detection system. On the contrary, a high threshold value means less accuracy of the system and much higher human interference.

In Fig. 18, as expected, the false positive area remains similar in all the poses. The overall accuracy performs reasonably well, although there is a slight degradation of the accuracy with respect to the first pose. This is probably due to small errors in the re-projection of the inspected surfaces. When this is so, PHOT could partially model the full surface using pixels in the periphery that strongly differ from the pixels in the inner areas of the surface. Consequently, the sensitivity of PHOT in such cases is affected by this inconvenience. To deal partially with this problem, we have implemented a filter that erodes externally the surface before applying the PHOT analysis. However, as stated before, these misalignments could be tolerated in a man-in-the-loop scenario. Moreover, if we consider a scenario with active registration, we are confident that the overall accuracy of the system could be considerably improved.
5 Conclusion

In this paper we proposed a hardware-software automatic visual inspection system based on images that is able to address two main industrial problems. The first one refers to the inspection of all the components in a final product, to check if they meet given specifications and if all of them are correctly mounted. This problem is also known as model checking. The second one, namely visual inspection, refers to the inspection of the final product surface to check for the presence of aesthetic defects, e.g. scratches or discolouration. Since human operators have implicit limits, like work shifts, low repeatability and oversights of the defects/missing parts, this system was developed to support operators during their inspection tasks. Furthermore, the experimental results show that this novel automatic inspection method can detect missing parts and defects. Our future research will focus on: (1) the use of robotic arms to move the cameras; (2) improvements of the registration based on active markers; (3) proving the effectiveness of our system by testing it in a real product line scenario; (4) considering how the combination of human operators and inspection system can improve the detection results.

Acknowledgements This work was carried out under the support of the AvioAero company. Furthermore, we would like to thank Dr. Enrique Muñoz-Corral and Dr. Luca Mazzei for their invaluable technical and human support.

Compliance with ethical standards

Conflict of interest This research was funded by Avio Aero (grant number P37508).

References

1. Aiger, D., Talbot, H.: The phase only transform for unsupervised surface defect detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 295–302 (2010)
2. Alarcón-Herrera, J., Xiang, C., Xuebo, Z.: Viewpoint selection for vision systems in industrial inspection. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 4934–4939 (2014)
3. Bahlmann, C., Heidemann, G., Ritter, H.: Artificial neural networks for automated quality control of textile seams. Pattern Recognit. 32(1), 1049–1060 (1999)
4. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152. ACM, New York, NY, USA (1992)
5. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV Library, 1st edn. O'Reilly Media, Beijing (2008)
6. Caulier, Y., Bourennane, S.: An image content description technique for the inspection of specular objects. EURASIP J. Adv. Signal Process. 2008, 195263 (2008)
7. Chin, R.: Automated visual inspection: 1981 to 1987. Comput. Vis. Gr. Image Process. 41(3), 346–381 (1988)
8. Chin, R., Harlow, C.: Automated visual inspection: a survey. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 4(6), 557–573 (1982)
9. Choi, J., Kim, C.: Unsupervised detection of surface defects: a two-step approach. In: 2012 19th IEEE International Conference on Image Processing, pp. 1037–1040 (2012)
10. Corke, P.I.: Robotics, Vision and Control: Fundamental Algorithms in Matlab. Springer, Berlin, Heidelberg (2011)
11. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20, 273–297 (1995)
12. Everingham, M., Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338 (2010)
13. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Ann. Stat. 28, 2000 (1998)
14. Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F.J., Marín-Jiménez, M.J.: Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 47(6), 2280–2292 (2014). doi:10.1016/[Link].2014.01.005
15. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)
16. Kumar, A.: Computer-vision-based fabric defect detection: a survey. IEEE Trans. Ind. Electron. 55(1), 348–363 (2008)
17. Legland, D.: Matgeom: matlab geometry toolbox for 2d/3d geometric computing. [Link] (2009)
18. Lepetit, V., Moreno-Noguer, F., Fua, P.: EPnP: an accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 81(2), 155–166 (2008)
19. Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 2, 164–168 (1944)
20. Malamas, E., Petrakis, E., Zervakis, M., Petit, L., Legat, J.D.: A survey on industrial vision systems, applications and tools. Image Vis. Comput. 21, 171–188 (2003)
21. Markou, M., Singh, S.: Novelty detection: a review—part 2: neural network-based approaches. Signal Process. 83(12), 2499–2521 (2003)
22. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11(2), 431–441 (1963)
23. Moganti, M., Ercal, F., Dagli, C., Tsunekawa, S.: Automatic PCB inspection algorithms: a survey. Comput. Vis. Image Underst. (CVIU) 63(2), 287–313 (1996)
24. Newman, T., Jain, A.: A survey of automated visual inspection. Comput. Vis. Image Underst. (CVIU) 61, 231–262 (1995)
25. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on feature distributions. Pattern Recognit. 29(1), 51–59 (1996)
26. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 24(7), 971–987 (2002)
27. Park, Y., Kweon, I.S.: Ambiguous surface defect image classification of AMOLED displays in smartphones. IEEE Trans. Ind. Inform. 99, 1–1 (2016)
28. Peng, X., Chen, Y., Yu, W., Zhou, Z., Sun, G.: An online defects inspection method for float glass fabrication based on machine vision. Int. J. Adv. Manuf. Technol. 39(11), 1180–1189 (2007)
29. Scott, W.R.: Model-based view planning. Mach. Vis. Appl. 20(1), 47–69 (2009)
30. Sturm, P.F., Maybank, S.J.: On plane-based camera calibration: a general algorithm, singularities, applications. In: 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, p. 437 (1999)
31. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, New York (2010)
32. Thomas, A., Rodd, M., Holt, J., Neill, C.: Real-time industrial visual inspection: a review. Real Time Imaging 1(2), 139–158 (1995)
33. Torres, F., Sebastian, J., Aracil, R., Jimenez, L., Reinoso, O.: Automated real-time visual inspection system for high-resolution superimposed printings. Image Vis. Comput. 16(12–13), 947–958 (1998)
34. Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W.: Bundle adjustment—a modern synthesis. In: Proceedings of the International Workshop on Vision Algorithms: Theory and Practice, ICCV '99, pp. 298–372. Springer-Verlag, London (2000)
35. Tucker, J.: Inside beverage can inspection: an application from start to finish. In: Proceedings of the Vision '89 Conference (1989)
36. Viola, P., Jones, M.J., Snow, D.: Detecting pedestrians using patterns of motion and appearance. Int. J. Comput. Vis. 63(2), 153–161 (2005)
37. Xie, X.: A review of recent advances in surface defect detection using texture analysis techniques. Electron. Lett. Comput. Vis. Image Anal. 7(3), 1–22 (2008)
38. Zhang, Z.: Flexible camera calibration by viewing a plane from unknown orientations. In: ICCV, pp. 666–673 (1999)
39. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 22(11), 1330–1334 (2000)

Marco San Biagio received the [Link]. degree cum laude in Informatics Engineering from the University of Palermo, Italy, in 2010, and the Ph.D. in Computer Engineering from the University of Genoa and the Istituto Italiano di Tecnologia (IIT), Italy, in 2014, under the supervision of Prof. Vittorio Murino and Prof. Marco Cristani, working on data fusion in video surveillance. Before his current position, he was a postdoctoral fellow at the Pattern Analysis and Computer Vision department (PAVIS) at IIT, Genoa, Italy. His main research interests include machine learning, statistical pattern recognition and data fusion techniques for object detection and classification.

Carlos Beltrán-González received a [Link]. in Computer Engineering from the Polytechnic University of Valencia (UPV), Spain, in 1996 and a [Link]. in Software Engineering from the University of Valencia (UV), Spain, in 1998. After 2 years in industry he enrolled in the Ph.D. program of the University of Genoa, Italy, and obtained a Ph.D. in Computer Engineering and Electronics in 2005, specialising in the areas of computer vision and robotics systems. After that, he served as principal investigator in FP7 Framework European projects and as applied computer vision specialist in industrial projects involving video surveillance and intelligent transportation systems. In 2012 he joined the Pattern Analysis and Computer Vision department (PAVIS) at the Istituto Italiano di Tecnologia (IIT), Genoa.

Salvatore Giunta received the degree in Electronic Engineering in 2003 from the University of Catania and a Ph.D. degree in Metrology: Science and Technique of Measurements from the Polytechnic of Turin in 2006. The main research activities carried on during the 3 years of his doctorate were devoted to the design and development of innovative devices for pressure and temperature metrology at the Italian National Research Institute of Metrology (INRiM). These research activities aimed at the redefinition of the intermediate temperature scale and a new determination of the Boltzmann constant for the redefinition of the kelvin. He has been working at GE Avio since 2008 and currently he is in charge of Metrology, NDT and Digital Factory.

Alessio Del Bue received the Laurea degree in Telecommunication Engineering in 2002 from the University of Genova and his Ph.D. degree in Computer Science from Queen Mary University of London in 2006. He was a researcher in the Institute for Systems and Robotics (ISR) at the Instituto Superior Tecnico (IST) in Lisbon, Portugal. Currently, he is leading the Visual Geometry and Modelling (VGM) Lab at the PAVIS department of the Istituto Italiano di Tecnologia (IIT) in Genova. His research focuses on the areas of 3D scene understanding and 3D reconstruction of non-rigid structure from image sequences.

Vittorio Murino is full professor and head of the Pattern Analysis and Computer Vision (PAVIS) department at the Istituto Italiano di Tecnologia (IIT), Genoa, Italy. He received the Ph.D. in Electronic Engineering and Computer Science in 1993 at the University of Genoa, Italy. Then, he was first at the University of Udine and, since 1998, at the University of Verona, where he was chairman of the Department of Computer Science from 2001 to 2007. His research interests are in computer vision and machine learning, in particular, probabilistic techniques for image and video processing, with applications to video surveillance, biomedical image analysis and bio-informatics. He is also a member of the editorial boards of the Pattern Recognition, Pattern Analysis and Applications, and Machine Vision & Applications journals, as well as of the IEEE Transactions on Systems, Man, and Cybernetics. Finally, he is a senior member of the IEEE and a Fellow of the IAPR.