
Machine Vision and Applications

DOI 10.1007/s00138-017-0839-1

ORIGINAL PAPER

Automatic inspection of aeronautic components


Marco San Biagio1 · Carlos Beltrán-González1 · Salvatore Giunta2 ·
Alessio Del Bue1 · Vittorio Murino1

Received: 6 September 2016 / Revised: 16 January 2017 / Accepted: 20 March 2017


© Springer-Verlag Berlin Heidelberg 2017

Abstract Industrial processes are costly in terms of time, money and customer satisfaction. Global economic pressures have gradually led businesses to improve these processes to become more competitive. As a result, the demand for intelligent visual inspection systems aimed at ensuring high quality in production lines is increasing. In this paper, we present a computer vision system that, using only images, is able to address two main problems: (i) model checking: automatically check whether a component meets given specifications or rules; (ii) visual inspection: defect inspection on irregular surfaces, in particular decolourization and scratch detection. In the experimental results, we show the effectiveness of our system and the readiness of such technologies for their integration in industrial processes.

Keywords Automatic visual inspection · Model checking · Machine learning · Defects inspection · Image processing · Machine vision · Registration · Multi-view analysis

M. San Biagio, C. Beltran, A. Del Bue and V. Murino contributed equally to this work.

Carlos Beltrán-González (corresponding author)
[Link]-gonzalez@[Link]

Marco San Biagio
[Link]@[Link]

Salvatore Giunta
[Link]@[Link]

Alessio Del Bue
[Link]@[Link]

Vittorio Murino
[Link]@[Link]

1 Pattern Analysis and Computer Vision Department (PAVIS), Istituto Italiano di Tecnologia, Genova, Italy
2 AVIOAero, Rivalta di Torino, Italy

1 Introduction

Nowadays, computer vision systems are a key element in several automation processes. Artificial vision is used for many industrial applications such as electronics component manufacturing [27], quality textile production [3,16], metal product finishing [35], glass manufacturing [28], printing products [33] and many others [20].

However, when the inspection process requires a highly specific solution that rules out commercially available systems, manual inspection remains the most usual option [6]. One example of a process usually performed by skilled workers is surface quality inspection or component part checking. There are many reasons why these tasks prove very complex. First, many defects are visible only under certain lighting conditions and at close distance. Second, human operators have implicit limits: work turnover, low repeatability and oversights of defects/missing parts [28]. Finally, the inspection task must be carried out in the cycle time established for a specific production process, which tends to be short. These three factors can have an impact on the production line and influence delivery time, production costs and customer satisfaction.

More specifically, in aeronautic applications, global economic pressure has gradually led industries to improve competitiveness. As a result, intelligent visual inspection systems are in pressing demand to ensure the high quality of products in production lines. An automated visual system must also comply with the following requirements:

– Reliability and robustness: the hardware devices used for image acquisition and the software algorithms used to analyse the images should guarantee the detection/classification of all defects/subparts.


– Capability for real-time inspection: since the automated visual inspection system is just one element of the production chain, the process must be performed in a pre-established amount of time.
– Adaptability: since an industry does not develop only one product, great effort must be devoted to developing a system that can adapt to new devices and tools. For this reason, it is absolutely necessary that system components can be included, eliminated or even modified easily. This flexibility allows different models of the same product, or different parts, to be inspected by just one system.

Fig. 1 In red the gearbox as deployed in the turboprop engine (image modified and reproduced under the terms of CC BY 2.0) (colour figure online)

In this paper, we present a computer vision system that, using only images, is able to address two main problems that can arise in a production line. The first is to exhaustively and automatically check whether a component meets given specifications or guidelines; we refer to this problem as model checking. The second is related to discolourization of the surface, defect inspection and scratch recognition; we refer to this second problem as visual inspection.

Precisely, we focus on a gearbox component produced by the AvioAero company (Fig. 1), but the proposed system can be extended to other components. This gearbox features a new three-stage axial low-pressure compressor (replacing a single centrifugal stage), increased turbine cooling, high power (5000 shp), low speed (1020 RPM) and reduced dimensions. The gearbox measures 1 m × 1 m × 1 m and has many occluded parts; this last characteristic imposes a considerable challenge for an automated visual inspection system.

This work makes the following contributions:

– We introduce automated software that can, using only images, check whether all the subparts have been correctly installed and whether defects are present on the surface of the final assembly.
– We describe a completely calibrated multi-camera system based on nine machine vision cameras. This system is complemented with a dynamic illumination system based on eleven independent illuminators controlled programmatically via the DMX512 digital communications protocol. The system also deals with the registration problem between the gearbox and the cameras by using fiducial markers.
– We address time constraints by deploying a complete inspection (model checking and visual inspection tasks) that takes less than 30 min while ensuring the repeatability and traceability of the tests.
– We propose a methodology based on machine learning techniques to inspect a complex and huge object. In fact, the CAD model of the gearbox contains more than 130K faces and has an overall dimension of about 1 m³. To the best of our knowledge, the literature on automatic inspection of aeronautic engine components is extremely limited.

The rest of the paper is organized as follows. In Sect. 2, an overview of automated visual inspection systems is presented. The proposed method, with the hardware and software implementation, is described in Sect. 3. Quantitative results and discussion are shown in Sect. 4 and, finally, Sect. 5 concludes the paper with some observations and future perspectives.

2 Background

The term "automated visual inspection" (AVI) refers to a set of image-processing techniques for quality control that have been widely applied in the production lines of traditional manufacturing industries, such as for mechanical parts and vehicles. Many survey papers have reviewed early inspection applications and the related computer vision techniques, which can be classified broadly into three large areas: image representation, template matching and pattern classification algorithms [7,8].

In most manufacturing industries, one goal is to achieve the highest quality assurance of the parts, subassemblies and finished products. Visual inspection is an important step in the manufacturing process, and ensuring that the quality of each product meets the standard is a challenging task. Moreover, inspection tasks are time-consuming and, most of the time, performed by human inspectors.

Recently, thanks to advances in technology and manufacturing devices, AVI has become one of the most important application areas in machine vision, and numerous related studies have been conducted, covering hardware, software and related applications.

First works on AVI date back to the early 70s.


Chin and Harlow [8] surveyed AVI from 1972 to 1980, including applications on printed circuit boards (PCBs) (the first reported case of an automated visual inspection system), photomasks and integrated circuits. Chin [7] conducted a survey following the related developments in the 1980s. Thomas et al. [32] provided a review of related works from 1973 to 1994, focusing on machine vision algorithms, illumination schemes, and real-time performance and verification. Newman and Jain [24] also reviewed relevant works from 1988 to 1993, focusing especially on the CAD models applied in AVI. Malamas et al. [20] classified applications into two parts: the inspected features of the industrial product and the inspection-independent characteristics of the inspected product.

Some studies focused mainly on specific products or techniques applied in automated inspection. For example, Moganti et al. [23] reviewed the algorithms and techniques applied in PCB AVI. Markou and Singh [21] provided an overview of approaches to novelty detection, including statistical and neural network (NN)-based approaches. In [16], Kumar focused on the application to fabric defect detection and presented a survey of the available techniques for the inspection of fabric defects. Xie [37] reviewed texture analysis techniques used in surface defect detection and also discussed colour texture analysis.

These studies presented an overview of automated inspection development over the past few decades. Advances in the equipment, technology and methods applied in AVI have been significant. Moreover, manufacturing technology has improved: precision manufacturing has gradually developed, such as micro-precision manufacturing and nano-manufacturing. On the other hand, product inspection requirements have become stricter and more challenging as products become smaller and the assembly process becomes increasingly precise.

3 Proposed method

In this section, we illustrate the complete hardware and software system developed in collaboration with AvioAero. First, we introduce the inspection cage and the illumination system realized for this project. Second, we present the software implementation for the camera placement, the system registration and the two inspection tasks.

3.1 Hardware implementation

Fig. 2 The inspection cage

Fig. 3 The illumination modules used in the proposed system

The image acquisition is performed using an ad hoc inspection cage with a set of nine cameras. The cage has a dimension of 2 × 2 × 2 metres, open on one side to allow the entrance of the gearbox, and is built using aluminium bars and profiles (Fig. 2). The cameras used for this project are Basler acA3800-14um, mounted at a fixed distance (on average 80 cm) from the gearbox. These cameras have a resolution of 3400 × 2748 pixels (10 megapixels) and mount lenses with a 3.5 mm focal length.

The lighting system is a critical component when dealing with intensity imaging of metallic surfaces. Shadows and reflections can dramatically affect the accuracy of the system. Designing the optimal lighting setup is one of the most difficult parts of any surface inspection system and requires a lot of intuition and experimental trials. For this reason, we designed an ad hoc illumination system based on LED technology. Each illumination module can be controlled programmatically (i.e., by means of computer software), so we can adapt in real time the illumination levels of the modules depending on the needs of the different inspection phases. Each illumination module is composed of 6 LED stripes (Fig. 3) in a 3 + 3 schema (2 different channels each control three stripes). Furthermore, each LED assumes a grey-scale intensity value between 0 and 255, where 0 corresponds to the LED switched off (black) and 255 to full white.

The total number of illumination modules mounted on the inspection cage is 11: there are 3 modules on each side and 2 on the top. According to the part of the gearbox to be analysed, the modules can be switched on and off via software. The modules are controlled by the computer using the DMX512 protocol.
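To make this programmatic control concrete, the sketch below builds a single DMX512 frame for the illumination modules. The two-channels-per-module layout follows the description above, but the exact channel numbering and the transport (e.g., a USB-DMX adapter) are our assumptions, not details reported in the paper.

```python
# Minimal sketch of DMX512-style illumination control (channel mapping is hypothetical).
class DmxUniverse:
    def __init__(self):
        self.levels = bytearray(512)       # one DMX512 universe: 512 channels, values 0-255

    def set_module(self, module_id, level_a, level_b):
        """Set the two channels of one module (each channel drives three LED stripes)."""
        base = 2 * module_id               # assumed mapping: module i -> channels 2i, 2i+1
        self.levels[base] = level_a
        self.levels[base + 1] = level_b

    def frame(self):
        """Serialize the universe: DMX start code 0x00 followed by the 512 channel bytes."""
        return bytes([0x00]) + bytes(self.levels)

universe = DmxUniverse()
for module in range(11):                   # switch all 11 modules off
    universe.set_module(module, 0, 0)
universe.set_module(3, 255, 128)           # e.g., full/half intensity on one module
# universe.frame() would then be written to the DMX interface for each inspection phase.
```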


Fig. 4 Part-based convex hull. The gearbox is divided in parts: the convex hull of each of these parts is computed and fused together to create a unified model

3.2 Software implementation

This subsection is divided in two main parts. First, we illustrate the algorithms developed for camera placement and system registration. Second, we illustrate the proposed method for model checking and visual inspection.

3.2.1 Camera placement

In order to obtain full coverage of the AvioAero gearbox, it is necessary to select the best positions for the cameras inside the inspection cage. In [2], the authors propose a method to maximize the visual coverage and minimize the number of cameras used for inspection. Starting from [2], we implement a slightly modified version of their algorithm that also takes into account the area of each surface triangle of the CAD model viewed by a camera.

Due to the complexity of the AvioAero gearbox, we first have to obtain a simplified version of the original CAD model. To do so, we implemented a so-called part-based convex hull method. This is a semi-automatic algorithm in which we divide the gearbox CAD model into subparts. Then, we compute the convex hull of each of these parts and fuse them together to create a unified model. The process is illustrated in Fig. 4.

To generate the viewpoints, we follow a procedure similar to [2]. We first find the centroid of the t-th triangle of the mesh and its corresponding normal vector. Each of these normal vectors is multiplied by a given distance to compute a camera viewpoint vector with origin in the centroid. Finally, we obtain the viewpoint vector by summing the centroid vector and the local viewpoint vector (with origin on the centroid). These operations are formalized as

v_t = c_t + (n_t · dist),    (1)

where v_t is the final viewpoint vector (in the object coordinate frame), c_t is the centroid vector in object coordinates (the centre of each triangle defining the mesh), n_t is the vector defined by the surface normal and dist is a standoff distance from the surface decided by the experimenter.

Once the viewpoints are computed, we analyse the relation between them and the surface points. Our goal is to decide which surface points are "correctly" observed, where "correctly" refers to a group of criteria that define the observability of the surface point. These criteria are:

– Frustum: a truncated pyramid that geometrically models the space seen by the camera. We use this geometrical entity as one of the criteria to select the surface points observed by the camera in optimal conditions. We do so by transforming the frustum into a triangulated mesh and testing the inclusion of all the points of the AvioAero gearbox CAD model.
– Angle of incidence: to further discriminate valid surface points, we consider the angle of incidence between each viewpoint and the surface point. According to this criterion, we consider a point valid if the angle of incidence of the ray between the viewpoint and the surface point is less than 60°.
– Occlusions: to handle occlusions, we have developed a z-buffer algorithm that detects occlusions between the viewpoint and the surface point using ray-tracing techniques. In such a case, the surface point is considered not valid because it is simply not observable from the viewpoint due to self-occlusions.
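The following NumPy sketch illustrates Eq. (1) and the angle-of-incidence test; the frustum and z-buffer tests are omitted for brevity, and all names are illustrative.

```python
import numpy as np

def generate_viewpoints(centroids, normals, dist):
    """Eq. (1): v_t = c_t + n_t * dist, one candidate viewpoint per mesh triangle."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return centroids + unit * dist

def incidence_ok(viewpoint, point, normal, max_deg=60.0):
    """Angle-of-incidence criterion: the ray from the surface point to the
    viewpoint must deviate less than 60 degrees from the surface normal."""
    ray = viewpoint - point
    ray = ray / np.linalg.norm(ray)
    n = normal / np.linalg.norm(normal)
    return ray @ n >= np.cos(np.radians(max_deg))
```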


According to these criteria, we can now build the so-called measurability matrix [29]. This matrix is defined as M(p_i, v_j) with i, j ∈ n, where v_j stands for viewpoints and p_i for surface points.

Given a single viewpoint, we can define the coverage strength C(v_j) provided by that viewpoint for all scene points as [2]:

C(v_j) = (1/n) Σ_{i=0}^{n−1} M(p_i, v_j).    (2)

The maximum coverage is simply defined as [2]:

C_max = max_j C(v_j).    (3)

Now, we can calculate the degree of coverage overlap between two viewpoints, i.e. v_j and v_k. This can be expressed as the dot product of the two column vectors M(:, v_j) and M(:, v_k) normalized by the maximum coverage [2]:

o(v_j, v_k) = (1/(n C_max)) Σ_{i=0}^{n−1} M(p_i, v_j) · M(p_i, v_k).    (4)

The next step is to build a weighted undirected graph O = (V, E, w) with the set of vertices V = {v_1, ..., v_{n−1}} and edges E ⊆ V × V. We introduce an edge in the graph if the corresponding viewpoints share some of the surface points. The weight assigned to this edge is defined as w = o(v_j, v_k).

Using the graph O and the row vector of coverages, it is possible to compute the so-called compounded degree for each vertex of the graph. The compounded degree for a given vertex v_j is given by [2]:

d_j^C = a(v_j) C(v_j) Σ_{k=0}^{|h_j|−1} w_{jk},    (5)

where h_j is the neighbourhood of v_j. Thus d_j^C is the product of the coverage of the viewpoint and the sum of the weights of all the edges connected to the viewpoint neighbourhood. It is worth noticing that we have slightly improved the original equation in [2] by including the term a(v_j). This term introduces a geometric criterion taking into consideration the size of the triangle area that contains the surface point and the angle of the surface normal with respect to the camera optical axis. In other words, by introducing this term we are prioritizing viewpoints with good visibility over large areas.

The last step is to compute the compounded degrees for all the vertices in O and put them in a descending-order list, namely D. Then, we select the highest degree, called the pivot, and we save it in a separate list, namely S. We iterate the entire procedure, eliminating from the list D all the other viewpoints that are direct neighbours of the pivot in the overlap graph. Once the list D is empty, the procedure stops. As a final result, the list S contains the viewpoints with the highest coverage of the object.

Figure 5 depicts the output of the camera placement algorithm on the original gearbox model. As a final remark, we want to underline the fact that the algorithm has no real knowledge of the existence of fixture parts (that may create occlusions) or other geometrical constraints. Consequently, after the automatic placement it could be necessary to manually prune the camera position list, taking these real-world constraints into account.
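A compact sketch of this selection loop is given below, under the assumption that the measurability matrix M has already been filled using the criteria above (binary entries: point i observable from viewpoint j):

```python
import numpy as np

def select_viewpoints(M, a):
    """Greedy selection over the measurability matrix M (n points x m viewpoints).
    a is the per-viewpoint geometry term a(v_j) of Eq. (5)."""
    n = M.shape[0]
    C = M.sum(axis=0) / n                       # per-viewpoint coverage, Eq. (2)
    O = (M.T @ M) / (n * C.max())               # pairwise overlaps o(v_j, v_k), Eqs. (3)-(4)
    np.fill_diagonal(O, 0.0)
    d = a * C * O.sum(axis=1)                   # compounded degree, Eq. (5)

    S, D = [], set(range(M.shape[1]))
    while D:
        pivot = max(D, key=lambda j: d[j])      # highest compounded degree
        S.append(pivot)
        neighbours = {k for k in D if O[pivot, k] > 0}
        D -= neighbours | {pivot}               # drop the pivot and its direct neighbours
    return S
```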
3.2.2 System registration

In this section, we describe the system registration process. This is a fundamental step to determine the position of the 3D object with respect to the image and to locate the subparts/surfaces in the cameras' image planes and in the 3D model reference system.

A key component of this process is a set of fiducial markers that are rigidly connected to the gearbox's fixture, maintaining a rigid configuration with respect to the gearbox. We decided to use these fiducial markers since they are easy to detect and allow the system to achieve high speed and sufficient precision. Among the several fiducial marker systems proposed in the literature, those based on square markers have gained popularity. They provide a 6D camera pose, provided that the camera is properly calibrated (i.e., its intrinsic parameters are known).

In many approaches, markers encode a unique identification by a binary code that may include error detection and correction bits. In our system, we decided to use a method based on [14], where the authors presented a fiducial marker system based on square markers particularly appropriate for camera pose estimation in real applications. The implemented library is called ArUco [24], which has the capability of automatically generating fiducial markers and, in the case of boards (more than one fiducial marker in the same plane), it also generates the corresponding board configuration file. This board configuration file contains further geometric information and can be used in the detection phase to improve detection performance. The result is higher accuracy when using boards with respect to a single marker. For our registration process, we created two fiducial marker physical supports mounted on both sides of the gearbox (Fig. 6). In this way, each camera always sees at least one valid marker board.

Once the marker boards are properly installed, the annotated subparts/surfaces can be transferred into their corresponding 3D locations in the model reference system. To do so, we assume a rigid roto-translation between the ArUco boards and the gearbox.


Fig. 5 Camera placement for the AVIOAERO gearbox. In blue the first 9 cameras selected by the algorithm (colour figure online)

Fig. 6 Master snapshot example of four cameras with surface annotations: a Camera 1, b Camera 2, c Camera 3, d Camera 4

In practice, when a new gearbox gets inside the cage, the ArUco markers are used to register the new location and transfer the 3D subparts/surfaces into their new 2D locations in the image planes of the cameras. The process allows some tolerance in the position of the gearbox inside the cage (see Fig. 14), making each camera independent, so that the transfer of the subparts/surfaces associated with that camera can be done independently based on the observed ArUco board.

Ideally, once the calibration of the cameras is performed, it assures the correct use of the entire inspection system. Unfortunately, in real scenarios it could be necessary to recalibrate the system due to unexpected circumstances: (i) machinery collisions with cameras, (ii) vibrations, (iii) degradation of markers, (iv) changes in the relative positions of the markers with respect to the gearbox.

To clarify the registration process, we can distinguish two main phases, namely off-line and online. The off-line phase involves:

– single camera intrinsics calibration;
– multi-camera system extrinsics calibration;
– CAD registration;
– 2D-3D subparts/surfaces transfer.

The online phase involves:

– camera-model registration using ArUco markers;
– 3D-2D subparts/surfaces transfer.

We will describe in detail each of the phases involved in the calibration stage.

Single camera intrinsics calibration This step involves a classic intrinsic camera calibration based on a black-and-white chessboard [30,38,39].


Table 1 Roto-translations between boards and cameras

Board | Roto-translations
1 | T^{c1}_{b1}, T^{c2}_{b1}
2 | T^{c2}_{b2}, T^{c5}_{b2}, T^{c6}_{b2}, T^{c9}_{b2}
3 | T^{c5}_{b3}, T^{c6}_{b3}, T^{c8}_{b3}
4 | T^{c1}_{b4}, T^{c2}_{b4}
5 | T^{c2}_{b5}, T^{c3}_{b5}, T^{c4}_{b5}, T^{c6}_{b5}
6 | T^{c4}_{b6}, T^{c7}_{b6}, T^{c6}_{b6}

Fig. 7 Visibility map between cameras and ArUco boards. Cameras are depicted with circles and boards with squares. Lines linking the nodes represent visibility of the board to a given camera

We used the well-known OpenCV library [5] to implement a semi-automatic calibration procedure for all the 9 cameras in the system. This allows us to model the cameras using the pinhole camera model, solving the following equation:

p̃ = K P̃_c,    (6)

where p̃ = (u', v', w') are the homogeneous image-plane pixel coordinates, K is the intrinsics matrix and P̃_c is a homogeneous 3D point expressed in the reference system of the camera {C} [10]. In this calibration step, we also computed both the radial and tangential distortion coefficients used to correct lens distortion.
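As an illustration, the intrinsic calibration of one camera can be sketched with OpenCV as follows; the board size and file names are placeholders, and the paper's semi-automatic procedure wraps this in additional tooling.

```python
import cv2
import numpy as np

pattern = (9, 6)                                   # hypothetical inner-corner count
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["cam1_view01.png", "cam1_view02.png"]:   # placeholder image list
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsics matrix of Eq. (6); dist holds the radial and
# tangential distortion coefficients mentioned above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```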
Multi-camera extrinsics calibration To obtain the extrinsic parameters of the multi-camera system, the ArUco library is fed with a set of images from each camera, the camera calibration files and the board configuration files. The ArUco output consists of the IDs of the markers, their 2D corners and the board-camera roto-translations expressed in vector form. We transform the roto-translation vectors into homogeneous matrices, taking care of transforming the rotation vector using Rodrigues' rotation formula:

R = I_{3×3} + sin θ S(v) + (1 − cos θ)(vv^T − I_{3×3}).    (7)

The result is a homogeneous 4×4 roto-translation matrix T^c_b, where c stands for the camera and b for the board reference system. Following this procedure for all the cameras and considering the visibility map depicted in Fig. 7, we can get the homogeneous roto-translation matrices listed in Table 1.

It follows that we can compute camera-camera roto-translations when cameras observe a common board. For example, we can compute the roto-translation between cameras one and two as:

T^{c2}_{c1} = T^{b4}_{c1} T^{c2}_{b4}.    (8)

Similarly, we can get the roto-translation between cameras one and three as:

T^{c3}_{c1} = T^{c2}_{c1} T^{b5}_{c2} T^{c3}_{b5}.    (9)

We can compute in a similar way the remaining camera-camera roto-translations T^{cn}_{c1} with n = 4, ..., 9, thus describing completely the extrinsic parameters of the multi-camera system.
Interestingly, we can use the markers' corner positions and the estimated poses to initialize a nonlinear minimization algorithm to improve the extrinsic parameter computation. This is particularly relevant when various cameras share a common view of more than one board. We solve this minimization problem by using a Levenberg-Marquardt algorithm [19,22]. This technique is one of the many possibilities available to implement a classic bundle adjustment algorithm based on the minimization of the re-projection error [15,34]. Concretely, our cost function is given by

min_{R̂_i, t̂_i} Σ_i Σ_j ‖ x̃_i^j − K_i [R̂_i | t̂_i] X_j ‖²,    (10)

where x̃_i^j are the markers' 2D corners provided by ArUco, K_i are the intrinsic camera matrices, [R̂_i | t̂_i] are the rotation and translation matrices of the cameras (i.e., the extrinsics) and X_j are the 3D points of the markers' corners. Notice that, contrary to a typical reconstruction problem, in this case only R̂_i and t̂_i are minimized (represented by the ˆ symbol).

We compute the 3D points by modelling the ArUco boards as 3D planes in the coordinate system of the reference camera. We then project 3D rays passing through x̃^j (i.e., the markers' corners in the image plane) that intersect these planes in the X_j points (i.e., the 3D counterparts of x̃^j). These 3D points are used for all the cameras to re-project the 2D points during the bundle adjustment (BA). We show this process applied to cameras one and two in Fig. 8.
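A minimal sketch of the cost in Eq. (10) with SciPy's Levenberg-Marquardt solver, assuming the observations have already been grouped per camera; a full implementation would add the ray-plane construction of the X_j described above.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_extrinsics(x0, K_list, corners_2d, corners_3d, cam_of_obs):
    """Refine only the camera poses (rvec, tvec packed per camera in x0), as in
    Eq. (10); the intrinsics K_i and the corner points X_j are kept fixed."""
    def residuals(x):
        res = []
        for o, (x2d, X) in enumerate(zip(corners_2d, corners_3d)):
            i = cam_of_obs[o]                        # camera index of this observation
            rvec = x[6 * i:6 * i + 3]
            tvec = x[6 * i + 3:6 * i + 6]
            proj, _ = cv2.projectPoints(X.reshape(1, 3), rvec, tvec, K_list[i], None)
            res.extend((proj.ravel() - x2d).tolist())
        return res
    return least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt [19,22]
```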


Fig. 8 Multi-board bundle adjustment. The green cameras represent the first initialization of the position of the cameras. The yellow cameras are the new positions found by BA. The squares are the models of the ArUco boards and the red lines are the 3D rays passing through the optical centre of camera 1 and the corners of the markers (colour figure online)

CAD registration Once we have obtained the extrinsic relations between the cameras and the boards, we are interested in computing the extrinsic parameters between the multi-camera-board system and the gearbox. To do so, we solve a Perspective-n-Point (PnP) [10,18] problem by manually selecting six points in the 2D plane of a master camera¹ and their corresponding 3D counterparts in the CAD model of the gearbox. We decided to use this technique to achieve the best correspondence between 2D and 3D points. The PnP algorithm gives us the transformation T^m_{c1}, which we can exploit to find the relations between the boards and the model. By computing the roto-translations of camera one with all the boards (e.g., T^{b2}_{c1} = T^{c2}_{c1} T^{b2}_{c2}), we can then express the board-model transformations as

T^m_{bn} = T^{c1}_{bn} T^m_{c1},    (11)

where n stands for the board ID with n = 1, ..., 6. These transformations are assumed to be kept rigid during the operation of the system, so that they can be used in the online phase to re-project the features on the image planes of the cameras.

¹ We select camera 1 as master, but any other camera could be used.

2D-3D features transfer The transformation T^m_{c1} gives the extrinsics between the cameras and the gearbox model as

T^m_{cn} = T^{c1}_{cn} T^m_{c1},    (12)

where n stands for the camera ID with n = 2, ..., 9.

Equation 12 allows us to transfer the 2D features annotated in a master snapshot into their 3D counterparts in the model reference system. Therefore, we transfer the camera into the reference system of the model and then we create the 3D rays corresponding to the subparts/surfaces (each of them modelled as 2D points in the image plane).² These 3D rays intersect the mesh of the gearbox [17] in specific 3D points. These points are adequately stored as the 3D representation of the subparts/surfaces to be used in the online phase. We will generically define these points as P̃_m.

² This can also be done by modelling the rays in the reference system of the camera and then roto-translating the rays into the reference system of the model.

Camera-model registration using ArUco markers In the online phase, we consider the production scenario where new gearboxes are introduced into the inspection cage by a human operator. The operator has visual markers indicating where to place the gearbox, but inevitably there is always some difference in the position. The system is tolerant to this positional variance by using the ArUco markers in the new images to obtain the new board-camera roto-translation T̂^c_b. Then, making use of Eq. 11 (computed in the off-line phase), we can get

T̂^c_m = T̂^c_b T^m_b,    (13)

where T̂^c_m is the homogeneous matrix representing the new roto-translation between a given camera and the new gearbox pose.

3D-2D features transfer Finally, considering Eqs. 13 and 6, we can define

p̂ = K T̂^c_m P̃_m = K P̃_c,    (14)

where p̂ are the new 2D locations of the subparts/surfaces in pixel coordinates. These new locations are then fed to the system to start the inspection of the gearbox.
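The online phase can then be sketched as a few matrix products: Eq. (13) updates the camera-model pose from the newly observed board, and Eq. (14) re-projects the stored 3D points P̃_m. Variable names are illustrative; transforms map points of the subscripted frame into the target frame.

```python
import numpy as np

def reproject_subparts(K, T_board_to_cam_new, T_model_to_board, P_m):
    """Eqs. (13)-(14) sketch: P_m is an (N, 3) array of model-frame subpart points."""
    T_model_to_cam = T_board_to_cam_new @ T_model_to_board   # Eq. (13)
    P_h = np.hstack([P_m, np.ones((len(P_m), 1))])           # homogeneous coordinates
    P_c = (T_model_to_cam @ P_h.T)[:3]                       # points in the camera frame
    p = K @ P_c                                              # Eq. (14), homogeneous pixels
    return (p[:2] / p[2]).T                                  # new 2D subpart locations
```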
3.2.3 Inspection system

In this section, we describe the full inspection system, which handles two different problems.

The first one, namely model checking, can be described as follows: given a CAD model, the gearbox in our case, the system checks exhaustively and automatically whether this model meets given specifications or rules.


Fig. 9 Pipeline of the proposed system. The system receives the acquired image as input; the outputs are the subparts checked and the defects recognized. The pipeline blocks are: calibration of the cameras, CAD registration, acquired image, model checking (subparts) with classification, validation and bootstrap, and visual inspection (surfaces) with defect detection

The second one, visual inspection, is related to discolourization of the surface, defect inspection or scratch recognition. The automation problem for defect inspection falls into two general categories based on the type of material. The first category is associated with uniform materials such as metals, film and paper; defect detection in these materials normally relies upon identification of regions that differ from a uniform background. The second category, instead, is associated with textured materials such as textiles, ceramics, plastics and others.

Figure 9 shows the pipeline of the proposed system. The system receives an acquired image as input. After some processing stages, it gives as output the missing subparts detected and the defects recognized. The entire system is based on the following steps.

Model checking (subparts) From a computer vision point of view, the problem of identifying subparts in the AvioAero gearbox (screws, bolts, pins etc.) can be considered as an object detection problem that can be solved through a learning stage. Therefore, an algorithm should discriminate between what is and what is not a particular object. Once the registration process is completed, the next step is the identification of a set of regions of interest (ROIs) where the object should be found. According to the set of subparts that should be recognized, we re-project each subpart CAD model onto the image acquired by the camera and we automatically select the ground truth and a ROI around the object. Inside each ROI, a sliding window strategy is applied to extract features from each subwindow. As feature descriptor, we evaluated several, the final choice being the local binary pattern (LBP) [25]. This is a very efficient texture descriptor which labels the pixels of an image according to the differences between the value of the pixel itself and those of the surrounding ones. Given a pixel in the image, the LBP code is computed by comparing it with its neighbours:

LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c) 2^p,    (15)

where

s(x) = 1 if x ≥ 0, and 0 if x < 0,    (16)

g_c is the grey value of the central pixel, g_p is the value of its neighbours, P is the total number of involved neighbours, and R is the radius of the neighbourhood. Suppose the coordinate of g_c is (0, 0); then the coordinates of g_p are (R cos(2πp/P), R sin(2πp/P)). The grey values of neighbours that do not fall on the image grid can be estimated by interpolation. Suppose that the image is of size M × N. After the LBP pattern of each pixel is identified, a histogram is built to represent the texture image:

H(k) = Σ_{m=1}^{M} Σ_{n=1}^{N} f(LBP_{P,R}(n, m), k),  k ∈ [0, K],    (17)

where

f(x, y) = 1 if x = y, and 0 otherwise,    (18)

and K is the maximal LBP pattern value. An extension of the original LBP method is the uniform LBP. The U value of an LBP pattern is defined as the number of spatial transitions (bitwise 0/1 changes) in that pattern:

U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|.    (19)


The uniform LBP patterns refer to the patterns which have limited transitions or discontinuities (U ≤ 2) in the circular binary representation [26].
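Assuming scikit-image is available, the per-window descriptor of Eqs. (15)-(19) can be sketched as follows; the library's "uniform" method is the rotation-invariant uniform variant, which bins all patterns with U > 2 together.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram of one sliding-window patch, per Eqs. (15)-(19)."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist   # P + 2 bins: the uniform patterns plus one bin for all others
```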
Classification In all supervised learning methods, it is important to choose the correct classifier. Popular classifiers are support vector machines (SVMs) [4,11] or boosting-based algorithms assembled in rejection cascades (AdaBoost [36] or LogitBoost [13]). In our case, we decided to use a linear SVM classifier. The feature vector extracted from each subwindow of the sliding window approach is fed into the classifier. Each window labelled as positive contains the object; conversely, those labelled negative do not contain any instance of the object. Each set of overlapping windows could contain all (or a substantial fraction) of the object of interest. This means that each of them might be labelled positive by the classifier, so we would count the same object multiple times. The usual strategy for managing this problem is non-maximum suppression [31]. In this strategy, windows with a local maximum of the classifier response suppress nearby windows, reducing the number of contiguous positive samples.
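A standard greedy implementation of this suppression step follows (a sketch; the x1, y1, x2, y2 box layout is an assumption):

```python
import numpy as np

def non_maximum_suppression(boxes, scores, iou_thr=0.5):
    """Windows with a locally maximal classifier response suppress overlapping ones."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]            # strongest responses first
    keep = []
    while order.size:
        i, rest = order[0], order[1:]
        keep.append(i)
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou < iou_thr]             # drop windows overlapping the maximum
    return keep
```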
Validation In the validation step, the positive samples found by the classifier are compared with the original ground truth extracted from the CAD registration step, and a set of detections (true positive, false positive and missed) is given as result. Evaluation on the list of detected bounding boxes is done using the PASCAL criterion [12], which counts a detection as correct if the overlap between the detected and ground truth bounding boxes, with respect to their union, is greater than 0.5.

Bootstrap The final step is the possibility to run a bootstrap technique over the detections found by the algorithm. Bootstrap methods are designed to improve the stability and accuracy of supervised machine learning algorithms used in statistical classification and regression; they also reduce variance and help to avoid over-fitting. Basically, the bootstrap consists in re-training the SVM classifier adding all the false positive windows as negative samples. This technique can be iterated by the human operator during the inspection to improve the detection algorithm.

Visual inspection (surfaces) In our context, considering the lack of a dataset of defect images and the homogeneous texture characteristics of the AvioAero gearbox, we decided to use an unsupervised technique. Specifically, the phase only transform (PHOT) method proposed by [1] is an efficient technique for detecting surface defects. This method assumes that a defect can be defined as an abrupt change in an otherwise homogeneous area. The method essentially segments defects by removing any regularity from the image. This is done at various scales and for various patterns at once; to do so, the method basically normalizes the Fourier transform of the input image by its magnitude [9].

We implemented the PHOT steps proposed by Aiger and Talbot [1]; their algorithm is summarized in Algorithm 1, where the resulting O(u, v) represents the output image of the algorithm. To further enhance this output, we apply some additional image processing stages. Figure 10 depicts the organization of the complete analysis, with the core PHOT highlighted in yellow. A median filter is applied both at the input and at the output of the PHOT to smooth the image and remove existing noise.

Fig. 10 Extended PHOT modules. The original PHOT module is highlighted in yellow (colour figure online). The processing chain is: surface image → median filter (smoothing) → fast Fourier transform → normalization by magnitude (phase information) → inverse FFT → Mahalanobis distance → adaptive thresholding → binary image → morphological filters (erosion, dilation, hole filling) → area size filter → boundaries filter → PHOT result


Algorithm 1 Phase Only Transform algorithm [1]

1: procedure PHOT(I(u, v))
2:   compute the Fourier transform F(u, v) = F{I}
3:   for all (u, v) do
4:     F̂(u, v) = F(u, v) / M(u, v), with M(u, v) = |F(u, v)|
5:   end for
6:   O(u, v) = F⁻¹{F̂}(u, v)
7: end procedure
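The core of Algorithm 1 is a few lines of NumPy; the small epsilon guarding the division is our addition.

```python
import numpy as np

def phot(image):
    """Phase-only transform [1]: normalize the Fourier transform by its magnitude,
    keeping only phase, then invert. Regular texture collapses towards zero while
    irregularities (defects) stand out in O(u, v)."""
    F = np.fft.fft2(image.astype(float))
    F_hat = F / (np.abs(F) + 1e-12)          # epsilon avoids division by zero
    return np.real(np.fft.ifft2(F_hat))      # O(u, v)
```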

Then, the Mahalanobis distance (see Eq. 20) is applied to detect pixels far from the average and, finally, an adaptive threshold is applied to segment the defect pixels and obtain a binary image. Some other filters are applied in a post-processing stage to refine the result.

The Mahalanobis distance is defined as:

D_M = √((x − μ)^T Σ^{−1} (x − μ)),    (20)

where x is one observation (a point in the image), μ is the mean of the point values and Σ is the covariance matrix.
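For a single-channel PHOT output, Eq. (20) reduces to a normalized deviation from the mean, after which OpenCV's adaptive threshold yields the binary defect map; the block size and offset below are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def segment_defects(phot_out, block=31, offset=-5):
    """Mahalanobis distance (Eq. 20, scalar case) followed by adaptive thresholding."""
    mu, sigma = phot_out.mean(), phot_out.std()
    d = np.abs(phot_out - mu) / (sigma + 1e-12)      # per-pixel distance from the mean
    d8 = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.adaptiveThreshold(d8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block, offset)
```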

4 Experimental results

This section presents the results obtained by our proposed system on the two inspection tasks: model checking and visual inspection. The experimental part is divided into two subsections: first, for the task of visual inspection, we show two synthetic experiments to verify the performance of the system, varying the dimension of the defects and the light conditions (Sect. 4.1); second, for both tasks, we show results obtained on the real gearbox (Sect. 4.2).

4.1 Synthetic experiments

Fig. 11 Synthetic experiments: a digital lines at different grey-scale intensities; b scotch tape lines

First, we introduce the synthetic experiment results obtained for the visual inspection task. These are produced in two different ways: first, by digitally drawing some lines with different thicknesses (1, 3, 5 and 8 pixels) and grey-scale intensities (220, 150, 50 and 0); second, by using a piece of scotch tape to simulate a 1 mm scratch (see Fig. 11).

Fig. 12 PHOT results on the digitally drawn defects. The defect is highlighted in red (colour figure online)

Analysing Fig. 12 and considering the line with the highest contrast with respect to the background (the last one), we can notice that around 98% of the area of the lines is identified as scratch by the PHOT descriptor. On the other hand, the second line, the one with grey-scale values similar to the background, is always considered as background itself.

The second experiment is done using scotch tape lines. Results are shown in Fig. 13. In this experiment, we also consider the effect of different light conditions on the final result. As one can see, the results show that with a brighter image we are able to find the defects, but we also have many false detections. In the case of a darker image, instead, we are able to find only the initial and final parts of the largest defect, but we have almost no false detections. Notice that the defects shown in Fig. 13 are of the order of millimetres: the long defect on the left has a width of 1 mm and a height of approximately 5 mm; the other, smaller detected defects are approximately 1 mm².


Fig. 13 PHOT results on the scotch tape defects. Defects are highlighted in red (colour figure online)

Fig. 14 Overlay images. This image represents 11 acquisition images at different object poses

These two experiments gave us quantitative feedback about the correct behaviour of the algorithm under near-ideal conditions and information about the typology of defects that we can detect, both in terms of contrast and size. Not surprisingly, large and highly contrasted defects are easier to detect when the difference between the scratch and the background is strong.

Notice, however, that there is a trade-off between the size of the surface analysed and the size of the defects. Large defects on small surfaces will impact the average grey level of the surface and, consequently, the defect could be undetectable with this technique. To deal with this problem, the surfaces must be reasonably large. Unfortunately, this requirement makes the analysis of some parts of the gearbox impractical.

In the next subsection, we focus on real defects, such as discolourization and real scratches.

4.2 Results

Fig. 15 Results for the model checking task. These results are obtained on 10 different poses of the object

In this section, we illustrate the results obtained by our system with the AvioAero gearbox for both tasks: model checking and visual inspection. In these experiments, we tried to simulate the real inspection task of an AvioAero operator. For this reason, we simulated 11 different poses of the object (i.e., 11 different acquisitions), also considering different rotation angles (from −10 to +10 degrees). The overlaid acquisition images are shown in Fig. 14.

Model checking The first acquisition is used to create the training set for the model checking task. The training used only 25% of the instances of each subpart, and the rest was used for testing. For example, having 40 screws, we used 10 of them for training and then tested on the remaining ones.

Figure 15 depicts the results of the model checking task. As described in Sect. 3.2.3, the proposed method allows the use of a bootstrap technique to improve the subpart detection. In the results, we decided to test two possible configurations: Bootstrap Level 0 represents the training done on the first acquisition without bootstrap; Bootstrap Level 1 represents the first round of bootstrap, where false positives and missed elements are fed again into the SVM classifier, re-training the model. Bootstrap Level 1 improves the average classification accuracy by 3.5%, reaching an accuracy of 95.22% with a standard deviation of ±1.09, against 91.78% with a standard deviation of ±4.01 for the Bootstrap Level 0 method (last bars of the figure). Furthermore, it is worth noticing that the number of false positives per image (FPPI) decreases: considering more than 5500 sliding windows for each acquisition, Bootstrap Level 1 has an average of 5 FPPI with respect to 8 FPPI for Bootstrap Level 0.

These results indicate the effectiveness of the proposed system, showing excellent classification accuracy, stability and robustness against multiple rotations of the object and different subpart viewpoints.
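One bootstrap round can be sketched as hard-negative retraining of the linear SVM; scikit-learn's LinearSVC stands in here for the paper's classifier, and all names are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bootstrap_round(clf, X_train, y_train, X_background):
    """Bootstrap Level 1 sketch: background windows wrongly classified as positive
    are appended as negatives and the SVM is retrained on the enlarged set."""
    hard_negatives = X_background[clf.decision_function(X_background) > 0]
    X = np.vstack([X_train, hard_negatives])
    y = np.concatenate([y_train, np.zeros(len(hard_negatives))])
    return LinearSVC().fit(X, y)
```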


Fig. 16 Results on the first pose images. Analysis of different PHOT thresholds

Fig. 17 The ROC curve (true positive rate vs. false positive rate) showing the performance of the detector at varying thresholds (threshold values 6 to 11)

Fig. 18 Results obtained over ten different poses of the object. The PHOT threshold value in all poses is equal to 9

Visual inspection To evaluate the visual inspection algorithm, in collaboration with AvioAero operators we created a ground truth dataset by manually annotating the defects present on the gearbox. Then, we used the PHOT results on the images of the first pose to select an initial threshold for the algorithm. This analysis is shown in Fig. 16. The x-axis shows the threshold choices; the y-axis shows the accuracy and the false positive area (FPArea), both in percentage. The accuracy represents the percentage of defects correctly detected, and the FPArea shows the percentage of the total inspected area incorrectly classified as defect (false positives). We chose a PHOT threshold of 9, which provides a reasonable balance between accuracy (67%) and false positives (3.5%).

A more detailed analysis is presented in Fig. 17, showing the receiver operating characteristic (ROC) curve for the interval of thresholds that provides the best results according to Fig. 16. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) at varying values of the PHOT threshold. The TPR is given by how many positive results (i.e., correctly classified area with a defect) are detected among all positive samples (i.e., all the labelled area with a defect). The FPR, on the other hand, defines how many incorrect detections occur in the areas that do not contain a defect. Considering Fig. 17, with a false positive rate equal to 0.5, the best performance is reached with the PHOT threshold equal to 9. This threshold value can be modified by the human operator during inspection to adjust the performance of the PHOT descriptor.

It is worth noticing that the criterion used to decide whether a defect has been detected is based on an overlap of 25% between the ground truth area and the area classified by PHOT as a defect. This criterion is reasonable considering that the task at hand ultimately involves reporting to human operators: even if the system does not completely detect the defect, a partial detection can be sufficient to attract the human attention.

Figure 18 shows the results obtained over all the poses (see Fig. 14), selecting the threshold value of 9 chosen from the analysis presented in Fig. 16. As already pointed out, this threshold value can be modified by the human operator during the inspection. However, a small threshold value means a much higher number of false positives, losing the advantage of the detection system; on the contrary, a high threshold value means less accuracy of the system and much more human interference.

In Fig. 18, as expected, the false positive area remains similar across all the poses. The overall accuracy is reasonably good, although there is a slight degradation with respect to the first pose. This is probably due to small errors in the re-projection of the inspected surfaces. When this happens, PHOT could partially model the full surface using pixels in the periphery that strongly differ from the pixels in the inner areas of the surface; consequently, the sensitivity of PHOT in such cases is affected. To deal partially with this problem, we have implemented a filter that erodes the surface externally before applying the PHOT analysis. However, as stated before, these misalignments can be tolerated in a man-in-the-loop scenario. Moreover, if we consider a scenario with active registration, we are confident that the overall accuracy of the system could be considerably improved.


5 Conclusion

In this paper, we proposed a hardware-software automatic visual inspection system based on images that is able to address two main industrial problems. The first refers to the inspection of all the components in a final product, to check whether they meet given specifications and whether all of them are correctly mounted; this problem is also known as model checking. The second one, namely visual inspection, refers to the inspection of the final product surface to check for the presence of aesthetic defects, e.g. scratches or discolourization. Since human operators have implicit limits, like work shifts, low repeatability and oversights of the defects/missing parts, this system was developed to support operators during their inspection tasks. Furthermore, the experimental results show that this novel automatic inspection method can detect missing parts and defects. Our future research will focus on: (1) the use of robotic arms to move the cameras; (2) improvements of the registration based on active markers; (3) proving the effectiveness of our system by testing it in a real production line scenario; (4) considering how the combination of human operators and inspection system can improve the detection results.

Acknowledgements This work was carried out under the support of the AvioAero company. Furthermore, we would like to thank Dr. Enrique Muñoz-Corral and Dr. Luca Mazzei for their invaluable technical and human support.

Compliance with ethical standards

Conflict of interest This research was funded by Avio Aero (grant number P37508).

References

1. Aiger, D., Talbot, H.: The phase only transform for unsupervised surface defect detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 295-302 (2010)
2. Alarcón-Herrera, J., Xiang, C., Xuebo, Z.: Viewpoint selection for vision systems in industrial inspection. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 4934-4939 (2014)
3. Bahlmann, C., Heidemann, G., Ritter, H.: Artificial neural networks for automated quality control of textile seams. Pattern Recognit. 32(1), 1049-1060 (1999)
4. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152. ACM, New York, NY, USA (1992)
5. Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV Library, 1st edn. O'Reilly Media, Beijing (2008)
6. Caulier, Y., Bourennane, S.: An image content description technique for the inspection of specular objects. EURASIP J. Adv. Signal Process. 2008, 195263 (2008)
7. Chin, R.: Automated visual inspection: 1981 to 1987. Comput. Vis. Gr. Image Process. 41(3), 346-381 (1988)
8. Chin, R., Harlow, C.: Automated visual inspection: a survey. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 4(6), 557-573 (1982)
9. Choi, J., Kim, C.: Unsupervised detection of surface defects: a two-step approach. In: 2012 19th IEEE International Conference on Image Processing, pp. 1037-1040 (2012)
10. Corke, P.I.: Robotics, Vision and Control: Fundamental Algorithms in Matlab. Springer, Berlin, Heidelberg (2011)
11. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20, 273-297 (1995)
12. Everingham, M., Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303-338 (2010)
13. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Ann. Stat. 28, 2000 (1998)
14. Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F.J., Marín-Jiménez, M.J.: Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 47(6), 2280-2292 (2014). doi:10.1016/[Link].2014.01.005
15. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)
16. Kumar, A.: Computer-vision-based fabric defect detection: a survey. IEEE Trans. Ind. Electron. 55(1), 348-363 (2008)
17. Legland, D.: Matgeom: matlab geometry toolbox for 2d/3d geometric computing. [Link] (2009)
18. Lepetit, V., Moreno-Noguer, F., Fua, P.: EPnP: an accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 81(2), 155-166 (2008)
19. Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 2, 164-168 (1944)
20. Malamas, E., Petrakis, E., Zervakis, M., Petit, L., Legat, J.D.: A survey on industrial vision systems, applications and tools. Image Vis. Comput. 21, 171-188 (2003)
21. Markou, M., Singh, S.: Novelty detection: a review—part 2: neural network-based approaches. Signal Process. 83(12), 2499-2521 (2003)
22. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11(2), 431-441 (1963)
23. Moganti, M., Ercal, F., Dagli, C., Tsunekawa, S.: Automatic PCB inspection algorithms: a survey. Comput. Vis. Image Underst. (CVIU) 63(2), 287-313 (1996)
24. Newman, T., Jain, A.: A survey of automated visual inspection. Comput. Vis. Image Underst. (CVIU) 61, 231-262 (1995)
25. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on feature distributions. Pattern Recognit. 29(1), 51-59 (1996)
26. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 24(7), 971-987 (2002)
27. Park, Y., Kweon, I.S.: Ambiguous surface defect image classification of AMOLED displays in smartphones. IEEE Trans. Ind. Inform. 99, 1-1 (2016)
28. Peng, X., Chen, Y., Yu, W., Zhou, Z., Sun, G.: An online defects inspection method for float glass fabrication based on machine vision. Int. J. Adv. Manuf. Technol. 39(11), 1180-1189 (2007)
29. Scott, W.R.: Model-based view planning. Mach. Vis. Appl. 20(1), 47-69 (2009)
30. Sturm, P.F., Maybank, S.J.: On plane-based camera calibration: a general algorithm, singularities, applications. In: 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, p. 437 (1999)
31. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, New York (2010)
32. Thomas, A., Rodd, M., Holt, J., Neill, C.: Real-time industrial visual inspection: a review. Real Time Imaging 1(2), 139-158 (1995)


33. Torres, F., Sebastian, J., Aracil, R., Jimenez, L., Reinoso, O.: Automated real-time visual inspection system for high-resolution superimposed printings. Image Vis. Comput. 16(12-13), 947-958 (1998)
34. Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W.: Bundle adjustment—a modern synthesis. In: Proceedings of the International Workshop on Vision Algorithms: Theory and Practice, ICCV '99, pp. 298-372. Springer-Verlag, London (2000)
35. Tucker, J.: Inside beverage can inspection: an application from start to finish. In: Proceedings of the Vision '89 Conference (1989)
36. Viola, P., Jones, M.J., Snow, D.: Detecting pedestrians using patterns of motion and appearance. Int. J. Comput. Vis. 63(2), 153-161 (2005)
37. Xie, X.: A review of recent advances in surface defect detection using texture analysis techniques. Electron. Lett. Comput. Vis. Image Anal. 7(3), 1-22 (2008)
38. Zhang, Z.: Flexible camera calibration by viewing a plane from unknown orientations. In: ICCV, pp. 666-673 (1999)
39. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 22(11), 1330-1334 (2000)

Marco San Biagio received the [Link]. degree cum laude in Informatics Engineering from the University of Palermo, Italy, in 2010, and the Ph.D. in Computer Engineering from the University of Genoa and the Istituto Italiano di Tecnologia (IIT), Italy, in 2014, under the supervision of Prof. Vittorio Murino and Prof. Marco Cristani, working on data fusion in video surveillance. Before his current position, he was a postdoctoral fellow at the Pattern Analysis and Computer Vision department (PAVIS) at IIT, Genoa, Italy. His main research interests include machine learning, statistical pattern recognition and data fusion techniques for object detection and classification.

Carlos Beltrán-González received a [Link]. in Computer Engineering from the Polytechnic University of Valencia (UPV), Spain, in 1996 and a [Link]. in Software Engineering from the University of Valencia (UV), Spain, in 1998. After 2 years in industry, he enrolled in the Ph.D. program of the University of Genoa, Italy, and obtained a Ph.D. in Computer Engineering and Electronics in 2005, specializing in the areas of computer vision and robotics systems. After that, he served as principal investigator in FP7 Framework European projects and as applied computer vision specialist in industrial projects involving video surveillance and intelligent transportation systems. In 2012 he joined the Pattern Analysis and Computer Vision department (PAVIS) at the Istituto Italiano di Tecnologia (IIT), Genoa.

Salvatore Giunta received the degree in Electronic Engineering in 2003 from the University of Catania and a Ph.D. degree in Metrology: Science and Technique of Measurements from the Polytechnic of Turin in 2006. The main research activities carried out during the 3 years of his doctorate were devoted to the design and development of innovative devices for pressure and temperature metrology at the Italian National Research Institute of Metrology (INRiM). These research activities aimed at the redefinition of the intermediate temperature scale and a new determination of the Boltzmann constant for the redefinition of the kelvin. He has been working at GE Avio since 2008 and is currently in charge of Metrology, NDT and Digital Factory.

Alessio Del Bue received the Laurea degree in Telecommunication Engineering in 2002 from the University of Genova and his Ph.D. degree in Computer Science from Queen Mary University of London in 2006. He was a researcher in the Institute for Systems and Robotics (ISR) at the Instituto Superior Técnico (IST) in Lisbon, Portugal. Currently, he is leading the Visual Geometry and Modelling (VGM) Lab at the PAVIS department of the Istituto Italiano di Tecnologia (IIT) in Genova. His research focuses on the areas of 3D scene understanding and 3D reconstruction of non-rigid structure from image sequences.

Vittorio Murino is full professor and head of the Pattern Analysis and Computer Vision (PAVIS) department at the Istituto Italiano di Tecnologia (IIT), Genoa, Italy. He received the Ph.D. in Electronic Engineering and Computer Science in 1993 at the University of Genoa, Italy. Then, he was first at the University of Udine and, since 1998, at the University of Verona, where he was chairman of the Department of Computer Science from 2001 to 2007. His research interests are in computer vision and machine learning, in particular probabilistic techniques for image and video processing, with applications in video surveillance, biomedical image analysis and bio-informatics. He is also a member of the editorial boards of the Pattern Recognition, Pattern Analysis and Applications, and Machine Vision & Applications journals, as well as of the IEEE Transactions on Systems, Man, and Cybernetics. Finally, he is a senior member of the IEEE and a Fellow of the IAPR.
