
Underwater 3D reconstruction using BlueView imaging sonar

Thomas Guerneve, Seebyte Ltd, Scotland, Email: [Link]@[Link]
Yvan Petillot, Seebyte Ltd, Scotland, Email: [Link]@[Link]

Abstract—This paper addresses the problem of 3D scene reconstruction from 2D imaging sonars. Reconstruction using imaging sonar being a non-trivial problem, we present an algorithm enabling the generation of a 3D map of the observed scene from a sequence of imaging sonar images. The inherent problem when using large-aperture sonars being the unavailability of the elevation angle information, our algorithm solves this by associating multiple points of view and taking advantage of the sonar imaging process. Using this technique, we enable the reconstruction of an inspected structure using standard sonars, frequently embedded on many ROVs and AUVs on the market. In addition to tests performed in a simulated environment, tests made with real inspection data showed that accurate 3D mapping was possible using noisy standard sonar and AUV navigation. Results can be obtained in real operation time, paving the way for further 3D online applications.

I. INTRODUCTION

While 3D reconstruction from laser and video has been an active area of research for several years, 3D mapping from sonar data remains a challenge. The sonar imaging process compresses 3D into 2D, and the recovery of the underlying 3D scene is inherently ill-posed. For this reason, it is often addressed using prior assumptions on the scene being imaged [2] to retrieve the elevation angle lost in the imaging process. To avoid this problem, most 3D reconstruction techniques are based on the use of a pencil-beam sonar, for which the uncertainty along the vertical aperture direction is low. This comes at the cost of a reduced imaged volume and a longer inspection time.

We propose here to use standard imaging systems with a relatively large vertical aperture (see Figure 1), together with a planned inspection of the structure, to perform the 3D reconstruction. The multiple viewpoints gathered during the inspection are used to remove the 3D ambiguity introduced by the imaging system.

Our approach is based on an iterative spherical arc casting into a voxel grid, where each iteration uses an image acquired from a complementary point of view (relative to the direction of uncertainty). At each iteration, a new sonar image is acquired and used to recover the missing elevation angles based on the multiple arcs intersecting the map at each voxel. A voxel filtering is then performed to model the probability of occupancy of a point based on the pixel intensities. Finally, the elevation angles are recovered and used to resolve acoustic shadows in order to reconstruct the front of the imaged object. This method has the benefit of not requiring any prior assumptions on the scene and achieves a real-time reconstruction of the scene.

II. RELATED WORK

While many researchers have addressed the problem of 3D mapping based on dedicated sensors such as lasers [3] or video [17], little work has been done with sonars.

In order to cope with the projective model of sonars and reduce the uncertainty on the elevation angle, most sonar mapping research has been based on sonars with a small vertical aperture [10], [18]. Other works made use of side-scan sonars, such as [8], where shape-from-shading techniques are used to recover the 3D structure. Synthetic Aperture Sonar (SAS) techniques gave interesting results in [4], [14] but remain dependent on very accurate navigation (error under a fraction of the wavelength), which makes their use complicated for 3D inspection of structures.

Since imaging sonars have a relatively large vertical aperture, this type of sensor is often dismissed for applications where spatial accuracy is fundamental, such as 3D mapping. However, imaging sonars image large volumes of water and therefore remain very useful in many classic operational situations such as collision avoidance [24], [1], [27], seafloor mapping [20], [22], [21] or target detection and tracking [12], [13], [11], [5]. As a consequence, imaging sonars are present on many ROVs and AUVs on the market, and there is great interest in using such standard sensors for other purposes involving 3D information. Little research has been carried out on this topic. Some early work took advantage of acoustic shadows to recover the elevation angle [26]. Although some prototype imaging sonars have been developed to enable direct measurement of the elevation angle [25], these systems still produce limited results in terms of accuracy. Coda Octopus developed the Echoscope presented in [6], which shows promising results with better accuracy. Despite these good results, the Echoscope remains an expensive and relatively large sensor, which makes it difficult to integrate on AUVs and ROVs. In [19], the use of standard, yet relatively accurate, sensors such as DIDSON sonars has been investigated through a stereo matching technique to estimate the elevation angle of a few feature points in the image.

Some more theoretical approaches related to tomography have also been investigated, such as the sonar transform in [9] and [7]; however, generalizing these approaches to 3D

978-1-4799-8736-8/15/$31.00 ©2015 IEEE


Authorized licensed use limited to: FUNDACAO UNIVERSIDADE DO RIO GRANDE. Downloaded on November 20,2023 at [Link] UTC from IEEE Xplore. Restrictions apply.
remains an open research subject, likely to be heavy in terms of processing.

III. TECHNICAL APPROACH

We propose a more practical approach to recover the 3D information from standard BlueView imaging sonars and navigation, without using any prior knowledge of the scene.

A. Imaging model

Our method takes advantage of the sonar imaging model (Figure 1), which projects an acoustic return coming from a 3D point described in spherical coordinates by (r, θ, φ) onto a point in polar coordinates in the sonar image plane (r, φ). As a consequence, the measured intensity in the image at (r, φ) is the sum of the multiple acoustic reflections coming from different elevation angles θ, weighted by an angular coefficient due to the sonar beam pattern. Thus, the intensity measured in a sonar image at range r and bearing angle φ can be represented by the following equation:

    I(r, φ) = ∫_{−A/2}^{A/2} b(θ) f(r, θ, φ) dθ    (1)

where A is the sonar vertical aperture, b is the beam pattern function and f is the surface reflectance function of the observed object, equal to zero when not on any front part of the object surface. On the front surface of the object, f takes values related to the reflectance of the object and the incidence angle of the sound relative to the surface normal. The dependency on the incidence angle is often modelled by a Lambertian model [23], while the reflectance of the object is related to the type of material the object is made of. While acquiring a sonar image, we gather these intensities in a polar image for ranges in [rmin, rmax] and bearing angles in [φmin, φmax], at range and bearing resolutions proper to the sonar model.

As represented in Figure 1, measurements from two different points of view intersect in such a way that the uncertainty on the elevation angle in the first measurement can be reduced using the information contained in the second measurement at different ranges. The combination of multiple images acquired from different positions therefore enables the recovery of the elevation angle by associating the multiple arc intersections.

Based on this observation, we propose a three-step algorithm (Figure 2) to iteratively recover the 3D information from a set of observations while observing the scene. The following sections detail these three steps.

B. Reprojection

In order to obtain a 3D representation from the 2D information contained in the sonar images, a re-projection of the sonar image over a spherical arc is applied, taking into account the imaging model of the sonar: for every point (r, φ) of the sonar image, a spherical arc is generated in the elevation angle interval [θmin, θmax] by a rotation around the sonar centre, at the corresponding range r and in the plane associated with the bearing φ of the pixel. Every point on this spherical arc is then assigned the pixel intensity as a measure of occupancy. Since the intensity measured at the point (r, φ) of the sonar image is related to the sum of acoustic returns coming from all θ angles of the vertical aperture, this measure represents an upper bound of occupancy for every point along the vertical arc. Therefore, at this stage a physically empty voxel (a 3D location containing only water) might have been assigned a non-zero intensity if another return has been measured along the vertical arc within the sonar vertical aperture. As a consequence, filtering the values by using multiple points of view is necessary.

C. Voxel filtering

The point cloud obtained from the reprojection in 3D of a sonar image is then added to a 3D map through a data association step. This voxel-based data association between each new projection and the 3D map is performed to update the voxel representation with the new data. Importantly, for each voxel, the minimum imaged intensity over all views is kept. This is akin to a carving method and helps to reduce the influence of noise and incorrect back-projection of the 2D scene into the 3D volume. The resulting 3D map then represents a set of points that potentially exist after a few observations. As the robustness of these points depends on the number and complementarity of the observations made, the sonar is moved around the target, along the direction of maximum uncertainty. This is related to the vertical aperture of the sonar and the partial 3D scene obtained so far. In order to reduce the memory usage, the map is stored in an octree [16] of points (x, y, z, i), where i is the filtered intensity.

D. Occlusion resolution

Once enough measurements have been gathered, an occlusion resolution step is carried out to remove points lying behind the front surface of the object. In this step, the sonar images are reused and associated with the 3D map obtained during the previous step. Given a sonar return at (r, φ), the existence of points in the 3D map at every elevation angle is checked by accessing the octree and reading the intensity at points (r, θ, φ) for various elevation angles within the vertical aperture. Each sonar return is therefore explicitly associated with a range of elevation angles, which effectively solves the elevation angle recovery problem. Once the elevation angles are recovered, we can then resolve the shadowing effect happening at each bearing angle during the acoustic imaging process. For each elevation angle we therefore only keep the first return found, associating returns at further ranges with other elevation angle values. Doing this for each bearing angle, we recover the occlusion phenomenon from the sonar image. This process enables us to retain the front part of the object and filter out the points that could not have been eliminated in the previous steps of the algorithm, when the occlusion effect was unknown.

IV. RESULTS

A. Description of the simulation environment

In order to test our algorithm, we used UWSim [15], a realistic underwater simulator which has been extended to support imaging sonar simulation. The sonar simulation implemented is based on a ray-tracing technique to model an imaging sonar and includes its geometric characteristics following the imaging model. The raytracing providing information on the angle of incidence between the sonar beam and the normal

Fig. 1. Sonar imaging model showing the spherical projection happening during the imaging process. For greater clarity, we only provide a 2D representation for a given azimuth angle φ: an acoustic return at range r and elevation angle θ gets projected onto the sonar image plane at (r, φ). Using measurement 2 and taking advantage of the range and bearing resolution, we get more information about the elevation angle lost in measurement 1.
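As a rough numerical illustration of equation (1) in Section III-A, the integral over the vertical aperture can be discretised as below. This is a sketch, not the authors' implementation: the beam pattern and the toy reflectance function are illustrative assumptions.

```python
import numpy as np

A = np.deg2rad(20.0)                       # vertical aperture (20 degrees, as in Section IV)
thetas = np.linspace(-A / 2, A / 2, 100)   # elevation samples across the aperture
dtheta = thetas[1] - thetas[0]

def b(theta):
    # Assumed sinc beam pattern, scaled so the attenuation at theta = A/2
    # is -3 dB (np.sinc(0.443) is approximately 0.707).
    return np.sinc(0.443 * theta / (A / 2))

def f(r, theta, phi):
    # Toy reflectance: a single flat facet near elevation 0.05 rad, zero elsewhere.
    return 1.0 if abs(theta - 0.05) < 0.01 else 0.0

def intensity(r, phi):
    # Riemann-sum approximation of I(r, phi) = integral of b(theta) f(r, theta, phi) dtheta.
    return sum(b(t) * f(r, t, phi) for t in thetas) * dtheta
```

A call such as intensity(4.0, 0.0) returns a non-zero value even though the facet's elevation angle cannot be read back from the single image, which is exactly the ambiguity the reconstruction removes.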

Fig. 2. Algorithm diagram showing the ordering between the acquisition of a new point of view and the 3D reconstruction. The 3D map of the scene is
updated with each new point of view until the whole scene has been covered. The last step then creates the final map, effectively recovering the elevation angle
by comparing the temporary map to each sonar image.
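The reprojection and min-intensity fusion steps of the loop in Figure 2 (Sections III-B and III-C) can be sketched as follows. This is a simplified, hypothetical implementation: a dict-based grid stands in for the octree of [16], and the sonar pose is reduced to a vertical offset.

```python
import numpy as np

VOXEL = 0.02                  # 2 cm grid, as in the experiments
APERTURE = np.deg2rad(20.0)   # vertical aperture
N_ELEV = 100                  # elevation samples across the aperture

def cast_arcs(image, ranges, bearings, z_offset, grid):
    """Back-project every pixel (r, phi) over a spherical arc of candidate
    elevations, keeping the MINIMUM intensity seen so far in each voxel
    (the carving-style fusion of Section III-C)."""
    thetas = np.linspace(-APERTURE / 2, APERTURE / 2, N_ELEV)
    for i, r in enumerate(ranges):
        for j, phi in enumerate(bearings):
            intensity = image[i, j]
            for theta in thetas:
                # Spherical to Cartesian, sonar translated vertically by z_offset.
                x = r * np.cos(theta) * np.cos(phi)
                y = r * np.cos(theta) * np.sin(phi)
                z = r * np.sin(theta) + z_offset
                key = (round(x / VOXEL), round(y / VOXEL), round(z / VOXEL))
                grid[key] = min(grid.get(key, np.inf), intensity)
    return grid
```

With this fusion rule, a voxel that receives a strong intensity from one view but a weak one from a complementary view keeps the weak value, so empty water wrongly lit by one arc is progressively carved away.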

of the observed object, we generated the intensities using a Lambertian model and a constant reflection coefficient on the surface of the object. The beam pattern was modelled by a standard sinc function, scaled so that the attenuation at the edge of the vertical aperture (θ = A/2) reaches −3 dB. Based on the observation of real BlueView sequences recorded in open water, a probabilistic noise model of every point (r, φ) in the sonar image was defined. This model was then used to add noise to the sonar image resulting from the raytracing, providing a realistic noise simulation. The simulator enabled the simulation of any trajectory around realistic and ground-truthed 3D structures. These 3D structures could be described as CAD models and loaded into the simulated environment, with the ability to set multiple environment and vehicle parameters such as position and orientation of the sensors. The sonar was placed horizontally in front of the vehicle, and the vehicle was moved vertically in steps of 10 centimetres in order to generate multiple views of the scene. The images generated had a range resolution rres of 2 cm and a bearing resolution φres of 0.2°. The sonar maximum range was fixed at 10 meters and the field of view (horizontal aperture) at 130 degrees. Our sonar simulations were then performed using a vertical aperture resolution θres of 0.2°, with a vertical aperture at −3 dB of 20°. The voxel grid resolution of both the temporary map and the final map was set to 2 cm.

B. Tests using simulated noise-free images

Initial tests, simulating sonar images without noise and placing the sonar at a distance of 4 to 5 meters from the structure, were first performed to assess the accuracy of the

geometry model. As seen in Figure 3, the reconstruction exhibits very few artefacts, the only observed artefacts being due to suboptimal sampling. Since samples are taken every 10 centimetres, spurious reprojected points might not be filtered out if the next measurements do not overlap with them. Reducing the size of the vertical steps reduces this issue, as does increasing the vertical aperture resolution or performing the inspection at a closer stand-off distance to the structure. Apart from this minor artefact, the reconstruction exhibits a lot of detail and correct geometrical shapes and sizes. The occlusion resolution enables the recovery of the surface of the object, leaving the inside part unobserved.

C. Tests using simulated noisy images

The next set of tests included noise simulation in the sonar imaging process. In this case, a pre-reconstruction denoising step is required, using prior knowledge of the sonar noise characteristics. This prior can easily be gathered by imaging an empty scene to characterize the sonar noise at every range and bearing in the image. Based on these characteristics, a Z test on every pixel of the image is used to dissociate noise from the data. As shown in Figure 5, this step allows the recovery of clear images, leaving only a few noise points. As a result, a few artefacts are visible on the 3D map, as can be seen in Figure 4. These artefacts, due to the imperfect denoising step, are of two types: a few details of the structure are missing due to the thresholding, and suboptimal sampling leaves non-existent floating noise points in the 3D map. While the suboptimal sampling issues can be solved as described in the previous section, missing points on the surface of the object should be addressed by improving the denoising step. This could be done by using a more suitable threshold, or by keeping values as probabilities rather than applying a threshold.

Fig. 3. 3D map built from simulated sonar images without noise, compared to the initial CAD model of the observed structure. The voxel size is 2 cm and the map shows few artefacts, these being due to sub-optimal sampling.
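The per-pixel Z test can be sketched as follows. The threshold is an assumed value, and the per-(r, φ) noise mean and standard deviation would come from imaging an empty scene, as described above.

```python
import numpy as np

def denoise(image, noise_mean, noise_std, z_threshold=3.0):
    """Z test per pixel: keep a pixel only if it deviates from the noise
    model by more than z_threshold standard deviations, else zero it."""
    z = (image - noise_mean) / np.maximum(noise_std, 1e-12)
    return np.where(z > z_threshold, image, 0.0)

# Example: background near the noise mean is removed, a genuine return survives.
noise_mean = np.full((4, 4), 0.1)
noise_std = np.full((4, 4), 0.05)
image = np.full((4, 4), 0.12)   # background close to the noise mean
image[2, 2] = 0.9               # a strong acoustic return
clean = denoise(image, noise_mean, noise_std)
```

After this step, only the pixel at (2, 2) remains non-zero, mirroring the behaviour shown in Figure 5: the image keeps real returns while most of the noise floor is suppressed.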

D. Tests with real inspection data

Finally, tests were also performed on real data acquired with an AUV operating around oil field structures called risers. These risers are essentially vertical pipelines used to carry the oil back to the surface. For mechanical reasons, buoyancy modules are attached to them at regular intervals, as well as tie-back cables that tie them to the seabed. We used this structure to perform our test, with a 10-metre vertical inspection and a BlueView P900 placed horizontally at the front of the vehicle. As shown in Figure 6, the 3D map obtained with this data does not suffer from much noise: the buoyancy is clearly visible at the top right end of the riser, as are the two tie-backs. The diameter of the pipe appears to grow as we go down the riser; this is due to the increase in uncertainty of the elevation angle recovery as the structure becomes oriented more horizontally, whereas on the other side of the riser the structure is nearly vertical, which corresponds to the direction of uncertainty (vertical aperture). Marine growth also affected the shape of the reconstructed pipeline, making the structure bigger in some parts and the shape less smooth and round than we would expect from a pipeline. Our algorithm was run with a 2 cm voxel size, a vertical aperture resolution of 0.2 degrees and a new image every 7 centimetres. The denoising step was done using a model generated from a sequence of open water images recorded with the same sonar. This test, based on real navigation data and standard sonar images, shows that relatively accurate

Fig. 4. 3D reconstruction from noisy simulated images compared to the initial CAD model. The imperfect denoising of the sonar images results in two types of artefacts: missing details on the structure due to the thresholding, and sub-optimal sampling letting noisy points in sonar images appear on the 3D map.

Fig. 5. Example of denoising using prior knowledge of the sonar noise characteristics. A noisy simulated sonar image (in polar coordinates) is shown on the left, to be compared with the denoised image on the right, which shows very little noise.

mapping can be achieved using an operational underwater vehicle. More importantly, the map exhibits clear parts with different objects, such as a buoyancy module and the tie-backs, which we could expect to detect with a scene understanding algorithm in order to extract semantic information.

E. Performance evaluation

Using our algorithm with a high enough resolution, the accuracy limits come from the limited amount of information contained in sonar images. The relatively low spatial resolution of sonar images limits the resolution of our 3D map. However, our results have shown that a voxel size of 2 cm was well suited for 3D mapping with a standard imaging sonar, which remains a decent resolution. Typical acoustic effects such as ghosting, cross-talk and defocusing reduce the reliability of the information contained in the image. In addition, sensor noise often remains significant, affecting the confidence in detected acoustic returns. The use of noise models enabled us to mitigate this, making the reconstruction from sonar images possible with relatively accurate results. Measurements on the 3D maps and comparisons to CAD models and known dimensions of the objects enabled us to verify the accuracy of our maps, both those generated from the simulated environment and from real data. Since our mapping method relies on the imagery on one hand and on the navigation data on the other, relatively accurate navigation is needed. As verified in our experiment using real inspection data, the navigation accuracy is good enough on small trajectories, typically less than 10 m, to preserve the spatial coherence of the map without needing to apply registration methods.

In terms of computational cost, the algorithm processes an image and updates the map every 500 ms using an Intel Core i7-4700MQ processor. It is worth noting that this process is not multi-threaded at the moment, hence we could expect to be able to process nearly 8 images in this amount of time on this processor. 500 ms per iteration remains compatible with a typical underwater structure inspection. After the inspection is finished, the occlusion resolution step is run, taking an average of one second per image. This step is multi-threaded (8 threads), so that it only takes 15 seconds to generate the final map by re-processing hundreds of images. This is an acceptable overall processing time for real operations. Due to the very large number of points being processed and stored (every pixel of an image generates a hundred projections along the vertical aperture), about 1 GB of RAM is used at the end of the reconstruction, which is a relatively high memory usage but remains affordable in an embedded environment.

V. CONCLUSION

The method presented in this paper demonstrates that accurate 3D mapping using off-the-shelf imaging sonar systems is possible. In particular, online 3D reconstruction using standard imaging sensors on an embedded underwater vehicle is made possible with our practical approach. The accuracy of the reconstruction depends on the distance to the scene and does not require prior constraints on the scene itself. The proposed algorithm runs in near real time on off-the-shelf computers. Further improvements of this algorithm will be investigated in the near future, including the integration of the beam pattern in the reprojection process, as well as improving the data representation to develop a fully probabilistic approach. The results obtained are very encouraging and should enable object recognition and semantic scene interpretation, which we are planning to investigate next.

ACKNOWLEDGMENT

This project was funded under the Marie Curie ITN program Robocademy FP7-PEOPLE-2013-ITN-608096.
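The reported memory figure can be sanity-checked with the image dimensions stated in Section IV-A. The 16-byte point size is an assumption (x, y, z, i as 4-byte floats); the paper does not detail the octree's actual storage layout.

```python
# Image dimensions implied by Section IV-A: 10 m / 2 cm range bins, 130 deg / 0.2 deg beams.
n_pixels = round(10.0 / 0.02) * round(130.0 / 0.2)   # 500 range bins * 650 beams
projections_per_pixel = round(20.0 / 0.2)            # ~100 elevation samples per pixel
bytes_per_point = 16                                 # assumed: x, y, z, i as 4-byte floats

bytes_per_image = n_pixels * projections_per_pixel * bytes_per_point
print(bytes_per_image / 1e9)   # ~0.52 GB of candidate points per image, before fusion
```

This is the raw volume of arc points per image before the min-intensity fusion collapses repeated voxels, so the roughly 1 GB measured at the end of a multi-view reconstruction is a plausible order of magnitude.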

Fig. 6. 3D reconstruction of a riser held to the seabed by an anchor chain, using real data from a BlueView P900 sonar. On the top section, buoyancy structures are visible.

REFERENCES

[1] A. Hetet, I. Quidu and Y. Dupas. Obstacle detection and avoidance for AUV: problem analysis and first results (Redermor). CMM06, 2006.
[2] M. D. Aykin and S. Negahdaripour. Forward-look 2D sonar image formation and 3D reconstruction. Oceans - San Diego, 2013.
[3] C. Roman, G. Inglis and J. Rutter. Application of structured light imaging for high resolution mapping of underwater archaeological sites. OCEANS 2010 IEEE - Sydney, 2010.
[4] E. Coiras and J. Groen. 3D target shape from SAS images based on a deformable mesh. Proceedings of the 3rd International Conference on Underwater Acoustic Measurements (UAM), 2009.
[5] D. W. Krout, W. Kooiman, G. Okopal and E. Hanusa. Object tracking with imaging sonar. 15th International Conference on Information Fusion (FUSION), 2012.
[6] A. Davis and A. Lugsdin. High speed underwater inspection for port and harbour security using Coda Echoscope 3D sonar. OCEANS 2005, Proceedings of MTS/IEEE, 2005.
[7] A. Denisiuk. On numerical reconstruction of a function from incomplete data of arc means in seismic tomography. 2012.
[8] E. Coiras, Y. Petillot and D. M. Lane. Multiresolution 3-D reconstruction from side-scan sonar images. IEEE Transactions on Image Processing, vol. 16, 2007.
[9] E. T. Quinto, A. Rieder and T. Schuster. Local inversion of the sonar transform regularized by the approximate inverse. 2010.
[10] G. Papadopoulos, H. Kurniawati, A. S. B. M. Shariff, L. J. Wong and N. M. Patrikalakis. 3D-surface reconstruction for partially submerged marine structures using an autonomous surface vehicle. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011.
[11] I. Tena Ruiz, Y. Petillot, D. Lane and J. Bell. Tracking objects in underwater multibeam sonar images. IEE Colloquium on Motion Analysis and Tracking, 1999.
[12] J. Folkesson, J. Leonard, J. Leederkerken and R. Williams. Feature tracking for underwater navigation using sonar. Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, 2007.
[13] J. T. Cobb, B. Schulz and G. Dobeck. Forward-looking sonar target tracking and homing from a small AUV. Proceedings of MTS/IEEE OCEANS 2005, 2005.
[14] K. Bikonis, A. Stepnowski and M. Moszynski. Computer vision techniques applied for reconstruction of seafloor 3D images from side scan and synthetic aperture sonars data. Acoustics 08 Paris, 2008.
[15] M. Prats, J. Perez, J. J. Fernandez and P. J. Sanz. An open source tool for simulation and supervision of underwater intervention missions. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[16] D. Meagher. Octree encoding: a new technique for the representation, manipulation and display of arbitrary 3-D objects by computer. 1980.
[17] M. Bryson, M. Johnson-Roberson, O. Pizarro and S. B. Williams. Colour-consistent structure-from-motion models using underwater imagery. Robotics: Science and Systems, 2012.
[18] K. Mizuno and A. Asada. Three dimensional mapping of aquatic plants at shallow lakes using 1.8 MHz high resolution acoustic imaging sonar and image processing technology. IEEE International Ultrasonics Symposium Proceedings, 2014.
[19] N. Brahim, D. Gueriot, S. Daniel and B. Solaiman. 3D reconstruction of underwater scenes using DIDSON acoustic sonar image sequences through evolutionary algorithms. OCEANS 2011 IEEE - Spain, 2011.
[20] N. Hurtos, S. Nagappa, N. Palomeras and J. Salvi. Real-time mosaicing with two-dimensional forward-looking sonar. 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.
[21] N. Hurtos, X. Cufi and J. Salvi. A novel blending technique for two-dimensional forward-looking sonar mosaicing. IEEE/MTS OCEANS'13 San Diego, 2013.
[22] N. Hurtos, X. Cufi, Y. Petillot and J. Salvi. Fourier-based registrations for two-dimensional forward-looking sonar image mosaicing. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
[23] M. V. Trevorrow. Statistics of fluctuations in high-frequency low-grazing-angle backscatter from a rocky sea bed. IEEE Journal of Oceanic Engineering, 2004.
[24] Y. Petillot, I. T. Ruiz and D. Lane. Underwater vehicle obstacle avoidance and path planning using a multi-beam forward looking sonar. IEEE Journal of Oceanic Engineering, 26:240–251.
[25] G. Yufit and E. P. Maillard. 3D forward looking sonar technology for surface ships and AUV: example of design and bathymetry application. Underwater Technology Symposium (UT), 2013.
[26] B. Zerr and B. Stage. Three-dimensional reconstruction of underwater objects from a sequence of sonar images. Proceedings of the International Conference on Image Processing, 1996.
[27] M. J. Zimmerman. 3D, forward-looking, phased array, obstacle avoidance sonar for autonomous underwater vehicles. 13th Annual Unmanned Untethered Submersible Technology International Symposium, 2003.

