

Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 746052, 14 pages
doi:10.1155/2010/746052

Review Article
Underwater Image Processing: State of the Art of Restoration and
Image Enhancement Methods

Raimondo Schettini and Silvia Corchs


Department of Informatics, Systems and Communication (DISCo), University of Milano-Bicocca, Viale Sarca 336, 20126 Milan, Italy

Correspondence should be addressed to Silvia Corchs, [Link]@[Link]

Received 9 July 2009; Revised 20 November 2009; Accepted 2 February 2010

Academic Editor: Warren Fox

Copyright © 2010 R. Schettini and S. Corchs. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.

The underwater image processing area has received considerable attention within the last decades, showing important achievements. In this paper we review some of the most recent methods that have been specifically developed for the underwater environment. These techniques are capable of extending the range of underwater imaging, improving image contrast and resolution. After considering the basic physics of light propagation in the water medium, we focus on the different algorithms available in the literature. The conditions for which each of them has been originally developed are highlighted, as well as the quality assessment methods used to evaluate their performance.

1. Introduction

In order to deal with underwater image processing, we have to consider first of all the basic physics of light propagation in the water medium. Physical properties of the medium cause degradation effects not present in normal images taken in air. Underwater images are essentially characterized by their poor visibility because light is exponentially attenuated as it travels in the water, so the scenes are poorly contrasted and hazy. Light attenuation limits the visibility distance to about twenty meters in clear water and five meters or less in turbid water. The light attenuation process is caused by absorption (which removes light energy) and scattering (which changes the direction of the light path). The absorption and scattering processes of the light in water influence the overall performance of underwater imaging systems. Forward scattering (light randomly deviated on its way from an object to the camera) generally leads to blurring of the image features. On the other hand, backward scattering (the fraction of the light reflected by the water towards the camera before it actually reaches the objects in the scene) generally limits the contrast of the images, generating a characteristic veil that superimposes itself on the image and hides the scene. Absorption and scattering effects are due not only to the water itself but also to other components such as dissolved organic matter or small observable floating particles. The presence of the floating particles known as "marine snow" (highly variable in kind and concentration) increases absorption and scattering effects. The visibility range can be increased with artificial lighting, but these sources not only suffer from the difficulties described before (scattering and absorption) but in addition tend to illuminate the scene in a non-uniform fashion, producing a bright spot in the center of the image with a poorly illuminated area surrounding it. Finally, as the amount of light is reduced when we go deeper, colors drop off one by one depending on their wavelengths. The blue color travels the longest in the water due to its shortest wavelength, making underwater images dominated essentially by blue. In summary, the images we are interested in can suffer from one or more of the following problems: limited visibility range, low contrast, non-uniform lighting, blurring, bright artifacts, diminished colors (bluish appearance) and noise. Therefore, the application of standard computer vision techniques to underwater imaging requires dealing first with these added problems.

The image processing can be addressed from two different points of view: as an image restoration technique or as an image enhancement method:

(i) Image restoration aims to recover a degraded image using a model of the degradation and of the original image formation; it is essentially an inverse problem. These methods are rigorous but they require many model parameters (like attenuation and diffusion coefficients that characterize the water turbidity) which are only scarcely known in tables and can be extremely variable. Another important parameter required is the depth estimation of a given object in the scene.

(ii) Image enhancement uses qualitative subjective criteria to produce a more visually pleasing image, and these methods do not rely on any physical model for the image formation. These kinds of approaches are usually simpler and faster than deconvolution methods.

In what follows we give a general view of some of the most recent methods that address the topic of underwater image processing, providing an introduction to the problem and enumerating the difficulties found. Our scope is to give the reader, in particular one who is not a specialist in the field and who has a specific problem to address and solve, an indication of the available methods, focusing on the imaging conditions for which they were developed (lighting conditions, depth, environment where the approach was tested, quality evaluation of the results) and considering the model characteristics and assumptions of the approach itself. In this way we wish to guide the reader so as to find the technique that better suits his problem or application.

In Section 2 we briefly review the optical properties of light propagation in water and the image formation model of Jaffe-McGlamery, following in Section 3 with a report of the image restoration methods that take this image model into account. In Section 4, works addressing image enhancement and color correction in the underwater environment are presented. We include a brief description of some of the most recent methods. When possible, some examples (images before and after correction) that illustrate these approaches are also included. Section 5 considers the lighting problems and Section 6 focuses on image quality metrics. Finally, the conclusions are sketched in Section 7.

2. Propagation of Light in the Water

In this section we focus on the special transmission properties of light in the water. Light interacts with the water medium through two processes: absorption and scattering. Absorption is the loss of power as light travels in the medium and it depends on the index of refraction of the medium. Scattering refers to any deflection from a straight-line propagation path. In the underwater environment, deflections can be due to particles of size comparable to the wavelengths of travelling light (diffraction), or to particulate matter with a refraction index different from that of the water (refraction).

According to the Lambert-Beer empirical law, the decay of light intensity is related to the properties of the material (through which the light is travelling) via an exponential dependence. The irradiance E at position r can be modeled as:

E(r) = E(0) e^{-cr}, (1)

where c is the total attenuation coefficient of the medium. This coefficient is a measure of the light loss from the combined effects of scattering and absorption over a unit length of travel in an attenuating medium. Typical attenuation coefficients for deep ocean water, coastal water and bay water are 0.05 m^{-1}, 0.2 m^{-1}, and 0.33 m^{-1}, respectively.

Assuming an isotropic, homogeneous medium, the total attenuation coefficient c can be further decomposed as a sum of two quantities a and b, the absorption and scattering coefficients of the medium, respectively:

E(r) = E(0) e^{-ar} e^{-br}. (2)

The total scattering coefficient b is the superposition of all scattering events at all angles through the volume scattering function β(θ) (this function gives the probability for a ray of light to be deviated by an angle θ from its direction of propagation):

b = 2π ∫_0^π β(θ) sin θ dθ. (3)

The parameters a, b, c, and β(θ) represent the inherent properties of the medium, and their knowledge should theoretically permit us to predict the propagation of light in the water. However, all these parameters depend on the location r (in a three-dimensional space) and also on time. Therefore, the corresponding measurements are a complex task and computational modeling is needed.
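To make the role of the attenuation coefficient concrete, the following minimal Python sketch (an illustration added here, not part of the original paper) evaluates the Lambert-Beer decay of (1) for the three water types quoted above and checks the decomposition of (2); the travel distances and the split of c into a and b are illustrative assumptions.

```python
import numpy as np

# Lambert-Beer decay (1) for the three attenuation coefficients quoted in the
# text (1/m). E0 is an arbitrary source irradiance; distances r are examples.
E0 = 1.0
c_values = {"deep ocean": 0.05, "coastal": 0.2, "bay": 0.33}

for water, c in c_values.items():
    for r in (5.0, 10.0, 20.0):
        E = E0 * np.exp(-c * r)                      # equation (1)
        print(f"{water:10s}  r = {r:4.1f} m  E/E0 = {E:.3f}")

# Decomposition (2): with absorption a and scattering b such that c = a + b,
# exp(-a*r) * exp(-b*r) reproduces exp(-c*r). Values of a and b are assumed.
a, b, r = 0.03, 0.02, 10.0
assert np.isclose(np.exp(-a * r) * np.exp(-b * r), np.exp(-(a + b) * r))
```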
McGlamery [1] laid out the theoretical foundations of the optical image formation model, while Jaffe [2] extended the model and applied it to design different subsea image acquisition systems. Modeling of underwater imaging has also been carried out by Monte Carlo techniques [3].

In this section we follow the image formation model of Jaffe-McGlamery. According to this model, the underwater image can be represented as the linear superposition of three components (see Figure 1). An underwater imaging experiment consists of tracing the progression of light from a light source to a camera. The light received by the camera is composed of three components: (i) the direct component E_d (light reflected directly by the object that has not been scattered in the water), (ii) the forward-scattered component E_f (light reflected by the object that has been scattered at a small angle) and (iii) the backscatter component E_b (light reflected by objects not in the target scene but that enters the camera, for example due to floating particles). Therefore, the total irradiance E_T reads:

E_T = E_d + E_f + E_b. (4)

Figure 1: The three components of underwater optical imaging: direct component (straight line), forward component (dashed line) and backward scatter component (dash-dot line).

Figure 2: Coordinate system of the Jaffe-McGlamery model.

Spherical spreading and attenuation of the source light beam are assumed in order to model the illumination incident upon the target plane. The reflected illumination is then computed as the product of the incident illumination and the reflectance map. Assuming a Lambertian reflector, geometric optics is used to compute the image of the direct component in the camera plane. The reflected light is also scattered at small angles on its way to the camera. A fraction of the resultant blurred image is then added to the direct component. The backscatter component is the most computationally demanding to calculate. The model partitions the 3-dimensional space into planes parallel to the camera plane, and the radiation scattered toward the camera is computed by superposing small volume elements weighted by an appropriate volume scattering function. The detailed derivation of each of the components of (4) can be found in [2]. We report here the final results, as they appear in Jaffe's article. The direct component reads (see Figure 2 for the coordinate system):

E_d(x, y) = E_I(x', y') exp(-c R_c) [M(x', y') / (4F)] T_l cos^4(θ) [(R_c - F_l) / R_c]^2, (5)

where E_I is the irradiance on the scene surface at point (x', y'), R_c is the distance from (x', y') to the camera and the function M(x', y') represents the surface reflectance map. We note that M(x', y') < 1 and typical values for objects of oceanographic interest are 0.02 < M(x', y') < 0.1 [4]. The camera system is characterized by F (F-number of the lens), T_l (lens transmittance) and F_l (focal length). The angle θ is the angle between the reflectance map and a line between the position (x', y') and the camera. The forward scatter component is calculated from the direct component via the convolution operator with a point spread function g, and its derivation is valid under the small angle scattering approximation:

E_f(x, y) = E_d(x, y) ∗ g(x, y, R_c, G, c, B), (6)

where the function g is given by

g(x, y, R_c, G, c, B) = [exp(-G R_c) - exp(-c R_c)] F^{-1}{exp(-B R_c w)}, (7)

with G an empirical factor such that |G| < |c| and B a damping function determined empirically. F^{-1} indicates the inverse Fourier transform and w is the radial frequency. Experimental measurements of the point spread function validate the use of the small angle scattering theory [5, 6]. For the calculation of the backscatter component the small angle approximation is no longer valid, as the backscattered light enters the camera from a large distribution of angles. The model takes into account the light contributions from the volume of water between the scene and the camera. The three-dimensional space is divided into a large number N of differential volumes ΔV. The backscatter component is a linear superposition of these illuminated volumes of water, weighted by the volume scattering function:

E_b(x, y) = E_{b,d}(x, y) + E_{b,d}(x, y) ∗ g(x, y, R_c, G, c, B), (8)

where E_{b,d} is the direct component of the backscattered irradiance and is evaluated as

E_{b,d}(x, y) = Σ_{i=1}^{N} exp(-c Z_{ci}) β(φ_b) E_s(x', y', z') [π ΔZ_i / (4F^2)] cos^3(θ) T_l [(Z_{ci} - F_l) / Z_{ci}]^2, (9)

with ΔZ_i the thickness of the backscattering volume ΔV_i and Z_{ci} the distance from a point in the camera to the center of the backscatter slab; β(φ_b) is the volume scattering function and E_s(x', y', z') is the irradiance in the three-dimensional space propagating away from the light source.
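The forward-scatter blur of (6)-(7) can be sketched numerically by applying the transfer function of g in the frequency domain (the spatial convolution and the Fourier filtering are equivalent). The snippet below is a simplified illustration under assumed parameter values; R_c, c, G, B, the grid spacing and the frequency convention are not taken from any measurement, and this is not the model implementation of [1, 2].

```python
import numpy as np

def forward_scatter(E_d, R_c, c, G, B, dx=1.0):
    """Apply the forward-scatter blur of (6)-(7) to a direct-component image.

    The convolution with g is done in the frequency domain, where the PSF of
    (7) becomes (exp(-G*R_c) - exp(-c*R_c)) * exp(-B*R_c*w), with w the radial
    spatial frequency. Parameter values used below are purely illustrative.
    """
    ny, nx = E_d.shape
    fy = np.fft.fftfreq(ny, d=dx)
    fx = np.fft.fftfreq(nx, d=dx)
    w = 2 * np.pi * np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)  # radial frequency
    H_f = (np.exp(-G * R_c) - np.exp(-c * R_c)) * np.exp(-B * R_c * w)
    return np.real(np.fft.ifft2(np.fft.fft2(E_d) * H_f))

# Toy direct component: a bright square on a dark background, viewed at 3 m
# in coastal-like water (c = 0.2 1/m); G and B are the empirical factors of (7).
E_d = np.zeros((128, 128))
E_d[48:80, 48:80] = 1.0
E_f = forward_scatter(E_d, R_c=3.0, c=0.2, G=0.1, B=0.05)
print(E_f.min(), E_f.max())
```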
In Jaffe's work [2, 7] the relationship between image range, camera-light separation and the limiting factors in underwater imaging is considered. If only short ranges are desired (one attenuation length), a simple conventional system that uses close positioning of camera and lights can yield good results, but these configurations are contrast-limited at greater ranges. If longer distances are desired (2-3 attenuation lengths), systems with separated camera and lights are preferred, but backscattering problems appear as the distance increases. For greater distances more sophisticated technology is required, like, for example, laser range-gated systems and synchronous scan imaging.

3. Image Restoration

A possible approach to deal with underwater images is to consider the image transmission in water as a linear system [8].

Image restoration aims at recovering the original image f(x, y) from the observed image g(x, y) using (if available) explicit knowledge about the degradation function h(x, y) (also called point spread function, PSF) and the noise characteristics n(x, y):

g(x, y) = f(x, y) ∗ h(x, y) + n(x, y), (10)

where ∗ denotes convolution. The degradation function h(x, y) includes the system response of the imaging system itself and the effects of the medium (water in our case). In the frequency domain, we have:

G(u, v) = F(u, v) H(u, v) + N(u, v), (11)

where (u, v) are spatial frequencies and G, F, H, and N are the Fourier transforms of g, f, h, and n, respectively. The system response function H in the frequency domain is referred to as the optical transfer function (OTF) and its magnitude is referred to as the modulation transfer function (MTF). Usually, the system response is expressed as a direct product of the optical system itself and the medium:

H(u, v) = H_system(u, v) H_medium(u, v). (12)

The better the knowledge we have about the degradation function, the better are the results of the restoration. However, in practical cases there is insufficient knowledge about the degradation and it must be estimated and modeled. In our case, the sources of degradation in underwater imaging include turbidity, floating particles and the optical properties of light propagation in water. Therefore, underwater optical properties have to be incorporated into the PSF and MTF. The presence of noise from various sources further complicates these techniques.
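As a toy illustration of the degradation model (10)-(12), the sketch below synthesizes an observed image from an ideal one by multiplying its spectrum with a combined system/medium OTF and adding noise. Both transfer functions here are analytic stand-ins assumed for the example, not measured responses.

```python
import numpy as np

# Sketch of the degradation model (10)-(12): observed = ideal filtered by the
# overall OTF (system response times medium response) plus additive noise.
rng = np.random.default_rng(0)
f = rng.random((64, 64))                        # ideal image (toy data)

fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
w = np.sqrt(fx ** 2 + fy ** 2)                   # radial spatial frequency

H_system = np.exp(-(w / 0.2) ** 2)               # lens/sensor response (assumed)
H_medium = np.exp(-5.0 * w)                      # water response (assumed)
H = H_system * H_medium                          # equation (12)

G = np.fft.fft2(f) * H                           # noise-free part of equation (11)
g = np.real(np.fft.ifft2(G)) + 0.01 * rng.standard_normal((64, 64))  # add n(x, y)
print(g.shape)
```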
Recently, Hou et al. [9–11] incorporated the underwater optical properties into the traditional image restoration approach. They assume that blurring is caused by strong scattering due to water and its constituents, which include particles of various sizes. To address this issue, they incorporated measured in-water optical properties into the point spread function in the spatial domain and the modulation transfer function in the frequency domain. The authors modeled H_medium for circularly symmetric response systems (2-dimensional space) as an exponential function

H_medium(φ, r) = exp(-D(φ) r). (13)

The exponent, D(φ), is the decay transfer function obtained by Wells [12] for seawater within the small angle approximation

D(φ) = c - b [1 - exp(-2π θ_0 φ)] / (2π θ_0 φ), (14)

where θ_0 is the mean square angle, and b and c are the total scattering and attenuation coefficients, respectively. The system (camera/lens) response was measured directly from calibrated imagery at various spatial frequencies. In-water optical properties during the experiment were measured: absorption and attenuation coefficients, particle size distributions and volume scattering functions. The authors implemented an automated framework termed Image Restoration via Denoised Deconvolution. To determine the quality of the restored images, an objective quality metric was implemented. It is a wavelet-decomposed and denoised perceptual metric constrained by a power spectrum ratio (see Section 6). Image restoration is carried out and medium optical properties are estimated. Both modeled and measured optical properties are taken into account in the framework. The images are restored using PSFs derived from both the modeled and measured optical properties (see Figure 3).

Figure 3: Image taken at 7.5 m depth in Florida. The original (a), the restored image based on measured MTF (b) and the restored image based on modeled MTF (c). Courtesy of Hou et al. [9].
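Equations (13)-(14) translate directly into code. In the sketch below the water parameters (c, b, θ_0), the path length and the angular frequency range are illustrative placeholders rather than the measured optical properties used by Hou et al.

```python
import numpy as np

def decay_transfer_function(phi, c, b, theta0):
    """Wells' decay transfer function D(phi) of (14); phi in cycles/rad."""
    phi = np.asarray(phi, dtype=float)
    return c - b * (1.0 - np.exp(-2.0 * np.pi * theta0 * phi)) / (2.0 * np.pi * theta0 * phi)

def H_medium(phi, r, c, b, theta0):
    """Medium MTF of (13) at range r for angular frequency phi."""
    return np.exp(-decay_transfer_function(phi, c, b, theta0) * r)

# Illustrative values only: coastal-like water (c = 0.2 1/m, b = 0.1 1/m),
# an assumed mean square angle theta0 and a 5 m path length.
phi = np.linspace(0.1, 50.0, 6)
print(H_medium(phi, r=5.0, c=0.2, b=0.1, theta0=0.05))
```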
Trucco and Olmos [13] presented a self-tuning restoration filter based on a simplified version of the Jaffe-McGlamery image formation model. Two assumptions are made in order to design the restoration filter. The first one assumes uniform illumination (direct sunlight in shallow waters) and the second one is to consider only the forward component E_f of the image model as the major degradation source, ignoring backscattering E_b and the direct component E_d. This appears reasonable whenever the concentration of particulate matter generating backscatter in the water column is limited. A further simplification considers the difference of exponentials in the forward scatter model (6) as an experimental constant K (with typical values between 0.2 and 0.9):

K ≈ exp(-G R_c) - exp(-c R_c). (15)

Within these assumptions, from (7), a simple inverse filter in the frequency domain is designed as follows (the parameter B is approximated by c):

G(f, R_c, c, K) ≈ K exp(-c R_c w). (16)

Optimal values of these parameters were estimated automatically for each individual image by optimizing a quality criterion based on a global contrast measure (optimality is defined as achieving minimum blur). Therefore, low-backscatter and shallow-water conditions represent the optimal environment for this technique. The authors assessed the performance of the restoration filter both qualitatively (by visual inspection) and quantitatively. They assessed quantitatively the benefits of the self-tuning filter as a preprocessor for image classification: images were classified as containing or not containing man-made objects [14, 15]. The quantitative tests with a large number of frames from real videos show an important improvement in the classification task of detecting man-made objects on the seafloor. The training videos were acquired in different environments: an instrumented tank, and shallow and turbid water conditions in the sea.
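A bare-bones version of the resulting inverse filter can be written as below. This is only a sketch under assumed parameter values: the cap on the radial frequency is an addition made here to keep the high-frequency amplification bounded, and the automatic tuning of the parameters by the global contrast criterion is not reproduced.

```python
import numpy as np

def forward_scatter_inverse(img, K, c, R_c, w_max=0.5):
    """Sketch of the simplified inverse filter of (15)-(16): the forward-scatter
    transfer function is approximated by K*exp(-c*R_c*w), so restoration divides
    the image spectrum by it. The frequency cap w_max is an assumption added
    here; in the original work the parameters are tuned automatically by
    maximizing a global contrast criterion, which this sketch leaves out."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    w = np.minimum(2 * np.pi * np.sqrt(fx ** 2 + fy ** 2), w_max)
    H_f = K * np.exp(-c * R_c * w)                  # equation (16)
    return np.real(np.fft.ifft2(np.fft.fft2(img) / H_f))

# Example call with illustrative shallow-water values.
restored = forward_scatter_inverse(np.random.rand(64, 64), K=0.5, c=0.2, R_c=2.0)
```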
Liu et al. [16] measured the PSF and MTF of seawater in the laboratory by means of image transmission theory and used Wiener filters to restore the blurred underwater images. The degradation function H(u, v) is measured in a water tank. An experiment is constructed with a slit image and a light source. In a first step, the one-dimensional light intensity distribution of the slit images at different water path lengths is obtained. The one-dimensional PSF of seawater can then be obtained by a deconvolution operation. Then, according to the circular symmetry of the PSF of seawater, the 2-dimensional PSF can be calculated by a mathematical method. In a similar way, MTFs are derived. These measured functions are used for blurred image restoration. The standard Wiener deconvolution process is applied. The transfer function W(u, v) reads

W(u, v) = H*(u, v) / (|H(u, v)|^2 + S_n/S_f), (17)

where S_n and S_f are the power spectra of the noise and the original image, respectively, and H*(u, v) is the conjugate of H(u, v) (the measured result described previously). The noise is regarded as white noise, and S_n is a constant that can be estimated from the blurred images with noise, while S_f is estimated as

S_f(u, v) = [S_g(u, v) - S_n(u, v)] / |H(u, v)|^2, (18)

where S_g is the power spectrum of the blurred image. Then, the spectrum of the restored image is

F(u, v) = G(u, v) H*(u, v) / (|H(u, v)|^2 + S_n/S_f). (19)

A parametric Wiener filter is also used by the authors and both deconvolution methods are compared.
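The Wiener restoration of (17)-(19) translates almost line by line into code. The sketch below uses a synthetic Gaussian OTF and an assumed white-noise level in place of the tank measurements, so it only illustrates the mechanics of the filter.

```python
import numpy as np

def wiener_restore(g, H, S_n, S_f):
    """Wiener deconvolution of (17)-(19): g is the blurred image, H the OTF
    (same shape as the spectrum), S_n and S_f the noise and original-image
    power spectra (scalars or arrays). Returns the restored image."""
    G = np.fft.fft2(g)
    W = np.conj(H) / (np.abs(H) ** 2 + S_n / S_f)    # equation (17)
    return np.real(np.fft.ifft2(G * W))              # equation (19)

# Toy usage: a Gaussian OTF standing in for the measured water response, a
# white-noise power S_n assumed known, and S_f estimated from (18).
rng = np.random.default_rng(1)
f = rng.random((64, 64))
fy, fx = np.fft.fftfreq(64)[:, None], np.fft.fftfreq(64)[None, :]
H = np.exp(-40.0 * (fx ** 2 + fy ** 2))              # assumed OTF, not a measurement
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + 0.01 * rng.standard_normal((64, 64))
S_n = 0.01 ** 2 * 64 * 64                            # white-noise power (assumed)
S_g = np.abs(np.fft.fft2(g)) ** 2
S_f = np.maximum(S_g - S_n, 1e-3) / np.maximum(np.abs(H) ** 2, 1e-6)  # equation (18)
restored = wiener_restore(g, H, S_n, S_f)
```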
Schechner and Karpel [17] exploit the polarization effects in underwater scattering to compensate for visibility degradation. The authors claim that image blur is not the dominant cause of image contrast degradation, and they associate underwater polarization with the prime visibility disturbance that they want to delete (veiling light or backscattered light). The Jaffe-McGlamery image formation model is applied under natural underwater lighting, exploiting the fact that veiling light is partially polarized horizontally [18]. The algorithm is based on a couple of images taken through a polarizer at different orientations. Even when the raw images have very low contrast, their slight differences provide the key for visibility improvement. The method automatically accounts for dependencies on object distance, and estimates a distance map of the scene. A quantitative estimate for the visibility improvement is defined as a logarithmic function of the backscatter component. Additionally, an algorithm to compensate for the strong blue hue is also applied. Experiments conducted in the sea show improvements of scene contrast and color correction, nearly doubling the underwater visibility range. In Figure 4 a raw image and its recovered version are shown.

Figure 4: Underwater scene at the Red Sea at 26 m below the water surface. Left, raw image; right, recovered image. Image courtesy of Schechner and Karpel [17].

Recently, Treibitz and Schechner [19] used a similar polarization-based method for visibility enhancement and distance estimation in scattering media. They studied the formation of images under wide-field (non-scanning) artificial illumination. Based on backscattered light characteristics (empirically obtained), they presented a visibility recovery approach which also yields a rough estimate of the 3D scene structure. The method is simple and requires compact hardware, using active wide-field polarized illumination. Two images of the scene are instantly taken, with different states of a camera-mounted polarizer. The authors used the approach to demonstrate recovery of object signals and significant visibility enhancement in experiments in various sea environments at night. The distance reconstruction is effective in a range of 1-2 m. In Figure 5, an underwater image taken in the Mediterranean sea with two artificial light sources is shown together with the corresponding de-scattered image result [19].

Figure 5: Raw image (a), de-scattered image (b) [19]. From [Link]

4. Image Enhancement and Color Correction

These methods make total abstraction of the image formation process, and no a priori knowledge of the environment is needed (they do not use attenuation and scattering coefficients, for instance). They are usually simpler and faster than the image restoration techniques.

Regarding color correction, as depth increases, colors drop off one by one depending on their wavelength. First of all, the red color disappears at a depth of approximately 3 m. At a depth of 5 m, the orange color is lost. Most of the yellow goes off at a depth of 10 m, and finally green and purple disappear at further depths. The blue color travels the longest in the water due to its shortest wavelength. Underwater images are therefore dominated by blue-green color. Light source variations will also affect the color perception. As a consequence, a strong and non-uniform color cast characterizes typical underwater images.

Bazeille et al. [20, 21] propose an algorithm to pre-process underwater images. It reduces underwater perturbations and improves image quality. It is composed of several successive independent processing steps which correct non-uniform illumination (homomorphic filtering), suppress noise (wavelet denoising), enhance edges (anisotropic filtering) and adjust colors (equalizing RGB channels to suppress the predominant color). The algorithm is automatic and requires no parameter adjustment. The method was used as a preliminary step for edge detection. The robustness of the method was analyzed using gradient magnitude histograms, and the criterion used by Arnold-Bos et al. [22] was also applied. This criterion assumes that a well-contrasted and noise-free image has a gradient magnitude histogram whose distribution is close to exponential, and it attributes a mark from zero to one. In Figure 6 pairs of images are shown before and after Bazeille et al.'s processing [20].

Figure 6: Pairs of images before (a) and after (b) Bazeille et al.'s processing. Image courtesy of Bazeille et al. [20].

Chambah et al. [23] proposed a color correction method based on the ACE model, an unsupervised color equalization algorithm developed by Rizzi et al. [24]. ACE is a perceptual approach inspired by some adaptation mechanisms of the human vision system, in particular lightness constancy and color constancy. ACE was applied to videos taken in an aquatic environment that present a strong and non-uniform color cast due to the depth of the water and the artificial illumination. Images were taken from the tanks of an aquarium. Inner parameters of the ACE algorithm were properly tuned to meet the requirements of image and histogram shape naturalness and to deal with these kinds of aquatic images. In Figure 7 two example original images and their restored ACE versions are shown.

Figure 7: Original images (a), after correction with ACE (b). Image courtesy of Chambah et al. [23].

Iqbal et al. [25] presented an underwater image enhancement method using an integrated color model. They proposed an approach based on slide stretching: first, contrast stretching of the RGB channels is used to equalize the color contrast in the images. Second, saturation and intensity stretching in HSI is applied to increase the true color and solve the problem of lighting. The blue color component in the image is controlled by the saturation and intensity to create the range from pale blue to deep blue. The contrast ratio is therefore controlled by decreasing or increasing its value. In Figure 8 two example images before and after Iqbal et al.'s technique are shown.

Figure 8: Original images (a), images after enhancement using Iqbal et al.'s technique (b). Image courtesy of Iqbal et al. [25].
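A compact sketch of the slide-stretching idea is given below. HSV is used as a stand-in for the HSI model of the original method and the percentile clipping is an assumption, so this is an approximation of the approach rather than the authors' code.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def stretch(channel, low_pct=1, high_pct=99):
    """Linear slide stretching of one channel to the full [0, 1] range,
    clipping the given percentiles (the percentile choice is an assumption)."""
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    return np.clip((channel - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def enhance(rgb):
    """Sketch of slide stretching: stretch the RGB channels to equalize the
    color contrast, then stretch saturation and intensity (HSV value here)."""
    rgb = np.dstack([stretch(rgb[..., k]) for k in range(3)])
    hsv = rgb_to_hsv(rgb)
    hsv[..., 1] = stretch(hsv[..., 1])   # saturation stretching
    hsv[..., 2] = stretch(hsv[..., 2])   # intensity stretching
    return hsv_to_rgb(hsv)

# img must be a float RGB array in [0, 1]; random data stands in for a photo.
img = np.random.rand(64, 64, 3)
out = enhance(img)
```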
Arnold-Bos et al. [22, 26] presented a complete preprocessing framework for underwater images. They investigated the possibility of addressing the whole range of noise present in underwater images by using a combination of deconvolution and enhancement methods. First, a contrast equalization system is proposed to reject backscattering, attenuation and lighting inequalities. If I(i, j) is the original image and I_LP(i, j) its low-pass version, a contrast-equalized version of I is I_eq = I/I_LP. Contrast equalization is followed by histogram clipping and expansion of the image range. The method is relevant because backscattering is a slowly varying spatial function. Backscattering is considered as the first noise addressed in the algorithm, but contrast equalization also corrects the effect of the exponential light attenuation with distance. Remaining noise corresponding to sensor noise, floating particles and miscellaneous quantification errors is suppressed using a generic self-tuning wavelet-based algorithm. The use of the adaptive smoothing filter significantly improves edge detection in the images. Results on simulated and real data are presented.
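The contrast equalization step (I_eq = I/I_LP followed by histogram clipping and range expansion) is simple to sketch. The Gaussian low-pass and the clipping percentile below are assumptions, and the subsequent wavelet-based denoising of the framework is not included.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_equalize(I, sigma=30.0, clip_pct=1.0):
    """Sketch of contrast equalization: divide the image by a low-pass version
    of itself (I_eq = I / I_LP), then clip the histogram tails and expand to
    the full range. The Gaussian low-pass scale and the clipping percentile
    are choices made here, not those of the original framework."""
    I = I.astype(float)
    I_LP = gaussian_filter(I, sigma) + 1e-6          # low-pass version of the image
    I_eq = I / I_LP                                   # rejects slowly varying backscatter
    lo, hi = np.percentile(I_eq, [clip_pct, 100 - clip_pct])
    return np.clip((I_eq - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```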
The color recovery is also analyzed by Torres-Mendez and Dudek [27], but from a different perspective: it is formulated as an energy minimization problem using learned constraints. The idea on which the approach is based is that an image can be modeled as a sample function of a stochastic process known as a Markov Random Field. The color correction is considered as a task of assigning a color value to each pixel of the input image that best describes its surrounding structure using the training image patches. This model uses multi-scale representations of the color-corrected and color-depleted (bluish) images to construct a probabilistic algorithm that improves the color of underwater images. Experimental results on a variety of underwater scenes are shown.

Ahlen et al. [28] apply underwater hyperspectral data for color correction purposes. They develop a mathematical stability model which gives a value range for wavelengths that should be used to compute the attenuation coefficient values that are as stable as possible in terms of variation with depth. Their main goal is to monitor coral reefs and marine habitats. Spectrometer measurements of a colored plate at various depths are performed. The hyperspectral data is then color corrected with a formula derived from Beer's law:

I(z') = I(z) exp[c(z) z - c(z') z'], (20)

where I(z) is the pixel intensity in the image for depth z and c(z) is the corresponding attenuation coefficient calculated from spectral data. In this way, they obtain images as if they were taken at a much shallower depth than in reality. All hyperspectral images are "lifted up" to a depth of 1.8 m, where almost all wavelengths are still present (they have not been absorbed by the water column). The data is finally brought back into the original RGB space.
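Equation (20) amounts to a per-band exponential rescaling. The sketch below applies it to one spectral band with placeholder attenuation coefficients; in the original work the per-band coefficients come from the spectrometer measurements.

```python
import numpy as np

def lift_to_depth(I_z, c_z, z, c_zp, z_prime=1.8):
    """Sketch of the correction in (20): rescale pixel intensities measured at
    depth z so that they appear as if acquired at the shallower depth z'.
    c_z and c_zp are the wavelength-dependent attenuation coefficients at the
    two depths; the values used below are placeholders."""
    return I_z * np.exp(c_z * z - c_zp * z_prime)

# One spectral band of a toy image, taken at 6 m and "lifted up" to 1.8 m.
band = np.random.rand(64, 64)
corrected = lift_to_depth(band, c_z=0.30, z=6.0, c_zp=0.25)
```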
Another approach to improve color rendition is proposed by Petit et al. [29]. The method is based on light attenuation inversion after processing a color space contraction using quaternions. Applied to the white vector (1, 1, 1) in the RGB space, the attenuation gives a hue vector H characterizing the water color:

H = (exp{-c_R z}, exp{-c_G z}, exp{-c_B z}), (21)

where c_R, c_G, and c_B are the attenuation coefficients for the red, green and blue wavelengths, respectively. Using this reference axis, geometrical transformations in the color space are computed with quaternions. Pixels of water areas in the processed images are moved to gray or to colors with a low saturation, whereas the objects remain fully colored. In this way, object contrasts are enhanced and the bluish aspect of the images is removed. Two example images before and after correction by Petit et al.'s algorithm are shown in Figure 9.

Figure 9: Original image (a), corrected by Petit et al.'s algorithm (b). Image courtesy of Petit et al. [29].

5. Lighting Problems

In this section we summarize the articles that have been specifically focused on solving lighting problems. Even if this aspect was already taken into account in some of the methods presented in the previous sections, we review here the works that have addressed this kind of problem in particular, proposing different lighting correction strategies.

Garcia et al. [30] analyzed how to solve the lighting problems in underwater imaging and reviewed different techniques. The starting point is the illumination-reflectance model, where the image f(x, y) sensed by the camera is considered as a product of the illumination i(x, y), the reflectance function r(x, y) and a gain factor g(x, y), plus an offset term o(x, y):

f(x, y) = g(x, y) · i(x, y) · r(x, y) + o(x, y). (22)

The multiplicative factor c_m(x, y) = g(x, y) · i(x, y), due to light sources and camera sensitivity, can be modeled as a smooth function (the offset term is ignored). In order to model the non-uniform illumination, a Gaussian-smoothed version of the image is proposed. The smoothed image is intended to be an estimate of how much the illumination field (and camera sensitivity) affects every pixel. The acquired image is corrected by a point-by-point division by the smoothed image f_s, giving rise to an estimate of the ideal image r_ideal:

r_ideal = [f(x, y) / f_s(x, y)] δ, (23)

where δ is a normalization constant. Next, the contrast of the resulting image is emphasized, giving rise to an equalized version of r.
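A minimal sketch of the correction in (23) follows, assuming a Gaussian low-pass as the smoothed image and the image mean as the normalization constant δ (both choices are assumptions made here, not specifications from [30]).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(f, sigma=40.0):
    """Sketch of the point-by-point correction in (23): divide the acquired
    image by a Gaussian-smoothed version of itself, which stands in for the
    multiplicative illumination/sensitivity field c_m(x, y)."""
    f = f.astype(float)
    f_s = gaussian_filter(f, sigma) + 1e-6        # estimate of the illumination field
    delta = f.mean()                              # normalization constant (assumed)
    return f / f_s * delta                        # equation (23)
```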
Some authors compensate for the effects of non-uniform lighting by applying local equalization to the images [31, 32]. The non-uniformity of the lighting demands a special treatment of the different areas of the image, depending on the amount of light they receive. The strategy consists in defining an n×n neighborhood, computing the histogram of this area and applying an equalization function, but modifying only the central point of the neighborhood [33]. A similar strategy is used in Zuidervel [34].

An alternative model consists of applying homomorphic filtering [30]. This approach assumes that the illumination factor varies smoothly through the field of view, generating low frequencies in the Fourier transform of the image (the offset term is ignored). Taking the logarithm of (22), the multiplicative effect is converted into an additive one:

ln f(x, y) = ln c_m(x, y) + ln r(x, y). (24)

Taking the Fourier transform of (24) we obtain

F(u, v) = C_m(u, v) + R(u, v), (25)

where F(u, v), C_m(u, v), and R(u, v) are the Fourier transforms of ln f(x, y), ln c_m(x, y), and ln r(x, y), respectively. Low frequencies can be suppressed by multiplying these components by a high-pass homomorphic filter H given by

H(u, v) = [1 + exp(-s(√(u^2 + v^2) - w_0))]^{-1} + ρ, (26)

where w_0 is the cutoff frequency, s is a multiplicative factor and ρ is an offset term. This filter not only attenuates non-uniform illumination but also enhances the high frequencies, sharpening the edges.
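Homomorphic filtering per (24)-(26) can be sketched as follows; the filter parameters are illustrative values chosen for the example, and, as in the text, the offset term of (22) is ignored.

```python
import numpy as np

def homomorphic_filter(f, w0=0.05, s=50.0, rho=0.5):
    """Sketch of homomorphic filtering per (24)-(26): take the log of a
    non-negative grayscale image, attenuate low frequencies with the high-pass
    filter of (26) and exponentiate back. w0, s and rho are example values."""
    f = f.astype(float) + 1e-6
    F = np.fft.fft2(np.log(f))                               # equation (24) in frequency domain
    ny, nx = f.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    H = 1.0 / (1.0 + np.exp(-s * (radius - w0))) + rho       # equation (26)
    return np.exp(np.real(np.fft.ifft2(F * H)))
```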
Rzhanov et al. [35] disregard the multiplicative constant c_m, considering the lighting of the scene as an additive factor which should be subtracted from the original image:

r(x, y) = f(x, y) - Φ(x, y) + δ, (27)

where Φ(x, y) is a two-dimensional polynomial spline and δ is a normalization constant.

Garcia et al. [30] tested and compared the different lighting-correction strategies for two typical underwater situations. The first one considers images acquired in shallow waters at sundown (simulating deep ocean). The vehicle carries its own light, producing a bright spot in the center of the image. The second sequence of images was acquired in shallow waters on a sunny day. The evaluation methodology for the comparisons is qualitative. The best results have been obtained by the homomorphic filtering and by the point-by-point correction by the smoothed image. The authors emphasize that both methods consider the illumination field as multiplicative and not subtractive.

6. Quality Assessment

In the last years many different methods for image quality assessment have been proposed and analyzed with the goal of developing a quality metric that correlates with perceived quality measurements (for a detailed review see [36]). Peak Signal to Noise Ratio and Mean Squared Error are the most widely used objective image quality/distortion metrics. In the last decades, however, a great effort has been made to develop new objective image quality methods which incorporate perceptual quality measures by considering human visual system characteristics. Wang et al. [37] propose a Structural Similarity Index that does not treat the image degradation as an error measurement but as a structural distortion measurement.

The objective image quality metrics are classified into three groups: full-reference (there exists an original image with which the distorted image is to be compared), no-reference or "blind" quality assessment, and reduced-reference quality assessment (the reference image is only partially available, in the form of a set of extracted features).

In the present case of underwater image processing, no original image is available for comparison, and therefore no-reference metrics are necessary. Within the above cited methods for enhancement and restoration, many of the authors use subjective quality measurements to evaluate the performance of their methods. In what follows we focus on the quantitative metrics used by some of the authors to evaluate the algorithm performance and image quality in the specific case of underwater images.

Besides visual comparison, Hou and Weidemann [38] also propose an objective quality metric for the scattering-blurred typical underwater images. The authors measure the image quality by its sharpness using the gradient or slope of edges. They use wavelet transforms to remove the effect of scattering when locating edges and further apply the transformed results in constraining the perceptual metric. Images are first decomposed by a wavelet transform to remove random and medium noise. Sharpness of the edges is determined by linear regression, obtaining the slope angle between grayscale values of edge pixels versus location. The overall sharpness of the image is the average of the measured grayscale angles weighted by the ratio of the power of the high-frequency components of the image to the total power of the image (WGSA metric). The metric has been used in their automated image restoration program and the results demonstrate consistency for different optical conditions and attenuation ranges.

Focusing on underwater video processing algorithms, Arredondo and Lebart [39] propose a methodology to quantitatively assess the robustness and behavior of algorithms in the face of underwater noise. The principle is to degrade test images with simulated underwater perturbations, and the focus is to isolate and assess independently the effects of the different perturbations. These perturbations are simulated with varying degrees of severity. The Jaffe-McGlamery model is used to simulate blur and unequal illumination. Different levels of blurring are simulated using the forward-scattered component of images taken at different distances from the scene: R_c in (6) is increased from R1 to R2 meters from the scene at intervals ΔR. The non-uniform lighting is simulated by placing the camera at distances between d1 and d2 meters, at intervals of Δd. In order to isolate the effect of non-uniform lighting, only the direct component is taken into account. The lack of contrast is simulated by histogram manipulation. As a specific application, different optical flow algorithms for underwater conditions are compared. A well-known ground-truth synthetic sequence is used
for the experiments. The true motion of the sequence is known and it is possible to measure quantitatively the effect of the degradations on the optical flow estimates. In [39] the different available methods are compared. The angular deviation between the estimated velocity and the correct one is measured. An attenuation coefficient typical of the deep ocean is used. It is shown that the angular error increases linearly with the Gaussian noise for all the methods compared.

In order to assess the quality of their adaptive smoothing method for underwater image denoising, Arnold-Bos et al. [26] proposed a simple criterion based on a general result by Pratt [40]: for most well-contrasted and noise-free images, the distribution of the gradient magnitude histogram is closely exponential, except for a small peak at low gradients corresponding to homogeneous zones. They define a robustness index between 0 and 1 (it is linked to the variance of the linear regression of the gradient magnitude histogram) that measures the closeness of the histogram to an exponential distribution. The same index was also used by Bazeille et al. [20] to evaluate the performance of their algorithm.
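A score in the spirit of this criterion can be sketched by fitting a line to the log of the gradient-magnitude histogram and using the goodness of fit; the exact index of [26] is not reproduced here, so the definition below (an R²-based score mapped to [0, 1]) is an assumption.

```python
import numpy as np

def robustness_index(img, nbins=100):
    """Sketch of a gradient-histogram criterion: the gradient-magnitude
    histogram of a well-contrasted, noise-free image is close to exponential,
    so its log-counts should fall on a line. Returns the R^2 of that linear
    fit clipped to [0, 1]; this is a stand-in for the index used in [26]."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy).ravel()
    counts, edges = np.histogram(mag, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    x, y = centers[keep], np.log(counts[keep])
    slope, intercept = np.polyfit(x, y, 1)            # linear fit of the log-histogram
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2) + 1e-12
    return max(0.0, 1.0 - ss_res / ss_tot)
```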

In Table 1 we summarize the articles reviewed above, indicating the model assumptions and imaging conditions for which they have been developed and tested, as well as the image quality assessment method used to evaluate the corresponding results.

Table 1: Brief description of the algorithms.

Jaffe [2] 1990. Model's characteristics and assumptions: computer modeling; image as linear superposition of direct, forward and scattered components; essentially for artificially lit scenes. Experiments and data set: simulation and utility of different imaging and lighting configurations are evaluated. Image quality evaluation: visual inspection.

Image restoration methods:

Hou et al. [9, 10] 2007. Model's characteristics and assumptions: measurement of the PSF of water and automated restoration scheme; natural and artificial lighting; blurring caused by strong scattering due to water and floating particles. Experiments and data set: two water types (clear and turbid), morning and afternoon; target between 3.7 and 7.1 m. Image quality evaluation: visual inspection; image quality metric: Weighted Gray Scale Angles (WGSA).

Trucco and Olmos [13] 2006. Model's characteristics and assumptions: self-tuning restoration filter; uniform illumination; only forward scatter is considered; limited backscatter. Experiments and data set: ocean images in shallow water, direct sunlight illumination; some images with high backscatter. Image quality evaluation: visual inspection; quantitative tests on frames from real mission videos; improvement to classification tasks for subsea operations (detecting man-made objects on the seafloor).

Liu et al. [16] 2001. Model's characteristics and assumptions: measurement of the PSF of water and image restoration; standard and parametric Wiener filter deconvolution. Experiments and data set: measurements in a controlled environment; set-up: light source, slit images at 1–3 m in a water tank; restoration of images taken in turbid water. Image quality evaluation: visual inspection.

Schechner and Karpel [17] 2005. Model's characteristics and assumptions: polarization associated with the prime visibility disturbance to be deleted (backscatter); natural lighting. Experiments and data set: polarizer used to analyze the scene; experiments in the sea (scene 26 m deep). Image quality evaluation: visual inspection; quantitative estimate for the visibility improvement; estimation of the distance map of the scene.

Treibitz and Schechner [19] 2009. Model's characteristics and assumptions: polarization-based method for visibility enhancement and distance estimation in scattering media; artificial illumination. Experiments and data set: experiments in real underwater scenes: Mediterranean sea, Red Sea and lake of Galilee. Image quality evaluation: visual inspection; quantitative estimate for the visibility improvement.

Image enhancement and color correction methods:

Bazeille et al. [20] 2006. Model's characteristics and assumptions: automatic pre-processing; natural and artificial illumination. Experiments and data set: deep marine habitats; scenes with man-made objects on the sea floor. Image quality evaluation: visual inspection; quantitative index: closeness of histogram to exponential distribution and tests for object recognition on the sea floor.

Chambah et al. [23] 2004. Model's characteristics and assumptions: underwater color constancy; artificial lighting. Experiments and data set: images taken in aquariums; tests on fish segmentation and fish recognition. Image quality evaluation: visual inspection.

Iqbal et al. [25] 2007. Model's characteristics and assumptions: enhancement based on slide stretching; natural and artificial illumination. Experiments and data set: marine habitats. Image quality evaluation: visual inspection and histogram analysis.

Arnold-Bos et al. [22, 26] 2005. Model's characteristics and assumptions: automatic model-free denoising; backscatter is considered as the first noise; adaptive smoothing filter; natural and artificial lighting. Experiments and data set: marine habitats with unknown turbidity characteristics. Image quality evaluation: visual inspection; quantitative criterion based on closeness of histogram to exponential distribution.

Torres-Mendez and Dudek [27] 2005. Model's characteristics and assumptions: color recovery using an energy minimization formulation; natural and artificial lighting. Experiments and data set: training data set: marine habitats (ground truth is known) and frames from videos in the deep ocean (no ground truth available). Image quality evaluation: residual error is computed between ground truth and corrected images.

Ahlen et al. [28] 2007. Model's characteristics and assumptions: hyperspectral data for color correction; natural illumination. Experiments and data set: test image: colored plate at 6 m depth in the sea; coral reefs and marine habitats. Image quality evaluation: visual inspection.

Petit et al. [29] 2009. Model's characteristics and assumptions: enhancement method: color space contraction using quaternions; natural and artificial lighting. Experiments and data set: marine habitats at both shallow and deep waters. Image quality evaluation: visual inspection.

Garcia et al. [30] 2002. Model's characteristics and assumptions: compensating for lighting problems: non-uniform illumination. Experiments and data set: shallow waters on a sunny day; shallow waters at sundown (simulating deep ocean). Image quality evaluation: visual inspection.

Arredondo and Lebart [39] 2005. Model's characteristics and assumptions: video processing algorithms; simulations of perturbations; natural and artificial lighting. Experiments and data set: test images are degraded with simulated perturbations; simulations in shallow (1–7 m) and deep waters. Image quality evaluation: visual inspection; quantitative evaluation: the mean angular error in motion estimation is measured for different methods as a function of Gaussian noise.
To make a quantitative comparison of the above cited methods, judging which of them gives the best or worst results, is beyond the scope of this article. In fact, in order to do such a quantitative comparison of results, a common database should be available in order to test the corresponding algorithms according to specific criteria. To our knowledge, no such underwater database exists at present and, therefore, building this database could be one of the future research lines from which the underwater community would certainly benefit. However, we have pointed out how each of the algorithms has been evaluated by its own authors: subjectively (by visual inspection) or objectively (by the implementation of an objective image quality measure). The majority of the algorithms reviewed here have been evaluated using subjective visual inspection of their results.

7. Conclusions

The difficulty associated with obtaining visibility of objects at long or short distances in underwater scenes presents a challenge to the image processing community. Even if numerous approaches for image enhancement are available, they are mainly limited to ordinary images and few approaches have been specifically developed for underwater images. In this article we have reviewed some of them with the intention of bringing the information together for a better comprehension and comparison of the methods. We have summarized the available methods for image restoration and image enhancement, focusing on the conditions for which each of the algorithms has been originally developed. We have also analyzed the methodology used to evaluate the algorithms' performance, highlighting the works where a quantitative quality metric has been used.

As pointed out by our analysis, to boost underwater image processing, a common suitable database of test images for different imaging conditions, together with standard criteria for qualitative and/or quantitative assessment of the results, is still required.

Nowadays, leading advancements in optical imaging technology [41, 42] and the use of sophisticated sensing techniques are rapidly increasing the ability to image objects in the sea. Emerging underwater imaging techniques and technologies make it necessary to adapt and extend the above cited methods to, for example, handle data from multiple sources that can extract 3-dimensional scene information. On the other hand, studying the vision systems of underwater animals (their physical optics, photoreceptors and neurophysiological mechanisms) will certainly give us new insights into the information processing of underwater images.

Acknowledgments

The authors acknowledge Gianluigi Ciocca for a critical reading of the manuscript and the reviewers for their critical observations. We also gratefully acknowledge F. Petit, W. Hou, K. Iqbal, Y. Schechner, A. Rizzi, M. Chambah and S. Bazeille, who kindly made their figures available upon our request.

References

[1] B. McGlamery, "A computer model for underwater camera system," in Ocean Optics VI, S. Q. Duntley, Ed., vol. 208 of Proceedings of SPIE, pp. 221–231, 1979.
[2] J. S. Jaffe, "Computer modeling and the design of optimal underwater imaging systems," IEEE Journal of Oceanic Engineering, vol. 15, no. 2, pp. 101–111, 1990.
[3] C. Funk, S. Bryant, and P. Heckman, "Handbook of underwater imaging system design," Tech. Rep. TP303, Naval Undersea Center, San Diego, Calif, USA, 1972.
[4] T. H. Dixon, T. J. Pivirotto, R. F. Chapman, and R. C. Tyce, "A range-gated laser system for ocean floor imaging," Marine Technology Society Journal, vol. 17, 1983.
[5] J. McLean and K. Voss, "Point spread functions in ocean water: comparison between theory and experiment," Applied Optics, vol. 30, pp. 2027–2030, 1991.
[6] K. Voss, "Simple empirical model of the oceanic point spread function," Applied Optics, vol. 30, pp. 2647–2651, 1991.
[7] J. Jaffe, K. Moore, J. McLean, and M. Strand, "Underwater optical imaging: status and prospects," Oceanography, vol. 14, pp. 66–76, 2001.
[8] J. Mertens and F. Replogle, "Use of point spread and beam spread functions for analysis of imaging systems in water," Journal of the Optical Society of America, vol. 67, pp. 1105–1117, 1977.
[9] W. Hou, D. J. Gray, A. D. Weidemann, G. R. Fournier, and J. L. Forand, "Automated underwater image restoration and retrieval of related optical properties," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '07), pp. 1889–1892, 2007.
[10] W. Hou, A. D. Weidemann, D. J. Gray, and G. R. Fournier, "Imagery-derived modulation transfer function and its applications for underwater imaging," in Applications of Digital Image Processing, vol. 6696 of Proceedings of SPIE, San Diego, Calif, USA, August 2007.
[11] W. Hou, D. J. Gray, A. D. Weidemann, and R. A. Arnone, "Comparison and validation of point spread models for imaging in natural waters," Optics Express, vol. 16, no. 13, pp. 9958–9965, 2008.
[12] W. Wells, Theory of Small Angle Scattering, North Atlantic Treaty Organization, 1973.
[13] E. Trucco and A. Olmos, "Self-tuning underwater image restoration," IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 511–519, 2006.
[14] A. Olmos and E. Trucco, "Detecting man-made objects in unconstrained subsea videos," in Proceedings of the British Machine Vision Conference, pp. 517–526, 2002.
[15] A. Olmos, E. Trucco, and D. Lane, "Automatic man-made object detection with intensity cameras," in Proceedings of the IEEE Conference Oceans Record, vol. 3, pp. 1555–1561, 2002.
[16] Z. Liu, Y. Yu, K. Zhang, and H. Huang, "Underwater image transmission and blurred image restoration," Optical Engineering, vol. 40, no. 6, pp. 1125–1131, 2001.
[17] Y. Y. Schechner and N. Karpel, "Recovery of underwater visibility and structure by polarization analysis," IEEE Journal of Oceanic Engineering, vol. 30, no. 3, pp. 570–587, 2005.
[18] G. Koennen, Polarized Light in Nature, Cambridge University Press, Cambridge, UK, 1985.
[19] T. Treibitz and Y. Y. Schechner, "Active polarization descattering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 3, pp. 385–399, 2009.
[20] S. Bazeille, I. Quidu, L. Jaulin, and J. P. Malkasse, "Automatic underwater image pre-processing," in Proceedings of the Caracterisation du Milieu Marin (CMM '06), 2006.
[21] S. Bazeille, Vision sous-marine monoculaire pour la reconnaissance d'objets, Ph.D. thesis, Université de Bretagne Occidentale, 2008.
[22] A. Arnold-Bos, J. P. Malkasse, and G. Kerven, "A pre-processing framework for automatic underwater images denoising," in Proceedings of the European Conference on Propagation and Systems, Brest, France, March 2005.
[23] M. Chambah, D. Semani, A. Renouf, P. Courtellemont, and A. Rizzi, "Underwater color constancy: enhancement of automatic live fish recognition," in Color Imaging IX: Processing, Hardcopy, and Applications, vol. 5293 of Proceedings of SPIE, pp. 157–168, San Jose, Calif, USA, January 2004.
[24] A. Rizzi, C. Gatta, and D. Marini, "A new algorithm for unsupervised global and local color correction," Pattern Recognition Letters, vol. 24, pp. 1663–1677, 2003.
[25] K. Iqbal, R. Abdul Salam, A. Osman, and A. Zawawi Talib, "Underwater image enhancement using an integrated color model," International Journal of Computer Science, vol. 34, p. 2, 2007.
[26] A. Arnold-Bos, J.-P. Malkasset, and G. Kervern, "Towards a model-free denoising of underwater optical images," in Proceedings of the IEEE Europe Oceans Conference, vol. 1, pp. 527–532, Brest, France, June 2005.
[27] L. A. Torres-Mendez and G. Dudek, "Color correction of underwater images for aquatic robot inspection," in Proceedings of the 5th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR '05), A. Rangarajan, B. C. Vemuri, and A. L. Yuille, Eds., vol. 3757 of Lecture Notes in Computer Science, pp. 60–73, Springer, St. Augustine, Fla, USA, November 2005.
[28] J. Ahlen, D. Sundgren, and E. Bengtsson, "Application of underwater hyperspectral data for color correction purposes," Pattern Recognition and Image Analysis, vol. 17, no. 1, pp. 170–173, 2007.
[29] F. Petit, A.-S. Capelle-Laizé, and P. Carré, "Underwater image enhancement by attenuation inversion with quaternions," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), pp. 1177–1180, Taiwan, 2009.
[30] R. Garcia, T. Nicosevici, and X. Cufi, "On the way to solve lighting problems in underwater imaging," in Proceedings of the IEEE Oceans Conference Record, vol. 2, pp. 1018–1024, 2002.
[31] H. Singh, J. Howland, D. Yoerger, and L. Whitcomb, "Quantitative photomosaicing of underwater imaging," in Proceedings of the IEEE Oceans Conference, vol. 1, pp. 263–266, 1998.
[32] R. Eustice, H. Singh, and J. Howland, "Image registration underwater for fluid flow measurements and mosaicking," in Proceedings of the IEEE Oceans Conference Record, vol. 3, pp. 1529–1534, 2000.
[33] S. M. Pizer, E. P. Amburn, J. D. Austin, et al., "Adaptive histogram equalization and its variations," Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, 1987.
[34] K. Zuidervel, "Contrast limited adaptive histogram equalization," in Graphics Gems IV, P. Heckbert, Ed., Academic Press, 1994.
[35] Y. Rzhanov, L. M. Linnett, and R. Forbes, "Underwater video mosaicing for seabed mapping," in Proceedings of IEEE International Conference on Image Processing, vol. 1, pp. 224–227, 2000.
[36] Z. Wang and A. Bovik, Modern Image Quality Assessment, Morgan & Claypool, 2006.
[37] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[38] W. Hou and A. D. Weidemann, "Objectively assessing underwater image quality for the purpose of automated restoration," in Visual Information Processing XVI, vol. 6575 of Proceedings of SPIE, Orlando, Fla, USA, April 2007.
[39] M. Arredondo and K. Lebart, "A methodology for the systematic assessment of underwater video processing algorithms," in Proceedings of the IEEE Europe Oceans Conference, vol. 1, pp. 362–367, 2005.
[40] W. Pratt, Digital Image Processing, John Wiley & Sons, New York, NY, USA, 1991.
[41] D. M. Kocak and F. M. Caimi, "The current art of underwater imaging—with a glimpse of the past and vision of the future," Marine Technology Society Journal, vol. 39, no. 3, pp. 5–26, 2005.
[42] D. M. Kocak, F. R. Dalgleish, F. M. Caimi, and Y. Y. Schechner, "A focus on recent developments and trends in underwater imaging," Marine Technology Society Journal, vol. 42, no. 1, pp. 52–67, 2008.