FPGA-Based Implementation of Iris Recognition Systems
By
Supervised By
2012
Acknowledgments
Abstract
Iris recognition is a touch-less, automated, real-time biometric approach to user authentication. Pattern recognition implementations suffer from high cost, long development times, and computational intensity, and general-purpose systems are slow and not portable. Therefore, an FPGA-based system prototype was implemented using the VHDL language.
Contents
Acknowledgments
Abstract
Contents
List of Tables
List of Figures
List of Abbreviations
Chapter 1: Introduction
1.1 Introduction
1.1.1 Problem Statements
1.1.2 Current Research
1.2 Work Aims
1.3 Work Organization
List of Tables
List of Figures

Figure (2.1) Examples of biometric characteristics
Figure (2.2) Components of a biometric system
Figure (2.3) Biometric system error rates
Figure (2.4) Receiver operating characteristic (ROC)
Figure (3.1) Anatomy of the human eye
Figure (3.2) The human iris, front-on view
Figure (3.3) Anatomy of the iris visible in an optical image
Figure (4.1) Example iris images in CASIA-IrisV1
Figure (4.2) Example iris images in ICE-2006
Figure (4.3) Example iris images in MMU 1
Figure (4.4) Example iris images in UBIRIS
Figure (5.1) Stages of the iris recognition algorithm
Figure (5.2) Block diagram of our proposed scheme
Figure (5.3) Pupil boundary detection steps
Figure (5.4) Iris localization steps
Figure (5.5) Result of the proposed segmentation algorithm
Figure (5.6) Implementation of the unwrapping step
Figure (5.7) Unwrapping and normalization
Figure (5.8) Sample results of the unwrapping and normalization implementation
Figure (6.1) Enhanced normalized iris template with histogram
Figure (6.2) 1D log-Gabor filter and encoding idea
Figure (6.3) Iris code generation
Figure (6.4) Encoded iris texture after DCT transform
Figure (6.5) False accept and false reject rates for two distributions with a separation Hamming distance of 0.35
Figure (6.6) Probability distribution curves for matching and nearest non-matching Hamming distances of the 1D log-Gabor method
Figure (6.7) Probability distribution curves for matching and nearest non-matching Hamming distances of the DCT method
Figure (6.11) FAR and FRR versus Hamming distance for the DCT approach
Figure (6.12) ROC of both 1D log-Gabor and DCT approaches
Figure (6.13) FRR versus Hamming distance for both 1D log-Gabor and DCT approaches
List of Abbreviations
ROM Read-Only Memory
FPGA Field Programmable Gate Array
PCI Peripheral Component Interconnect bus
ASIC Application-Specific Integrated Circuit
DSP Digital Signal Processing
CHT Circular Hough Transform
DCT Discrete Cosine Transform
HD Hamming Distance
IP Intellectual Property
LED Light-Emitting Diode
LCD Liquid Crystal Display
FDCT Fast Discrete Cosine Transform
ATM Automated Teller Machine
FNMR False Non-Match Rate
FMR False Match Rate
PIN Personal Identification Number
ROC Receiver Operating Characteristic
FTC Failure to Capture
FTE Failure to Enroll
NIR Near-Infrared illumination
CASIA Chinese Academy of Sciences Institute of Automation
ICE Iris Challenge Evaluations
LEI Lions Eye Institute
MMU Multimedia University
UBIRIS University of Beira Interior Iris Image Database
PLD Programmable Logic Device
GPP General-Purpose Processor
PLA Programmable Logic Array
Chapter 1
Introduction
1.1 Introduction
Within the last decade, governments and organizations around the world
have invested heavily in biometric authentication for increased security at
critical access points, not only to determine who accesses a system and/or
service, but also to determine which privileges should be provided to each user.
For achieving such identification, biometrics technology is emerging as a
technology that provides a high level of security, as well as being convenient
and comfortable for the citizen [3]. For example, the United Arab Emirates
employs biometric systems to regulate the people traffic across their borders.
Subsequently, several biometric systems have attracted much attention, such
as facial recognition and iris recognition [4]; among these, iris recognition is
particularly attractive.
Prototypes can help unveil design bugs that might stay hidden in the
simulation stage, because they allow exploring the behavior of a "real" product.
In particular, FPGA prototypes are suited to exploring the behavior of hardware
components. FPGA prototypes allow designers to estimate parameters that
are typical for design portions to be implemented in an Application Specific
Integrated Circuit (ASIC) or FPGA, like real-time algorithms with extensive,
repetitive multiplications. One such parameter is the area consumption of the
component, which is strongly influenced, besides design complexity, by the
utilization of FPGA-specific hard macros, like multipliers, block RAMs or
DSP slices. Even timing issues, such as long critical paths, can be analyzed, because
they will not be much different in the final product. Another reason for building
a prototype is to convince potential customers of the capabilities of the product,
which might be far from completion. However, a drawback of prototypes is
that for today's highly complex systems their implementation is costly and
time consuming [10].
The properties of the human eye iris are stable throughout the life of an
individual, and therefore the iris is a suitable biometric modality. The biometric
properties of every iris are unique [18].
The iris identification using analysis of the iris texture has attracted a lot
of attention, and researchers have presented a variety of approaches. Daugman
[19] has presented the most promising 2-D Gabor filter-based approach for the
iris identification system. He used an Integro-differential operator to find the
pupillary boundary and the limbus boundary as circles. Then, Rubber Sheet
Model is used to normalize the iris. Hamming Distance (HD) was his classifier
operator to match the templates. Daugman's overall system has excellent
performance and accuracy. It uses a binary representation for the iris code,
which speeds up matching through the HD, eases the handling of iris rotation,
and allows a match to be interpreted as the result of a statistical test of
independence. On the other hand, the system is iterative and computationally
expensive. In addition, evaluation of iris image quality reduces to the
estimation of a single factor or a pair of factors such as defocus blur, motion
blur, and occlusion.
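The fractional Hamming distance at the heart of Daugman's matching operator can be sketched as follows. This is a minimal NumPy illustration with toy 8-bit codes and our own function name; real iris codes are typically 2048 bits, compared together with occlusion masks:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Only bits where both masks mark valid (non-occluded) iris texture
    are compared, following Daugman's masked-HD formulation.
    """
    valid = mask_a & mask_b
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits to compare")
    disagree = np.count_nonzero((code_a ^ code_b) & valid)
    return disagree / n_valid

# Toy 8-bit codes with fully valid masks (illustrative only).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
b = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=bool)
m = np.ones(8, dtype=bool)
print(hamming_distance(a, b, m, m))  # 2 disagreements / 8 bits = 0.25
```

A distance near 0 indicates the same iris; independent irises cluster around 0.5, which is what makes the statistical-independence interpretation possible.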
will be the method to localize the iris. The localized iris will be normalized by
Daugman's Rubber Sheet Model. In order to enhance the contrast, histogram
equalization will be used. Coding methods based on the 1D log-Gabor
transform and the Discrete Cosine Transform (DCT) will be used to
extract the discriminating features. Finally, the Hamming Distance (HD) operator
is used in the template matching process. In this part we will compare the
performance of the system using 1D log-Gabor as the feature
extraction algorithm against the DCT algorithm.
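As an illustration of the 1D log-Gabor encoding compared here, each row of the normalized iris can be filtered in the frequency domain and the phase of the response quantized to two bits per sample. The filter shape is standard, but the parameter values and function names below are our own illustrative choices, not the thesis's tuned settings:

```python
import numpy as np

def log_gabor_1d(n, wavelength=18, sigma_on_f=0.5):
    """Frequency response of a 1D log-Gabor filter over n samples.

    wavelength sets the centre frequency f0 = 1/wavelength; sigma_on_f
    is the ratio sigma/f0 that controls the bandwidth.
    """
    f = np.arange(n) / n          # normalised frequencies 0 .. (n-1)/n
    f0 = 1.0 / wavelength
    g = np.zeros(n)
    nz = f > 0                    # log-Gabor has no DC component
    g[nz] = np.exp(-(np.log(f[nz] / f0) ** 2) /
                   (2 * np.log(sigma_on_f) ** 2))
    return g

def encode_row(row, **kw):
    """Filter one normalised-iris row and quantise phase to 2 bits/sample."""
    spec = np.fft.fft(row) * log_gabor_1d(len(row), **kw)
    resp = np.fft.ifft(spec)
    return np.real(resp) > 0, np.imag(resp) > 0

bits_re, bits_im = encode_row(np.random.default_rng(0).random(64))
print(len(bits_re), len(bits_im))  # 64 64
```

Concatenating the two bit planes over all rows yields the binary iris code that the Hamming-distance matcher consumes.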
FPGAs are fully customizable and the designer can prototype, simulate,
and implement a parallel logic function without the process of having a new
integrated circuit manufactured from scratch. FPGAs are commonly
There are several important reasons which have motivated the current
research using this method: (i) the iris has features that make this
modality appropriate for recognition purposes, as will be discussed later in
chapters 2 and 3; (ii) this modality has shown in tests the robustness of its
recognition algorithms, while at the same time some of the algorithms involved
are relatively straightforward [5].
Chapter 2
Biometric Security Systems
2.1 Introduction
passwords, which were not unique. However, passwords can be forgotten, and
identification cards can be lost or stolen.
The definition of biometrics contains two key words: “automated” and “person”.
The word “automated” differentiates biometrics from the larger field of human
identification science. Biometric authentication techniques are done completely
by machine, generally (but not always) a digital computer [36]. The second key
word is “person”. Statistical techniques, particularly using fingerprint patterns,
have been used to differentiate or connect groups of people or to
probabilistically link persons to groups, but biometrics is interested only in
recognizing people as individuals. All of the measures used contain both
physiological and behavioral components, both of which can vary widely or be
quite similar across a population of individuals. No technology is purely one or
the other, although some measures seem to be more behaviorally influenced
and some more physiologically influenced [37, 38].
Fig. 2.1: Examples of biometric characteristics: (a) DNA, (b) ear, (c) face, (d) facial
thermogram, (e) hand thermogram, (f) hand vein, (g) fingerprint, (h) gait, (i) hand geometry,
(j) iris, (k) palmprint, (l) retina, (m) signature, and (n) voice [34].
The science of identifying humans by their measurements dates back
to the 1870s and the measurement system of Alphonse Bertillon. Bertillon’s
system of body measurements, including skull diameter and arm and foot
length, was used in the USA to identify prisoners until the 1920s [40]. Around
the same time, Henry Faulds, William Herschel and Sir Francis Galton proposed
quantitative identification through fingerprint and facial measurements in the
1880s. The development of digital signal processing techniques in the 1960s
led immediately to work in automating human identification. Speaker [41] and
fingerprint [42] recognition systems were among the first to be applied. The
potential for application of this technology to high-security access control,
personal locks and financial transactions was recognized in the early 1960s.
The 1970s saw the development and deployment of hand geometry systems [43],
the start of large-scale testing and increasing interest in government use of
these “automated personal identification” technologies. Retinal [44] and
signature verification [45] systems came in the 1980s, followed by face
systems [46]. Lastly, iris recognition [21] systems were developed in the 1990s
[33].
The false match rate (FMR), also known as “Type II error”, is the probability that a submitted sample
will match the enrollment image of another user [49]. Availability is measured
by the “failure to enroll” rate, the probability that a user will not be able to
supply a readable measure to the system upon enrollment. Accessibility can be
quantified by the “throughput rate” of the system, the number of individuals
that can be processed in a unit time, such as a minute or an hour. Acceptability
is measured by polling the device users [33, 50].
(i) Facial, hand, and hand vein infrared thermogram, a pattern of radiated heat
from the human body is considered a characteristic of an individual. These
patterns can be captured by an infrared camera in an unobtrusive manner, like a
regular (visible spectrum) photograph, so the technology could be used for covert
recognition. A thermogram-based system is noninvasive as it does not require
contact; however, image acquisition is challenging in uncontrolled
environments, where heat-emanating surfaces (e.g., room heaters and vehicle
exhaust pipes) are present in the vicinity of the body. A related technology
using near-infrared imaging is used to scan the back of a clenched fist to
determine the hand vein structure. Also, infrared sensors are prohibitively
expensive, a factor that limits widespread use of thermograms [39].
(ii) Odor, each individual (organism) spreads around an odor resulting from its
chemical composition. This odor is characteristic and could be used for
distinguishing various individuals. Acquisition would be done with an array of
chemical sensors, each sensitive to a certain group of compounds. Deodorants
and perfumes could lower the distinctiveness, leading to bad capturing or
enrollment [32].
(iii) Ear, many researchers have suggested that the shape of the ear is a
distinguishing characteristic. The approaches studying its structure are based on
matching the distances of salient points on the pinna from a landmark location
on the ear. The features of an ear are not expected to be very distinctive in
establishing the identity of an individual [39], and no commercial applications
based on the ear exist so far.
(iv) Hand and finger geometry, one of the earliest automated biometric
systems, installed during the late 1960s, used hand geometry and stayed
in production for almost 20 years. Hand geometry measurements typically
include the dimensions of the fingers, the location of joints, and the shape and
size of the palm. The technique is very simple, relatively easy to use and
inexpensive. Hand geometry works well in verification mode, but it cannot be
used for identification of an individual from a large population, because hand
geometry is not very distinctive. Dry weather or individual anomalies such as
dry skin do not appear to have any negative effects on the verification
accuracy. This method can find its commercial use in laptops rather easily,
though using it in a multimodal system gives better performance. There are
even verification systems available that are based on measurements of only a
few fingers instead of the entire hand; these devices are smaller than those used
for hand geometry [32, 39]. Further, hand geometry information may not be
invariant during the growth period of children, and limitations in dexterity
(e.g., arthritis) or even jewelry may influence extracting the correct hand
geometry information.
(v) Fingerprint, a fingerprint is the pattern of ridges and valleys on the surface
of a fingertip, the formation of which is determined during the first seven
months of fetal development. Fingerprints of identical twins are different, and
so are the prints on each finger of the same person. Humans have used
fingerprints for personal identification for many centuries, and the matching
accuracy using fingerprints has been shown to be very high [52]. Nowadays, a
fingerprint scanner costs less when ordered in large quantities, and the marginal cost of
(vi) Face, facial images are probably the most common biometric
characteristic used by humans to make a personal recognition, and face
recognition is a non-intrusive method. The most popular approaches to face
recognition are based on either: (i) the location and shape of facial attributes
such as the eyes, eyebrows, nose, lips and chin, and their spatial relationships,
or (ii) the overall (global) analysis of the face image that represents a face as a
weighted combination of a number of canonical faces [32]. While the verification
performance of the face recognition systems that are commercially available is
reasonable, they impose a number of restrictions on how the facial images are
obtained, sometimes requiring a fixed and simple background or special
illumination. These systems also have difficulty in recognizing a face from
images captured from two drastically different views and under different
illumination conditions. It is questionable whether the face itself, without any
contextual information, is a sufficient basis for recognizing a person from a
large number of identities with an extremely high level of confidence. In order
for a facial recognition system to work well in practice, it should
automatically[53]: (i) detect whether a face is present in the acquired image;
(ii) locate the face if there is one; and (iii) recognize the face from a general
viewpoint (i.e., from any pose) [39]. The applications of facial recognition
range from a static, controlled verification to a dynamic, uncontrolled face
identification in a cluttered background (e.g., airport)[54].
(vii) Retina, since the retina is protected inside the eye itself, and since it is not
easy to change or replicate the retinal vasculature, this is one of the most secure
biometrics. Retinal recognition creates an eye signature from the vascular
configuration of the retina, which is supposed to be a characteristic of each
individual and each eye, respectively. Image acquisition requires a person to
look through a lens at an alignment target, and therefore implies cooperation of
the subject. Also, a retinal scan can reveal some medical conditions, which
hinders public acceptance [39].
(viii) Iris, it is the thin circular region of the eye bounded by the pupil and the
sclera on either side. The visual texture of the iris is formed during fetal
development and stabilizes during the first two years of life. The complex iris
texture carries very distinctive information useful for personal recognition.
Each iris is distinctive and, like fingerprints, even the irises of identical twins
are different [30]. It is extremely difficult to surgically tamper with the texture
of the iris. Further, it is rather easy to detect artificial irises (e.g., designer
contact lenses). The accuracy and speed of currently deployed iris-based
recognition systems are promising and point to the feasibility of large-scale
identification systems based on iris information [32]. Although the early
iris-based recognition systems required considerable user participation and
were expensive, the newer systems have become more user-friendly and
cost-effective [39]. Commercial iris recognition systems are now available.
(ix) Palmprint, the palms of the human hands contain a unique pattern of
ridges and valleys, just like fingerprints. Since the palm is larger than a finger,
the palmprint is expected to be even more reliable than the fingerprint.
Palmprint scanners need to capture a larger area with quality similar to
fingerprint scanners, so they are more expensive [55]. A highly accurate
biometric system could be built using a high-resolution palmprint scanner that
collects all the features of the palm, such as hand geometry, ridge and valley
features, principal lines, and wrinkles [32], in addition to the typical ridge
features that fingerprints have.
(x) Voice, The features of an individual’s voice are based on the shape and size
of the appendages (e.g., vocal tracts, mouth, nasal cavities, and lips) that are
used in the synthesis of the sound. These physiological characteristics of
human speech are invariant for an individual, but the behavioral part of the
speech of a person changes over time due to age, medical conditions (such as a
common cold), and emotional state. Accordingly, voice is a combination
of physiological and behavioral biometrics. Voice is also not very distinctive
and may not be appropriate for large-scale identification [31]. Two types of
voice systems are produced. A text-dependent voice recognition system is based on
the utterance of a fixed predetermined phrase. A text-independent voice
recognition system recognizes the speaker independent of what he speaks. A
text-independent system is more difficult to design than a text-dependent
system but offers more protection against fraud. Speaker recognition is most
appropriate in phone-based applications but the voice signal over phone is
typically degraded in quality by the microphone and the communication
channel [32]. A disadvantage of voice-based recognition is that speech features
are sensitive to a number of factors such as background noise.
(xi) DNA, except for the fact that identical twins have identical DNA patterns
[32], deoxyribonucleic acid (DNA) is the unique code for one’s individuality. It
is the ultimate one-dimensional (1-D) code; however, it is currently used
mostly in the context of forensic applications for person recognition. Three issues limit the
utility of this biometrics for other applications [39]:
1. Contamination and sensitivity: it is easy to steal a piece of DNA from an
unsuspecting subject that can be subsequently abused for an ulterior purpose;
(xii) Gait, basically, gait is the peculiar way one walks, and it is a complex
spatio-temporal biometrics. This is one of the newer technologies and is yet to
be researched in more detail. Gait is a behavioral biometric and may not
remain the same over a long period of time, due to change in body weight or
serious brain damage. Acquisition of gait is similar to acquiring a facial picture
and may be an acceptable biometric. Since video sequence is used to measure
several different movements this method is computationally expensive [32]. It
is not supposed to be very distinctive but can be used in some low-security
applications.
(xiii) Signature, the way a person signs his/her name is known to be
characteristic of that individual. Signature is a simple, concrete expression of
the unique variations in human hand geometry. Collecting samples for this
Table 2.1 provides a brief comparison of the above biometric techniques based
on seven factors.
Biometric Characteristic (Universality, Distinctiveness, Permanence, Collectability, Performance, Acceptability, Circumvention)
Facial thermogram   H H L H M H L
Hand vein           M M M M M M L
Gait                M L L H L H M
Keystroke           L L L M L M M
Odor                H H H L L M L
Ear                 M M H M M H M
Hand geometry       M M M H M M M
Fingerprint         M H H M H M M
Face                H L M H L H H
Retina              H H M L H L L
Iris                H H H M H L L
Palm print          M H H M H M M
Voice               M L L M L H H
Signature           L L L H L H H
DNA                 H H H L H L L
* (H: High, M: Medium, L: Low)
Two samples of the same biometric characteristic from the same person
(e.g., two impressions of a user’s right index finger) are not exactly the same
due to some reasons like [33,39]:
(i) Acquiring sensor (e.g. finger placement).
(ii) Imperfect imaging conditions (e.g. sensor noise and dry fingers).
(iii) Environmental changes (e.g. temperature and humidity).
(iv) Changes in the user’s physiological or behavioral characteristics (e.g.
cuts and bruises on the finger).
(v) Noise and improper user interaction with the sensor.
The higher the score, the more certain is the system that the two
biometric measurements come from the same person. The threshold (t)
regulates the system decision. The distribution of scores generated from pairs
of samples from different persons is called an impostor distribution, and the
score distribution generated from pairs of samples of the same person is called
a genuine distribution [32]. Fig.2.3 illustrates that fact.
There are two other recognition error rates that can also be used:
Failure to Capture (FTC) and Failure to Enroll (FTE). FTC denotes the
percentage of times the biometric device fails to automatically capture a sample
when presented with a biometric characteristic. This usually happens when
system deals with a signal of insufficient quality. The FTE rate denotes the
percentage of times users cannot enroll in the recognition system [32, 51].
Fig.2.3: Biometric system error rates. (a) FMR and FNMR for a given threshold t are
displayed over the genuine and impostor score distributions; FMR is the percentage of
non-mate pairs whose matching scores are greater than or equal to t, and FNMR is the
percentage of mate pairs whose matching scores are less than t [32].
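Given empirical genuine (mate) and impostor (non-mate) score samples, the FMR and FNMR at a threshold t follow directly from these definitions. The sketch below uses toy similarity scores and our own function name, with higher scores meaning greater similarity:

```python
import numpy as np

def error_rates(genuine, impostor, t):
    """FMR and FNMR at threshold t for similarity scores.

    FMR:  fraction of impostor (non-mate) scores >= t.
    FNMR: fraction of genuine (mate) scores < t.
    With Hamming distances (lower = more alike) the comparisons
    would be reversed.
    """
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    fmr = float(np.mean(impostor >= t))
    fnmr = float(np.mean(genuine < t))
    return fmr, fnmr

# Toy scores, not real biometric data.
gen = [0.9, 0.8, 0.85, 0.6, 0.95]
imp = [0.2, 0.4, 0.75, 0.3, 0.1]
print(error_rates(gen, imp, 0.7))  # (0.2, 0.2)
```

Sweeping t over its range and plotting FMR against (1 − FNMR) produces the ROC curve of Fig. 2.4.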
Chapter 3
Human Vision System
This chapter introduces the human vision system, the phases of the iris
recognition system, and the effect of medical conditions on iris capture. The
challenges, advantages, and disadvantages of the iris recognition system are
examined to present its main difficulties.
We will move quickly over some of the well-known parts of the
human eye. The cornea is a clear, transparent portion of the outer coat of the
eyeball through which light passes to the lens. The lens helps to focus light on
the retina, which is the innermost coat of the back of the eye, formed of
light-sensitive nerve endings that carry the visual impulse to the optic nerve. The
retina [59] acts like the film of a camera in its operation and tasks.
The iris is a thin circular ring that lies between the cornea and the lens of the
human eye. A front-on view of the iris is shown in Fig. 3.2, in which the iris
encircles the pupil, the dark centered portion of the eye. The function of the iris
is to control the amount of light entering through the pupil; this is done by the
sphincter and dilator muscles, which adjust the size of the pupil [60].
The sphincter muscle lies around the very edge of the pupil. In bright
light, the sphincter contracts, causing the pupil to constrict. The dilator muscle
runs radially through the iris, like spokes on a wheel. This muscle dilates the
eye in dim lighting [30].
The human iris begins to form in the 3rd month of gestation and the
structures creating the pattern are complete by the 8th month; the color and
pigmentation continue to build through the first year after birth [61]. This pattern
contains many distinctive features such as arching ligaments, furrows, ridges,
crypts, rings, corona, freckles, and a zigzag collarette [62], as shown in Fig. 3.3.
The color of the iris can change as the amount of pigment in the iris increases
during childhood. Nevertheless, for most of a human’s lifespan, the appearance
of the iris is relatively constant. Therefore, this pattern remains stable through a
person's life.
The iris is composed of several layers; the visual appearance of the iris
is a direct result of its multilayered structure [33]. Iris color results from the
differential absorption of light impinging on the pigmented cells in the anterior
border layer, posterior epithelium and is scattered as it passes through the
stroma to yield a blue appearance. Progressive levels of anterior pigmentation
lead to darker colored irises [62].
The average diameter of the iris is nearly 11 mm and the pupil radius
can range from 0.1 to 0.8 of the iris radius [62]. The iris shares a high-contrast
boundary with the pupil but a lower-contrast boundary with the sclera [61].
Formation of the unique patterns of the iris is random and not related to any
genetic factors [21]. The only characteristic that is dependent on genetics is the
pigmentation of the iris, which determines its color. Because of this, the two
eyes of an individual contain completely independent iris patterns (the left eye
is not the same as the right one), and the same holds even for identical twins
[63]. The false accept probability can be estimated at one in 10^31 [62].
The idea of using the iris as a biometric is over 100 years old. However,
the idea of automating iris recognition is more recent. In 1987, Flom and Safir
[64] obtained a patent for an unimplemented conceptual design of an automated
iris biometrics system [30].
A cataract is a clouding of the lens, the part of the eye responsible for
focusing light and producing clear, sharp images. Cataracts are a natural result
of aging: ‘‘about 50% of people aged 65–74 and about 70% of those 75 and
older have visually significant cataracts’’ [65]. Eye injuries, certain
medications, and diseases such as diabetes and alcoholism have also been
known to cause cataracts. Cataracts can be removed through surgery. Patients
who have cataract surgery may be advised to re-enroll in iris biometric systems.
Aniridia is a condition in which a person is born without an iris, or with only a
partial iris. The pupil and the sclera are present and visible, but there is no
substantial iris region. Aniridia is estimated to have an incidence of between
1 in 50,000 and 1 in 100,000, which may seem rare, especially in our country (Egypt).
For such reasons, besides those briefly discussed in chapter two, the iris was
chosen as the biometric technology for our recognition system, as recent
research demonstrates it to be the most accurate and reliable modality.
Chapter 4
Iris Database and Dataset
For a long time there was no public iris database, and the lack of iris data was
a block to iris recognition research. To promote this research, the National
Laboratory of Pattern Recognition (NLPR), Institute of Automation (IA),
Chinese Academy of Sciences (CAS) provides an iris database freely to iris
recognition researchers. Table 4.1 summarizes information on a number of
well-known iris datasets.
All images tested are taken from the Chinese Academy of Sciences
Institute of Automation (CASIA) iris database; apart from being the oldest
[73], this database is clearly the best known and most widely used by
researchers. Each sample begins as a 320×280 pixel photograph of the eye
taken from about 4 cm away using a near-infrared camera. The NIR spectrum
(850 nm) emphasizes the texture patterns of the iris, making the measurements
taken during iris recognition more precise.
CASIA database (version.1) includes 756 iris images from 108 eyes,
hence 108 classes. For each eye, 7 images were captured; in two sessions,
where 3 samples are collected in the first and 4 samples in the second session
[74]. The images were captured in a highly constrained environment. They
present very close and homogeneous characteristics, and their noise factors are
exclusively related to iris obstructions by eyelids and eyelashes, as shown in
Fig. 4.1. The pupil regions of all iris images were automatically detected and
replaced with a circular region of constant intensity to mask out the specular
reflections from the eight NIR illuminators before public release.
Fig. 4.1: Example iris images in CASIA-IrisV1: from two different sessions [74].
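The constant-intensity pupil masking applied to CASIA-IrisV1 can be reproduced in a few lines. The sketch below uses NumPy with illustrative centre, radius, and intensity values, not CASIA's actual preprocessing parameters:

```python
import numpy as np

def mask_pupil(img, cx, cy, r, value=60):
    """Replace a circular pupil region with a constant intensity.

    Mimics the CASIA-IrisV1 preprocessing that masks the NIR specular
    reflections inside the pupil; centre/radius/value are illustrative.
    """
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    out = img.copy()
    out[inside] = value
    return out

# Synthetic 280x320 "eye" image of uniform intensity.
eye = np.full((280, 320), 120, dtype=np.uint8)
masked = mask_pupil(eye, cx=160, cy=140, r=40)
print(masked[140, 160], masked[0, 0])  # 60 120
```

One practical consequence, noted in the literature, is that this masking makes the pupil boundary in CASIA-IrisV1 artificially easy to segment.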
The iris image datasets used in the Iris Challenge Evaluations (ICE) in
2005 and 2006 [75] were acquired at the University of Notre Dame, and
contain iris images of a wide range of quality, including some off-axis images.
The ICE 2005 database is currently available, and the larger ICE 2006 database
has also been released.
are automatically contrast-stretched by the LG 2200 to use 171 gray levels
between 0 and 255. Samples are shown in Fig. 4.2.
The Lions Eye Institute (LEI) database [76] consists of 120 greyscale
eye images taken using a slit lamp camera. Since the images were captured
using natural light, specular reflections are present on the iris, pupil, and cornea
regions. Unlike the CASIA database, the LEI database was not captured
specifically for iris recognition.
However, the second version of the UBIRIS database has over 11000
images (and is continuously growing) with more realistic noise factors. Images
were actually captured at a distance, on the move, and in the visible
wavelength, with correspondingly more realistic noise factors. Some random
samples are shown in Fig. 4.4.
In this thesis, the CASIA (version 1) iris image database is used for
testing and experimentation. These images, taken in almost perfect imaging
conditions, are practically noise-free with respect to photon and sensor
noise, reflections, focus, compression, contrast levels, and light levels.
Chapter 5
Image Preprocessing Algorithm
The phases of the software system are shown in Fig. 5.2. The original eye
image is resampled to 260×320 pixels to crop the unneeded parts of
the eye image, as well as to decrease the processing time during pupil
boundary (iris inner boundary) detection [80]. In the feature extraction
process, a 1D Log-Gabor filter is used, and the associated verification
results are compared with those obtained using the DCT to determine the
more accurate method.
Fig. 5.2: Block diagram of our proposed scheme (normalized iris → enhancement
→ feature extraction with 1D Log-Gabor or DCT → feature vector → template
matching by Hamming distance).
the precision of the iris localization step. Most previous iris segmentation
approaches assume that the pupil and iris boundaries are circles, so detecting
each boundary requires a centre and a radius for the pupil circle and for the
iris circle. Note, however, that the two centres do not have the same
coordinates.
Since the pupil is generally the darkest region in the image, this
approach applies a threshold segmentation method to find that region [79, 81].
First, the iris-pupil contrast is enhanced using a linear thresholding
transformation. Bright pixels are then filtered out using a brightness
threshold (200 in our implementation). After that, the approximate centre of
the pupil is computed as the weighted centroid of the remaining pixels, and
the radius is obtained from the maximum circular summation of gradient points
along the circle. The steps are illustrated sequentially in Fig. 5.3.
Fig. 5.3: Pupil boundary detection steps: (a) Original eye image (260×320). (b) Pupil after
thresholding (200). (c) The segmented pupil (centre coordinates (163, 135) and radius 36 pixels).
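The thresholding-and-centroid procedure above can be sketched as follows. This is a minimal Python/NumPy illustration (the thesis implementation used MATLAB); the function name `find_pupil` and the synthetic test image are ours, and the radius here is estimated from the candidate area rather than from the circular gradient summation used in the thesis.

```python
import numpy as np

def find_pupil(img, bright_thresh=200):
    """Approximate pupil centre and radius in a greyscale eye image.

    Dark pixels (below the brightness threshold) are taken as pupil
    candidates; the centre is their weighted centroid and the radius is
    estimated from the candidate area, assuming a circular pupil.
    """
    mask = img < bright_thresh          # pupil candidates: dark pixels
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()       # weighted centroid of the region
    radius = np.sqrt(mask.sum() / np.pi)
    return (cy, cx), radius

# Synthetic 260x320 test image: dark disc (pupil) on a bright background,
# mimicking the example of Fig. 5.3 (centre (163, 135), radius 36).
img = np.full((260, 320), 230, dtype=np.uint8)
yy, xx = np.mgrid[:260, :320]
img[(yy - 135) ** 2 + (xx - 163) ** 2 <= 36 ** 2] = 30

(cy, cx), r = find_pupil(img)
```

On real eye images the mask would also need morphological clean-up (eyelashes are dark too); this sketch only shows the core idea.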
The pupil centre can be used to detect the approximate inner and outer
iris boundaries. Wildes [21], Kong and Zhang, and Ma et al. [82] use the Hough
transform on the binary edge map to localize the iris. Daugman [62] uses an
integro-differential operator on the raw image to isolate the iris, and also
for pupil detection:
max(r, x0, y0) | Gσ(r) ∗ (∂/∂r) ∮(r, x0, y0) I(x, y) / (2πr) ds |    (5.1)
where Gσ (r) is a smoothing function and I(x, y) is the image of the eye. This
function finds for an image I(x,y), the maximum of the absolute value of the
convolution of a smoothing function Gσ with the partial derivative, with respect
to ( r ), of the normalized contour integral of the image along an arc (ds) of a
circle. The symbol * denotes convolution and Gσ is a smoothing function such
as a Gaussian of scale σ [30, 62].
This operator searches for the circular path along which there is maximum
change in pixel intensity, by varying the radius r and the centre position
(x, y) of the circular contour. It is then applied iteratively, with the
amount of smoothing progressively reduced, in order to attain precise
localization.

The deficiency of this approach is that, in cases where there is noise in
the eye image, such as from reflections, the integro-differential operator
fails; this is due to the fact that it works only on a local scale. Another
deficiency of this operator is that it is computationally expensive. However,
since the integro-differential operator works with raw derivative information,
it does not suffer from the thresholding problems of the Hough transform [30].
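A brute-force sketch of the operator in Python may clarify the search (the helper names `circular_mean` and `integro_differential` are ours, and real implementations are far more efficient): for each candidate centre, the mean intensity along circles of growing radius is computed, its radial derivative is smoothed with a Gaussian, and the maximum absolute response is kept.

```python
import numpy as np

def circular_mean(img, cy, cx, r, n=64):
    """Mean image intensity sampled along a circle (nearest-neighbour)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def integro_differential(img, centers, radii, sigma=2.0):
    """Return the (cy, cx, r) maximising the Gaussian-smoothed radial
    derivative of the circular contour mean, as in Eq. (5.1)."""
    k = np.exp(-np.arange(-3, 4) ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                   # Gaussian smoothing kernel
    best_val, best = 0.0, None
    for (cy, cx) in centers:
        means = np.array([circular_mean(img, cy, cx, r) for r in radii])
        deriv = np.convolve(np.diff(means), k, mode="same")
        i = int(np.argmax(np.abs(deriv)))
        if abs(deriv[i]) > best_val:
            best_val, best = abs(deriv[i]), (cy, cx, radii[i])
    return best

# Synthetic eye: dark pupil disc of radius 36 centred at (135, 163).
img = np.full((260, 320), 230.0)
yy, xx = np.mgrid[:260, :320]
img[(yy - 135) ** 2 + (xx - 163) ** 2 <= 36 ** 2] = 30.0
cy, cx, r = integro_differential(img, [(135, 163)], np.arange(20, 60))
```

The coarse-to-fine iteration described above would repeat this search with progressively smaller sigma around the previous best estimate.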
parameters of circles passing through each edge point, in order to search for
the desired contour in the edge map. The Hough transform for a circular
boundary and a set of recovered edge points (xj, yj), j = 1, 2, …, n, is
defined as [82]:
H(xc, yc, r) = Σ_{j=1}^{n} h(xj, yj, xc, yc, r)    (5.2)
where
h(xj, yj, xc, yc, r) = 1 if g(xj, yj, xc, yc, r) = 0, and 0 otherwise,
and
g(xj, yj, xc, yc, r) = (xj − xc)² + (yj − yc)² − r²
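The voting scheme of Eq. (5.2) can be sketched as below (a deliberately naive Python accumulator; practical implementations vote directly from each edge point into the accumulator instead of testing every triple). The tolerance `tol` stands in for the ideal condition g = 0, which discrete edge points never satisfy exactly.

```python
import numpy as np

def circular_hough(edge_points, cy_range, cx_range, r_range, tol=0.5):
    """Accumulate votes H(xc, yc, r): an edge point votes for each
    candidate circle that passes (within tol) through it."""
    H = np.zeros((len(cy_range), len(cx_range), len(r_range)), dtype=int)
    for (yj, xj) in edge_points:
        for a, cy in enumerate(cy_range):
            for b, cx in enumerate(cx_range):
                d = np.hypot(yj - cy, xj - cx)
                for c, r in enumerate(r_range):
                    if abs(d - r) <= tol:       # g(xj, yj, xc, yc, r) ~ 0
                        H[a, b, c] += 1
    a, b, c = np.unravel_index(np.argmax(H), H.shape)
    return cy_range[a], cx_range[b], r_range[c]

# Edge points lying on a circle of radius 20 centred at (50, 60).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = list(zip(50 + 20 * np.sin(t), 60 + 20 * np.cos(t)))
cy, cx, r = circular_hough(pts, range(45, 56), range(55, 66), range(15, 26))
```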
After running the segmentation phase, the iris is isolated, giving the pupil
circle (centre coordinates and radius) and the iris circle as well. These
parameters are used in the next stage, iris normalization and unwrapping.
Daugman's rubber-sheet model is used, instead of Wildes' normalized spatial
correlation for matching based on an image registration technique [21],
because of its simplicity. The result of the localization process is
illustrated in Fig. 5.4, and some random samples of the iris segmentation
test are shown in Fig. 5.5.
Fig. 5.5: Results of the proposed segmentation algorithm. (The upper images
show pupil and iris detection, and the lower ones the final iris localization
for each, respectively.)
Once the iris region is segmented and isolated, the next stage is to
normalize it, to enable generation of the iris code and comparison between
different irises. The extracted iris region is transformed so that it has
fixed dimensions [62]. Normalization is also useful because the resulting
representation is common to all irises, with identical dimensions.
I(x(r, θ), y(r, θ)) → I(r, θ)    (5.3)
where x(r, θ) and y(r, θ) are defined as linear combinations of both the set
of pupillary boundary points (xp(θ), yp(θ)) and limbus boundary points
(xi(θ), yi(θ)), where [91]:
This model defines the iris code over polar coordinates [60, 89, 90]: a
number of data points are selected along each radial line, defined as the
radial resolution, and the number of radial lines going around the iris
region is defined as the angular resolution.
r′ = √α β ± √(α β² − α + ri²)    (5.6)
with
α = ox² + oy²
β = cos(π − arctan(oy / ox) − θ)
where ox and oy represent the displacement of the centre of the pupil relative
to the centre of the iris, r′ is the distance between the edge of the pupil
and the edge of the iris at angle θ around the region, and ri is the radius of
the iris.
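The rubber-sheet remapping can be sketched as follows in Python (the function name `rubber_sheet` is ours; the thesis implementation used MATLAB). For simplicity it interpolates linearly between the two boundary circles, which matches Eq. (5.6) once both circles are known, and uses nearest-neighbour sampling.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, radial_res=46, angular_res=512):
    """Unwrap the iris annulus to a fixed radial_res x angular_res strip."""
    py, px, pr = pupil                      # pupil centre (y, x) and radius
    iy, ix, ir = iris                       # iris centre (y, x) and radius
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)[:, None]
    # Boundary points on the pupil and iris circles for every angle.
    yp, xp = py + pr * np.sin(theta), px + pr * np.cos(theta)
    yi, xi = iy + ir * np.sin(theta), ix + ir * np.cos(theta)
    # Linear combination of the two boundaries, as in Eq. (5.3).
    ys = np.round((1 - r) * yp + r * yi).astype(int)
    xs = np.round((1 - r) * xp + r * xi).astype(int)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    return img[ys, xs]

# Demo image whose pixel value equals its distance from (135, 163); the
# first strip row should then read ~36 (pupil radius), the last ~90.
yy, xx = np.mgrid[:260, :320]
img = np.hypot(yy - 135, xx - 163)
strip = rubber_sheet(img, (135, 163, 36), (135, 163, 90))
```

The 46×512 output matches the radial and angular resolutions used in Fig. 5.7.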
(a) (b)
Fig. 5.7: Unwrapping and normalization: (a) Daugman's rubber-sheet model [90] and (b)
Unwrapped iris image (angular resolution of 512 and radial resolution of 46).
Chapter 6
Iris Code Generation and Matching
The normalized iris image still has low contrast and may have non-uniform
illumination caused by the position of light sources [93]. In order to obtain
a better-distributed texture image, an enhancement step is applied, achieved
by histogram equalization. The histogram of a greyscale image is a graph
indicating the number of times each grey level occurs in the image. Histogram
equalization is a technique for adjusting image intensities to enhance
contrast: images with poor intensity distributions are improved by
redistributing those intensities [79, 94].
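Histogram equalization can be sketched in a few lines of Python/NumPy (`hist_equalize` is our name): each grey level is mapped through the scaled cumulative distribution of the image histogram, which spreads a narrow intensity range across the full scale.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Map grey levels through the scaled CDF of the image histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size        # cumulative distribution in [0, 1]
    lut = np.floor((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                          # apply look-up table per pixel

# Low-contrast demo: an 8x8 image with grey levels only in 100..139.
img = (np.arange(64).reshape(8, 8) % 40 + 100).astype(np.uint8)
out = hist_equalize(img)
```

After equalization the output spans nearly the whole 0–255 range, which is the effect visible in Fig. 6.1(b).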
g(i, j) = floor( (L − 1) Σ_{n=0}^{f(i, j)} pn )
where floor() rounds down to the nearest integer, f(i, j) is the grey level of
pixel (i, j), pn is the normalized histogram count of level n, and L is the
number of grey levels. Enhanced normalized iris templates are shown in
Fig. 6.1; in each part, the upper image is the normalized iris and the lower
one is its histogram.
(a) (b)
Fig. 6.1: Enhanced normalized iris template with histogram: (a) original template with its
histogram, (b) template after histogram equalization applied.
The features of the iris must be encoded so that the stored features can be
compared with those of any unknown iris to decide whether they are the same
[30].

Most iris coding systems, like Daugman's [19, 62], make use of Gabor filters,
which have proved very efficient for image texture and give high recognition
accuracy. A Gabor filter's impulse response is defined by a harmonic function
multiplied by a Gaussian function [79]; it is constructed by modulating a sine
or cosine wave with a Gaussian. It provides optimum localization in both the
spatial and frequency domains.
An even-symmetric Gabor filter, however, has a DC component whenever the
bandwidth is larger than one octave [110]. Log-Gabor filters have therefore
been suggested [111, 90] for phase encoding, since their zero DC component
removes the influence of background brightness [112]. A filter with zero DC
can be obtained for any bandwidth by using a Gabor filter that is Gaussian on
a logarithmic frequency scale; this is known as the Log-Gabor filter.
The Log-Gabor filter gives the best performance, followed by the Haar wavelet,
the discrete cosine transform (DCT), and the Fast Fourier Transform (FFT)
[110]. Having extended tails at the high-frequency end, Log-Gabor filters are
expected to offer more efficient encoding of natural images.
The Log-Gabor function has a singularity in the log function at the origin,
so an analytic expression for the shape of the Log-Gabor filter cannot be
constructed in the spatial domain. The filter is therefore implemented in the
frequency domain, with the frequency response defined as follows:
G(f) = exp( −(log(f / f0))² / (2 (log(σf / f0))²) )    (6.3)
where f0 is the central frequency and σf is the scaling factor of the radial
bandwidth B [113]. The radial bandwidth in octaves is expressed as follows
[24]:
B = 2 √(2 / ln 2) · | ln(σf / f0) |    (6.4)
The parameters selected to achieve the best performance were a centre
wavelength of 18 and a ratio σf / f0 of 0.55. This approach compresses the
data to retain the significant part [79]; the compressed data can be stored
and processed effectively.
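A frequency-domain construction of the 1D Log-Gabor filter of Eq. (6.3) can be sketched in Python as below, with the parameters above (wavelength 18, σf/f0 = 0.55); the helper names are ours. Each row of the normalized iris is filtered via the FFT, giving the complex response whose phase is later quantized.

```python
import numpy as np

def log_gabor_1d(n, wavelength=18.0, sigma_on_f=0.55):
    """1D Log-Gabor frequency response of Eq. (6.3); DC term set to zero."""
    f0 = 1.0 / wavelength                       # centre frequency
    f = np.arange(1, n // 2 + 1) / n            # positive frequencies, f = 0 excluded
    G = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_on_f) ** 2))
    H = np.zeros(n)
    H[1:n // 2 + 1] = G                         # zero DC component
    return H

def encode_row(row, H):
    """Complex Log-Gabor response of one row of the normalized iris."""
    return np.fft.ifft(np.fft.fft(row) * H)

H = log_gabor_1d(512)
resp = encode_row(np.cos(2 * np.pi * np.arange(512) / 18.0), H)
```

Because only positive frequencies are kept, the response is complex, and its real and imaginary parts supply the even and odd components of the code.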
Fig. 6.2 shows the decomposition of the normalized image and the phase
coding. Fig. 6.3 shows the real part of the iris code after the Log-Gabor
filter (modulation of the sine with a Gaussian provides localization in
space, though with some loss of localization in frequency). The total number
of bits in the iris code generated using the 1D Log-Gabor filter is
512×46×2 bits.
Fig. 6.3: Iris code generation: (a) Normalized iris and (b) Encoded iris
texture (even and odd parts) after the 1D Log-Gabor filter.
The use of cosine rather than sine functions, however, turns out to be much
more efficient, due to the following important properties: (i) energy
compaction; (ii) decorrelation; (iii) separability; (iv) symmetry; and
(v) orthogonality.
F(u, v) = C(u) C(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) cos[(2x + 1)uπ / 2N] cos[(2y + 1)vπ / 2N]    (6.5)
where
C(u) = C(v) = √(1/N)   for u, v = 0
and
C(u) = C(v) = √(2/N)   for u, v ≠ 0
In addition to its strong energy compaction property, the DCT couples good
feature extraction capabilities with well-known fast computation techniques
[117]. It concentrates most of the energy of the image in a few real-valued
coefficients located in the upper-left corner of the resulting real-valued
M×N DCT/frequency matrix. A coefficient's usefulness is determined by its
variance over a set of images, as in the video case: if a coefficient has a
lot of variance over a set, it cannot be removed without affecting picture
quality. Apart from the top-left corner, the coefficients have zero or low
values. These low-frequency, high-magnitude coefficients are therefore the
most important coefficients in the frequency matrix and carry most of the
information about the original image [58].
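Eq. (6.5) can be evaluated directly as below (Python; `dct2` is our name, and this O(N⁴) form is purely illustrative — practical systems use fast DCT routines). The demo also checks the energy compaction claim on a smooth ramp image.

```python
import numpy as np

def dct2(f):
    """Direct 2D DCT-II of Eq. (6.5) for a square N x N block."""
    N = f.shape[0]
    x = np.arange(N)
    # Basis matrix: Cb[x, u] = cos((2x + 1) * u * pi / (2N)).
    Cb = np.cos(np.pi * (2 * x[:, None] + 1) * x[None, :] / (2 * N))
    c = np.full(N, np.sqrt(2.0 / N))
    c[0] = np.sqrt(1.0 / N)                     # C(u) = C(v) normalization
    return np.outer(c, c) * (Cb.T @ f @ Cb)

# Smooth ramp image: almost all energy lands in the upper-left corner.
f = np.outer(np.arange(8.0), np.ones(8))
F = dct2(f)
compaction = (F[:2, :2] ** 2).sum() / (F ** 2).sum()
```

With this normalization the transform is orthonormal, so the total energy of the block is preserved while being concentrated in the low-frequency corner.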
Both the real and imaginary parts of the templates generated in the feature
extraction stage are quantized, converting the numeric feature vector into a
binary code. Boolean vectors are easier to compare and manipulate: it is
easier to find the difference between two binary codes than between two
numeric vectors. In addition, it is useful to store only a small number of
bits for each iris code.
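The quantization step can be sketched as sign (phase-quadrant) coding, a common choice in iris systems; that the thesis quantizer is exactly this sign rule is our assumption. Each complex coefficient contributes two bits: the signs of its real and imaginary parts.

```python
import numpy as np

def quantize(response):
    """Two bits per complex coefficient: signs of real and imaginary parts
    (assumed sign/phase-quadrant rule), interleaved into one binary code."""
    re = (response.real >= 0).astype(np.uint8)
    im = (response.imag >= 0).astype(np.uint8)
    code = np.empty(2 * response.size, dtype=np.uint8)
    code[0::2], code[1::2] = re.ravel(), im.ravel()
    return code

# One coefficient in each phase quadrant of the complex plane:
code = quantize(np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]))
```

Applied to the full 46×512 complex response, this yields the 512×46×2-bit code quoted earlier.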
HD = (1 / N) Σ_{j=1}^{N} Xj ⊕ Yj    (6.7)
where X and Y are the two bit patterns being compared and N is the total
number of bits. The larger the Hamming distance (closer to 1), the more
different the two patterns are; the closer the distance is to 0, the more
likely the two patterns are identical [61]. A threshold is therefore set to
define an impostor. Daugman set this threshold to 0.32 [62]. The optimum
thresholds in our system, based on 1D Log-Gabor and DCT, are 0.45212 and
0.48553 respectively.
This matching technique is fast because the template vectors are in binary
format: the execution time for an exclusive-OR comparison of two templates is
approximately 10 µs [62]. It is also simple and suitable for comparing
millions of templates in a large database [79]. No pre-processing is needed
before matching between CASIA samples.
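Eq. (6.7) and the threshold test reduce to a few lines (Python; the function names are ours):

```python
import numpy as np

def hamming_distance(x, y):
    """Fractional Hamming distance of Eq. (6.7): share of disagreeing bits."""
    x, y = np.asarray(x, dtype=bool), np.asarray(y, dtype=bool)
    return np.count_nonzero(x ^ y) / x.size

def match(code, template, threshold=0.45212):
    """Accept when HD is below the operating threshold
    (0.45212 for the 1D Log-Gabor configuration)."""
    return hamming_distance(code, template) < threshold
```

Because the codes are plain bit vectors, one XOR plus a population count compares two templates, which is what makes matching across a large database fast.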
Fig. 6.5: False Accept and False Reject Rates for two distributions with a separation
Hamming distance of 0.35 [90].
The block diagram of our proposed iris recognition system (in Chapter 5)
illustrates the system phases. The system was implemented using the 1D
Log-Gabor filter and then re-implemented using the 2D DCT in the feature
extraction phase. After comparing the verification results of the two
methods, the DCT-based recognition system was tested in real time and
simulated using FPGA devices (Chapter 7). These results were obtained using
CASIA version 1.0 (see Chapter 4). The performance of the discussed methods
was tested and simulated using MATLAB (2009a), version 7.8.0.347
(Video and Image Processing Toolbox, .m files, and Simulink), on a Personal
Computer (PC) with the following specifications: (i) Windows XP operating
system, (ii) Dual-Core processor (1.6 GHz / 2 MB cache), (iii) 2 GB RAM, and
(iv) 120 GB hard disk.
A random subset of the database, covering nine different persons' eyes, was
tested; for each eye, seven images were used (images from both sessions).
This gives a total of 63 iris images selected randomly from the original
CASIA 1 database, and the verification test performs 3969 matchings.
Table 6.1 and Table 6.2 show the average HD values obtained in this test for
1D Log-Gabor and DCT respectively; the diagonal values represent the matching
distance between images of the same iris. Fig. 6.6 shows the distribution of
intra-class and inter-class matching distances for the proposed 1D Log-Gabor
method, and Fig. 6.7 shows the same for the DCT method. The mean and standard
deviation of each distribution are given in the figures. In each figure, the
top left plot shows the intra-class distribution, the bottom left the
inter-class distribution, and the rightmost plot combines them, showing the
overlap of the two regions. Table 6.3 shows the results of the verification
test for 1D Log-Gabor, and Table 6.4 shows the same for DCT. At each Hamming
distance value, a test was made taking that value as the threshold, and the
FAR and FRR percentages were calculated. The accuracy rate was then
calculated from these FAR and FRR values; the optimum threshold is the one
that gives the highest accuracy rate.
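The threshold sweep described above can be sketched as follows (Python; `far_frr` and `sweep` are our names). The recognition rate follows the convention visible in the tables, 100 − (FAR + FRR):

```python
import numpy as np

def far_frr(intra, inter, threshold):
    """FAR: impostor (inter-class) distances accepted; FRR: genuine
    (intra-class) distances rejected. Both in percent."""
    far = 100.0 * np.mean(np.asarray(inter) < threshold)
    frr = 100.0 * np.mean(np.asarray(intra) >= threshold)
    return far, frr

def sweep(intra, inter, thresholds):
    """Optimum threshold: the one maximising 100 - (FAR + FRR)."""
    return max(thresholds, key=lambda t: 100.0 - sum(far_frr(intra, inter, t)))
```

Run over the 3969 pairwise distances, this sweep reproduces the rows of Tables 6.3 and 6.4 and selects the optimum thresholds reported there.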
For the 1D Log-Gabor method, the test shows that the optimum threshold for
the proposed algorithm is 0.45212, which gives the highest recognition rate,
98.94708 %. Fig. 6.8 shows the Receiver Operating Characteristic (ROC) curve
of the proposed method. The ROC curve plots the false reject rate (FRR) on
the Y axis against the false accept rate (FAR) on the X axis; it measures the
accuracy of the iris matching process and shows the overall performance of
the algorithm. Increasing the FAR decreases the FRR. A lower FAR suits
high-security applications, while a lower FRR is more suitable for
forensic-like applications; the trade-off region is the best choice for
civilian applications. The associated error values are FAR = 0 % and
FRR = 1.052923 %. Fig. 6.9 shows the errors of the 1D Log-Gabor approach
versus the Hamming distance. The EER is the intersection point of the FAR and
FRR curves, where the two are equal in value; it equals 0.869 % at a Hamming
distance of 0.4628.
For the 2D DCT method, the test shows that the optimum threshold for the
proposed approach is 0.48553, which gives the highest recognition rate,
93.07287 %. Fig. 6.10 shows the ROC curve of this method. The associated
errors are FAR = 0.886672 % and FRR = 6.040454 %. Fig. 6.11 shows the errors
of the DCT approach versus the Hamming distance. The EER is 4.485 % at a
Hamming distance of 0.48775.
TABLE 6.1: AVERAGE HAMMING DISTANCE VALUES OF THE VERIFICATION TEST (1D LOG-GABOR METHOD).
       Iris-1  Iris-2  Iris-3  Iris-4  Iris-5  Iris-6  Iris-7  Iris-8  Iris-9
Iris-1 0.33626 0.48350 0.48368 0.48993 0.48118 0.48724 0.48694 0.48874 0.48899
Iris-2 0.48350 0.30261 0.48713 0.48832 0.48726 0.48433 0.47874 0.48475 0.48819
Iris-3 0.48368 0.48713 0.41838 0.47767 0.48701 0.48589 0.47777 0.48280 0.48644
Iris-4 0.48993 0.48832 0.47767 0.35238 0.48663 0.48886 0.47978 0.48906 0.48751
Iris-5 0.48118 0.48726 0.48701 0.48663 0.38007 0.48483 0.48061 0.48817 0.48686
Iris-6 0.48724 0.48433 0.48589 0.48886 0.48483 0.44462 0.48465 0.48815 0.48690
Iris-7 0.48694 0.47874 0.47777 0.47978 0.48061 0.48465 0.32729 0.48848 0.48218
Iris-8 0.48874 0.48475 0.48280 0.48906 0.48817 0.48815 0.48848 0.38166 0.48288
Iris-9 0.48899 0.48819 0.48644 0.48751 0.48686 0.48690 0.48218 0.48288 0.36204
TABLE 6.2: AVERAGE HAMMING DISTANCE VALUES OF THE VERIFICATION TEST (DCT METHOD).
       Iris-1  Iris-2  Iris-3  Iris-4  Iris-5  Iris-6  Iris-7  Iris-8  Iris-9
Iris-1 0.48819 0.49209 0.49236 0.49145 0.49124 0.49205 0.49118 0.49240 0.49197
Iris-2 0.49209 0.48190 0.49320 0.49187 0.49155 0.49207 0.49192 0.49278 0.49305
Iris-3 0.49236 0.49320 0.48977 0.49055 0.49225 0.49193 0.49192 0.49232 0.49136
Iris-4 0.49145 0.49187 0.49055 0.48264 0.49173 0.49207 0.49149 0.49121 0.49198
Iris-5 0.49124 0.49155 0.49225 0.49173 0.48395 0.49078 0.49224 0.49220 0.49257
Iris-6 0.49205 0.49207 0.49193 0.49207 0.49078 0.49013 0.49161 0.49250 0.49239
Iris-7 0.49118 0.49192 0.49192 0.49149 0.49224 0.49161 0.48589 0.49264 0.49234
Iris-8 0.49240 0.49278 0.49232 0.49121 0.49220 0.49250 0.49264 0.48803 0.49206
Iris-9 0.49197 0.49305 0.49136 0.49198 0.49257 0.49239 0.49234 0.49206 0.48397
Fig. 6.6: Probability distribution curves for matching and nearest non-matching
Hamming distances of the 1D Log-Gabor method (intra-class: µ = 0.36726,
σ = 0.058932; inter-class: µ = 0.48534, σ = 0.007392).
Fig. 6.7: Probability distribution curves for matching and nearest non-matching
Hamming distances of the DCT method (intra-class: µ = 0.48605, σ = 0.007033;
inter-class: µ = 0.49198, σ = 0.002176).
TABLE 6.3: RESULTS OF VERIFICATION TEST FOR 1D LOG-GABOR METHOD.
Threshold  FAR (%)    FRR (%)    Recognition Rate (%)
0.40197    0          3.047936   96.95206
0.412      0          2.493766   97.50623
0.42203    0          2.050429   97.94957
0.43206    0          1.66251    98.33749
0.44209    0          1.385425   98.61457
0.45212    0          1.052923   98.94708
0.46215    0.55417    0.886672   98.55916
0.47218    5.486284   0.609587   93.90413
0.48221    36.99086   0.498753   62.51039
0.49224    80.63175   0.110834   19.25741
TABLE 6.4: RESULTS OF VERIFICATION TEST FOR DCT METHOD.
Threshold  FAR (%)    FRR (%)    Recognition Rate (%)
0.46539 0 10.30756 89.69244
0.46722 0 10.30756 89.69244
0.46905 0 9.032973 90.96703
0.47088 0 8.922139 91.07786
0.47272 0 8.645054 91.35495
0.47455 0 8.423386 91.57661
0.47638 0 8.367969 91.63203
0.47812 0 8.201718 91.79828
0.48004 0.055417 7.98005 91.96453
0.48187 0.110834 7.370463 92.5187
0.4837 0.277085 6.760876 92.96204
0.48553 0.886672 6.040454 93.07287
0.48736 3.103353 4.876697 92.01995
0.48919 10.36298 2.826268 86.81075
0.49102 28.31809 1.385425 70.29648
0.49285 60.9033 0.665004 38.4317
0.49468 89.49848 0.055417 10.44611
Fig. 6.9: FAR and FRR versus Hamming Distances of 1D Log-Gabor approach.
Fig. 6.11: FAR and FRR versus Hamming distances for DCT approach.
Fig. 6.13: FAR and FRR versus Hamming distances of both 1D Log-Gabor and DCT
approaches.
Chapter 7
System Hardware Implementation
7.1 Introduction
together. PALs are also extremely fast [127]. PALs come in both mask and
field versions: in the mask version the manufacturer configures the chip,
while the field version allows end users to program the chips. The PAL is
suitable for small logic circuits, while the Mask-Programmable Gate Array
(MPGA) handles larger logic circuits, and
(iv) CPLDs and FPGAs: CPLDs are as fast as PALs but more complex.
FPGAs approach the complexity of gate arrays but are still programmable.
PALs offer short lead times, programmability, and no NRE charges; gate
arrays, in contrast, are distinguished by high density, relatively high
speed, and the ability to implement many logic functions. CPLDs and FPGAs
bridge the gap between PALs and gate arrays [127]. Complex Programmable
Logic Devices (CPLDs): essentially, they are designed to appear just like
a large number of PALs in a single chip. The devices are programmed using
programmable elements that, depending on the manufacturer's technology,
can be EPROM cells, EEPROM cells, or Flash EPROM cells. When considering
a CPLD for use in a design, the following issues should be taken into
account [127]: (i) the programming technology, which determines whether
the device can be programmed only once or many times; (ii) the function
block capability: how many function blocks the device contains, and what
additional logic resources are present, such as XNORs, ALUs, etc.; and
(iii) the I/O capability: how many I/Os are independent and usable for
any function, and how many are dedicated to clock input, master reset,
etc. Field Programmable Gate Arrays (FPGAs): the first static-memory-based
FPGA (commonly called an SRAM-based FPGA) was proposed by Wahlstrom in
1967. The cost per transistor likely delayed the introduction of
commercial static-memory-based programmable devices until the mid-1980s,
when it had been sufficiently lowered [9].
FPGAs are structured very much like a gate array ASIC. This makes FPGAs very
convenient for prototyping ASICs, or in places where an ASIC will eventually
be used. For example, an FPGA may be used in a design that needs to get to
market quickly regardless of cost; later, an ASIC can replace the FPGA when
production volume increases, in order to reduce cost [3, 127].
gates or XOR gates; (iii) N-input lookup tables; (iv) multiplexers; and
(v) wide fan-in AND-OR structures.
However, custom ICs have their own disadvantages. They are relatively very
expensive to develop, and they delay the product's introduction to market
(time to market) because of the increased design time. FPGAs were introduced
as an alternative to custom ICs, implementing an entire system on one chip
while providing the user with the flexibility of reprogrammability. Another
advantage of FPGAs over custom ICs is that, with the help of computer-aided
design (CAD) tools, circuits can be implemented in a short amount of time
[128]. Table 7.1 briefly summarizes the main comparison between CPLDs and
FPGAs.
Table 7.1: The main comparison between CPLD and FPGA [127].
                    CPLD                                FPGA
Architecture        PAL-like                            Gate array-like
Density             Low to medium (12 22V10s or more)   Medium to high (up to 1 million gates)
Speed               Fast, predictable                   Application dependent
Interconnection     Crossbar                            Routing
Power consumption   High                                Medium
conversion to ASICs. In contrast, the latter have fewer levels of logic and
less interconnect delay.
be customized to exactly suit the needs of the application to gain back some of
the lost performance and area-efficiency [9]. Processors, hard or soft, support
user-configurable peripheral devices implemented on the FPGA and run
common operating systems such as Linux [130]. Signal processing programs used
on a PC allow for rapid development of algorithms, as well as equally rapid
debugging and testing of applications. Matlab is such an environment:
treating an image as a matrix, it allows optimized matrix operations for
implementing algorithms. However, even specialized image processing programs
running on a PC cannot adequately handle huge numbers of high-resolution
images, since PC processors are produced for general use; further
optimization should take place on hardware devices [129]; (ii) Full custom
circuits – ASICs: however,
once designed, these systems are cheaper than other solutions with respect to
the manufacturing process. The investment can be recovered by the massive
production of these systems, as the cost per unit is greatly reduced when
compared to microprocessor architectures. At the same time, the hardware area
required for these systems is smaller than other solutions, which are used to
perform the same task; this makes this solution suitable for small devices and
helps to reduce the cost per unit. Further, except in large-volume commercial
applications, ASICs are considered too costly for many designs [129]. The
possibility of upgrading varies and depends on the hardware developed, but in
most cases it is not possible, as the hardware cannot be rebuilt; as a
result, these solutions are considered closed designs. Finally, one major
advantage of this type of solution is the reduction in processing time [5].
However, the circuit is fixed once fabricated, so it is impossible to modify
its function or even optimize it; (iii) Digital Signal Processors (DSPs):
DSPs, such as those available from Texas Instruments, are a class of hardware
devices that fall somewhere between ASICs and PCs in terms of performance and
design complexity. They can be programmed in different languages, such as
assembly code and C. Hardware knowledge is required, but it is much easier
for designers to learn compared with some other design choices. However,
algorithms designed for a
DSP cannot be highly parallel without using multiple DSPs. One area where
DSPs are particularly powerful is the design of floating-point systems,
whereas for ASICs and FPGAs floating-point operations are difficult to
implement [5, 129]; and (iv) Combining solutions: by using such a
combination, the inherent advantages of both systems are obtained, such as
reduced time, reduced area, and low power consumption. The FPGA can be used
to implement any logical function that an ASIC can perform, but the ability
to update the functionality after manufacturing offers advantages for many
applications. In
the past few years, the trend has been to connect the traditional logic blocks to
the embedded microprocessors within the chip. This provides the possibility for
the development of combined solutions. These are commonly referred to as
System on Chip (SoC) solutions. As regards the microprocessor used in FPGAs
or SoCs, two possibilities exist: a hard processor, i.e. a processor that is
physically embedded in the system, and a soft processor, implemented using
the FPGA logic blocks, which can provide additional functions if desired by
including extra features in the system [5]. One of the benefits of
the FPGA is its ability to execute operations in parallel, resulting in a
remarkable improvement in efficiency. Considering availability, cost, design
cycle, and ease of handling [129], the FPGA was chosen to implement the image
processing algorithms in this work.
quickly as they are completed. On the other hand, FPGAs are not well suited
to inherently serial operations; in this case, GPPs will outperform FPGAs due
to their higher clock speeds [11]. GPPs, in turn, are microprocessors
designed to perform a wide range of computing tasks [15].
Modern FPGAs have superior logic density, low chip cost, and performance
specifications comparable to low-end microprocessors. With multimillion
programmable gates per chip, current FPGAs can be used to implement digital
systems operating at frequencies up to 550 MHz [123]. GPPs are also generally
cheaper than FPGAs; hence, if a GPP can meet the application requirements
(performance, power, etc.), it is almost always the best choice. In general,
FPGAs are well suited to applications that demand extremely high performance
and reprogrammability [15].
FPGAs have the potential for higher performance and lower power consumption
than microprocessors and, compared with ASICs, offer lower non-recurring
engineering (NRE) costs, reduced development time, shorter time to market,
easier debugging, and reduced risk [130].
DSPs are also microprocessors that are specifically optimized for the
efficient execution of common signal processing tasks. DSPs are not as
specialized as ASICs, so they are usually not as efficient in terms of speed,
power consumption and price. DSPs are characterized by their flexibility and
ease of programming relative to the FPGA. In a DSP system, the programmer
does not need to understand the hardware architecture [18]; the hardware
implementation is hidden from the user. The DSP programmer uses either C or
assembly language. With respect to the performance criterion, the speed is
limited by the clock speed of the DSPs, given that the DSPs operate in a
sequential manner and accordingly cannot be fully parallelized. FPGAs, on the
other hand, can work very fast if an appropriate parallelized architecture is
designed. Reconfigurability in DSPs can be achieved by changing the memory
content of its program. This is in contrast to FPGAs where reconfigurability
can be performed by downloading reconfiguration data to the RAM. Power
consumption in a DSP depends on the number of memory elements used
regardless of the size of the executable program. For FPGA, the power
consumption depends on the circuit design. FPGAs are important when there is
a need to implement a parallel algorithm, that is, when different components
operate in parallel to implement the system functionality. Thus the speed of
execution is independent of the number of modules. This is in contrast to DSP
systems where the execution speed is inversely proportional to the number of
application for that matter) is the extremely low-level programming model it
supports. Normally, FPGAs are programmed in a hardware description language
such as VHDL, which is hardware-oriented rather than algorithm-oriented
[126].
The disadvantages of FPGAs are that the same application needs more chip area
(transistors) and runs slower on an FPGA than on an equally modern ASIC.
Owing to increasing transistor density, however, FPGAs have become more
powerful over the years [132].
One trend that has recently emerged is the use of flash storage in
combination with SRAM programming technology. In these devices from
Altera, Xilinx and Lattice, on-chip flash memory is used to provide non-volatile
storage while SRAM cells are still used to control the programmable elements
in the design. This addresses the problems associated with the volatility of
pure-SRAM approaches, such as the cost of additional storage devices or the
possibility of configuration data interception, while maintaining the infinite
reconfigurability of SRAM-based devices [9, 127].
Table 7.2: The main differences between FPGA programming technologies [9].
                              SRAM                  Flash                    Anti-fuse
Volatile?                     Yes                   No                       No
Reprogrammable?               Yes                   Yes                      No
Area (storage element size)   High (6 transistors)  Moderate (1 transistor)  Low (0 transistors)
Manufacturing process         Standard CMOS         Flash process            Needs special development
In-system programmable?       Yes                   Yes                      No
programmed for fast or slow rise and fall times. In addition, there is often a
flip-flop on outputs so that clocked signals can be output directly to the pins
without encountering significant delay. The same is done for inputs, so that a
signal does not incur much delay before reaching a flip-flop, which would
otherwise increase the device hold-time requirement [123, 127]; (iii)
Programmable Interconnect: Multiple copies of CLB slices are arranged in a
matrix on the surface of the chip. The CLBs are connected column-wise and
row-wise. At the intersections of columns and rows are Programmable Switch
Matrices (PSMs) [132]. In Fig. 7.5, a hierarchy of interconnect resources can
be seen. There are long lines, which can be used to connect critical CLBs that
are physically far from each other on the chip without inducing much delay.
They can also be used as buses within the chip. There are also short lines,
which are used to connect individual CLBs that are located physically close to
each other. There are often one or several switch matrices, like those in a
CPLD, to connect these long and short lines together in specific ways.
Programmable switches inside the chip allow the connection of CLBs to
interconnect lines, and of interconnect lines to each other and to the switch
matrix. Three-state buffers are used to connect many CLBs to a long line,
creating a bus. Special long lines, called global clock lines, are specially
designed for low impedance and thus fast propagation times. These are
connected to the clock buffers and to each clocked element in each CLB. This
is how the clocks are distributed throughout the FPGA [123, 127]; and (iv)
Clock Circuitry: These buffers connect to clock input pads and drive the clock
signals onto the global clock lines described above. These clock lines are
designed for low skew and fast propagation times [15, 17].
reduces the cost of production [128]. For a homogeneous FPGA array (one that
employs just one type of logic block), the fundamental area trade-offs of an
architecture are as follows [9]: (i) as the functionality of the logic block
increases, fewer logic blocks are needed to implement a given design, and up
to a point, fewer logic blocks reduce the total area required by a design; and
(ii) as the functionality of the logic block increases, its size (and the amount of
routing it needs per block) also increases. In SRAM-based devices, the
configuration is stored on embedded static RAM within the chip, which
controls the contents of the Logic Cells (LCs) and the multiplexers that
perform routing. Early FPGAs used a logic cell consisting of a 4-input lookup
table (LUT) and a register. Since area increases with the number of inputs but
logic depth decreases, the trend toward larger LUTs reflects the increased
interconnect-to-logic delay in modern integrated circuit (IC) technology [130].
For a homogeneous FPGA array that employs just one type of logic
block, fundamental architectural effects on speed include [9]: (i) As the
functionality of the logic block increases, fewer logic blocks are used on the
critical path of a given circuit, resulting in the need for fewer logic levels and
higher overall speed performance. A reduction in logic levels reduces the
required amount of inter-logic block routing, which contributes a substantial
portion of the overall delay, and (ii) As the functionality of the logic block
increases, its internal delay increases, possibly to the point where the delay
increase offsets the gain due to the reduced logic levels and reduced inter-logic
block routing.
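The area/speed trade-off above can be illustrated with a toy model in Python. The model assumes a k-input LUT costs 2^k configuration bits and that a wide n-input associative function maps onto a tree of k-input LUTs; both assumptions are simplifications for illustration, not figures taken from the text.

```python
def lut_area_bits(k: int) -> int:
    """Configuration bits (SRAM cells) needed by one k-input LUT: 2**k."""
    return 2 ** k

def lut_levels(n: int, k: int) -> int:
    """Levels of a tree of k-input LUTs needed to cover an n-input
    associative function (e.g. a wide AND): smallest L with k**L >= n."""
    levels, covered = 0, 1
    while covered < n:
        covered *= k
        levels += 1
    return levels

if __name__ == "__main__":
    # As LUT size grows, per-block area rises exponentially while the
    # number of logic levels (and hence inter-block routing) shrinks.
    for k in (2, 3, 4, 5, 6):
        print(k, lut_area_bits(k), lut_levels(16, k))
```

The printout shows the tension the text describes: a 6-input LUT needs 64 bits of storage where a 2-input LUT needs 4, but it covers a 16-input function in far fewer levels.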
Xilinx Incorporated first created FPGAs in 1984. Since that time, many
other companies have marketed FPGAs, the major companies being Xilinx,
Actel and Altera. Xilinx FPGAs use SRAM technology to implement hardware
designs. Commonly used Xilinx FPGAs today are the Spartan-3A, Spartan-3E,
and Virtex families. Examples of Programmable System-on-a-Chip (PSoC)
are the Xilinx Virtex-II Pro, Virtex-4 and Virtex-5 FPGA families, which
include one or more hard-core PowerPC processors embedded along with the
FPGA’s logic fabric. Alternatively, soft processor cores that are implemented
using part of the FPGA logic fabric are also available. Many soft processor
cores are now available such as: Xilinx 32-bit MicroBlaze and PicoBlaze, and
the Altera Nios and the 32-bit Nios II processor [15].
(i) Spartan-3E family: the Spartan-3E family builds on the success of the
earlier Spartan-3 family by increasing the amount of logic per I/O, significantly
reducing the cost per logic cell. The Spartan-3A family builds on the success of
the earlier Spartan-3E and Spartan-3 FPGA families by increasing the amount
of I/O per logic, significantly reducing the cost per I/O. The Spartan-3A DSP
FPGA extends the Spartan-3A FPGA family by increasing the amount of
memory per logic and adding XtremeDSP DSP48A slices, which replace the
18x18 multipliers found in the Spartan-3A devices. The Spartan-3AN FPGA
family combines all the features of the Spartan-3A FPGA family with leading
in-system flash memory technology for configuration and non-volatile data
storage. It is excellent for applications such as blade servers, medical devices,
automotive infotainment, and GPS. The extended Spartan-3A FPGA line
includes the non-volatile Spartan-3AN devices, which combine leading-edge
FPGA and flash technologies to provide a new evolution in security, protection
and functionality, ideal for space-critical or secure applications [131]. In
particular, the Spartan-3A and Spartan-3E are used as the target technology in
this study; (ii) Spartan-3A family: the new Spartan-3A XC3S700A FPGA
delivers up to 700K system gates (13,248 logic cells). This family includes five
devices, offering system performance greater than 66 MHz (a wide frequency
range of 5 MHz to over 300 MHz), and featuring 1.2 to 3.3 volt internal
operation with 4.6 volt I/Os to allow optimum performance and compatibility
with existing voltage standards; and (iii) 3D-FPGA architecture: although the
two-dimensional (2D) FPGA architecture discussed so far has several
advantages, such as a high degree of flexibility and inherent parallelism, it
suffers from a major problem of long interconnect delays; almost 80% of the
total power is dissipated in interconnects and clock networks. To reduce the
interconnect delay, the 3D-FPGA model is based on 2D-FPGA layers that are
vertically stacked, with interconnects provided between vertically adjacent 3D
switch blocks. The vertical stacking reduces the total interconnect length,
which eventually results in reduced interconnect delay and improved
performance and speed [123].
The three key factors that play an important role in FPGA-based
designs are the FPGA architecture, the electronic design automation (EDA)
tools, and the design techniques employed at the algorithmic level using
hardware description languages [123]. EDA tools such as the Xilinx Integrated
Software Environment (ISE), Altera's Quartus II and Mentor Graphics' FPGA
Advantage play a very important role in obtaining an optimized digital circuit
on an FPGA [15].
Hardware description languages allow the designer to model, simulate and
ultimately synthesize into hardware logic the complex digital designs
commonly encountered in modern electronic devices [139].
The ISE Project Navigator: (i) checks that a project is open; (ii) checks
whether all project resources are available and up-to-date; (iii) shows the
design process flow; (iv) provides buttons for launching the applications
involved in the design process; (v) provides an interface to external third-party
programs; (vi) places all errors and status messages in the message window;
(vii) provides automated data transfer between the tools involved in processing
your designs; (viii) provides design status information; and (ix) works with one
project at a time.
Several factors should be considered when choosing an HDL: (i) Ease of
use: this factor includes both ease of learning (how easy it is to learn the
language without prior experience with HDLs) and ease of use (once the
first-time user has learned the language, how easy it will be to use for their
specific design requirements). Additionally, future usability asks whether a
language that is sufficient for today's requirements will also meet tomorrow's;
(ii) Adaptability: another important factor is how well the HDL can integrate
into the current design environment and the existing design philosophy; and
(iii) The reality factor: the last factor is one of general reality: does the HDL
support the specific technical methodologies and strategies that the first-time
user requires [142]?
(iii) Implementation, which consists of the translate, map, and place & route
steps. The Translate step essentially flattens the output of the synthesis tool
into a single large netlist. A netlist in general is a big list of gates (typically
NAND/NOR), and it is compressed at this stage to remove any hierarchy. In
the Map step, the EDA tool transforms the netlist of technology-independent
logic gates into one comprised of the logic cells and IOBs of the target FPGA
architecture. Technology mapping has a significant effect on the quality of the
implemented circuit. Placement follows technology mapping and places each
of these physical components onto the FPGA chip. The next step, routing, is
the last step in the design flow prior to generating the bit-stream to program
the FPGA. It connects the placed components through the switch matrix and
dedicated routing lines. FPGA routing is a tedious process, because it can use
only the prefabricated routing resources such as wire segments, programmable
switches and multiplexers. Then, timing simulation validates the logical
correctness of the design under actual delays; timing information is generated
in log files that indicate both the propagation delay through each building
block in the architecture and the actual routing delay of the wires connecting
the building blocks together [15, 125]. The ISE Implementation stage outputs
an NGD (native generic database) file. Just as the synthesis tools output an
HDL simulation netlist, so do the ISE Implementation tools; however, this
time the simulation files contain all of the timing information that was
generated in the Translate, Map and Place & Route stages. These files can be
used for two purposes. First, they can be read back into the ModelSim
simulator just as before. This is called back-annotated timing simulation. This
type of simulation is much more time-consuming and difficult, since all of the
propagation and wiring delays are evident on each signal. Second, they can be
used for static timing analysis, i.e. timing analysis that does not depend on
stimulus to the design circuit [135]; and (iv) Bit-stream generation: generating
the bit stream and downloading the generated bit file into the FPGA is the
final step of the FPGA design flow [15].
Once the place and route process is finished, the resulting choices for
the configuration of each programmable element in the FPGA chip, be it a
logic block or a routing switch, are encoded into the configuration bit stream.
Fig. 7.8: FPGA design flow. (a) Simulation steps. (b) Implementation flow [15].
The resulting data will be a model (.mdl) file for the complete system,
and a second model file for the processing blocks. This second model file is
the one processed to create the VHDL code, described in terms of VHDL
entities and architectures. Two stages in the conversion are considered. The
first, primarily described here, maps Simulink blocks to VHDL entities and
architectures. The second performs an optimization routine to map the
functions onto a predefined architecture. Both solutions may be considered in
order to find a solution that attains the required functionality whilst occupying
a small silicon area. Once conversion and optimization have been completed,
the VHDL (.vhd) files generated by the HDL coder tool are used within a
suitable design flow to compile the entities and architectures into VHDL
design units. Additionally, prior to synthesis, the user may also need to
intervene and modify the VHDL architecture code in order to guide the
synthesis toward certain circuit architecture styles that may be required [139].
The tool also generates scripts that invoke and control downstream tools such
as HDL simulators, RTL logic synthesizers and the Xilinx ISE implementation
tools. The most critical and also the most time-consuming procedure is the
floating-point to fixed-point conversion [129].
The System Generator block set also offers a black box. The black-box
feature allows the user to develop a custom block whose functionality is
specified in an HDL, either Verilog or VHDL. A very convenient feature of
the System Generator block set was the Gateway In block. This block took a
double-precision floating-point value from MATLAB and converted it to a
desired fixed-point format, in this case a signed 16-bit number with 15 bits to
the right of the binary point. Similarly, the Gateway Out block converted the
fixed-point results back to floating-point values for display and analysis in
MATLAB. The use of 16-bit fixed-point math did not result in a noticeable
change in the accuracy of the output [124].
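The signed 16-bit format with 15 fractional bits described above is commonly called Q1.15. A minimal Python sketch of the round-trip conversion the Gateway blocks perform (an illustration only, not the System Generator implementation; the function names are ours):

```python
def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to a signed 16-bit Q1.15 integer
    (15 bits to the right of the binary point), with saturation."""
    scaled = round(x * (1 << 15))
    return max(-(1 << 15), min((1 << 15) - 1, scaled))

def from_q15(q: int) -> float:
    """Convert a Q1.15 integer back to a float."""
    return q / (1 << 15)

if __name__ == "__main__":
    q = to_q15(0.5)
    print(q, from_q15(q))                      # 16384 0.5
    # Round-trip error stays below one LSB (2**-15 ~= 3.05e-5).
    err = abs(from_q15(to_q15(0.333)) - 0.333)
    print(err < 2 ** -15)                      # True
```

This illustrates why 16-bit fixed point barely changed the output accuracy: the quantization error per value is bounded by half an LSB of the Q1.15 grid.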
Critical issues with this design flow, however, are: (i) quality of results;
(ii) sophistication of the Simulink block library; (iii) compile time; (iv) cost
and availability of development boards; and (v) cost, functionality, and ease of
use of the FPGA vendor's design tools [143].
ASIC prototyping with FPGAs enables fast and accurate SoC system
modeling and verification as well as accelerated software and firmware
development. Data centers are evolving rapidly to meet the expanding
computing, networking, and storage requirements of enterprise and cloud
ecosystems. In medical imaging systems, for diagnostic, monitoring, and
therapy applications, FPGAs can be used to meet a range of processing,
display, and I/O interface requirements.
The 1D Discrete Cosine Transform helps separate the image into parts
(or spectral sub-bands) of differing importance with respect to the spatial
quality of the image. It is similar to the Discrete Fourier Transform since it
transforms a signal or image from the spatial domain to the frequency domain.
However, one primary advantage of the DCT over the DFT is that the former
involves only real multiplications, which reduces the total number of required
multiplications. Another advantage lies in the fact that for most images much
of the signal energy lies at low frequencies, and the high-frequency
coefficients are often small enough to be neglected with little visible
distortion. The DCT does a better job of concentrating energy into lower-order
coefficients than does the DFT for image data [16].
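The energy-compaction property can be seen with a naive 1-D DCT-II in Python. This is an illustration only, not the hardware implementation; the scaling uses the C_k/2 normalization that appears in the chapter's DCT equations (orthonormal for N = 8).

```python
import math

def dct_1d(x):
    """Naive 1-D DCT-II: Y_k = (C_k / 2) * sum_n x[n] * cos((2n+1) k pi / 2N),
    with C_0 = 1/sqrt(2) and C_k = 1 otherwise (orthonormal for N = 8)."""
    N = len(x)
    out = []
    for k in range(N):
        ck = 1 / math.sqrt(2) if k == 0 else 1.0
        s = sum(x[n] * math.cos((2 * n + 1) * k * math.pi / (2 * N))
                for n in range(N))
        out.append(0.5 * ck * s)
    return out

if __name__ == "__main__":
    # A smooth ramp, typical of gradually varying image rows: nearly all
    # of the energy lands in the two lowest-order coefficients.
    x = [10, 11, 12, 13, 14, 15, 16, 17]
    y = dct_1d(x)
    total = sum(c * c for c in y)
    low = sum(c * c for c in y[:2])
    print(low / total)  # very close to 1.0
```

Because this scaling is orthonormal, total energy is preserved (Parseval), so the ratio printed is exactly the fraction of signal energy captured by the low-order coefficients.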
The transformed image needs to be broken into 8×8 blocks; each block
(tile) contains 64 pixels. Once the process of converting an image into basic
frequency elements is completed, regions with gradually varying patterns will
have low spatial frequencies, and those with much detail and sharp edges will
have high spatial frequencies. The DCT uses cosine waves to represent the
signal: each 8×8 block results in an 8×8 spectrum whose entries are the
amplitudes of the cosine basis functions [148].
where

$$C_k = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k \neq 0 \end{cases}$$

Due to the symmetry of the (8x8) multiplication matrix, it can be replaced
by two (4x4) x (4x4) matrices, which can be computed in parallel, as can the
sums and differences forming the vectors below [16, 150]:

$$\begin{bmatrix} Y_0 \\ Y_2 \\ Y_4 \\ Y_6 \end{bmatrix} =
\begin{bmatrix} A & A & A & A \\ B & C & -C & -B \\ A & -A & -A & A \\ C & -B & B & -C \end{bmatrix}
\begin{bmatrix} X_0 + X_7 \\ X_1 + X_6 \\ X_2 + X_5 \\ X_3 + X_4 \end{bmatrix} \qquad (7.3)$$

$$\begin{bmatrix} Y_1 \\ Y_3 \\ Y_5 \\ Y_7 \end{bmatrix} =
\begin{bmatrix} D & E & F & G \\ E & -G & -D & -F \\ F & -D & G & E \\ G & -F & E & -D \end{bmatrix}
\begin{bmatrix} X_0 - X_7 \\ X_1 - X_6 \\ X_2 - X_5 \\ X_3 - X_4 \end{bmatrix} \qquad (7.4)$$

where $A = \cos(\pi/4)$, $B = \cos(\pi/8)$, $C = \cos(3\pi/8)$, $D = \cos(\pi/16)$,
$E = \cos(3\pi/16)$, $F = \cos(5\pi/16)$, and $G = \cos(7\pi/16)$.
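The even/odd decomposition can be checked numerically. The following is a pure-Python sketch (not the thesis's VHDL): the signed matrix entries follow the standard Chen factorization of the 8-point DCT, and the butterfly result is compared against a direct DCT-II evaluation.

```python
import math

P = math.pi
A, B, C2, D, E, F, G = (math.cos(P/4), math.cos(P/8), math.cos(3*P/8),
                        math.cos(P/16), math.cos(3*P/16),
                        math.cos(5*P/16), math.cos(7*P/16))

# Rows of EVEN give Y0, Y2, Y4, Y6; rows of ODD give Y1, Y3, Y5, Y7.
EVEN = [[A,   A,   A,   A],
        [B,   C2, -C2, -B],
        [A,  -A,  -A,   A],
        [C2, -B,   B,  -C2]]
ODD  = [[D,  E,  F,  G],
        [E, -G, -D, -F],
        [F, -D,  G,  E],
        [G, -F,  E, -D]]

def dct8_butterfly(x):
    """8-point DCT via the addition butterfly plus two parallel 4x4 products."""
    s = [x[i] + x[7 - i] for i in range(4)]   # sums feed the even half
    d = [x[i] - x[7 - i] for i in range(4)]   # differences feed the odd half
    y = [0.0] * 8
    for r in range(4):
        y[2 * r]     = 0.5 * sum(EVEN[r][j] * s[j] for j in range(4))
        y[2 * r + 1] = 0.5 * sum(ODD[r][j]  * d[j] for j in range(4))
    return y  # note C_0 = 1/sqrt(2) is absorbed into A in EVEN's first row

def dct8_direct(x):
    """Direct DCT-II with the C_k / 2 scaling, for comparison."""
    out = []
    for k in range(8):
        ck = 1 / math.sqrt(2) if k == 0 else 1.0
        out.append(0.5 * ck * sum(x[n] * math.cos((2 * n + 1) * k * P / 16)
                                  for n in range(8)))
    return out

if __name__ == "__main__":
    x = [52, 55, 61, 66, 70, 61, 64, 73]
    print(all(abs(u - v) < 1e-9
              for u, v in zip(dct8_butterfly(x), dct8_direct(x))))  # True
```

This naive matrix form uses more multipliers than the optimized butterfly network realized in hardware; it only demonstrates that the factorization reproduces the direct transform.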
The algorithm then requires an addition butterfly and a number of 4-
input Multiply-Accumulators (MACs) that can be realized with only one LUT
This research uses VHDL to develop and implement the iris
recognition system; the VHDL files are generated directly from the Simulink
programming environment using Embedded MATLAB and the Simulink HDL
coder. These tools allow easy handling of [16, 133] complex signals, overflow
and underflow, and the generation of test benches, among other facilities. All
operations in these programming environments use the Simulink fixed-point
number representation.
After the system hardware architecture has been programmed, the tools
available in the Simulink HDL coder are applied, which include compatibility
checking.
The simulation window, shown in Fig. 7.12, includes the following items:
main menu, simulator toolbar, objects and waveform windows, workspace,
and transcript. Functional testing and timing analysis were carried out for the
proposed system. The results were verified and synthesized using the
Spartan-3E. The proposed architecture was tested with a 100 ps clock for each
1-D DCT block and for the HD matching block, and was found to work
satisfactorily. The simulated results are shown in Fig. 7.13. The HD value is
accumulated every clock cycle, producing the final result after 2944 clock
pulses. Each clock cycle accepts 8 pixel values and computes their HD; the
next clock pulse enters another 8 pixels, and their HD is added to the previous
value, increasing the running count of differing binary bits. When the
accumulator reaches the threshold value, the decision signal changes to show
the final decision (imposter or authorized). The decision (authorized signal)
changes to binary '1' when the distance value reaches the threshold, indicating
that the entered irises were different.
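The accumulation scheme described above can be sketched in software. This is a hedged Python model of the hardware behavior, not the VHDL itself; the function name and the tiny 32-bit codes are ours, chosen only to show the 8-bits-per-clock accumulation and the threshold decision.

```python
def hd_accumulate(template, stored, bits_per_clock=8, threshold=None):
    """Model of the bit-serial Hamming-distance unit: each 'clock'
    consumes bits_per_clock bits from both codes, XORs them, and adds
    the number of differing bits to a running accumulator. If a
    threshold is given, the decision flag is raised (irises differ)
    as soon as the accumulator reaches it."""
    assert len(template) == len(stored)
    acc = 0
    for i in range(0, len(template), bits_per_clock):
        acc += sum(a != b for a, b in zip(template[i:i + bits_per_clock],
                                          stored[i:i + bits_per_clock]))
        if threshold is not None and acc >= threshold:
            return acc, True   # decision signal goes to '1': imposter
    return acc, False          # threshold never reached: authorized

if __name__ == "__main__":
    t = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32-bit toy codes (2 diffs per 8 bits)
    s = [1, 0, 0, 1, 0, 1, 1, 0] * 4
    print(hd_accumulate(t, s))               # (8, False)
    print(hd_accumulate(t, s, threshold=6))  # (6, True)
```

In the real design the codes are 2944 x 8 bits long and the accumulator is a hardware register, but the control flow is the same.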
Fig. 7.13: Simulation of the iris hardware architecture with fixed point using ModelSim.
From the reported results, we can conclude that all investigated FPGA
implementations can speed up the iris recognition system dramatically.
However, for computationally intensive algorithms like the DCT, better results
can be achieved with coarser-grained reconfigurable logic, like that realized by
the Xilinx Spartan-3E.
One reason for this considerable data-processing speed is the utilization of
the coarse-grained reconfigurable resources available in the FPGA. In
particular, the use of hardwired multipliers and fast carry chains leads to a
considerable acceleration of the implemented computations.
The post-route timing report indicates a total on-chip delay of 16.229 ns
(11.453 ns for logic and 4.776 ns for routing).
The portable nature of this system requires it to consume little power
and to be relatively small in size. Additionally, embedded vision systems need
extremely large data I/O bandwidth and the computational capacity to parse
this data at speeds fast enough to meet real-time system requirements.
Using FPGAs to accelerate image processing algorithms presents several
challenges. One simply relates to Amdahl's law: a large proportion of the
algorithm must lend itself to parallelization to achieve a substantial speedup.
Therefore, it is important to develop an appropriate algorithm to exploit the
available parallelism. The problem is made more difficult by an FPGA's
general structure, which is not limited to two or four fixed processors as on
current dual- or quad-core chips [11, 134].
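Amdahl's law itself is a one-line formula, shown here with illustrative fractions (the parallel fractions below are examples, not measurements from this system):

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Amdahl's law: overall speedup when a fraction p of the work is
    spread over n parallel units while the rest remains serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_units)

if __name__ == "__main__":
    # Even with 1000 parallel units, the serial fraction caps the gain:
    for p in (0.50, 0.90, 0.99):
        print(p, round(amdahl_speedup(p, 1000), 1))
```

With 1000 units, a 50% parallelizable algorithm gains barely 2x, while a 99% parallelizable one gains about 91x, which is why the algorithm itself must be restructured for parallelism before an FPGA can help.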
(ii) the design must exploit parallelism rather than relying solely upon a high
rate of processing; (iii) sequential processing of software code avoids
contention for system resources, whereas an FPGA's potential for massive
parallelism frequently complicates arbitration and creates contention for
memory and shared processors; and (iv) the lack of an operating system
complicates the management of 'thread' scheduling, memory, and system
devices, which must be managed manually.
when necessary, to design the chip. Second, it allows flexibility in the design:
sections can be removed and replaced with higher-performance or optimized
designs without affecting other sections of the chip; and (vi) Debugging: the
problem stems from the large volume of data contained within an image. With
complex algorithms, it is extremely difficult to design test vectors that exercise
all of the functionality of the system, especially when there may be complex
interactions. In image processing the problem is even more difficult, because
the algorithm may be working perfectly as designed, yet not be appropriate or
adequate for the task to which it is applied [134].
Chapter 8
Conclusions and Future Work
8.1 Conclusion
General-purpose systems are slow and not portable, so an FPGA-based
system prototype was implemented using the VHDL language and the Xilinx
Integrated Software Environment (ISE 12.1) platform. Hardware systems are
small and fast. A fast DCT-based feature extraction (the butterfly network
needs 29 additions and 13 multiplications) and Hamming Distance matching
were implemented and simulated with the ModelSim SE tool (Model
Technology, version 6.4a). The proposed approach was implemented and
synthesized on a Xilinx Spartan-3E FPGA chip (XC3S1200E-4FG320) with a
50 MHz clock frequency, occupying 1% of the chip's CLBs and 85% of its
RAMB16s. The implementation needs 58.88 µs to process the input values and
present a result, with a 16.229 ns total on-chip delay. This is a very fast
implementation compared with the current software system, which needs an
average feature-extraction and matching time of 1.926794 seconds.
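The speedup implied by the two reported figures is easy to make explicit (the two times below are taken from this chapter; the ratio is simple arithmetic):

```python
# Reported times: software feature extraction + matching averages
# 1.926794 s, while the FPGA implementation needs 58.88 microseconds.
software_s = 1.926794
fpga_s = 58.88e-6

speedup = software_s / fpga_s
print(round(speedup))  # roughly 32,700x
```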
References
[1] M. López, J. Daugman, E. Cantó, "Hardware–software co-design of an
iris recognition algorithm", IET Information Security, Vol. 5, No. 1, pp.
60–68, 2011.
[2] K. Grabowski, W. Sankowski, M. Napieralska, M. Zubert, A. Napieralski, "
Iris Recognition Algorithm Optimized for Hardware Implementation",
Computational Intelligence and Bioinformatics and Computational Biology,
CIBCB '06. IEEE Symposium, Toronto, Ont., Print ISBN: 1-4244-0623-4
pp. 1 – 5, 28-29 Sept. 2006.
[3] B. J. Ulis, R. P. Broussard, R. N. Rakvic, R. W. Ives, N. Steiner, and H.
Ngo, "Hardware Based Segmentation in Iris Recognition and Authentication
Systems", IEEE Transactions on Information Forensics and Security, vol. 4,
no. 4, pp. 812–823, 2009.
[4] L. Kennell, R. W. Ives, and R. M. Gaunt, “Binary morphology and local
statistics applied to iris segmentation for recognition,” in Proceedings of the
IEEE International Conference on Image Processing (ICIP ’06), Atlanta, Ga,
USA, Print ISBN: 1-4244-0480-0, pp. 293 – 296, 8-11 October 2006.
[5] T. Noergaard, "Embedded Systems Architecture: A Comprehensive Guide
for Engineers and Programmers (Embedded Technology)", Newnes,
ISBN-13: 978-0750677929, 2005.
[6] R. N. Rakvic, H. Ngo, R. P. Broussard, Robert W. Ives., "Comparing an
FPGA to a Cell for an Image Processing Application," EURASIP Journal on
Advances in Signal Processing, ISSN:1110-8657, vol. 2010, Article ID
764838, p. 1-7, 2010.
[7] M. Moradi, M. Pourmina, and F. Razzazi, "A New method of FPGA
implementation of Farsi handwritten digit recognition," European Journal of
Scientific Research, vol. 39, No. 3, pp. 309-315, 2010.
[8] B. Draper, W. Najjar, W. Bohm, et al., "Compiling and optimizing image
processing algorithms for FPGAs", in: Computer Architectures for Machine
Perception, 2000, Proceedings, Fifth
on Pattern Analysis and Machine Intelligence, vol. 29, No. 4, pp. 596-606,
Apr. 2007.
[24] J. Huang, T. Tan, L. Ma, Y. Wang, “Phase correlation based iris image
registration model,” Journal of Computer Science and Technology, Vol. 20,
No. 3, pp. 419-425, May 2005.
[25] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on
iris texture analysis,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 25, No. 12, pp. 1519-1533, 2003.
[26] L. Ma, T. Tan, Y. Wang and D. Zhang. “Efficient Iris Recognition by
characterizing Key Local Variations”, IEEE Transactions on Image
Processing, vol. 13, No. 6, pp. 739-750, June 2004.
[27] Dhaval Modi, Harsh Sitapara, Rahul Shah, Ekata Mehul, Pinal Engineer,"
Integrating MATLAB with Verification HDLs for Functional Verification of
Image and Video Processing ASIC", International Journal of Computer
Science & Emerging Technologies (E-ISSN: 2044-6004), Volume 2, Issue
2, pp. 258-265, April 2011.
[28] D. Bhowmik, B. P. Amavasai and T. Mulroy, "Real-time object
classification on FPGA using moment invariants and Kohonen neural
networks", Proc. IEEE SMC UK-RI Chapter Conference 2006 on Advances
in Cybernetic Systems, Sheffield, UK., pp. 43-48, 7-8 September 2006.
[29] Ryan N. Rakvic, Bradley J. Ulis, Randy P. Broussard, and Robert W. Ives,
"Iris Template Generation with Parallel Logic",
Signals, Systems and Computers, 2008 42nd Asilomar Conference on
Pacific Grove, CA, Print ISBN: 978-1-4244-2940-0, pp. 1872 - 1875 , 26-29
Oct. 2008.
[30] K.W. Bowyer, K. Hollingsworth, and P.J. Flynn, "Image understanding for
iris biometrics: A survey", Computer Vision and Image Understanding,
Vol. 110, No. 2, PP. 281-307, May 2008.
[31] Rozeha A. Rashid, Nur Hija Mahalin, Mohd Adib Sarijari, Ahmad
Aizuddin Abdul Aziz, "Security System Using Biometric Technology:
Design and Implementation of Voice Recognition System (VRS)",
[106] Z. Sun, Y. Wang, T. Tan, J. Cui, "Improving iris recognition accuracy
via cascaded classifiers", IEEE Trans. Syst. Man Cybern., Vol. 35, no. 3,
pp. 435–441, August 2005.
[107] Peng-Fei Zhang, De-Sheng Li, Qi Wang, "A novel iris recognition
method based on feature fusion", in: International Conference on Machine
Learning and Cybernetics, pp. 3661–3665, 2004.
[108] M. Vatsa, R. Singh, A. Noore, "Reducing the false rejection rate of iris
recognition using textural and topological features", Int. J. Signal Process.,
Vol. 2, no. 2, pp. 66–72, 2005.
[109] A. Oppenheim, J. Lim. "The importance of phase in signals",
Proceedings of the IEEE, vol. 69, pp. 529-541, 1981.
[110] A. Kumar, A. Passi, "Comparison and Combination of Iris Matchers for
Reliable Personal Identification", IEEE Computer Society Conference on
Computer Vision and Pattern Recognition Workshops, Anchorage, AK, pp.
1-7, 23-28 June 2008.
[111] C. H. Daouk, L.A. Esber, F. O. Kanmoun, and M. A. Alaoui. “Iris
Recognition", Proc. ISSPIT, pp. 558-562, 2002.
[112] P. Yao, J. Li, X. Ye, Z. Zhuang, and B. Li, “Iris Recognition Algorithm
Using Modified Log-Gabor Filters”, Proceedings of the 18th International
Conference on Pattern Recognition, 2006.
[113] D. Field. "Relations Between the Statistics of Natural Images and the
Response Properties of Cortical Cells". Journal of the Optical Society of
America, 1987.
[114] M. T. Heideman, D. H. Johnson, and C. S. Burrus. "Gauss and the
history of the Fast Fourier Transform", Archive for History of Exact
Sciences, Vol. 34, pp. 265-267, 1985.
[115] A. B. Watson, "Image compression using the DCT", Mathematica
Journal, Vol. 4, No. 1, pp. 81-88, 1994.
[116] N. I. Cho and S.U. Lee, "Fast Algorithm and Implementation of 2-D
DCT", IEEE Transactions on Circuits and Systems, Vol. 38 p. 297, March
1991.
Author's Publications
LIST OF PUBLICATIONS
Thesis Summary

The automated real-time system for identifying persons by the iris of the
human eye is considered one of the strongest security systems. However, the
drawbacks of the algorithms on which these systems are built are their
intensive computation, their execution on computers with relatively slow
processors, and their lack of portability; this led us to implement them on a
Field Programmable Gate Array (FPGA) using the VHDL language.

To overcome the problems of poor real-time performance, the system was
first implemented in software to obtain high accuracy, reliability and speed,
and a highly secure system. Using digital image processing principles followed
by an edge detector, the inner iris boundary was found, and then the outer
boundary using the Circular Hough Transform (CHT) algorithm that Wildes
applied in his system, after adding modifications to this method in the
proposed system. To normalize the images before classification and matching,
they were converted to polar coordinates using Daugman's well-known
equations over a predefined range. Finally, the expressive, enhanced iris code
was obtained using the one-dimensional Log-Gabor wavelet transform (1D
Log-Gabor) and the two-dimensional discrete cosine transform (2D-DCT),
with matching performed using the Hamming Distance measure.

The proposed system was tested on a set of iris images of a number of
persons, taken from the first version of the CASIA database of the Chinese
Academy of Sciences. The results showed that the implementation based on
the 1D Log-Gabor method achieves higher accuracy (98.94%) than the
DCT-based system (93.07%), with a lower error rate.

The proposed system was implemented using chips manufactured by
Xilinx; the design occupied 1% of the total area of an FPGA
XC3S1200E-FG320 chip, achieving a processing time of 58.88 microseconds,
compared with the software system, which takes 1.92 seconds to produce the
classification and matching decision. Although the system based on the 1D
Log-Gabor algorithm is more accurate and more secure, the DCT-based
system is more reliable, faster to execute, and better at discriminating
unauthorized persons. The FPGA-based recognition system is likewise faster
than the software system and smaller in size.
Thesis Objective:
The main objective of this thesis is to design and implement a system for
identifying persons and verifying identity through the iris of the eye, for use in
protection and security systems, and to build its hardware. The work in this
thesis proceeded as follows:
First: the software system was implemented using the MATLAB package,
and a simulator was built to carry out the testing; the graphical user interface
of this system was then programmed.
Second: a comparative study was made between the recognition system
based on the 1D Log-Gabor algorithm and the one based on the DCT
algorithm, according to the performance criteria of accuracy, reliability and
speed.
Third: the simulation was carried out on one of the field programmable
gate arrays using the Xilinx ISE 12.1 environment, implemented, and
compared with the MATLAB software system in terms of speed.
Chapter Two: this chapter includes a general introduction to person
identification methods and their characteristics, the requirements of these
methods, their operating systems and the most widely used among them, and
finally the performance concepts and factors used to compare these methods
and the reasons for choosing the iris of the eye from among them.
Chapter Three: this chapter presents the concepts of the human vision
system and the components of the automated identification system, as well as
some diseases that may affect the iris of the eye, together with the advantages
of this system and the difficulties facing its implementation, especially in the
acquisition stage.
Chapter Four: this chapter covers the international iris image databases
available on the Internet for researchers, and the characteristics of the most
widely used of these collections, with emphasis on those used in this research.
Chapter Five: this chapter contains the algorithms of the proposed system.
It reviews the steps of segmenting and extracting the iris and transforming it to
the polar coordinate system, and presents the results of executing the
algorithms of each stage.
Chapter Six: this chapter explains the iris image enhancement methods, the
extraction of the distinctive features used for classification, and the
comparison of the automated system implemented using 1D Log-Gabor with
the one implemented using DCT, as well as the programming of the
classification and matching algorithm and the presentation of the results.
Chapter Seven: this chapter presents the implementation of the iris
recognition system using the field programmable gate array. It gives a
historical overview of programmable hardware systems and circuits and the
methods of programming them. The chapter covers the internal structure of
FPGA chips and the steps of programming them, presents the simulation
results, and selects the most suitable electronic chip for implementing the
application.
Chapter Eight: this chapter presents the most important conclusions drawn
from this study, as well as the future work plan. The thesis concludes with the
references used and an Arabic summary of the research topic.
Menoufia University
Faculty of Electronic Engineering, Menouf
Department of Computer Science and Engineering
Implementation of Iris Recognition Systems Using Field Programmable Gate Arrays
By
Supervision Committee
2012