
International Journal of Innovative Research in Engineering
Volume 3, Issue 6 (November-December 2022), PP: 158-162.
www.theijire.com  ISSN No: 2582-8746

Implementation of Security Management System Using Face Recognition

Hemant Gehlod1, Govind Thakur2, Harshit Choudhary3, Harsh Mahajan4
1,2,3,4 Department of Computer Science and Engineering, Acropolis Institute of Technology and Research, Indore, M.P., India.
How to cite this paper: Hemant Gehlod, Govind Thakur, Harshit Choudhary, Harsh Mahajan, "Implementation of Security Management System Using Face Recognition", IJIRE-V3I06-158-162.

Copyright © 2022 by author(s) and 5th Dimension Research Publication. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

Abstract: A face recognition system is one of the biometric information processes; it is easier to apply and has a larger working range than other biometrics such as fingerprint, iris scanning, and signature. A face recognition system was designed, implemented, and tested at Atılım University, Mechatronics Engineering Department. The system combines techniques from two topics: face detection and face recognition. Face detection is performed on live acquired images without any particular application field in mind. The processes used in the system are white balance correction, skin-like region segmentation, facial feature extraction, and face image extraction on a face candidate. A face classification method that uses a feed-forward neural network is then integrated into the system. The system was tested with a database generated in the laboratory with 26 people and shows acceptable performance in recognizing faces within the intended limits. The system is also capable of detecting and recognizing multiple faces in live acquired images.

I. INTRODUCTION
Face recognition systems are part of facial image processing applications, and their significance as a research area has been increasing recently. They use biometric information of humans and are more easily applicable than fingerprint, iris, or signature recognition, because those biometrics are less suitable for non-cooperative people. Face recognition systems are usually applied to people and security cameras in metropolitan life. They can be used for crime prevention, video surveillance, person verification, and similar security activities. Face recognition is a complex image-processing problem in real-world applications, with complex effects of illumination, occlusion, and imaging conditions on the live images. It is a combination of face detection and recognition techniques in image analysis. The detection stage finds the positions of faces in a given image, while the recognition stage classifies given images with known structured properties; both are commonly used in most computer vision applications. Recognition applications use standard images, and detection algorithms detect the faces and extract face images that include the eyes, eyebrows, nose, and mouth. This makes the combined algorithm more complicated than a single detection or recognition algorithm.

The first step of a face recognition system is to acquire an image from a camera. The second step is face detection on the acquired image. The third step is face recognition, which takes the face images from the output of the detection part. The final step is the person's identity, obtained as the result of the recognition part. An illustration of these steps is given in Figure 1. Acquiring images from the camera to the computational medium (environment) via a frame grabber is the first step in face recognition system applications. The input image, in the form of digital data, is sent to the face detection part of the software to extract each face in the image. Many methods for detecting faces in images are available in the literature [1 – 9]. These methods can be classified into two main groups: knowledge-based [1 – 4] and appearance-based [5 – 9] methods. Briefly, knowledge-based methods are derived from human knowledge of the features that make up a face, while appearance-based methods are derived from training and/or learning methods to find faces.

After faces are detected, they should be recognized to identify the persons in the face images. In the literature, most methods use images from an available face library made of standard images [10 – 17]. Therefore, after faces are detected, standard images should be created; once created, the faces can be sent to the recognition algorithm. Recognition methods in the literature can be divided into two groups: 2D-based and 3D-based. In 2D methods, 2D images are used as input and learning/training methods are used to classify the identity of people [1 – 15]. In 3D methods, three-dimensional face data are used as input for recognition, with different approaches such as corresponding point measures, the average half face, and 3D geometric measures [16, 17]. Details about these methods are explained in the next section. Face detection and recognition methods can be affected by pose, presence or absence of structural components, facial expression, occlusion, image orientation, imaging conditions, and time delay (for recognition). Applications developed by researchers can usually handle only one or two of these effects, so they have limited capabilities and focus on some well-structured application. A robust face recognition system that works under all conditions with a wide scope of effects is difficult to develop.

II. DESIGN OF A FACE RECOGNITION SYSTEM

A thorough survey has revealed that various methods, and combinations of these methods, can be applied in the development of a new face recognition system. Among the many possible approaches, we have decided to use a combination of knowledge-based methods for the face detection part and a neural network approach for the face recognition part. The main reasons for this selection are their smooth applicability and reliability. Our face recognition system approach is given in Figure 2.

2.1. Input Part

The input part is a prerequisite for the face recognition system. The image acquisition operation is performed in this part. Live captured images are converted to digital data for image-processing computations, and these captured images are sent to the face detection algorithm.

2.2. Face Detection Part

Face detection locates and extracts face images for the face recognition system. The face detection algorithm is given in Figure 3.

Our experiments reveal that skin segmentation, as a first step of face detection, reduces the computational time of searching the whole image: once segmentation is applied, only the segmented regions are searched for whether they include any face or not. For this reason, skin segmentation is applied as the first step of the detection part. The RGB color space is used to describe skin-like color [4]. The white balance of images differs with changes in the lighting conditions of the environment while acquiring the image; this can cause non-skin objects to be classified as skin objects. Therefore, the white balance of the acquired image should be corrected before segmenting it [18]. Results of segmentation on the original image and on the white-balance-corrected image are given in Figures 4 and 5.
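These two steps can be sketched in Python. This is a minimal illustration, not the paper's exact implementation: the gray-world correction and the specific RGB thresholds are assumptions, standing in for the rules the paper takes from [4] and [18].

```python
import numpy as np

def gray_world_white_balance(img):
    """White balance via the gray-world assumption: scale each RGB
    channel so its mean matches the overall gray mean."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    balanced = img * (gray / channel_means)
    return np.clip(balanced, 0, 255).astype(np.uint8)

def skin_mask_rgb(img):
    """Binary mask of skin-like pixels from a simple RGB rule
    (illustrative thresholds for uniform daylight illumination)."""
    r = img[..., 0].astype(np.int32)
    g = img[..., 1].astype(np.int32)
    b = img[..., 2].astype(np.int32)
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    return ((r > 95) & (g > 40) & (b > 20) &
            (mx - mn > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))
```

Correcting the white balance before segmenting, as the text recommends, keeps fixed thresholds like these meaningful under changing illumination.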


After an AND operation is applied on the segmented images, some morphological operations are applied on the final skin image to search for face candidates: noise-like small regions are eliminated and closing operations are performed. Face candidates are then chosen with two conditions: the aspect ratio of the candidate's bounding box, and the covering of gaps inside the candidate region. The bounding-box ratio should lie between 0.3 and 1.5.
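A sketch of this candidate-selection step, assuming the ratio is width over height (the paper does not state the orientation) and an illustrative minimum area for the noise-elimination step:

```python
import numpy as np
from scipy import ndimage

def face_candidates(mask, min_area=100):
    """Morphological cleanup of a skin mask, then bounding-box
    filtering of the connected regions."""
    # closing removes small gaps; filling covers holes inside a region
    closed = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    filled = ndimage.binary_fill_holes(closed)
    labels, _ = ndimage.label(filled)
    candidates = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w < min_area:        # drop noise-like small regions
            continue
        if 0.3 <= w / h <= 1.5:     # bounding-box ratio condition
            candidates.append(sl)
    return candidates
```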

Based on these conditions, face candidates are extracted from the input image with a bounding box modified from the original one. The height of the bounding box is set to 1.28 times its width, so that chest and neck parts are eliminated if the candidate includes them; this modification value was determined experimentally. The face candidates are then sent to the facial feature extraction part for validation. For the final verification of a candidate and for face image extraction, the facial feature extraction process is applied. Facial features are among the most significant characteristics of a face: eyebrows, eyes, mouth, nose, nose tip, cheeks, etc. The property used to extract the eyes and mouth is that the two eyes and the mouth form an isosceles triangle, with the distance between the eyes equal to the distance from the midpoint of the eyes to the mouth [2]. A Laplacian of Gaussian (LoG) filter and some other filtering operations are performed to extract the facial features of a face candidate [19].
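One possible sketch of this filtering-and-labeling step; the LoG sigma and the threshold are assumed values, since the paper does not list its filter parameters:

```python
import numpy as np
from scipy import ndimage

def facial_feature_response(gray_face, sigma=2.0):
    """LoG response of a grayscale face candidate: dark blob-like
    regions (eyes, mouth) on a lighter face give strong positive
    responses."""
    return ndimage.gaussian_laplace(gray_face.astype(np.float64),
                                    sigma=sigma)

def feature_candidates(response, k=2.0):
    """Threshold the response and label connected blobs, each of
    which is a possible facial feature."""
    mask = response > response.mean() + k * response.std()
    labels, n = ndimage.label(mask)
    return labels, n
```

The isosceles-triangle condition from the text could then be checked on the centroids of the labeled blobs to pick out the two eyes and the mouth.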

Figure 6 shows that the facial features can be selected easily. After the filtered image is obtained, a labeling operation is applied to determine which labels are possible facial features. After the corner points of the face cover are calculated, the face image can be extracted. Facial feature extraction, covering, and face image extraction are given in Figure 7.

At this point the face detection part is complete, and face images have been found in the acquired images. The algorithm was implemented in MATLAB and tested on more than one hundred images. It detects not only a single face but also multiple faces, and slightly rotated faces are acceptable. The results are satisfactory for our purposes.

2.3. Face Recognition Part

The face image obtained from the detection part should be classified to identify the person in the database. The face recognition part consists of preprocessing the face image, vectorizing the image matrix, generating the database, and then classifying. The classification is achieved using a Feed-Forward Neural Network (FFNN) [13]. The face recognition algorithm is given in Figure 8.


Before classifying the face image, it should be preprocessed. The preprocessing operations are histogram equalization of the grayscale face image, resizing to 30-by-30 pixels, and finally vectorizing the image matrix. For the classifier, a Feed-Forward Neural Network (FFNN) is used [19]. The FFNN is the simplest neural network structure and is generally used for pattern recognition applications. The network properties are: the input layer has 900 inputs, the hidden layer has 41 neurons, and the output layer has 26 neurons, one for each of the 26 people in the database. After the structure is generated, the network is trained to classify the given images with respect to the face database; therefore, the face database is created before any tests. The database contains 26 people with 4 samples per person, giving 104 training samples, so the training matrix has size 900-by-104. The training vectors are arranged in four groups according to the number of samples per person: the first 26 vectors belong to the first samples of the 26 people, and so on. The columns of the training matrix are obtained by preprocessing and then vectorizing the face images that make up the database. After the training matrix and the target matrix are created, the network can be trained. Backpropagation is used to train the network, and the training performance goal error is set to 1e-17 to classify the given images correctly.
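The preprocessing chain and network topology described above can be sketched as follows. The activation functions and the random initial weights are placeholders (the paper trains the weights with backpropagation but does not state its activations), and the resize uses nearest-neighbour sampling for brevity:

```python
import numpy as np

def preprocess(gray_face):
    """Histogram-equalize an 8-bit grayscale face, resize it to
    30x30, and vectorize it to a 900-element column."""
    hist = np.bincount(gray_face.ravel(), minlength=256)
    cdf = hist.cumsum() / gray_face.size          # cumulative distribution
    eq = (cdf[gray_face] * 255).astype(np.uint8)  # equalized image
    rows = np.linspace(0, eq.shape[0] - 1, 30).astype(int)
    cols = np.linspace(0, eq.shape[1] - 1, 30).astype(int)
    small = eq[np.ix_(rows, cols)]                # nearest-neighbour resize
    return small.reshape(900, 1) / 255.0

class FFNN:
    """A 900-41-26 feed-forward network matching the paper's topology."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((41, 900)) * 0.05   # input -> hidden
        self.b1 = np.zeros((41, 1))
        self.W2 = rng.standard_normal((26, 41)) * 0.05    # hidden -> output
        self.b2 = np.zeros((26, 1))

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)                # 41 hidden neurons
        z = self.W2 @ h + self.b2                         # 26 outputs
        return 1.0 / (1.0 + np.exp(-z))                   # one score per person
```

The 900 inputs are exactly the 30 × 30 = 900 pixels of the vectorized face, and the 26 outputs correspond to the 26 people in the database.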
III. DEEP SIAMESE NETWORKS FOR IMAGE VERIFICATION
Siamese nets were first introduced in the early 1990s by Bromley and LeCun to solve signature verification as an image matching problem. A Siamese neural network consists of twin networks that accept distinct inputs but are joined by an energy function at the top. This function computes some metric between the highest-level feature representations on each side (Figure 3). The parameters of the twin networks are tied. Weight tying guarantees that two extremely similar images cannot be mapped by their respective networks to very different locations in feature space, because each network computes the same function. The network is also symmetric, so that whenever we present two distinct images to the twin networks, the top conjoining layer computes the same metric as if we were to present the same two images to the opposite twins.
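The weight-tying and symmetry argument can be demonstrated with a toy sketch: a single random ReLU layer stands in for each convolutional twin (an assumption for brevity), and the L1 distance is one common choice of conjoining metric.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((64, 900)) * 0.05   # ONE weight matrix, shared by both twins

def embed(x):
    """A twin network: both inputs go through the SAME function,
    so the weights are tied by construction."""
    return np.maximum(0.0, W @ x)   # single ReLU layer as a stand-in

def energy(x1, x2):
    """Conjoining layer: a metric between the twins' highest-level
    feature vectors (L1 distance here)."""
    return np.abs(embed(x1) - embed(x2)).sum()

a = rng.standard_normal((900, 1))
b = rng.standard_normal((900, 1))
# symmetry: swapping which twin sees which image leaves the metric unchanged
assert np.isclose(energy(a, b), energy(b, a))
```

Because both inputs pass through the same `embed`, tied weights and symmetry are not extra constraints to enforce; they fall out of the construction.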

Model:
Our standard model is a Siamese convolutional neural network with L layers, each with Nl units, where h1,l represents the hidden vector in layer l for the first twin and h2,l denotes the same for the second twin. We use exclusively rectified linear (ReLU) units in the first L−2 layers and sigmoid units in the remaining layers.

Experiment:
We trained our model on a subset of the Omniglot dataset, which we first describe. We then provide details with respect to verification and one-shot performance.

The Omniglot Dataset:

The Omniglot dataset was collected by Brenden Lake and his collaborators at MIT via Amazon's Mechanical Turk to produce a standard benchmark for learning from few examples in the handwritten character recognition domain. Omniglot contains examples from 50 alphabets, ranging from well-established international languages onward, and thus offers a variety of different images from alphabets across the world.

Verification
To train our verification network, we put together three different dataset sizes with 30,000, 90,000, and 150,000 training examples by sampling random same and different pairs. We set aside sixty percent of the total data for training: 30 alphabets out of 50 and 12 drawers out of 20.
We fixed a uniform number of training examples per alphabet so that each alphabet receives equal representation during optimization, although this is not guaranteed for the individual character classes within each alphabet. By adding affine distortions, we also produced an additional copy of the dataset corresponding to the augmented version of each of these sizes. We added eight transforms for each training example, so the corresponding datasets have 270,000, 810,000, and 1,350,000 effective examples.
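The same/different pair sampling can be sketched as follows; the dict-of-lists dataset layout and the 50/50 same/different balance are assumptions for illustration (the text only says pairs were sampled randomly):

```python
import random

def sample_pairs(dataset, n_pairs, seed=0):
    """Build a verification set of (image_a, image_b, target) triples:
    target 1 for a same-class pair, 0 for a different-class pair.
    `dataset` maps a class label to a list of examples."""
    rng = random.Random(seed)
    labels = list(dataset)
    pairs = []
    for _ in range(n_pairs // 2):
        same = rng.choice(labels)
        a, b = rng.sample(dataset[same], 2)       # two drawings of one class
        pairs.append((a, b, 1))
        c1, c2 = rng.sample(labels, 2)            # two distinct classes
        pairs.append((rng.choice(dataset[c1]), rng.choice(dataset[c2]), 0))
    return pairs
```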
To monitor performance during training, we used two strategies. First, we created a validation set for verification with 10,000 example pairs taken from 10 alphabets and 4 additional drawers. We reserved the last 10 alphabets and 4 drawers for testing, where we constrained these to be the same ones used in Lake et al. Our other strategy leveraged the same alphabets and drawers to generate a set of 320 one-shot recognition trials for the validation set, which mimic the target task on the evaluation set. In practice, this second method of determining when to stop was at least as effective as the validation error for the verification task, so we used it as our termination criterion.
For our project we list the final verification results for each of the six possible training sets, where the listed test accuracy is reported at the best validation checkpoint and threshold. We report results across six different training runs, varying the training set size and toggling distortions.


In the figure we have extracted the first 32 filters from both of our top two performing networks on the verification task, which were trained on the 90k and 150k datasets with affine distortions and the architecture shown in Figure 3. While there is some co-adaptation between filters, it is easy to see that some of the filters have assumed different roles with respect to the original input space.

IV. CONCLUSION
Face recognition systems are part of facial image processing applications, and their significance as a research area has been increasing recently. Applications of the system include crime prevention, video surveillance, person verification, and similar security activities. The face recognition system implementation will be part of the humanoid robot project at Atılım University. The goal is reached through face detection and recognition methods. Knowledge-based face detection methods are used to find, locate, and extract faces in acquired images; the implemented methods rely on skin color and facial features. A neural network is used for face recognition. The RGB color space is used to specify skin color values, and segmentation decreases the time spent searching for face images. Facial components on face candidates are revealed by applying a LoG filter, which performs well in extracting facial components under different illumination conditions. An FFNN is used for classification, since face recognition is a kind of pattern recognition problem. The classification result is accurate, and classification remains correct when the extracted face image is slightly rotated, has closed eyes, or shows a small smile. The proposed algorithm is capable of detecting multiple faces, and the system's performance yields acceptably good results.

References
1. L. Zhi-fang, Y. Zhi-sheng, A.K.Jain and W. Yun-qiong, 2003, “Face Detection And Facial Feature Extraction In Color Image”, Proc. The
Fifth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA’03), pp.126-130, Xi’an, China.
2. C. Lin, 2005, “Face Detection By Color And Multilayer Feedforward Neural Network”, Proc. 2005 IEEE International Conference on
Information Acquisition, pp.518-523, Hong Kong and Macau, China.
3. S. Kherchaoui and A. Houacine, 2010, “Face Detection Based On A Model Of The Skin Color With Constraints And Template Matching”,
Proc. 2010 International Conference on Machine and Web Intelligence, pp. 469 - 472, Algiers, Algeria.
4. P. Peer, J. Kovac and F. Solina, 2003, “Robust Human Face Detection in Complicated Color Images”, Proc. 2010 The 2nd IEEE
International Conference on Information Management and Engineering (ICIME), pp. 218 – 221, Chengdu, China.
5. M. Ş. Bayhan and M. Gökmen, 2008, “Scale And Pose Invariant Real-Time Face Detection And Tracking”, Proc. 23rd International
Symposium on Computer and Information Sciences ISCIS '08, pp.1-6, Istanbul, Turkey.

