A Multimodal Deep Learning Approach for Biometric Identification
Date
2023
Author
Abdessalam Hattab
Abstract
Due to the growing need for user identification in modern applications, experts highly recommend incorporating biometric technology into application development. Recently, many recognition systems based on face and iris traits have achieved remarkable performance, particularly because these traits can be captured at a distance, without physical contact with a sensor. This property limits the transmission of COVID-19 and other diseases spread by touch, and makes biometrics more convenient and user-friendly. However, recognition performance drops significantly when these traits are captured under uncontrolled conditions, including occlusion, pose, and illumination variation. This is because the handcrafted approaches used by some recognition systems extract local features from the global image, so image regions affected by uncontrolled conditions degrade the quality of the extracted features.
This thesis proposes a robust face recognition system that extracts features from important facial regions, using the Scale-Invariant Feature Transform (SIFT) to identify significant facial regions and the Adaptive Local Ternary Pattern (ALTP) to extract features. The proposed system achieved promising results on two benchmark face datasets captured under unconstrained conditions, reaching 99.75% on the ORL dataset and 95.12% on the FERET dataset. In addition, a new face recognition system based on Deep Learning has been proposed using the pre-trained AlexNet-v2 model. This system achieved excellent results on the ORL and FERET face datasets, reaching 100% on the former and 99.89% on the latter. Moreover, we proposed a novel face recognition system to address the issue of illumination variation, built on a new model inspired by the pre-trained VGG16 model. It achieved state-of-the-art results, reaching 99.32% on the Extended Yale B dataset and 99.79% on the AR dataset.
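The ALTP descriptor above builds on the classical Local Ternary Pattern (LTP). As a rough illustration only, a fixed-threshold LTP code for a single 3x3 neighbourhood can be sketched as follows; the threshold t = 5 and the clockwise bit ordering are illustrative assumptions, and the thesis's adaptive variant instead derives the threshold from the local neighbourhood:

```python
# Sketch of the classical Local Ternary Pattern (LTP) on one 3x3 patch.
# The thesis's Adaptive LTP replaces the fixed threshold t with one
# computed from the local neighbourhood (not detailed in the abstract).

def ltp_codes(patch, t=5):
    """Return the (upper, lower) binary LTP codes of a 3x3 patch.

    patch: 3x3 nested list of grey levels; t: fixed threshold.
    A neighbour >= center + t sets a bit in the upper code (+1 ternary
    state); a neighbour <= center - t sets a bit in the lower code (-1).
    """
    center = patch[1][1]
    # Neighbours in clockwise order, starting at the top-left corner.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = lower = 0
    for bit, (r, c) in enumerate(coords):
        diff = patch[r][c] - center
        if diff >= t:        # clearly brighter neighbour
            upper |= 1 << bit
        elif diff <= -t:     # clearly darker neighbour
            lower |= 1 << bit
    return upper, lower

# Example: bright top row, dark bottom row, mid-grey center.
patch = [[200, 200, 200],
         [100, 100, 100],
         [ 10,  10,  10]]
print(ltp_codes(patch))  # -> (7, 112)
```

In a full descriptor, the two codes would be computed at every pixel and histogrammed per region, which is what makes the representation tolerant to the noise that a plain Local Binary Pattern picks up under illumination change.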
Another contribution of this work is a novel iris recognition system based on Transfer Learning that achieves high accuracy rates. The proposed system used the YOLOv4-tiny model to localize the iris region, while a novel deep Convolutional Neural Network (CNN) model inspired by the pre-trained Inception-v3 model was used for feature extraction. The performance of this system was evaluated on four iris databases captured under non-cooperative conditions, where it achieved new state-of-the-art accuracy rates of 99.91%, 99.60%, 99.91%, and 99.19% on the IITD, CASIA-Iris-v1, CASIA-Iris-Interval, and CASIA-Iris-Thousand databases, respectively.
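YOLO-family detectors such as YOLOv4-tiny typically report each box as a normalised (cx, cy, w, h) tuple relative to the image size. A minimal sketch of turning such a box into a pixel crop of the localized region is given below; the plain nested-list image and the helper name are illustrative assumptions, not code from the thesis:

```python
def crop_from_yolo_box(image, box):
    """Crop a region given a YOLO-style normalised bounding box.

    image: 2D nested list (rows of grey levels);
    box: (cx, cy, w, h), all in [0, 1] relative to image width/height,
    which is the usual output convention of YOLO-family detectors.
    """
    h_img, w_img = len(image), len(image[0])
    cx, cy, w, h = box
    # Convert the center/size box to clamped pixel corner coordinates.
    x0 = max(0, int((cx - w / 2) * w_img))
    y0 = max(0, int((cy - h / 2) * h_img))
    x1 = min(w_img, int((cx + w / 2) * w_img))
    y1 = min(h_img, int((cy + h / 2) * h_img))
    return [row[x0:x1] for row in image[y0:y1]]

# Toy 8x8 image; a box centred in the middle covering half of each side
# yields a 4x4 crop.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
crop = crop_from_yolo_box(img, (0.5, 0.5, 0.5, 0.5))
print(len(crop), len(crop[0]))  # -> 4 4
```

In the pipeline described above, the crop produced by the detector would then be resized and fed to the Inception-v3-inspired CNN for feature extraction.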
The proposed unimodal systems achieved high accuracy compared with state-of-the-art methods. However, relying on a single biometric trait remains inadequate for the high-security requirements of military and government applications.
Finally, three face-iris multimodal biometric systems have been proposed in this thesis. The first employs image-level fusion, the second feature-level fusion, and the third score-level fusion. The proposed systems used YOLOv4-tiny to detect the face and both iris regions, and applied a new deep CNN model inspired by the pre-trained Xception model to extract features. To evaluate their performance, a twofold cross-validation protocol was employed on the CASIA-ORL and SDUMLA-HTM multimodal benchmark databases. The results showed that our systems achieved a perfect score of 100% on both databases. Remarkably, the system based on score-level fusion outperformed the others, achieving 100% on the CASIA-ORL database and over 99% on the SDUMLA-HTM database with only one sample used for training.
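As an illustration of the score-level fusion idea, the sketch below combines per-identity match scores from a face matcher and an iris matcher using min-max normalisation followed by a weighted sum. Both the normalisation scheme and the equal 0.5 weighting are hypothetical choices for the example, not the rule stated in the thesis:

```python
def min_max_normalise(scores):
    """Map a list of matcher scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(face_scores, iris_scores, w_face=0.5):
    """Score-level fusion: weighted sum of normalised match scores.

    face_scores / iris_scores: one similarity score per enrolled
    identity. Returns the index of the best identity after fusion.
    """
    f = min_max_normalise(face_scores)
    i = min_max_normalise(iris_scores)
    fused = [w_face * a + (1 - w_face) * b for a, b in zip(f, i)]
    return max(range(len(fused)), key=fused.__getitem__)

# Face alone favours identity 0 and iris alone favours identity 2,
# but fusion picks identity 1, which both modalities rank highly.
face = [0.90, 0.85, 0.10]
iris = [0.10, 0.80, 0.90]
print(fuse_scores(face, iris))  # -> 1
```

Because the two matchers only have to agree at the score level, this fusion style is easy to retrofit onto existing unimodal systems, which is one reason it is a popular baseline for multimodal biometrics.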