dc.description.abstract |
In recent years, the need for computer vision systems has grown with applications such as security and surveillance, self-driving cars, cell phone logins, forensic identification, and banking. In security, the goal is to identify individuals correctly using facial recognition, iris recognition, or other suitable biometric means, and cell phones use face recognition for screen unlocking and authorization. Face recognition systems perform very well, yet classification remains challenging: correctly identifying individuals in an image is difficult under varying lighting (illumination) conditions, changes in the background or environment where the image is taken, head pose, and facial gestures or expressions. This study investigates a method to address these challenges by combining Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) into a face recognition system. First, PCA is applied to reduce the dimensionality of the dataset, remove redundancy while maintaining image quality, enable a smaller network to be used and trained, and produce Eigenfaces. Second, the PCA output is passed to K-Means clustering to select centres with better characteristics and produce the initial input data for the CNN. Finally, the K-Means clustering output is used as the input to the CNN and the network is trained. The system is trained and evaluated on the ORL dataset, which comprises 400 face images in 40 classes of 10 images each. Its performance was compared against PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (KNN). After 90 epochs, the proposed method achieved a 99% F1-score, 99% precision, and 99% recall in 463.934 seconds, outperforming PCA (97% F1-score) and KNN (84% F1-score) in the experiments. The method therefore proved effective at identifying faces in images. |
en |
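The abstract outlines a three-stage pipeline (PCA for dimensionality reduction, K-Means clustering of the eigenface coefficients, and a CNN trained on the clustered features) but gives no implementation details. The sketch below illustrates one way the stages could be chained on the ORL (Olivetti) faces; the libraries (scikit-learn, Keras), the number of principal components and clusters, the use of the cluster assignment as an extra feature, and the CNN architecture are all assumptions, not the authors' implementation.

# Minimal sketch of the PCA -> K-Means -> CNN pipeline described in the abstract.
# Library choices and the exact coupling between K-Means and the CNN are assumed.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces   # ORL / Olivetti faces
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from tensorflow import keras

# ORL dataset: 400 images, 40 subjects, 10 images per subject.
faces = fetch_olivetti_faces()
X, y = faces.data, faces.target                      # X: (400, 4096)

# Step 1: PCA removes redundancy and produces eigenface coefficients.
pca = PCA(n_components=100, whiten=True)
X_pca = pca.fit_transform(X)                         # (400, 100)

# Step 2: K-Means groups the eigenface coefficients; here the cluster
# label is appended as an extra feature (one possible interpretation).
kmeans = KMeans(n_clusters=40, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X_pca)
X_feat = np.hstack([X_pca, clusters[:, None]])       # (400, 101)

# Step 3: reshape the reduced features into a small 2-D grid so a CNN
# can be trained on them (a sketch, not the authors' exact architecture).
side = 11
X_img = np.pad(X_feat, ((0, 0), (0, side * side - X_feat.shape[1])))
X_img = X_img.reshape(-1, side, side, 1)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_img, y, test_size=0.2, stratify=y, random_state=0)

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(side, side, 1)),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(40, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# 90 epochs mirrors the training budget reported in the abstract.
model.fit(X_tr, y_tr, epochs=90, batch_size=32, validation_data=(X_te, y_te))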