Face recognition using an eigenface-based technique utilizing the concept of principal component analysis

Face recognition has been an active research area since the late 1980s [1]. It has numerous applications such as security systems, forensic identification, facial expression detection, and gender classification. Although its scope in biometric applications is limited, it is particularly useful for surveillance applications such as activity tracking and recognition, and abnormality detection. A face recognition system can be implemented using various types of methods; in this paper we use an appearance-based approach. The eigenface approach, developed by M. Turk and A. Pentland [1] in 1991, is one of the earliest appearance-based face recognition methods. This approach requires a large number of computations, which is not feasible with respect to time in many real-time systems. Principal component analysis (PCA) is used in this approach to reduce the dimensionality and hence the computation time. Principal component analysis [4] decomposes face images into a small set of characteristic feature images called eigenfaces. A test face image is recognized by projecting it onto the low-dimensional linear face space spanned by the eigenfaces. We then compute the distance between the resulting position of the test face image in the face space and those of the known face classes, compare this distance against a threshold value, and classify the face as known or unknown.


INTRODUCTION
Among various biometric features, the face plays an important role in uniquely identifying a person. Beyond identity, the human face can also be used for expression detection, age estimation, and many other applications. A human being can easily recognize a face at a glance even after years, despite large changes in visual appearance due to expression, aging, or the presence of objects such as glasses or a beard. Developing an automated face recognition system is therefore not an easy task. Before going into the implementation details, we discuss some important concepts other than face recognition itself; these may be considered preprocessing tasks, and without them face recognition is impossible to implement. An image may contain noise, which has to be removed before the next phase. An image can also come from various input sources, so we need to resize it to a standard size so that the required computation time can be optimized. After that we apply a face detection process [9], which gives us a face region. We then normalize the face image so that the head position (level of the head) matches that of the face images stored in the database. The success of all these steps enhances the performance of the face recognition process [8]. Face recognition itself can take one of two forms. In entry-level recognition, we determine whether a given test face is present in our face database or not. In classification-based recognition, an input face image is categorized according to the requirements. In this paper, we discuss the implementation details of an entry-level recognition system.
Face recognition approaches can be classified into three categories: template matching-based, feature-based, and appearance-based [7].
In template matching-based techniques, the face is represented as a template using the relative distances between facial features, such as the distance between the two eyes, between an eye and the nose, and between the nose and the mouth. Instead of the whole face, each individual facial feature can also be treated as a template, such as an eye template, a nose template, and a mouth template, whose properties are then compared with the database images. Although these recognition methods are easy to implement, their memory requirement is very high. Feature-based approaches have a smaller memory requirement and a higher recognition speed than template-based methods; however, perfect extraction of the various facial features is very difficult to implement. In the last approach, the appearance-based approach, the face image is projected onto a lower-dimensional linear subspace. This subspace is constructed by principal component analysis on a set of training images, with eigenfaces as its eigenvectors. In this paper we focus only on the appearance-based approach.

FACE RECOGNITION
One of the simplest and most effective ways to recognize a face is the eigenface approach [1]-[3]. In this approach we use principal component analysis (PCA) to transform a face image onto a lower-dimensional subspace spanned by eigenfaces. The PCA technique [5], [6] finds a set of projection vectors such that the projected data retains the most important information about the original data. A common way to do this is to choose the eigenvectors corresponding to the largest eigenvalues of the covariance matrix. This method reduces the dimensionality of the data space by projecting data from an M-dimensional space to a P-dimensional space, where P < M. Recognition is then performed by projecting a new face onto the lower-dimensional subspace of eigenfaces [10] and comparing its position in the subspace with the positions of known individuals. The distance between them can be measured using, for example, the Mahalanobis distance or the Euclidean distance; in our approach we use the Euclidean distance. If the Euclidean distance is greater than a threshold value, the person is unknown; otherwise, it is a known person [1].
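The dimensionality reduction described above can be illustrated in a few lines. The following toy example (written in Python/NumPy rather than the paper's MATLAB, using made-up random data) projects points from an M-dimensional space onto the P eigenvectors of the covariance matrix with the largest eigenvalues:

```python
import numpy as np

# Toy PCA sketch (not the paper's code): reduce M = 10 dimensions to P = 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))           # 100 samples in M = 10 dimensions
X = X - X.mean(axis=0)                   # center the data
C = (X.T @ X) / len(X)                   # 10 x 10 covariance matrix
vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
P = 2
W = vecs[:, np.argsort(vals)[::-1][:P]]  # P eigenvectors with largest eigenvalues
Y = X @ W                                # projected data, now 100 x P
```

The projected points `Y` retain the directions of greatest variance in the original data, which is exactly the property the eigenface method exploits.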

Eigenfaces for Face Recognition
In the early 1990s, M. Turk and A. Pentland realized that the information content of face images can be represented using encoding and decoding techniques based on significant local and global "features". These features may or may not correspond to our intuitive notion of facial features such as the eyes, nose, lips, and hair. In the face recognition process we therefore first encode the test face and then compare the result with the encoded information produced from the database images using the same encoding technique. A simple way to represent face information is to capture the variation in a collection of face images, independent of any judgment about features, and use this information to encode and compare individual face images.
Principal component analysis finds the principal components of the distribution of faces in terms of the eigenvectors of the covariance matrix of the set of face images. These eigenvectors can be thought of as a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so we can display an eigenvector as a sort of ghostly face called an eigenface. Some of these faces are shown in Figure 4. Each face image in the training set can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. However, we can consider a smaller number of eigenfaces, namely those with the largest eigenvalues, which we label the 'best' eigenfaces. Thus, we can reduce the computation time.

The eigenface approach for face recognition involves the following steps:
Step 1: Obtain the face images I1, I2, ..., IM that form the training set.
Step 2: Represent every image Ii as a vector Γi, where Γi is an N^2 x 1 vector corresponding to the N x N face image Ii.
Step 3: Compute the mean face Ψ = (1/M) Σi Γi.
Step 4: Subtract the mean face from each face image in the training set: Φi = Γi − Ψ.
Step 5: Compute the covariance matrix C = (1/M) Σi Φi Φi^T = A A^T, where A = [Φ1 Φ2 ... ΦM] is an N^2 x M matrix. The dimension of C is therefore N^2 by N^2, and for a large value of N it is infeasible to perform this amount of computation directly. Hence we need a computationally feasible method to find the eigenvectors. To compute the eigenvectors ui of A A^T we do the following:
Step 5.1: Compute the eigenvectors vi of the much smaller M x M matrix A^T A.
Step 5.2: Map them back to eigenvectors of A A^T via ui = A vi, and normalize each ui to unit length.
This is the main idea behind principal component analysis, where we consider only the best components to optimize the computation time: in this approach we consider only the M largest eigenvalues among the N^2 eigenvalues.
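The training steps above can be sketched as follows. This is a minimal NumPy sketch, not the paper's MATLAB implementation; the function name `train_eigenfaces` and its interface are our own choices for illustration.

```python
import numpy as np

def train_eigenfaces(images, K):
    """Compute the mean face and K eigenfaces from a list of N x N images.

    Uses the A^T A trick: eigenvectors of the small M x M matrix A^T A
    are mapped back to eigenvectors of the N^2 x N^2 matrix A A^T.
    """
    # Step 2: flatten each N x N image into an N^2-vector; stack as columns of A
    A = np.column_stack([img.reshape(-1).astype(float) for img in images])
    # Step 3: mean face
    psi = A.mean(axis=1)
    # Step 4: subtract the mean face
    A = A - psi[:, None]
    # Step 5.1: eigenvectors of the small M x M matrix A^T A
    vals, V = np.linalg.eigh(A.T @ A)
    # keep the K eigenvectors with the largest eigenvalues
    order = np.argsort(vals)[::-1][:K]
    # Step 5.2: map back (u_i = A v_i) and normalize to unit length
    U = A @ V[:, order]
    U /= np.linalg.norm(U, axis=0)
    return psi, U
```

Because the v_i are orthonormal eigenvectors of A^T A, the mapped vectors u_i = A v_i remain mutually orthogonal, so after normalization the columns of `U` form an orthonormal eigenface basis.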

Representation of face images in the training set
After generating the eigenvectors, we can represent each mean-subtracted face image Φi in the training set as a weighted sum of the K best eigenvectors among all M eigenvectors: Φi ≈ Σj wj uj, where the weights wj = uj^T Φi form the weight vector Ωi = [w1, w2, ..., wK]^T.

Face recognition using Eigenfaces
In this stage a given face image Γ is verified against the training set. If the person is authorized, the module shows the recognized face; for an unauthorized person it shows an error message. The test face image should be in normalized form (centered and of the same size as the training faces).
Step 1: Read the test face image and convert it into a vector Γ.
Step 2: Transform the test face image into its eigenface components (project it onto the "face space"): compute the weights wk = uk^T (Γ − Ψ), which form the weight vector Ωtest. The matrix 'wtest' in our implementation contains these resulting weights.
Step 3: Find the face image in the training set that best matches the test face image. We use the Euclidean distance to measure the distance between two images: ek = ||Ωtest − Ωk||, computed against the weight vector Ωk of each training face. If the minimum Euclidean distance is greater than a threshold value, the images are of two different persons; if it is less than or equal to the threshold value, the images are of the same person. In the latter case the person is known and we return the k-th face; otherwise we print "person is unknown".
Sept 05, 2013
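The matching step can be sketched as follows. This is a NumPy illustration rather than the paper's MATLAB code; `test_weights` is a K-vector (the 'wtest' weights for the test face) and `train_weights` is a K x M matrix with one training face's weight vector per column.

```python
import numpy as np

def recognize(test_weights, train_weights, threshold):
    """Match a test face's weight vector against the training weights.

    Returns the index of the closest training face if the minimum
    Euclidean distance is within the threshold, otherwise None (unknown).
    """
    # Euclidean distance from the test weight vector to each column
    dists = np.linalg.norm(train_weights - test_weights[:, None], axis=0)
    k = int(np.argmin(dists))
    return k if dists[k] <= threshold else None
```

The threshold value is chosen empirically; too small a threshold rejects genuine matches, while too large a threshold accepts impostors.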

EXPERIMENTAL RESULTS
We designed our face recognition system using a MATLAB GUI-based interface. We use two interfaces. The first is dynamic in nature and shows the test face and the recognized face. The second shows static information such as the database images, the eigenfaces of the database images, and the average image. In our system we use three types of images.

Case 1:
The person is known and the test face looks exactly the same as the face stored in the database. In the second interface we show static information; its output changes only after an image addition or deletion operation on the database.

Our recognition results were as follows:

Number of test images: 30
Number of correctly recognized images: 29
Number of incorrectly recognized images: 1
Success rate: 96%

CONCLUSION
In this paper we have introduced an eigenface-based face recognition approach. To recognize a face, we first transform all the database images onto a lower-dimensional linear subspace called the "face space", spanned by the eigenfaces. We then transform the test face onto the same face space and compute the distance between the relative positions of the two images in this space. The distance can be measured using different techniques; here we use the Euclidean distance. The approach is simple and easy to implement because we do not need any knowledge of face geometry or facial feature information. However, we do need to perform some preprocessing on the face images to transform them onto the face space.
There are some limitations to our face recognition approach. First, the algorithm is sensitive to head scale. Second, it is applicable only to frontal-view faces. Third, the approach may sometimes fail with a different background or a natural scene. Fourth, we consider all the eigenvectors instead of using only the K best eigenvectors, which increases the computation time.