AN EFFECTIVE COLOR FACE RECOGNITION BASED ON BEST COLOR FEATURE SELECTION ALGORITHM USING WEIGHTED FEATURES FUSION SYSTEM

This paper aims to achieve the best color face recognition performance. The newly introduced feature selection method exploits boosting learning to find the optimal set of color-component features for face recognition. The proposed method consists of two parts: color-component feature selection with boosting, and a color face recognition solution using the selected color-component features. The method outperforms existing color face recognition methods on face images with illumination variation, pose variation, and low resolution. It selects the best color-component features from various color models using a novel boosting learning framework, and the selected features are then combined into a single concatenated color feature by weighted feature fusion. The effectiveness of the method has been evaluated on public face databases.


INTRODUCTION
To improve face recognition (FR) performance, facial color information can be used rather than grayscale information [2]. The three components of a color can be defined in many different ways, leading to a wide variety of color spaces [4]. An optimal subset of color components need not be unique across different classification or pattern recognition problems. The CID models [2] seek to unify color image representation and recognition into one framework. A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame; one way to do this is by comparing selected facial features from the image against a facial database.
Existing face recognition systems utilize only two or three color space models, such as YQCr, YCbCr, and YIQ. These methods may not achieve the best recognition result for a given FR task; the selection of color components from the various color models is the key issue. Existing methods may also work poorly under other FR operating conditions, for example illumination variations. One existing system extracts multiple features in the color image discrimination (CID) [1] color space, where three new color-component images D1, D2, and D3 are derived using an iterative algorithm. Current color-component choices are made through a combination of intuition and empirical comparison [3], without any systematic selection strategy.

COLOR COMPONENT FEATURE SELECTION
The proposed method uses "boosting" learning [5] as a feature selection mechanism that finds the optimal set of color-component features for face recognition. The given image is transformed into various color space models, and the optimal set of color-component features is retrieved to achieve the best recognition result. These selected color-component features are then combined into a single concatenated color feature using weighted feature fusion, and the test image is recognized by a nearest-neighbor classifier.

Figure 1. Proposed color Face Recognition block diagram
The proposed color face recognition method in Figure 1 consists of two parts: color-component feature selection with boosting, and a face recognition solution using the selected color-component features. The proposed selection criterion is designed to achieve a low generalization classification error. The selected color-component features are fused by a weighted feature fusion scheme according to the confidence associated with each color-component feature. The selection criterion takes the form of a penalty-based objective function with an associated weighting parameter, so that the selected color-component features not only produce small classification errors but also keep their mutual dependence low.
The color-component features chosen via the boosting framework are combined at the feature level. Specifically, the selected features are fused by a weighted feature fusion scheme according to the confidence associated with each color-component feature, which yields better recognition performance. The proposed method copes well with face recognition challenges including highly uncontrolled illumination, moderate pose variation, and low-resolution face images. Public face databases such as Color FERET [6] and FRGC 2.0 [7] are used for evaluation. Face recognition is applied in remote sensing, medical imaging, forensic studies, the military, the film industry, document processing, graphic arts, and the printing industry.
Color-component feature selection is implemented within the multiclass boosting AdaBoost.M2 framework [8]. This boosting framework is flexible because the error bound of the final hypothesis does not require every weak hypothesis to have low classification error, which makes it well suited to multiclass classification problems. The proposed color-component feature selection procedure, shown in Figure 1, is explained below. Let L = {1, ..., C} be the class label set, where C denotes the number of classes, and let T be a training set of N red-green-blue (RGB) color face images. Each image is denoted X(i) (i = 1, ..., N), of size H x W pixels, with a corresponding class label li, where li ∈ L. Each RGB color image in T is converted from the RGB color space to a number of different color spaces: YUV, RGB, YCC, YIQ, HSV, Lab, and CID.
The color conversions under consideration yield a total of K different color components. The mth color component is denoted by fm, and together these components comprise a color-component pool F, with fm ∈ F. Weighted training samples are determined based on the selection criterion. The proposed color face recognition method is described in detail in the following subsections.
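As an illustration of building a color-component pool for one pixel, the sketch below uses Python's standard `colorsys` module for the YIQ and HSV conversions. The helper name `build_component_pool` and the choice of a nine-component pool (RGB, YIQ, HSV) are illustrative assumptions, not the paper's exact pool.

```python
import colorsys

def build_component_pool(r, g, b):
    """Collect pooled color components f_1..f_K for one RGB pixel.

    A toy pool built from the RGB, YIQ, and HSV spaces, giving K = 9.
    """
    pool = {"R": r / 255.0, "G": g / 255.0, "B": b / 255.0}
    # colorsys expects channel values in [0, 1]
    pool["Y"], pool["I"], pool["Q"] = colorsys.rgb_to_yiq(pool["R"], pool["G"], pool["B"])
    pool["H"], pool["S"], pool["V"] = colorsys.rgb_to_hsv(pool["R"], pool["G"], pool["B"])
    return pool

pool = build_component_pool(255, 0, 0)   # pure red: K = 9 components
```

In the full system each component fm would be an entire H x W image, not a scalar, but the enumeration of the pool F is the same.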

A. Face recognition Learners
Let Φm be the feature extractor. Φm can be obtained using the Principal Component Analysis (PCA) [14], [15] face feature extraction algorithm.

(i)Construction of FR Learners
To construct learners at each round, the learning set Rt is formed by choosing the r hardest-to-classify training samples per class from the training set T according to the distribution Dt(i). A corresponding feature extractor Φm is then constructed using Rt together with the mth color component fm (m = 1, ..., K). Specifically, Φm is trained on the set of mth color-component images generated from Rt via the associated color conversion.
A learner, defined for the mth color component at round t, uses a similarity measure S(·, ·) to compute the distance between two input vectors in the J-dimensional feature subspace. The AdaBoost.M2 framework [13] forces the weak learners to concentrate not only on the hard instances (or patterns) but also on the incorrect class labels that are hardest to classify. This boosting framework therefore fits well with our color-component feature selection, which is devised for face recognition as a multiclass classification problem.
Adaptive Boosting (AdaBoost) is a machine learning meta-algorithm that can be used in conjunction with many other learning algorithms to improve their performance. AdaBoost is adaptive in the sense that subsequent classifiers are tweaked in favor of the instances misclassified by previous classifiers. AdaBoost is sensitive to noisy data and outliers, yet in some problems it is less susceptible to overfitting than most learning algorithms. The classifiers it uses can be weak, but as long as their performance is better than random they will improve the final model; even classifiers with an error rate higher than that of a random classifier are useful, since they receive negative coefficients in the final linear combination and hence behave like their inverses. Sample testing images are taken from various databases, such as the Color FERET DB in Figure 2 and the FRGC 2.0 DB in Figure 3.

EXPERIMENTS
All facial images used in the experiments were manually cropped from the original images based on the given face locations, and each cropped facial image was rescaled to 64 x 64 pixels. The public CMU-PIE [16], Color FERET, and FRGC 2.0 face databases are used to evaluate the proposed method. To construct a face feature extractor Φm, the low-dimensional feature extraction technique Principal Component Analysis (PCA) [8] was used.
The proposed algorithm for color-component feature selection is as follows:
Step 1: Input the color-component pool F, the set of training RGB color images, the acceptance threshold, and the total number of boosting rounds T.
Step 2: Initialize the distribution D0(i) = 1/N and the weight vector wt(i, l) = D0(i) / (|L| - 1).
Step 3: Set the mislabel weight vector for each training sample.
Step 4: Update the distribution for each training sample.
Step 5: Select the r hardest training samples per class according to Dt(i) to form a learning set Rt.
Step 6: Train a face feature extractor Φm using Rt along with the mth color component and construct an FR learner based on Φm.
Step 7: Determine the best FR learner according to the selection criterion.
Step 8: Add the best FR learner at the tth boosting round to the selected FR learner set only if its error is within the acceptance threshold.
Step 9: Update the weight vector.
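To make Steps 1-9 concrete, here is a heavily simplified, runnable sketch of the selection loop. The `toy_error` callable is an assumption that stands in for training a PCA-based FR learner on the weighted samples and returning its pseudo-loss, and the Step 9 weight update is collapsed to a uniform down-weighting; the real update down-weights only the samples the learner handled correctly.

```python
def select_color_components(components, n_samples, rounds, accept_thr, toy_error):
    """Pick one color component per boosting round, AdaBoost.M2-style.

    toy_error(f, dist) returns the pseudo-loss of the FR learner built on
    component f under the sample distribution dist (lower is better).
    """
    dist = [1.0 / n_samples] * n_samples              # Step 2: uniform D_0(i)
    selected = []
    for _ in range(rounds):
        # Steps 5-7: evaluate every component on the current distribution
        err, best = min((toy_error(f, dist), f) for f in components)
        if err < accept_thr:                          # Step 8: acceptance test
            selected.append(best)
        # Step 9 (collapsed): rescale all weights, then renormalize
        beta = err / (1.0 - err + 1e-12)
        dist = [d * beta for d in dist]
        z = sum(dist)
        dist = [d / z for d in dist]
    return selected

errors = {"Y": 0.10, "Cb": 0.40, "Cr": 0.30}          # illustrative losses
picked = select_color_components(["Y", "Cb", "Cr"], n_samples=4, rounds=2,
                                 accept_thr=0.35,
                                 toy_error=lambda f, dist: errors[f])
```

Under this toy error the Y component wins both rounds; with a real pseudo-loss the reweighted distribution changes which component looks best from round to round.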

Figure 4. Illustration of R, G, B color component images and the three color component images generated by the proposed method
Figure 4 shows how one of the color space models, RGB, is obtained from an input color image.
The following seven modules are used to produce the result:
1. Input of the color image from the testing set
2. Color space conversion
3. Extraction of the color feature set by PCA
4. Selection of the optimal color feature set using the AdaBoost framework
5. Construction of the FR learner
6. Formation of the feature vector by weighted color-component feature fusion
7. Recognition by nearest-neighbor classifier

RESULTS
1. Input the color image from the testing set. The input face image is retrieved from the testing set, which is constructed from the color image databases. The output is shown in Figure 5.

Color space conversion
Color space conversion transforms the input image into the following color space models: RGB, YCbCr, YIQ, HSV, Lab, CID, and YUV.
A color space conversion algorithm converts image data from one color space (e.g., RGB) to another (e.g., YCbCr).
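A minimal sketch of one such conversion, the full-range ITU-R BT.601 RGB to YCbCr transform for a single pixel; applying it pixel by pixel over a whole image is assumed:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr for one pixel (values 0-255)."""
    y  =           0.299    * r + 0.587    * g + 0.114    * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(255, 255, 255)   # white maps to Y=255, Cb=Cr=128
```

The same per-pixel pattern applies to the other conversions (YIQ, HSV, Lab, and so on), each with its own transform.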

Figure 6. View color component
Figure 6 shows the color conversion models; for example, the YUV color space is obtained from the original testing face image.
3. Extraction of the color feature set by PCA. A low-dimensional feature extraction technique such as PCA [12] is used to extract features from the obtained color components. When the input data to an algorithm are too large to process and suspected to be highly redundant, they are transformed into a reduced representation set of features; transforming the input data into a set of features is called feature extraction.

Figure 7. Extract the features
Figure 7 indicates the completion of the feature extraction process: the facial features are extracted from the color space.
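The PCA step can be sketched compactly with NumPy's SVD in place of the paper's exact training code; the tiny random "component images" below are placeholders for real H x W face data, and `fit_pca`/`extract` are illustrative helper names.

```python
import numpy as np

def fit_pca(X, j):
    """X: (N, H*W) rows of flattened component images; returns (mean, Phi_m)."""
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal axes (eigenfaces)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:j].T                 # keep the top-J eigenvectors

def extract(x, mean, phi):
    """Project one flattened image onto the J-dimensional PCA subspace."""
    return (x - mean) @ phi

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 16))             # ten tiny 4x4 "component images"
mean, phi = fit_pca(X, j=3)
feat = extract(X[0], mean, phi)           # a 3-dimensional face feature
```

One extractor Φm of this form is trained per selected color component; a 64 x 64 face gives 4096-dimensional rows before projection.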

Selection of Optimal color feature set using Adaboost framework
The AdaBoost.M2 framework forces the weak learners to concentrate not only on the hard instances (or patterns) but also on the incorrect class labels that are hardest to classify, which makes this boosting framework well suited to our color-component feature selection for FR as a multiclass problem. Attribute selection, or variable subset selection, is the technique of selecting a subset of relevant features for building robust learning models; by removing the most irrelevant and redundant features from the data, feature selection helps improve the performance of learning models. Figure 8 shows the optimal set of color features, selected from the many candidate features by the AdaBoost framework.

Construction of FR learner
To construct weak learners at each round, the learning set is formed by choosing the hardest-to-classify training samples [20] per class from the training set according to the distribution. Figure 9 shows the training data classification by the AdaBoost model.

6. Formation of the feature vector by weighted color-component feature fusion. Combined features are generated for the images in the probe and gallery sets. The optimal set of features obtained from the AdaBoost framework is combined at the feature level: the selected color-component features are concatenated into a single color feature using weighted feature fusion.
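The fusion step can be sketched as follows; the per-component confidence weights are assumed to come from the boosting rounds, and the values here are arbitrary illustrative numbers.

```python
def fuse_features(features, weights):
    """Weighted feature-level fusion.

    features: one feature vector (list of floats) per selected color component;
    weights:  one confidence weight per component (e.g. from boosting).
    Each vector is scaled by its normalized weight, then all are concatenated.
    """
    total = sum(weights)
    norm = [w / total for w in weights]          # normalize confidences
    fused = []
    for vec, w in zip(features, norm):
        fused.extend(w * v for v in vec)         # weight, then concatenate
    return fused

# two components, the first trusted three times as much as the second
fused = fuse_features([[1.0, 2.0], [3.0, 4.0]], weights=[3.0, 1.0])
```

The fused vector's length is the sum of the per-component feature dimensions, so the nearest-neighbor stage operates on one concatenated color feature per image.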

Figure 10. Project the feature space
The test image is projected into the feature space graph displayed in Figure 10.

Recognition by Nearest Neighbor Classifier
The features are projected into the feature space, and the nearest neighbor [19] is found by computing the Euclidean distance between the test image and the training set images; the testing image is thereby recognized. Among the various methods of supervised statistical pattern recognition, the nearest-neighbor rule achieves consistently high performance without prior assumptions about the distributions from which the training examples are drawn. It involves a training set of both positive and negative cases.
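A minimal sketch of nearest-neighbor recognition in the fused feature space, using Euclidean distance; the two-entry gallery and subject labels are stand-ins for a real gallery of fused features.

```python
import math

def nearest_neighbor(probe, gallery):
    """gallery: list of (feature_vector, label); returns the closest label."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(gallery, key=lambda entry: dist(probe, entry[0]))[1]

gallery = [([0.0, 0.0], "subject_a"), ([5.0, 5.0], "subject_b")]
label = nearest_neighbor([4.0, 4.5], gallery)
```

The probe here lies closer to subject_b's gallery feature, so that identity is returned.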

Figure 11. Face Recognition
The test image and recognized image are displayed in Figure 11.

Estimation of recognition accuracy
The performance measure is the face verification rate (FVR) at a given false accept rate (FAR), corresponding to a point on the ROC curve [21]. The false accept rate, or false match rate (FMR), is the probability that the system incorrectly matches the input face image to a non-matching face image in the database; it measures the percentage of invalid inputs that are incorrectly accepted.
The ROC plot is a visual characterization of the trade-off between the FAR and the false reject rate (FRR). In general, the matching algorithm makes a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is relaxed, there will be fewer false non-matches but more false accepts; correspondingly, a stricter threshold will reduce the FAR but increase the FRR.
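The FVR-at-FAR operating point can be read off from match scores as sketched below, assuming smaller distances mean better matches; the genuine/impostor score lists are toy values, not the paper's data.

```python
def fvr_at_far(genuine, impostor, far_target):
    """genuine/impostor: distance scores (smaller = better match).

    Sweep thresholds, keep those whose FAR does not exceed far_target,
    and return the best face verification rate among them.
    """
    thresholds = sorted(set(genuine) | set(impostor))
    best_fvr = 0.0
    for thr in thresholds:
        far = sum(s <= thr for s in impostor) / len(impostor)
        if far <= far_target:
            fvr = sum(s <= thr for s in genuine) / len(genuine)
            best_fvr = max(best_fvr, fvr)
    return best_fvr

fvr = fvr_at_far(genuine=[0.1, 0.2, 0.9],
                 impostor=[0.5, 0.8, 1.0],
                 far_target=0.0)
```

Reporting FVR at FAR = 0.1%, as in the experiments, corresponds to evaluating this function with far_target = 0.001 on the full score sets.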

Figure 12: Plot of ROC Curve
Experiments using the Face Recognition Grand Challenge database and the Biometric Experimentation Environment system show the effectiveness of the proposed models and algorithms. In particular, for the most challenging FRGC experiment, which pairs controlled target images with uncontrolled query images, the proposed method achieves a face verification rate of 78.26% at a false accept rate of 0.1%. Figure 12 shows the ROC curve.

CONCLUSION AND FUTURE ENHANCEMENTS
In this paper, a color face recognition method is proposed using a variant of the boosting learning framework. The best color-component features are selected from various color models, and these selected features are then combined into a single concatenated color feature using weighted feature fusion. The color-component features are extracted by global feature extraction methods such as PCA; other face features or descriptors can be readily incorporated into the proposed selection framework. Only standard color spaces such as RGB and YCbCr are considered during the boosting feature selection process in this paper. The color feature extraction technique proposed in [5] can also be applied to the construction of face recognition learners. In future work, popular local feature extraction techniques will be applied in the proposed method, and the method will be extended by incorporating new color spaces.