REVIEWS, REFINEMENTS AND NEW IDEAS IN FACE RECOGNITION
Edited by Peter M. Corcoran


Published by InTech, Janeza Trdine 9, 51000 Rijeka, Croatia

Copyright © 2011 InTech. All chapters are Open Access articles distributed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license, which permits users to copy, distribute, transmit, and adapt the work in any medium, so long as the original work is properly cited. After this work has been published by InTech, authors have the right to republish it, in whole or part, in any publication of which they are the author, and to make other personal use of the work. Any republication, referencing or personal use of the work must explicitly identify the original source.

Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book.

Publishing Process Manager: Mirna Cvijic
Technical Editor: Teodora Smiljanic
Cover Designer: Jan Hyrat
Image Copyright hfng, 2010. Used under license from Shutterstock.com

First published July, 2011
Printed in Croatia

A free online edition of this book is available at www.intechopen.com. Additional hard copies can be obtained from orders@intechweb.org

Reviews, Refinements and New Ideas in Face Recognition, Edited by Peter M. Corcoran p. cm. ISBN 978-953-307-368-2

free online editions of InTech Books and Journals can be found at www.intechopen.com

Contents

Preface IX

Part 1 Statistical Face Models & Classifiers 1
Chapter 1 A Review of Hidden Markov Models in Face Recognition 3
  Claudia Iancu and Peter M. Corcoran
Chapter 2 GMM vs SVM for Face Recognition and Face Verification 29
  Jesus Olivares-Mercado, Gualberto Aguilar-Torres, Karina Toscano-Medina, Mariko Nakano-Miyatake and Hector Perez-Meana
Chapter 3 New Principles in Algorithm Design for Problems of Face Recognition 49
  Vitaliy Tayanov
Chapter 4 A MANOVA of LBP Features for Face Recognition 75
  Yuchun Fang, Jie Luo, Gong Cheng, Ying Tan and Wang Dai

Part 2 Face Recognition with Infrared Imaging 93
Chapter 5 Recent Advances on Face Recognition Using Thermal Infrared Images 95
  César San Martin, Roberto Carrillo, Pablo Meza, Heydi Mendez-Vazquez, Yenisel Plasencia, Edel García-Reyes and Gabriel Hermosilla
Chapter 6 Thermal Infrared Face Recognition – a Biometric Identification Technique for Robust Security System 113
  Mrinal Kanti Bhowmik, Kankan Saha, Sharmistha Majumder, Goutam Majumder, Ashim Saha, Aniruddha Nath Sarma, Debotosh Bhattacharjee, Dipak Kumar Basu and Mita Nasipuri

Part 3 Refinements of Classical Methods 139
Chapter 7 Dimensionality Reduction Techniques for Face Recognition 141
  Shylaja S S, K N Balasubramanya Murthy and S Natarajan
Chapter 8 Face and Automatic Target Recognition Based on Super-Resolved Discriminant Subspace 167
  Widhyakorn Asdornwised
Chapter 9 Efficiency of Recognition Methods for Single Sample per Person Based Face Recognition 181
  Miloš Oravec, Jarmila Pavlovičová, Ján Mazanec, Ľuboš Omelina, Matej Féder and Jozef Ban
Chapter 10 Constructing Kernel Machines in the Empirical Kernel Feature Space 207
  Huilin Xiong and Zhongli Jiang

Part 4 Robust Facial Localization & Recognition 223
Chapter 11 Additive Noise Robustness of Phase-Input Joint Transform Correlators in Face Recognition 225
  Alin Cristian Teusdea and Gianina Adela Gabor
Chapter 12 Robust Face Detection through Eyes Localization using Dynamic Time Warping Algorithm 249
  Somaya Adwan

Part 5 Face Recognition in Video 271
Chapter 13 Video-Based Face Recognition Using Spatio-Temporal Representations 273
  John See, Chikkannan Eswaran and Mohammad Faizal Ahmad Fauzi
Chapter 14 Real-Time Multi-Face Recognition and Tracking Techniques Used for the Interaction between Humans and Robots 293
  Chin-Shyurng Fahn and Chih-Hsin Wang

Part 6 Perceptual Face Recognition in Humans 315
Chapter 15 Face Recognition without Identification 317
  Anne M. Cleary

Preface
As babies, one of our earliest stimuli is the human face. We rapidly learn to identify, characterize and eventually distinguish those who are near and dear to us, and this skill stays with us throughout our lives. As humans, we accept face recognition as commonplace; it is only when we attempt to duplicate this skill in a computing system that we begin to realize the complexity of the underlying problem. Understandably, there is a multitude of differing approaches to solving this complex problem, and while much progress has been made, many challenges remain.

This book is arranged around a number of clustered themes covering different aspects of face recognition. The first section, on Statistical Face Models and Classifiers, presents reviews and refinements of well-known statistical models. The second section presents two articles exploring the use of infrared imaging techniques to refine and even replace conventional imaging. There follows a section with several articles devoted to refinements of classical methods. Articles that examine new approaches to improving the robustness of several face analysis techniques are followed by two articles dealing with the challenges of real-time analysis for facial recognition in video sequences. A final article explores human perceptual issues of face recognition.

I hope that you find these articles interesting, that you learn from them, and perhaps even adopt some of these methods for use in your own research activities.

Sincerely,
Peter M. Corcoran
Vice-Dean, College of Engineering & Informatics, National University of Ireland Galway (NUIG), Galway, Ireland

Part 1
Statistical Face Models & Classifiers

Chapter 1
A Review of Hidden Markov Models in Face Recognition
Claudia Iancu and Peter M. Corcoran
College of Engineering & Informatics National University of Ireland Galway Ireland

1. Introduction
Hidden Markov Models (HMMs) are a class of statistical models used to characterize the statistical properties of a signal. An HMM is a doubly stochastic process with an underlying stochastic process that is not observable, but can be observed through another set of stochastic processes that produce a sequence of observed symbols. An HMM has a finite set of states, each of which is associated with a multidimensional probability distribution; transitions between these states are governed by a set of probabilities. Hidden Markov Models are especially known for their application to 1D pattern recognition problems such as speech recognition, musical score analysis, and sequencing problems in bioinformatics. More recently they have been applied to more complex 2D problems, and this review focuses on their use in the field of automatic face recognition, tracking the evolution of the use of HMMs from the early 1990s to the present day. Our goal is to enable the interested reader to quickly review and understand the state of the art for HMM models applied to face recognition problems, and to adopt and apply these techniques in their own work.
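To make this definition concrete, the following minimal sketch evaluates the likelihood of an observation sequence under a discrete-emission HMM using the forward algorithm. All parameter values are illustrative toy numbers, not taken from any cited paper.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Likelihood P(obs | model) via the forward algorithm.

    pi  : (N,) initial state probabilities
    A   : (N, N) transition matrix, A[i, j] = P(next state j | state i)
    B   : (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    obs : sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]            # joint prob. of first symbol and each state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate through transitions, then emit
    return alpha.sum()

# Toy 2-state model with 2 output symbols (illustrative numbers only).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
print(forward_likelihood(pi, A, B, [0, 1, 0]))
```

The "hidden" aspect is visible here: the state sequence is never observed, and the forward recursion sums over all possible state paths in O(T·N²) time.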

2. Historical overview and Introduction to HMM
The underlying mathematical theory of Hidden Markov Models (HMMs) was originally described in a series of papers during the 1960s and early 1970s [Baum & Petrie, 1966; Baum et al., 1970; Baum, 1972]. The technique was subsequently applied in practical pattern recognition applications, more specifically in speech recognition problems [Jelinek et al., 1975]. However, widespread understanding and practical application of HMMs only began a decade later, in the mid-1980s, when several tutorials were written [Levinson et al., 1983; Juang, 1984; Rabiner & Juang, 1986; Rabiner, 1989]. The most comprehensive of these was the last, [Rabiner, 1989], which provided sufficient detail for researchers to apply HMMs to a broad range of practical problems in speech processing and recognition. The broad adoption of HMMs in automatic speech recognition represented a significant milestone in continuous speech recognition [Juang & Rabiner, 2005]. The mathematical sophistication of HMMs, combined with their successful application to a wide range of speech processing problems, has prompted researchers in pattern recognition to consider their use in other areas, such as character recognition, keyword spotting, lip-reading, gesture and action recognition, bioinformatics and genomics.

In this chapter we present a review of the most important variants of HMMs found in the automatic face recognition literature. We begin by presenting the initial 1D HMM structures adapted for use in face recognition problems in section 3. A number of papers on hybrid approaches used to improve the performance of HMMs for face recognition are then discussed in section 4. In section 5 the various 2D variants of HMM are described and evaluated in terms of the recognition rates achieved by each. Finally, section 6 covers some recent refinements in the application of HMM techniques to face recognition problems.

3. HMM in face recognition - initial 1D HMM structures
As mentioned in the previous section, HMMs have been used extensively in speech processing, where signal data is naturally one-dimensional. Nevertheless, HMM techniques remain mathematically complex even in their one-dimensional form, and the extension of HMMs to two-dimensional model structures is exponentially more complex [Park & Lee, 1998]. This consideration led to a much later adoption of HMMs in applications involving two-dimensional pattern processing in general, and face recognition in particular.

3.1 Initial research on ergodic and top-to-bottom 1D HMM
In 1993, a new approach to the problem of automatic face recognition based on 1D HMMs was proposed by [Samaria & Fallside, 1993]. In this paper faces are treated as two-dimensional objects and the HMM model automatically extracts statistical facial features. For the automatic extraction of features, a 1D observation sequence is obtained from each face image by sampling it with a sliding window; each element of the observation sequence is a vector of pixel intensities (greyscale levels). Two simple 1D HMMs were trained by these authors in order to test the applicability of HMMs to face recognition problems. A test database was used comprising images of 20 individuals with a minimum of 10 images per person. Images were acquired under homogeneous lighting against a constant background, and with very small changes in head pose and facial expression. For a first set of tests an ergodic HMM was used. The images were sampled using a rectangular window of size 64 × 64, moving left-to-right horizontally with a 25% overlap (16 pixels), then vertically with a 16-pixel overlap, and starting again horizontally right-to-left. Using the observation sequence thus extracted, an 8-state ergodic HMM was built to approximately match the 8 distinct regions that appear in the face image (eyes, mouth, forehead, hair, background, shoulders and two extra states for boundary regions).
Figure 1 taken from [Samaria & Fallside, 1993] shows the training data used for one subject and the mean vectors for the 8 states found by HMM for that particular subject.

Fig. 1. Training data and states for ergodic HMM [Samaria & Fallside, 1993]


In the second set of tests, a left-to-right (top-to-bottom) HMM was used. Each image was sampled using a horizontal strip 16 pixels high and as wide as the image, moving top-to-bottom with a 12-line overlap. The resulting observation sequence was used to train a 5-state left-to-right HMM in which only transitions between adjacent states are allowed. The training images and the mean vectors for the 5 states found by the HMM are presented in Figure 2.
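The strip-sampling scheme just described is easy to reproduce. The sketch below uses the 16-pixel strip height and 12-line overlap reported above, applied to a random stand-in image of ORL dimensions (112 × 92); it is an illustration, not the authors' code.

```python
import numpy as np

def strip_observations(image, strip_height=16, overlap=12):
    """Sample an image top-to-bottom into overlapping horizontal strips.

    Each strip is flattened into one observation vector, so the image
    becomes a 1D sequence that a left-to-right (top-to-bottom) HMM can model.
    """
    step = strip_height - overlap          # 4-pixel advance for 16/12
    rows = image.shape[0]
    obs = []
    top = 0
    while top + strip_height <= rows:      # keep only strips fully inside the image
        obs.append(image[top:top + strip_height, :].ravel())
        top += step
    return np.array(obs)

face = np.random.randint(0, 256, size=(112, 92))   # ORL-sized stand-in image
seq = strip_observations(face)
print(seq.shape)                                    # (number of strips, 16 * 92)
```

The larger the overlap, the longer the observation sequence, which is exactly the computational trade-off discussed in the following section.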

Fig. 2. Examples of training data and states for top-to-bottom HMM from [Samaria & Fallside, 1993]

In both of these models the statistical determination of model features yields some HMM states that can be directly identified with physical facial features. Training and testing were performed using the HTK toolkit [1]. According to the authors, successful recognition results were obtained when test images were extracted from the same video sequence as the training images, showing that the proposed approach can cope with variations in facial features due to small orientation changes, provided the lighting and background are constant. Unfortunately, the authors did not provide any explicit recognition rates, so it is not possible to compare their methods with later research. It is reasonable, however, to surmise that their experimental results were marginal and were improved upon by the later refinements of [Samaria & Harter, 1994].

[1] http://htk.eng.cam.ac.uk/

3.2 Refinement of the top-to-bottom 1D HMM
In a later paper, [Samaria & Harter, 1994] refined the work begun in [Samaria & Fallside, 1993] on the top-to-bottom HMM. These new experiments demonstrate how face recognition rates using a top-to-bottom HMM vary with different model parameters, and indicate the most sensible choice of parameters for this class of HMM. Up until this point, the parameterization of the model had been based on subjective intuition. For such a 1D top-to-bottom HMM there are three main parameters that affect the performance of the model: the height L (in pixels) of the horizontal strip used to extract the observation sequence, the overlap M (in pixels), and the number of states N of the HMM. The strip height L determines the size of the features and the length of the observation sequence, thus influencing the number of states. The overlap M determines how likely feature alignment is, and also the length of the observation sequence. A model with no overlap would imply a rigid partitioning of the faces, with the risk of cutting across potentially discriminating features. The number of states N determines the number of features used to characterize the face, and also the computational complexity of the system.

These experiments were performed using the Olivetti Research Lab (ORL) database, containing frontal facial images with limited side movements and head tilt. The database comprised 40 subjects with 10 pictures per subject. The experiments used 5 images per person for training and the remaining 5 images for testing. The results were reported as error rates, calculated as the proportion of incorrectly classified images. Three sets of tests were done, varying the values of each of the three parameters as follows: 2 ≤ N ≤ 10, 1 ≤ L ≤ 10 and 0 ≤ M ≤ L−1. When M was varied, the number of states was fixed at N = 5 and the window height L was varied between 2 and 10. According to the tests, the error rates drop as the overlap increases, approximately from 28% to 15%; however, a greater overlap implies a bigger computational effort. When L was varied, N was fixed at 5 and the overlaps considered were 0, 1 and L−1. In this case, if there is little or no overlap, the smaller the strip height the lower the error rate, with values from 13% for L = 1 up to 28% for L = 10. However, for a sufficiently large overlap the strip height has a marginal effect on recognition performance, with the error rate remaining almost constant at around 14%. In the third set of tests N was varied, with L = 1 and no overlap, and with L = 8 and maximum overlap (M = L−1). The performance is fairly uniform for values of N between 4 and 10, with an increase in error for values smaller than three.

The conclusions of this paper are: (i) a large overlap in the sampling phase (the extraction of observation sequences) yields better recognition rates, with the error rate varying from up to 30% for minimum overlap down to 15% for maximum overlap; (ii) for large overlaps the height of the sampling strip has limited effect, the error rate remaining almost constant at 15% for maximum overlap regardless of the value of L; and (iii) best results are obtained with an HMM with 4 or more states, the error rate dropping from around 25% for 1-2 states to 15% from 4 states onward. We remark that these early models were relatively unsophisticated and were limited to fully frontal faces with images taken under controlled background and illumination conditions.
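The trade-offs between L, M and sequence length follow directly from the sampling geometry. Assuming strips must lie fully inside the image, the number of observation vectors is a simple function of image height, strip height L and overlap M (a helper written for this review, not from the cited paper; integer division discards any final partial strip):

```python
def num_strips(height, L, M):
    """Number of observation vectors for an image of the given height,
    strip height L (pixels) and overlap M (pixels), assuming M < L."""
    return (height - L) // (L - M) + 1

# ORL images are 112 pixels high.
print(num_strips(112, 16, 12))  # large overlap  -> long observation sequence
print(num_strips(112, 16, 0))   # no overlap     -> short sequence, rigid partition
```

A longer sequence gives the HMM more evidence per face (better alignment) at proportionally higher decoding cost, which is the trade-off measured in the experiments above.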
3.3 1D HMM with 2D-DCT features for face recognition
In [Nefian & Hayes, May 1998], Samaria's version of the 1D HMM is upgraded using 2D-DCT feature vectors instead of pixel intensities. The face image is divided into 5 significant regions: hair, forehead, eyes, nose, and mouth. These regions appear in a natural order, each region being assigned to a state in a top-to-bottom 1D continuous HMM. The state structure of the face model and the non-zero transition probabilities are shown in Figure 3.

Fig. 3. Sequential HMM for face recognition

The feature vectors were extracted using the same technique as in [Samaria & Harter, 1994]. Each face image of height H and width W is divided into overlapping strips of height L and width W, the amount of overlap between consecutive strips being P; see Figure 4. The number of strips extracted from each face image determines the number of observation vectors.


The 2D-DCT transform is applied to each face strip and the observation vectors are determined, comprising the first 39 2D-DCT coefficients. The system is tested on the ORL database [2], containing 400 images of 40 individuals (10 images per individual, image size 92 × 112), with small variations in facial expression, pose, hair style and eye wear. Half of the database is used for training and the other half for testing. The recognition rate achieved for L = 10 and P = 9 is 84%. Results are compared with recognition rates obtained using other face recognition methods on the same database: the recognition rate for the eigenfaces method is 73%, and for the 1D HMM used by Samaria it is also 84%, but the processing time for the DCT-based HMM is an order of magnitude faster: 2.5 seconds in contrast to the 25 seconds required by the pixel intensity method of [Samaria & Harter, 1994].
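The DCT-based feature extraction can be sketched as follows. This is a simplified illustration: a low-frequency 8 × 8 corner of the coefficient array is flattened in row order and truncated to 39 values, rather than reproducing the exact coefficient selection of [Nefian & Hayes, May 1998].

```python
import numpy as np
from scipy.fft import dctn

def dct_observation(strip, n_coeffs=39):
    """2D-DCT of one image strip, keeping low-frequency coefficients.

    Simplification (assumption of this sketch): the first n_coeffs entries
    of the flattened top-left 8x8 block are kept, not the paper's exact mask.
    """
    coeffs = dctn(strip.astype(float), norm='ortho')
    block = coeffs[:8, :8].ravel()          # low-frequency corner of the spectrum
    return block[:n_coeffs]

strip = np.random.randint(0, 256, size=(10, 92))   # L = 10 strip, ORL width 92
vec = dct_observation(strip)
print(vec.shape)                                    # 39-dimensional observation
```

The speed advantage reported above comes from dimensionality: 39 DCT coefficients per strip replace 10 × 92 = 920 raw pixel intensities, so the Gaussian likelihood evaluations inside the HMM become far cheaper.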

Fig. 4. Face image parameterization and blocks extraction [Nefian & Hayes, May 1998].

3.4 1D HMM with KLT features for face detection and recognition
In a second paper, [Nefian & Hayes, October 1998] introduce an alternative 1D HMM approach, which performs face detection in addition to face recognition. This employs the same topology and structure as the previous work of these authors, described above, but uses different image features: the observation vectors used here are the coefficients of the Karhunen-Loeve Transform (KLT). The compression properties of the KLT, as well as its decorrelation properties, make it an attractive technique for the extraction of observation vectors. Block extraction from the image is achieved in the same way as in the previous paper. The eigenvectors corresponding to the largest eigenvalues of the covariance matrix of the extracted vectors form the KLT basis set. If µ is the mean of the vectors used to compute the covariance matrix, a set of vectors is obtained by subtracting this mean from each of the vectors corresponding to a block in the image. The resulting set of vectors is then projected onto the eigenvectors of the covariance matrix, and the resulting coefficients form the observation vectors.

The system is used for both face detection and recognition by the authors. For face detection, the system is first trained with a set of frontal faces of different people taken under different illumination conditions, in order to build a face model. Then, given a test image, face detection begins by scanning the image with horizontally and vertically overlapping rectangular windows, extracting the observation vectors and computing the probability of the data inside each window given the face model, using the Viterbi algorithm. Windows whose face-model likelihood exceeds a threshold are selected as possible face locations. The face detection system was tested on the MIT database of 48 images of 16 people, with background and with different illuminations and head orientations. Manually segmented faces from 9 images were used for training and the remaining images for testing, with a face detection rate of 90%. For face recognition the system was applied to the ORL database containing 400 images of 40 individuals (10 images per individual, at a resolution of 92 × 112 pixels), with small variations in facial expression, pose, hairstyle and eye wear. The system was trained with half of the database and tested with the other half. The accuracy of the system presented in this paper increases slightly over earlier work, to 86%, while the recognition time decreases due to the use of the KLT features.

[2] http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

3.5 Refinements to 1D HMM with 2D-DCT features
Following on from the work of [Samaria, 1994] and [Nefian, 1999], Kohir & Desai wrote a series of three papers using the 1D HMM for face recognition problems. In the first paper, [Kohir & Desai, 1998], the authors present a face recognition system based on a 1D HMM coupled with 2D-DCT coefficients, using a different approach to feature extraction than that employed by [Nefian & Hayes, May 1998 & October 1998]. The features are obtained by sliding a square window in raster-scan fashion over the face image, from left to right and with a predefined overlap. At every position of the window over the image (called a sub-image) the 2D DCT is computed, and only the first few DCT coefficients are retained by scanning the sub-image in a zigzag fashion. The zigzag-scanned DCT coefficients form an observation vector. The sliding procedure and the zigzag scanning are illustrated in Figure 5 [Kohir & Desai, 1998].

Fig. 5. (a) Raster scan of face image with sliding window. (b) Construction of 1D observation vector from zigzag scanning of the sliding window [Kohir & Desai, 1998].

The performance of this system was tested using the ORL database. Half of the images were used in the training phase and the other half for testing (5 faces for training and the remaining 5 for testing); sampling windows of 8 × 8 and 16 × 16 were used with 50% and 75% overlaps, and 10, 15 and 21 DCT coefficients were extracted. The number of states in the HMM was fixed at 5, as per the earlier work of [Nefian & Hayes, May 1998]. The recognition rates vary from 74.5% for a 16 × 16 window with a 50% overlap and 21 DCT coefficients, to 99.5% for a 16 × 16 window, 75% overlap and 10 DCT coefficients.

In a second paper, [Kohir & Desai, 1999], the authors further refined their research contribution. To evaluate the recognition performance of the system, 2 new experiments were performed:
• In the first experiment the proposed method was tested with different numbers of training and testing faces per subject. The tests were performed on the ORL database, and the number of training faces was increased from 1 to 6, while the remaining faces were used in the testing phase. A sampling window of 16 × 16 with 75% overlap was used with 10 DCT coefficients, as these had provided optimal recognition rates in their earlier work. The recognition rates achieved range from 78.33% for a single training image and 9 testing images, up to 99.5% when 5 or 6 training images and 5 or 4 testing images are used. It is worth noting that the ORL database comprises frontal face images in uniform lighting conditions, and that recognition rates close to 100% are often achieved when using such datasets.
• In the second experiment the system was tested while increasing the number of states in the HMM. Again the ORL database was used, with 5 images for training and 5 for testing. The recognition rates vary as follows: 92% for a 2-state HMM, increasing to 99.5% for a 5-state HMM and stabilizing around 97%-98% when using up to 17 states. The system was also tested with the SPANN database [3] containing 249 persons, each with 7 pictures, with variations in pose; 3 pictures were used for training and the remaining 4 for testing, and the recognition rate achieved was 98.75%.
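The zigzag coefficient ordering of [Kohir & Desai, 1998] (Figure 5b) can be sketched as follows; the 4 × 4 block and coefficient count below are illustrative stand-ins for a DCT coefficient block.

```python
import numpy as np

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order.

    Entries are grouped by anti-diagonal r + c; odd diagonals are walked
    top-to-bottom (sort by row), even diagonals bottom-to-top (sort by col).
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block, n_coeffs=10):
    """Flatten a square coefficient block in zigzag order, keeping n_coeffs."""
    idx = zigzag_indices(block.shape[0])[:n_coeffs]
    return np.array([block[r, c] for r, c in idx])

block = np.arange(16).reshape(4, 4)     # stand-in for a DCT coefficient block
print(zigzag_scan(block, 6))
```

Because the 2D DCT concentrates energy in the low-frequency top-left corner, the zigzag walk visits coefficients in roughly decreasing order of importance, so truncating to the first 10-21 values keeps the most discriminative information.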
A third paper, [Kohir & Desai, 2000], describes the same 1D HMM with DCT features, with a variation in the training phase. In this paper, a mean image is first constructed from all the training images, and then each training image is subtracted from the mean image to obtain a mean-subtracted image. The observation vectors are extracted from these mean-subtracted images using the same window-sliding method. The observation vector sequences are then clustered using the K-means technique, and thus an initial state segmentation is obtained. Subsequently, the conventional training steps are followed. In the recognition phase, each test image is first subtracted from the mean image obtained during the training phase, and recognition is performed on the resulting mean-subtracted image. The experiments for face recognition were performed on the same two databases, ORL and SPANN. For the ORL database 5 pictures were used for training and the remaining 5 for testing; the recognition rate obtained is 100%, compared to 88% when the eigenfaces method is used. For the SPANN database 3 pictures were used for training and the remaining 4 for testing; the recognition rate obtained was 90%, compared again with the eigenfaces method, which achieved a 77% recognition rate. For the ORL database different resolutions were also tested, the highest recognition rate of 100% being obtained at 96 × 112.

'New subject rejection' for authentication applications was also tested on the ORL database. The database was segmented into 2 sets: 20 subjects corresponding to an 'authorized' subject class, with 5 pictures used in the training phase and the rest in the testing phase, and the remaining 20 subjects assigned to an 'unauthorized' class, with all 10 pictures used in the testing phase. For each 'authorized' subject an HMM model is built, and a separate 'common HMM' model is built using all mean-subtracted training images of all the 'authorized' subjects. For each test face, if the probability of the 'common HMM' is the highest, the input face image is rejected as 'unauthorized'; otherwise the input face image is treated as 'authorized'. The results are 100% rejection of any new subjects and 17% rejection of known subjects (false negatives).

[3] http://www.khayal.ee.iitb.ernet.in/usr/SPANN_DATA_BASE/2D_Signals/Face/faces

3.6 Refinement of 1D HMM with sequential pruning
As shown by [Samaria & Harter, 1994], the number of states used in a 1D HMM can have a strong influence on recognition rates. The problem of the optimal selection of the structure for an HMM is considered in [Bicego et al., 2003a]. The first part of this paper presents a method of improving the determination of the optimal number of states for an HMM. The authors then prove the equivalence between (i) a 1D HMM whose observation vectors are modelled with multiple Gaussians per state and (ii) a 1D HMM with one Gaussian per state but employing a larger number of states. According to the authors, there are several possible methods for solving the first problem, e.g. cross-validation, the Bayesian inference criterion (BIC), and minimum description length (MDL). These are based on training models with different structures and then choosing the one that optimizes a certain selection criterion. However, these methods involve a considerable computational burden, and they are sensitive to the local-greedy behaviour of the HMM training algorithm, i.e. the successful training of the model is influenced by the initial estimates selected. The approach proposed by [Bicego et al., 2003a] addresses both the computational burden of model selection and the initialization phase.
The key idea is the use of a decreasing learning strategy, starting each training session from a 'nearly good' situation derived from the previous training session by pruning the 'least probable' state. More specifically, the authors propose starting the model training with a large number of states. They then run the estimation algorithm and, on convergence, evaluate the model selection criterion. The 'least probable' state is pruned, and the resulting configuration of the model, with one state fewer, is used as the starting point for the next sequence of iterations. In this way, each training session starts from a 'nearly good' estimate. The key observation supporting this approach is that, when the number of states is very large, the dependency of the model's behaviour on the initial estimates is much weaker. An additional benefit is that using 'nearly good' initializations drastically reduces the number of iterations required by the learning algorithm at each step, so the number of model states can be rapidly reduced at low computational cost.

In order to assess the performance of the proposed method, the authors tested the pruning approach and the standard approach (training one HMM for each candidate number of states) with the BIC criterion and the MMDL (mixture minimum description length) criterion [Figueiredo et al., 1999]. These two strategies are compared in terms of: (i) accuracy of the model size estimation, (ii) total computational cost of the training phase, and (iii) classification accuracy. In all the HMMs considered in this paper the emission probability density for each state is a single Gaussian. For the accuracy of the model size estimation, synthetically generated test sets from 3 known HMMs were used, with the number of states allowed ranging from 2 to 10. The selection accuracy ranged from 54% to 100% for standard BIC and MMDL, and from 98% to 100% for pruning BIC and MMDL, with up to 50% fewer iterations required for the latter.
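The control flow of this sequential pruning strategy can be sketched abstractly. The `train`, `bic` and `prune_state` callables below are placeholders for a real HMM re-estimation routine, a model-selection criterion, and least-probable-state removal; only the loop structure reflects the strategy described above, and none of it is the authors' code.

```python
def prune_hmm(train, bic, prune_state, n_max, n_min=2):
    """Decreasing-size HMM model selection in the style of Bicego et al.

    train(model_or_size) -> fitted model (re-estimation from a start point;
                            accepts an initial state count on the first call)
    bic(model)           -> model-selection score (lower is better)
    prune_state(model)   -> copy of model with its least-probable state removed

    Start large; after each convergence, prune the least probable state so
    the next training run begins from a 'nearly good' warm-start estimate.
    """
    model = train(n_max)                   # initial training with many states
    best, best_score = model, bic(model)
    for _ in range(n_max - 1, n_min - 1, -1):
        model = train(prune_state(model))  # warm start from the pruned model
        score = bic(model)
        if score < best_score:
            best, best_score = model, score
    return best
```

Compared with training each model size from scratch, every restart here begins near a converged solution, which is why the authors report far fewer re-estimation iterations overall.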


Classification accuracy was tested on both synthetic and real data. For the synthetic data, the test sets used previously to estimate the accuracy of the model size estimation were reused, obtaining 92% to 100% accuracy for standard BIC and MMDL compared to 98% to 100% accuracy for pruning BIC and MMDL, with 35% fewer iterations for pruning. For classification accuracy on real data, two experiments were conducted. The first involves a 2D shape recognition problem, and uses a data set with four classes, each with 12 different shapes. The results obtained are 92.5% for standard BIC, 94.37% for standard MMDL, and 95.21% for pruning BIC and MMDL. The second experiment was conducted on the ORL database, using the method proposed by [Kohir & Desai, 1998]. The results are 97.5% for standard BIC and MMDL and 97.63% for pruning BIC and MMDL. The classification accuracies are similar, but the pruning method substantially reduces the number of iterations required.

3.7 A 1D HMM with 2D-DCT features and Haar wavelets
In a following paper, [Bicego et al., 2003b], a comparison between DCT coding and wavelet coding is undertaken. The aim is to evaluate the effectiveness of HMMs in modelling faces using these two different forms of image features; each compresses the relevant image data, but using different underlying techniques. The suitability of HMMs for dealing with the JPEG 2000 image compression standard is also considered. The authors adopt the 1D HMM approach introduced by [Kohir & Desai, 1998], but the optimum number of states for the model is selected using the sequential pruning strategy presented in [Bicego et al., 2003a] and described in the preceding section. The same feature extraction used by [Kohir & Desai, 1998] is employed, and both 2D DCT and Haar wavelet coefficients are computed. The experiments were conducted on the ORL database, consisting of 40 subjects with 10 sample images of each.
The first 5 images of each subject are used for training the HMM and the remaining 5 in the testing phase. The number of states for each HMM is estimated using the pruning strategy. For feature extraction, a 16 × 16 pixel sliding window is used, with 50% and 75% overlaps being tested, and in each case the first 4, 8 and 12 DCT or Haar coefficients are retained. The recognition rates for 50% overlap range from 97.4% for 4 coefficients to 100% for 12 coefficients, and for 75% overlap from 95.4% for 4 coefficients to 99.6% for 12 coefficients. Slightly better results were obtained for DCT coefficients throughout the experiments. It is worth noting that, unlike [Samaria & Harter, 1994] and [Nefian & Hayes, 1998], the observation-vector extraction method of [Kohir & Desai, 1998] performs better with a 50% overlap than with a 75% overlap. A second experiment was performed to show that the HMM remains effective for face recognition regardless of the coefficients used: the wavelet coding was replaced with a trivial coding, namely the mean of the square window. The results obtained are 84.9% for 50% overlap and 77.8% for 75% overlap.
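As a rough illustration of the kind of feature extraction used throughout these experiments, the following numpy-only sketch slides a 16 × 16 window over an ORL-sized (112 × 92) image with 50% overlap and keeps the first few 2D-DCT coefficients in zigzag (low-to-high frequency) order. The window size, overlap and coefficient counts mirror the text; the zigzag ordering and all helper names are assumptions for illustration, not the papers' code:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    C = np.array([[np.cos(np.pi * (2 * j + 1) * k / (2 * n)) for j in range(n)]
                  for k in range(n)])
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in zigzag (low->high frequency) order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def dct_observations(image, win=16, overlap=0.5, n_coef=8):
    """Slide a win x win window over the image with the given overlap and keep
    the first n_coef 2D-DCT coefficients (zigzag order) of each window."""
    C = dct_matrix(win)
    step = max(1, int(win * (1 - overlap)))
    zz = zigzag_indices(win)[:n_coef]
    obs = []
    h, w = image.shape
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            block = image[top:top + win, left:left + win].astype(float)
            D = C @ block @ C.T          # separable 2D DCT-II
            obs.append([D[r, c] for r, c in zz])
    return np.asarray(obs)

img = np.random.default_rng(0).integers(0, 256, size=(112, 92))  # ORL image size
X = dct_observations(img, win=16, overlap=0.5, n_coef=8)
print(X.shape)   # one 8-D observation vector per window position
```

The resulting sequence of observation vectors (one per window position, scanned in raster order) is what the 1D HMM is trained on.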

4. Hybrid approaches based on 1D HMM
From the discussion of the preceding section it can be seen that 1D HMMs can perform successfully in face recognition applications. However, the vast majority of early experiments were performed on the ORL database. The images in this dataset exhibit only very small variations in head pose, facial expression and facial occlusion (such as facial hair and glasses), and almost no variation in illumination. For practical applications a face recognition system must be able to handle significant variations in facial appearance in a


robust manner. Thus in this section more challenging face recognition applications are described and further HMM approaches from the literature are considered. Specifically, we consider hybrid approaches based on HMMs that have been used successfully in more challenging applications of face recognition. There are several core problems that a face recognition system has to solve, specifically variations in illumination, variations in facial expression or partial occlusions of the face, and variations in head pose. Firstly, an attempt at solving recognition problems caused by facial occlusions is considered [Martinez, 1999]. The solution adopted by this author was to use principal component analysis (PCA) features to characterize 6 different regions of the face and a 1D HMM to model the relationships between these regions. A second group of researchers [Wallhoff et al., 2001] tackled the challenging task of recognizing side-profile faces in datasets where only frontal faces were used in the training stage. These authors combined artificial neural network (ANN) techniques with a 1D HMM to solve this challenging problem.

4.1 Using 1D HMM with PCA derived features

A face recognition system is introduced in [Martinez, 1999] for indexing images and videos from a database of faces. The system has to tackle three key problems: identifying frontal faces acquired (i) under differing illumination conditions, (ii) with varying facial expressions and (iii) with different parts of the face occluded by sunglasses or scarves. Martinez's idea was to divide the face into N different regions, analyze each using PCA techniques, and model the relationships between these regions using 1D HMMs. The problem of different lighting conditions is addressed in this paper by training the system with a broad range of illumination variations.
To handle facial expressions and occlusions, the face is divided into 6 distinct local areas and local features are matched. This dependence on local rather than global features should minimize the effect of facial expressions and occlusions, which affect only a portion of the overall facial region. Each of these local areas obtained from all the images in the database is projected into a primary eigenspace. Each area is represented in vector form. Figure 6 [Martinez, 1999] shows the local feature extraction process.

Fig. 6. Projection of the 6 different local areas into a global eigenspace [Martinez, 1999].
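The projection step can be sketched with plain numpy: build an eigenspace for one local area from its vectorized training crops via a thin SVD, then represent any new crop by its coefficients in that basis. The region size, image count and number of retained eigenvectors below are assumed stand-in values, not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: one row per training image, each row a
# vectorized crop of the same local area (e.g. a left-eye region).
areas = rng.normal(size=(200, 24 * 24))      # 200 images, 24 x 24 crops

# Build the eigenspace for this local area with a thin SVD.
mean = areas.mean(axis=0)
U, S, Vt = np.linalg.svd(areas - mean, full_matrices=False)
k = 20                                       # retained eigenvectors (assumed)
eigenspace = Vt[:k]                          # (k, 576) orthonormal basis

def project(area_vec):
    """Represent one local area by its k PCA coefficients."""
    return eigenspace @ (area_vec - mean)

coeffs = project(areas[0])
print(coeffs.shape)
```

One such eigenspace per local area yields the per-region feature vectors that the 1D HMM then ties together.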


Note that face localization is performed manually in this research and thus cannot be precise enough to guarantee that the extracted local information will always be projected accurately into the eigenspace. Thus information from pixels within and around the selected local area is also extracted, using a rectangular window. By considering these six local areas as hidden states, a 1D HMM was built for each image in the database. However, it is more desirable to have a single HMM for each person in the database, as opposed to an HMM for each image. To achieve this, all HMMs of the same person were merged into a single 1D HMM, where the transition probability from one state to another is 1/(number of HMMs per person). In the recognition phase, instead of using the forward-backward algorithm, the author used the Viterbi algorithm [Rabiner, 1989] to compute the probability of an observation sequence given a model. Two sets of tests were performed, using pictures and video sequences. The image database4 was created by Aleix Martinez and Robert Benavente. It contains over 4,000 colour facial images corresponding to 126 people - 70 men and 56 women. There are 12 images per person: the first 6 are frontal-view faces with different facial expressions and illumination conditions, and the second 6 are faces with occlusions (sunglasses and scarf) and different illumination conditions. The pictures were taken under strictly controlled conditions, but no restrictions on appearance, including clothing, accessories such as glasses, make-up or hairstyle, were imposed on participants. Each person participated in two sessions, separated by 14 days, and the same pictures were taken in both sessions. In addition, 30 video sequences of 25 frames each were processed, almost all frames containing a frontal face.
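The Viterbi scoring used in the recognition phase can be sketched in the log domain as follows. This is a generic discrete-state formulation with toy numbers, not Martinez's actual models; all names and values are illustrative:

```python
import numpy as np

def viterbi_score(log_pi, log_A, log_B):
    """Log-probability of the single most likely state path.
    log_pi: (N,) initial log-probs, log_A: (N,N) transition log-probs,
    log_B: (T,N) log-likelihood of each frame under each state."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]              # best score ending in each state
    for t in range(1, T):
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[t]
    return float(np.max(delta))

# Toy 2-state model with a 3-frame observation sequence.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.9, 0.2], [0.1, 0.8], [0.7, 0.3]])
print(viterbi_score(log_pi, log_A, log_B))
```

Unlike the forward-backward score, which sums over all state paths, this returns the score of the single best path, which is the quantity Martinez uses for recognition.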
Five different tests were run, using 50 people (25 males and 25 females) randomly selected from the database, converted to greyscale and subsampled to half their size, together with the 30 corresponding video sequences. In a first test, all 12 images per person were used in training, and the system was tested with every image by replacing each one of the local features in turn with zero-mean random noise. The recognition rate obtained was 96.83%. In a second test, training used the first six images and testing the last six, which feature occlusions; a recognition rate of 98.5% was achieved. In a third test the last six images were used for training and the first six for testing, and the resulting recognition rate was 97.1%. A fourth test consisted of training with only two non-occluded images and testing with all the remaining images; a lower recognition rate of 72% was obtained. Finally, the system was trained with all 12 images for each person and tested with the video sequences, achieving a 93.5% recognition rate.

4.2 Artificial Neural Networks (ANN) in conjunction with 1D HMM

[Wallhoff et al., 2001] approached the task of recognizing profile views given previous knowledge of only frontal views, which may prove difficult even for humans. The authors use two approaches based on a combination of Artificial Neural Networks (ANN) and a modelling technique based on 1D HMMs: the first uses a synthesized profile view, while the second employs a joint parameter-estimation technique. This paper is of particular interest because of its focus on non-frontal faces. In fact these authors are among the first to address the concept of training the recognition system with conventional frontal faces, but extending the recognition to include faces with only a side-profile view.
4 http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html


The experiments are performed on the MUGSHOT5 database containing the images of 1573 cases, where most individuals are represented by only two photographs: one showing the frontal view of the person's face and the other showing the person's right-hand profile. The database contains mostly male subjects of various ages and ethnic groups, with and without glasses or beards, and with a wide range of hairstyles. The lighting conditions and the background of the photographs also change. The pictures in the database are stored as 8-bit greyscale images. Prior to applying the main techniques of [Wallhoff et al., 2001] each image is pre-processed. Photographs with unusually high distortions, perturbations or underexposure are discarded; all images are manually labelled so that all faces appear in the centre of the image with a moderate amount of background, and are resized to 64 × 64 pixels. Then two sets are defined: a first set of 600 facial image pairs, frontal and right-hand profile, is used for training the neural network; a second set of 100 facial image pairs is used for testing. The features used for the experiments are pixel intensities. To obtain the observation vectors, each 64 × 64 image is divided into its 64 columns, so from each image 64 observation vectors are extracted. The dimension of each vector is the number of rows in the image, which is also 64, and the vectors consist of pixel intensities. In the training phase an appropriate neural network is used, designed by applying the following intuitions: (i) a point in the frontal view will be found in approximately the same row as in the profile view; (ii) considering the right half of the face to be almost bilaterally symmetrical with the left half, only the first 40 columns of the image are used in the input layer of the ANN.
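The column-wise observation extraction described above is simple to sketch; the helper name below is an assumption, but the shapes follow the paper's description (64 observation vectors per image, 64 pixel intensities each, first 40 columns feeding the ANN):

```python
import numpy as np

def column_observations(image):
    """Each of the 64 image columns becomes one observation vector of
    64 pixel intensities, as described for [Wallhoff et al., 2001]."""
    assert image.shape == (64, 64)
    return image.T.astype(float)   # row t = column t of the image

img = np.random.default_rng(2).integers(0, 256, size=(64, 64))
obs = column_observations(img)
ann_input = obs[:40].ravel()       # only the first 40 columns feed the ANN
print(obs.shape, ann_input.shape)
```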
Figure 7, taken from [Wallhoff et al., 2001], shows how a frontal view of the face is used to generate the profile view. In the testing phase, a 1D left-to-right first-order HMM is used, allowing self-transitions and transitions to the next state only. The models consist of 24 states, plus two non-emitting start and end states. In the first hybrid approach for face profile recognition there are two training stages. Firstly, a neural network is trained using the first set of 600 images, the frontal image of each individual representing the input and the profile view the output. In this way the neural network is trained to synthesize profiles from frontal images; figure 8 [Wallhoff et al., 2001] shows an example of a synthesized profile. In the second training stage, the 100 frontal images are presented to the neural network and their corresponding profiles are synthesized. Using these profiles, an average profile HMM model is obtained. Then for each testing profile, an HMM model is built using the average profile model for initialization. The Baum-Welch estimation procedure is used for training the HMM. In the second approach only one training stage is performed, vastly improving the computation speed. This proceeds as follows: the NN is trained using the frontal images as input; the target outputs are in this case the mean values of each Gaussian mixture used to describe the observations of the corresponding profile image. First, an average profile HMM model is obtained using the 600 training profile images. Using this average model, the mean values for each individual in the training set are computed and used as the target values for training the NN. In the recognition phase, the NN returns the profile mean values for each frontal face. Using these means and the average profile model, the corresponding HMM is built, and the probability of the test profile image given the HMM model is computed.
The recognition rates achieved for the
5 http://www.nist.gov/srd/nistsd18.cfm


systems proposed in this paper are around 60% for the first approach and up to 49% for the second approach, compared to 70%-80% when humans perform the same recognition task. The approach presented by the authors is very interesting in the context of a mugshot database, where only two instances of a face, one frontal and one profile, are present. The results are also quite impressive compared to the human recognition rates reported. However, both ANN and HMM are computationally complex, and using pixel intensities as features makes this approach very demanding in terms of computing resources.

Fig. 7. Generation of a profile view from a frontal view [Wallhoff et al., 2001].

Fig. 8. Example of frontal view, generated and real profile [Wallhoff et al., 2001].

5. 2D HMM approaches
In sections 3 and 4 we showed how 1D HMMs might be adapted for use in face recognition applications. But face images are fundamentally 2D signals, and it seems intuitive that they would be more effectively processed with a 2D recognition algorithm. Note however that a fully connected 2D extension of the HMM exhibits a significant increase in computational complexity, making it inefficient and unsuitable for practical face recognition applications [Levin & Pieraccini, 1992]. As a consequence of the complexity of the full 2D HMM approach, a number of simpler structures were developed; these are discussed in detail in the following sections.


5.1 A first application of pseudo 2D HMM to Facial Recognition

In his PhD thesis, [Samaria, 1994] was the first researcher to use pseudo-2D HMMs in face recognition, with pixel intensities as features. To obtain a P2D HMM, a one-dimensional HMM is generalized to give the appearance of a two-dimensional structure, by allowing each state in a one-dimensional global HMM to be an HMM in its own right. The HMM thus consists of a top-level set of superstates, each of which contains a set of embedded states. The superstates model the two-dimensional data in one direction, with the embedded HMMs modelling the data along the other direction. This model is appropriate for face images as it exploits the 2D physical structure of a face: a face preserves the same structure of states from top to bottom (forehead, eyes, nose, mouth, chin), and also the same left-to-right structure of states inside each of these superstates. An example of the state structure for the face model and the non-zero transition probabilities of the P2D HMM are shown in figure 9. Each state in the overall top-to-bottom HMM is assigned a left-to-right HMM.

Fig. 9. Structure of a P2D HMM.

To simplify the implementation of the P2D-HMM, the author used an equivalent 1D HMM in its place, as shown in figure 10. In this case, the shaded states in the 1D HMM represent end-of-line states with two possible transitions: one to the same row of states (superstate self-transition) and one to the next row of states (superstate-to-superstate transition). For feature extraction a square window is used, sliding from left to right and top to bottom. Each observation vector contains the intensity values of the pixels contained in the window, arranged as a column vector. To accommodate the extra end-of-line state, a white frame is added at the end of each line of sampling. Each state is modelled by a single Gaussian whose mean and standard deviation are initialized at the beginning of training to mid-intensity values for normal states, and to white with near-zero standard deviation for the end-of-line states. The parameters of the model are then iteratively re-estimated using the Baum-Welch algorithm.
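The sampling scheme with end-of-line marker frames can be sketched as follows. The window size, step and all-white marker value are assumptions chosen for illustration (Samaria tested many window/overlap combinations):

```python
import numpy as np

def p2d_observations(image, win=8, step=4):
    """Raster-scan win x win windows left-to-right, top-to-bottom, and
    append an all-white frame after each line of windows so the end-of-line
    states of the equivalent 1D HMM have a distinctive frame to absorb."""
    h, w = image.shape
    white = np.full(win * win, 255.0)
    seq = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            seq.append(image[top:top + win, left:left + win].astype(float).ravel())
        seq.append(white)                      # end-of-line marker frame
    return np.asarray(seq)

img = np.random.default_rng(0).integers(0, 256, size=(16, 16))
seq = p2d_observations(img)
print(seq.shape)   # 3 lines of 3 windows, plus 3 marker frames
```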

Fig. 10. P2D HMM and its equivalent 1D HMM.
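The non-zero transition structure of such a model (e.g. a 3-6-6-6-3 topology, read as the number of embedded states per top-to-bottom superstate) can be sketched as a boolean mask over the states of the equivalent 1D HMM. This is a simplified view in which the last embedded state of each superstate plays the role of the end-of-line state; the helper name and the exact wiring are assumptions for illustration:

```python
import numpy as np

def p2d_transition_mask(embedded_counts):
    """Allowed transitions for the 1D equivalent of a left-to-right P2D-HMM.
    embedded_counts, e.g. [3, 6, 6, 6, 3], gives the number of embedded
    states in each top-to-bottom superstate."""
    n = sum(embedded_counts)
    mask = np.zeros((n, n), dtype=bool)
    start = 0
    for count in embedded_counts:
        for i in range(count):
            s = start + i
            mask[s, s] = True                   # self-transition
            if i + 1 < count:
                mask[s, s + 1] = True           # next embedded state
        last = start + count - 1
        mask[last, start] = True                # superstate self-transition (new line)
        if start + count < n:
            mask[last, start + count] = True    # advance to the next superstate
        start += count
    return mask

mask = p2d_transition_mask([3, 6, 6, 6, 3])
print(mask.shape, int(mask.sum()))
```

Everywhere the mask is False the transition probability is pinned to zero, which is what keeps the equivalent 1D model tractable.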


Samaria's experiments were carried out on the ORL database. Different topologies and sampling parameters were used for the P2D-HMM: from 4 to 5 superstates and from 2 to 8 embedded states within each superstate. In addition these experiments considered different sizes of sampling window with different overlaps, ranging from 2 × 2 pixels with 1 × 1 overlap up to 24 × 22 (horizontal × vertical) pixels with 20 × 13 pixels overlap. The highest error rate of 18% was obtained for a 3-5-5-3 P2D-HMM, using a 10 × 8 scanning window with an 8 × 6 overlap, while the smallest error rate of 5.5% was obtained for a 3-6-6-6-3 P2D-HMM, with a 10 × 8 (and 12 × 8) window and 8 × 6 (and 9 × 6 respectively) overlap. In the same thesis Samaria also tested the standard unconstrained P2D HMM, which does not have an end-of-line state; in this case no attempt is made to enforce that the last frame of a line of observations is generated by the last state of the superstate. The recognition results for the unconstrained P2D HMM are similar to those obtained with the constrained P2D-HMM, the error rates ranging from 18% to 6%. We remark that Samaria also obtained a 2% error rate for a 3-7-7-5-3 P2D HMM with a 12 × 8 sampling window and 4 × 6 overlap, but considering that for only slightly different overlaps (8 × 6 and 4 × 4) the error rates were 6% and 8.5% respectively, this particular result appears to be a statistical anomaly. It serves as a reminder that these models are based on underlying statistical probabilities and that occasional aberrations can occur.

5.2 Refining pseudo 2D HMM with DCT features

In [Nefian & Hayes, 1999] the authors adapted the P2D-HMM developed by [Kuo & Agazzi, 1994] for optical character recognition, showing that it represents a valid approach to facial recognition and detection. These authors renamed the technique the embedded HMM.
To obtain the observation vectors, a set of overlapping blocks is extracted from the image from left to right and top to bottom, as shown in figure 11, each observation vector consisting of the 6 lowest-frequency 2D-DCT coefficients extracted from the corresponding image block. Each state in the embedded HMMs is modelled using a single Gaussian.

Fig. 11. Face image parameterization and block extraction.

For face recognition the ORL database was used. The system was trained with half of the database and tested with the other half. The recognition performance of the method presented in this paper is 98%, improving by more than 10% on the best results obtained using 1D HMMs in earlier work [Nefian & Hayes, May 1998, October 1998].


This research also considered the problem of face detection. In the testing phase for detection, 288 images of the MIT database were used, representing 16 subjects with different illuminations and head orientations. A set of 40 images representing frontal views of 40 different individuals from the ORL database was used to train one face model. The testing is performed using a doubly embedded Viterbi algorithm described by [Kuo & Agazzi, 1994]. The detection rate of the system described in this paper is 86%. While this version of the HMM appears to be relatively efficient in face detection, it is computationally very complex and slow, particularly when compared with state-of-the-art algorithms [Viola & Jones, 2001].

5.3 Improved initialization of pseudo 2D HMM

Also employing a P2D HMM, [Eickeler et al., 2000] describe an advanced face recognition system based on a standard P2D HMM with 2D DCT features. The performance of the system is enhanced using improved initialization techniques and mirror images. Since it is very important to use a good initial model, the authors used all faces in the database to build a 'common initial model'; then for each person in the database a P2D HMM model is refined from this common model. Feature extraction is based on the DCT. The image is scanned with a sliding window of size 8 × 8 from left to right and top to bottom with an overlap of 6 pixels (75%), and the first 15 DCT coefficients are extracted. The use of DCT coefficients allows the system to work directly on images compressed with the JPEG standard without needing to decompress them; the sampling window size of 8 × 8 was chosen because the DCT stage of JPEG image compression is based on this window size. Tests are performed on the ORL database, described previously, with the first 5 images per person used for training and the remaining 5 for testing. Three sets of experiments are performed in this paper.
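The common-initial-model idea can be sketched with a uniform-segmentation heuristic: pool segment t of every training sequence in the database to initialise the Gaussian of state t, then refine per person with Baum-Welch. The segmentation heuristic, state count and feature dimension below are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def common_initial_model(all_sequences, n_states):
    """Initialise one Gaussian per state from ALL faces in the database:
    each sequence is cut into n_states equal segments and segment t of
    every sequence contributes to state t. Per-person Baum-Welch
    re-estimation would then start from these shared parameters."""
    per_state = [[] for _ in range(n_states)]
    for seq in all_sequences:
        bounds = np.linspace(0, len(seq), n_states + 1).astype(int)
        for t in range(n_states):
            per_state[t].append(seq[bounds[t]:bounds[t + 1]])
    means = np.array([np.concatenate(fs).mean(axis=0) for fs in per_state])
    variances = np.array([np.concatenate(fs).var(axis=0) + 1e-6 for fs in per_state])
    return means, variances

rng = np.random.default_rng(3)
seqs = [rng.normal(size=(30, 15)) for _ in range(8)]   # 8 faces, 15 coeffs/frame
means, variances = common_initial_model(seqs, n_states=7)
print(means.shape, variances.shape)
```

Starting every person's model from the same shared parameters is what removes the dependence on hand-crafted per-state initial values.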
First experiment set: the system is tested on different quadratic P2D HMM topologies (4 × 4 to 8 × 8 states) with 1 to 3 Gaussian mixtures modelling the probability density functions. The recognition rates achieved range from 81.5% for 4 × 4 states with 1 Gaussian to 100% for 8 × 8 states with 2 and 3 Gaussians.

Second experiment set: the effect of overlap on recognition rates is tested. An overlap of 75% is used for all training, while for testing overlaps between 75% and 0% were used, with a 7 × 7 HMM and from 1 to 3 Gaussian mixtures. The overall result is that recognition rates decrease slightly when the overlap is reduced; however, very good recognition rates of 94.5%-99.5% were still obtained even for 0% overlap, compared with 98.5%-100% for 75% overlap. Thus wide variations in overlap have relatively minor effects on recognition rate for a sophisticated 7 × 7 HMM model.

Third experiment set: an evaluation of the effect of compression artefacts on the recognition rate. Recognition was performed on JPEG compressed images across a range of quality settings, from 100 for the best quality to 1 for the highest compression ratio, as shown in figure 12 [Eickeler et al., 2000]. For compression ratios of up to 7.5 to 1, the recognition rates remain constant at around 99.5% ± 0.5%; for compression ratios over 12.5 to 1, the recognition rates drop below 90%, down to approximately 5% for a 19.5 to 1 compression ratio.

There are some additional conclusions we can draw from the work of [Eickeler et al., 2000]. Firstly, building an initial HMM model using all faces in the database is an improvement over the intuitive initialization used by [Samaria, 1994] or [Nefian & Hayes, 1999], however


this may make the initial model dependent on the composition of the database. Secondly, these authors obtain excellent results when using JPEG compressed images in the testing phase (0% overlap), speeding up the recognition process significantly. Note however that in the training stage they use uncompressed images scanned with a 75% overlap, and as they have used a very complex HMM model with 49 states, the training stage of their approach is resource- and time-intensive, offsetting the benefits of faster recognition speeds.

Fig. 12. Recognition rates versus compression rates [Eickeler et al., 2000].

5.4 Discrete vs continuous modelling of observation vectors for P2D HMM

In another paper on face recognition using HMMs, [Wallhoff et al., 2001] consider whether there is a major difference in recognition performance between HMMs where the observation vectors are modelled as continuous or discrete processes. In the continuous case, the observation probability is expressed as a probability density function approximated by a weighted sum of Gaussian mixtures. In the discrete case, a discrete set of observation probabilities is available for each state and input vector; this discrete set is stored as a set of codebook entries, the codebook typically being obtained by k-means clustering of all available training-data feature vectors. For their experiments the authors used 321 subjects selected from the FERET database6. For testing the system, two galleries of images were used: the fa gallery, containing a regular frontal image of each subject, and the fb gallery, containing an alternative frontal image taken seconds after the corresponding fa image. First the images are pre-processed, using a semi-automated feature extraction that starts with the manual labelling of the eye and mouth centre coordinates. The next step is the automatic rotation of the original images so that the line through the eyes is horizontal. After this the face is divided vertically and processing continues on a half-face image. The images are resized to the smallest of the resulting images, 64 × 96 pixels. For feature extraction, the image is scanned using a rectangular window with an overlap of 75%. After the DCT coefficients for each block are calculated, a triangular mask is applied and the first 10 coefficients are retained, representing the observation vector. Two
6 http://www.frvt.org/feret/default.htm


sets of experiments were performed, for continuous and discrete outputs. For the case of continuous output, the experiments used 8 × 8 and 16 × 16 scanning windows, and 4 × 4 to 7 × 7 state structures for the P2D HMM. Initially only one Gaussian per state was used. The best recognition rate in this case was 95.95%, for an 8 × 8 block size and 7 × 7 HMM states. When the number of Gaussians was increased from 1 to 3, the recognition rate dropped, perhaps because only one image per person was used in the training phase. In the case of discrete output values, identical scanning windows and HMMs were used, and two codebook sizes of 300 and 1000 entries were used to generate the observation symbols. The highest recognition rate obtained was 98.13%, for an 8 × 8 pixel block size, a 7 × 7 state HMM, and a codebook size of 1000. In both cases, continuous and discrete, better results were obtained for the smaller scanning window.

5.5 Face retrieval on large databases

After using the combination of 2D DCT and P2D HMM for face recognition on small databases, a new HMM-based measure to rank images within a larger database is next presented [Eickeler, 2002]. The relation of the method to confidence measures is pointed out, and five different approximations of the confidence measure are evaluated for the task of database retrieval. These experiments were carried out on the C-VIS database, containing the faces extracted from three days of television broadcast, resulting in 25000 unlabeled face images. Normal HMM-based face recognition for database retrieval entails building a model for each person in the database. However, in the case of a very large and unlabeled database, that would imply building a model λ_j for each image O_j in the database, which is not only computationally expensive, but results in poor modelling, considering that a robust model for one person requires multiple training images of that person.
In this case, calculating the probability of a query image for each built model, P(O_query | λ_j), is simply not practical. A more feasible method for database retrieval is to train a query HMM λ_query using the query images O_query of the person searched for, ω_query. However, the probability derived by the Forward-Backward algorithm, P(O_j | λ_query), cannot be used directly as a ranking measure for the images in the database, because inaccuracies in the modelling of the face images have a big influence on the probability. To fix this problem, the ranking of the images uses the query model λ_query as a representation of the person being searched for and a set of cohort models Λ_cohort representing people not being searched for. An easy way to form the cohort is to use former queries or to take some images from the database. So instead of P(O_query | λ_j), the probability of an image O_j given the person being searched for is used:

P(O_j | ω_query) ∝ P(λ_query | O_j) = P(O_j | λ_query) / P(O_j | Λ_cohort)    (1)

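In the log domain, the normalization variant of this confidence measure amounts to subtracting a log-sum-exp over the cohort log-likelihoods from the query log-likelihood. A small sketch with made-up scores (the helper name and all numbers are illustrative, not values from the paper):

```python
import numpy as np

def retrieval_scores(loglik_query, loglik_cohort):
    """Score each database image by log P(O_j | query model) minus the log
    of the summed cohort likelihoods, computed stably with log-sum-exp.
    loglik_query: (n_images,); loglik_cohort: (n_images, n_cohort)."""
    m = loglik_cohort.max(axis=1, keepdims=True)
    log_norm = (m + np.log(np.exp(loglik_cohort - m).sum(axis=1, keepdims=True))).ravel()
    return loglik_query - log_norm

# Made-up log-likelihoods for three database images and two cohort models.
scores = retrieval_scores(np.array([-10.0, -50.0, -12.0]),
                          np.array([[-40.0, -45.0],
                                    [-20.0, -22.0],
                                    [-41.0, -44.0]]))
ranking = np.argsort(-scores)    # best-matching database images first
print(ranking)
```

The cohort term penalises images that every model scores highly, which is exactly the normalisation that makes the raw Forward-Backward probability usable for ranking.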
In this research five different confidence measures based on this formula were used for database retrieval. For the confidence measure using normalization, the denominator is replaced by the sum over the cohort models:

P(O_j | Λ_cohort) = Σ_{λ_k ∈ Λ_cohort} P(O_j | λ_k)    (2)


Another confidence measure uses one filler (common) model instead of a cohort of HMMs for a group of people; the filler model can be trained on all people of the cohort group. If the denominator is set to a fixed probability, it can be dropped from the formula, in which case the confidence measure is simply P(O_j | λ_query). The fourth confidence measure is based on the sum of ranking differences between the ranking of the cohort models on the query image and the ranking of the cohort models on each of the database images. Finally, the Levenshtein distance (the minimum number of edit operations needed to transform one string into the other) is considered as an alternative measure for comparing the rankings of the cohort models on the query image and the database images. For the experiments, 14 people with 8 to 16 face images each were used as query images and also as the cohort set. A NN-based face detector was used to detect the inner facial rectangles in the video broadcast, and each detected rectangle is scaled to 66 × 86 pixels; an ellipsoid mask is applied to remove the background. A P2D HMM with 5 × 5 states is used. The results of a query are evaluated using precision and recall: precision is the proportion of relevant images among the retrieved images, while recall is the proportion of relevant images in the database that are part of the retrieval result. In a first experiment, a database retrieval using the normalization measure is performed for each person of the query set, and only the precision is calculated, since the database is unlabeled and hence the exact number of images per person is unknown. For 12 out of 14 people the precision is constant at 100% for around 40 retrieved images (the number of images per person varies between 20 and 300). In a second experiment all five measures were tested for one person.
The results are almost perfect for normalization, a little worse but much faster for the filler model. The ‘sum of ranking differences’ and Levenshtein Distance measures return relatively good results but are inferior to normalization, while the use of a fixed probability gives significantly worse results than all other measures.
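The two evaluation measures used above are straightforward to compute; a toy sketch with invented retrieval results:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved images that are relevant.
    Recall: fraction of all relevant images that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Toy retrieval: 5 images returned, 4 correct, 8 relevant images in total.
p, r = precision_recall([1, 2, 3, 4, 5], [2, 3, 4, 5, 6, 7, 8, 9])
print(p, r)  # 0.8 0.5
```

Note that on an unlabeled database only precision can be measured, as the experiments above illustrate, because recall requires knowing the total number of relevant images.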
5.6 A low-complexity simplification of the full 2D HMM

An alternative approach to the 2D HMM was proposed by [Othman & Aboulnasr, 2000]. These authors propose a low-complexity 2D HMM (LC2D HMM) system for face recognition. The aim of this research is to build a full 2D HMM, but with reduced complexity: the challenge is to take advantage of a full 2D HMM structure without the full complexity implied by an unconstrained 2D model. Their model is implemented in the 2D DCT compressed domain with 8 × 8 pixel non-overlapping blocks to maintain compatibility with standard JPEG images. The authors claim a computational complexity reduction from N^4 for a fully connected 2D HMM to 2N^2 for the LC2D HMM, where N is the number of states. Although the accuracy of the system is not better than other approaches, these authors claim that the computational complexity involved is somewhat less than that required for a 1D HMM and significantly less than that of a P2D HMM. The LC2D HMM is based on 2 key assumptions: (i) the active state at the observation block Bk,l depends only on the immediate vertical and horizontal neighbours, Bk−1,l and Bk,l−1;7 (ii) the active states at the two observation blocks in anti-diagonal neighbourhood locations, Bk−1,l and Bk,l−1, are statistically independent given the current state. This assumption allows
7 From a mathematical perspective this assumption is equivalent to a second-order Markov Model, requiring a 3D transition matrix.
separating the 3D state transition matrix into two distinct 2D transition matrices, for horizontal and vertical transitions. This decreases the complexity of the model quite significantly. This low-complexity model topology and image scanning are illustrated in figure 13.

Fig. 13. (a) Image scanning; (b) Model topology [Othman & Aboulnasr, 2000].

The authors state that the two assumptions are acceptable for non-overlapped feature blocks, but have less validity for very small feature blocks or as the allowable overlap increases. The tests were performed on the ORL database. The model for each person was trained with 9 images, and the remaining image was used in the testing phase. Image scanning is performed in a two-dimensional manner, with the block size set to 8×8. Only the first 9 DCT coefficients per block were used. Different block overlap values were used to investigate the system performance and the validity of the design assumptions. The recognition rates are around 70% for 0 or 1 pixel overlap, decreasing dramatically to only 10% for a 6-pixel overlap. This is explained because the assumptions of statistical independence, which are the underlying basis of this model, lose their validity as the overlap increases.
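Since several of the systems reviewed in this chapter build observation vectors from the first 2D-DCT coefficients of 8×8 blocks, a small illustrative sketch may help. The NumPy implementation below and the choice of the top-left 3×3 coefficient corner are our own assumptions, not necessarily the exact coefficient ordering used by the cited authors:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix D, so that D @ block @ D.T
    gives the 2D DCT of an n x n block."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] *= 1 / np.sqrt(2)
    return D * np.sqrt(2 / n)

def block_features(image, block=8, n_coeffs=9):
    """Scan a grayscale image in non-overlapping 8x8 blocks and keep
    the n_coeffs lowest-frequency 2D-DCT coefficients per block
    (here: the top-left 3x3 corner, one common convention)."""
    D = dct_matrix(block)
    h, w = image.shape
    side = int(np.sqrt(n_coeffs))  # 3 for n_coeffs = 9
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = D @ image[r:r + block, c:c + block] @ D.T
            feats.append(coeffs[:side, :side].ravel())
    return np.array(feats)

# A hypothetical 64 x 64 "face" image gives an 8 x 8 grid of blocks:
img = np.random.default_rng(0).random((64, 64))
print(block_features(img).shape)  # (64, 9)
```

The resulting rows, one 9-dimensional vector per block, are the kind of observation sequence fed to the HMMs discussed here.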
5.7 Refinements of the low-complexity approach

In a subsequent publication by the same authors, [Othman & Aboulnasr, 2001], a hybrid HMM for face recognition is introduced. The proposed system comprises an LC2D HMM, as described in their earlier work, used in combination with a 1D HMM. The LC2D HMM carries out a complete search in the compressed JPEG domain, and a 1D HMM is then applied that searches only in the candidate list provided by the first module. In the experiments presented in this paper, a 6×2 state model was used for the LC2D HMM, and 4- and 5-state top-to-bottom models were used for the 1D HMM. For the 1D HMM, DCT feature extraction is performed on a horizontal 10 × 92 scanning window. For the 2D HMM, an 8×8 block size is used for scanning the image, and the first 9 DCT coefficients are retained from each block. No overlap is allowed for the sliding windows. Tests are performed on the ORL database. In a first series of tests the effects of training data size on the model robustness were studied. The accuracy of the system ranges from 48%-58% when trained with only 2 images per person, to almost 95%-100% when trained with 9 images per person. A second series of experiments provides a detailed analysis of the trade-off between recognition accuracy and computational complexity and determines an optimal operating point for this hybrid approach. This appears to be the first research in this field to
consider such trade-offs in a detailed study, and this methodology should provide a useful approach for other researchers in the future. In a third paper, [Othman & Aboulnasr, 2003], these authors propose a 2D HMM face recognition system that limits the independence assumptions described in their original work to conditional independence among adjacent observation blocks. In this new model, the active states of the two anti-diagonal observation blocks are statistically independent given the current state and knowledge of the past observations. This translates into a more flexible model, allowing state transitions in the transverse direction as shown in figure 14, taken from [Othman & Aboulnasr, 2003].

Fig. 14. Modified LC2D HMM [Othman & Aboulnasr, 2003]. (a) Vertical transitions to state S3,3 for a 5×5 state model; (b) Horizontal transitions to state S3,3, also for a 5×5 state model.

This modified LC2D HMM face recognition system is examined for different values of the structural parameters, namely the number of states per model and the number of Gaussian mixtures per state. These tests are again conducted on the ORL database. The images are scanned using 8×8 blocks and the first 9 2D DCT coefficients comprise the observation vector. The HMMs were trained using 9 images per person, and tested using the 10th image. The test is repeated 5 times with different test images and the results are averaged over a total of 200 test images for 40 persons. Test images are not members of the training data set at any time. The results vary from a very low 4% recognition rate for a 7 × 3 HMM with 64 Gaussian mixtures per state, up to 100% for a 7 × 3 HMM with 4 Gaussian mixtures per state. Best results are obtained for 4 and 8 Gaussians per state. The reason for the poor performance with a higher number of Gaussian mixtures is that the model becomes too discriminating and cannot recognize data outside the original training set with any flexibility. Finally, the reader's attention is drawn to the detailed comments by [Yu & Wu, 2007] on the key assumption of conditional independence in the relationship between adjacent blocks. In this communication it is shown that this key assumption is entirely unnecessary.

6. More recent research on HMM in face recognition
While there has been more recent research applying HMM techniques to face recognition, most of this work has not refined the underlying methods, but has instead combined known HMM techniques with other face analysis techniques. Some work is worth
mentioning, such as that of [Le & Li, 2004], who combined a one-dimensional discrete hidden Markov model (1D-DHMM) with a new way of extracting observations and using observation sequences. All subjects in the system share only one HMM, which is used as a means to weigh a pair of observations. The Haar wavelet transform is applied to face images to reduce the dimensionality of the observation vectors. Experiments on the AR face database⁸ and the CMU PIE face database⁹ show that the proposed method outperforms PCA, LDA and LFA based approaches tested on the same databases. Also worth mentioning is the work of [Yujian, 2007]. In this paper, several new analytic formulae for solving the three basic problems of the 2D HMM are provided. Although the complexity of computing these is exponential in the size of the data, it is almost the same as that of a 1D HMM for cases where the number of rows or columns is a small constant. While this author did not apply these results specifically to the facial recognition problem, they appear to offer some promise in simplifying the application of a full 2D HMM to the face recognition problem. Another notable contribution is the work of [Chien & Liao, 2008], which explores a new discriminative training criterion to assure model compactness combined with the ability for accurate discrimination between subjects. Hypothesis testing is employed to maximize the confidence level during model training, leading to a maximum-confidence HMM (MC-HMM) for face recognition. From experiments on the FERET¹⁰ database and GTFD¹¹, the proposed method obtains robust segmentation in the presence of different facial expressions, orientations, and so forth. In comparison with the maximum likelihood and minimum classification error HMMs, the proposed MC-HMM achieves higher recognition accuracies with lower feature dimensions. Notably, this work uses more challenging databases than the ORL database.
Finally, we conclude this chapter by referring to our own recent work on face recognition using EHMM, presented in [Iancu, 2010; Corcoran & Iancu 2011]. This work can be divided into three parts according to our objectives. The tests were performed on a combined database (BioID, Achermann, UMIST) and on the FERET database. The first objective was to build a recognition system applicable on handheld devices with very low computational power. For this we tested the EHMM-based face recognizer for different sizes of the model, different numbers of Gaussians, picture sizes, features, and numbers of pictures per person used for training. The results obtained for a very small picture size (32 × 32), with 1 Gaussian per state and a simplified EHMM, are only 58% recognition when only 1 image per person is used for training; when 5 pictures per person are used for training, the recognition rates go up to 82% [Corcoran & Iancu 2011]. A second objective was to limit the effect of illumination variations on recognition rates. For this, three illumination normalization techniques were used and various combinations of these were tested: histogram equalization (HE), contrast limited adaptive histogram equalization (CLAHE) and DCT in the logarithm domain (logDCT). The best recognition rates were obtained for a combination of CLAHE and HE (95.71%) and the worst for logDCT (77.86%) on the combined database [Corcoran & Iancu 2011].

⁸ http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html
⁹ http://www.ri.cmu.edu/research_project_detail.html?project_id=418&menu_id=261
¹⁰ http://www.frvt.org/feret/default.htm
¹¹ http://www.anefian.com/research/face_reco.htm


A third objective was to build a system robust to head pose variations. For this we tested the face recognition system using frontal, semi-profile and profile views of the subjects. The first set of tests was performed on the combined database, where the maximum head pose angle is around 30°. We compared the recognition rates obtained when building one EHMM model per person versus one EHMM model per picture. The second set of tests was performed on the FERET database, which has a much greater variety of head poses. In this case we used one frontal, 2 semi-profile and 2 profile views for each subject in the training stage, and all pictures of each subject in the testing stage. We compared the recognition rates when building 1 model per person versus 2 models per person versus 3 models per person. We obtained better recognition rates for one model per person in the first set of tests, where the database has little head pose variation, but better recognition rates for 2 models per person in the second set of tests, where the database has very high head pose variation [Iancu, 2010].

7. Review and concluding remarks
The focus of this chapter is on the use of HMM techniques for face recognition. We have presented a concise yet comprehensive description and review of the most interesting and widely used techniques for applying HMM models in face recognition applications. Although additional papers treating specific aspects of this field can be found in the literature, these are invariably based on one or another of the key techniques presented and reviewed here. Our goal has been to quickly enable the interested reader to review and understand the state-of-the-art for HMM models applied to face recognition problems. It is clear that the different techniques balance certain trade-offs between computational complexity, speed and accuracy of recognition, and overall practicality and ease-of-use. Our hope is that this chapter will make it easier for new researchers to understand and adopt HMMs for face analysis and recognition applications and to continue to improve and refine the underlying techniques.

8. References
Baum, L. E. & Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics, Vol. 37, 1966.
Baum, L. E.; Petrie, T.; Soules, G. & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, Vol. 41, 1970.
Baum, L. E. (1972). An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes. Inequalities, Vol. 3, pp. 1-8, 1972.
Bicego, M.; Castellani, U. & Murino, V. (2003b). Using hidden Markov models and wavelets for face recognition. Proceedings of Image Analysis and Processing 2003, 12th International Conference on, pp. 52–56, 2003.
Bicego, M.; Murino, V. & Figueiredo, M. (2003a). A sequential pruning strategy for the selection of the number of states in hidden Markov models. Pattern Recognition Letters, Vol. 24, pp. 1395–1407, 2003.
Chien, J-T. & Liao, C-P. (2008). Maximum confidence hidden Markov modeling for face recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Vol. 30, No 4, pp. 606-616, April 2008.
Corcoran, P.M. & Iancu, C. (2011). Automatic face recognition system for hidden Markov model techniques. Face Recognition Volume 2, Intech Publishing, 2011.
Eickeler, S. (2002). Face database retrieval using pseudo 2D hidden Markov models. Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Proceedings, pp. 58–63, May 2002.
Eickeler, S.; Muller, S. & Rigoll, G. (2000). Recognition of JPEG compressed face images based on statistical methods. Image and Vision Computing Journal, Special Issue on Facial Image Analysis, Vol. 18, No 4, pp. 279–287, March 2000.
Figueiredo, M.; Leitao, J. & Jain, A. (1999). On fitting mixture models. Proceedings of the Second International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Springer-Verlag, pp. 54-69, 1999.
Iancu, C. (2010). Face recognition using statistical methods. PhD thesis, NUI Galway, 2010.
Jelinek, F.; Bahl, L.R. & Mercer, R.L. (1975). Design of a linguistic statistical decoder for the recognition of continuous speech. IEEE Transactions on Information Theory, Vol. 21, No 3, pp. 250–256, 1975.
Juang, B.H. (1984). On the hidden Markov model and dynamic time warping for speech recognition - a unified view. AT&T Technical Journal, Vol. 63, No 7, pp. 1213–1243, September 1984.
Juang, B.H. & Rabiner, L.R. (2005). Automatic speech recognition - a brief history of the technology development. Elsevier Encyclopedia of Language and Linguistics, Second Edition, 2005.
Kohir, V.V. & Desai, U.B. (1998). Face recognition using a DCT-HMM approach. Applications of Computer Vision, WACV '98, Proceedings, Fourth IEEE Workshop on, pp. 226–231, October 1998.
Kohir, V.V. & Desai, U.B. (1999). A transform domain face recognition approach. TENCON 99, Proceedings of the IEEE Region 10 Conference, Vol. 1, pp. 104–107, September 1999.
Kohir, V.V. & Desai, U.B. (2000). Face recognition. IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, May 2000.
Kuo, S. & Agazzi, O. (1994). Keyword spotting in poorly printed documents using pseudo 2D hidden Markov models. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, pp. 842–848, August 1994.
Le, H.S. & Li, H. (2004). Face identification system using single hidden Markov model and single sample image per person. IEEE International Joint Conference on Neural Networks, Vol. 1, 2004.
Levin, E. & Pieraccini, R. (1992). Dynamic planar warping for optical character recognition. Proceedings ICASSP 1992, San Francisco, Vol. 3, pp. 149–152, March 1992.
Levinson, S.E.; Rabiner, L.R. & Sondhi, M.M. (1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell System Technical Journal, Vol. 62, No 4, pp. 1035–1074, April 1983.
Martinez, A. (1999). Face image retrieval using HMMs. IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL '99), Proceedings, pp. 35–39, June 1999.
Nefian, A.V. (1999). A hidden Markov model based approach for face detection and recognition. PhD Thesis, 1999.
Nefian, A.V. & Hayes III, M.H. (Oct. 1998). Face detection and recognition using hidden Markov models. Image Processing, ICIP 98, Proceedings, 1998 International Conference on, Vol. 1, pp. 141–145, October 1998.
Nefian, A.V. & Hayes III, M.H. (May 1998). Hidden Markov models for face recognition. Acoustics, Speech, and Signal Processing, ICASSP '98, Proceedings of the 1998 IEEE International Conference on, Vol. 5, pp. 2721–2724, May 1998.
Nefian, A.V. & Hayes III, M.H. (1999). An embedded HMM-based approach for face detection and recognition. Acoustics, Speech, and Signal Processing, ICASSP '99, Proceedings, IEEE International Conference, Vol. 6, pp. 3553–3556, March 1999.
Othman, H. & Aboulnasr, T. (2000). Hybrid hidden Markov model for face recognition. 4th IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 34–40, April 2000.
Othman, H. & Aboulnasr, T. (2001). A simplified second-order HMM with application to face recognition. ISCAS 2001 IEEE International Symposium on Circuits and Systems, Vol. 2, pp. 161–164, May 2001.
Othman, H. & Aboulnasr, T. (2003). A separable low complexity 2D HMM with application to face recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2003.
Park, H.S. & Lee, S.W. (1998). A truly 2D hidden Markov model for off-line handwritten character recognition. Pattern Recognition, Vol. 31, No 12, pp. 1849-1864, December 1998.
Rabiner, L.R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, Vol. 77, No 2, pp. 257–286, February 1989.
Rabiner, L.R. & Juang, B.H. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, Vol. 3, No 1, pp. 4–16, 1986.
Samaria, F. (1994). Face recognition using hidden Markov models. PhD thesis, Department of Engineering, Cambridge University, UK, 1994.
Samaria, F. & Fallside, F. (1993). Face identification and feature extraction using hidden Markov models. Image Processing: Theory and Applications, Elsevier, pp. 295–298, 1993.
Samaria, F. & Harter, A.C. (1994). Parameterization of a stochastic model for human face identification. Applications of Computer Vision, 1994, Proceedings of the Second IEEE Workshop on, pp. 138–142, December 1994.
Viola, P. & Jones, M. (2001). Robust real-time object detection. Technical report 2001/01, Compaq CRL, 2001.
Wallhoff, F.; Eickeler, S. & Rigoll, G. (2001). A comparison of discrete and continuous output modeling techniques for a pseudo-2D hidden Markov model face recognition system. International Conference on Image Processing, Proceedings, Vol. 2, pp. 685–688, October 2001.
Wallhoff, F.; Müller, S. & Rigoll, G. (2001). Hybrid face recognition system for profile views using the mugshot database. IEEE ICCV Workshop on Recognition, Analysis and
Tracking of Faces and Gestures in Real-Time Systems, Proceedings, pp. 149–156, July 2001.
Yu, L. & Wu, L. (2007). Comments on 'A separable low complexity 2D HMM with application to face recognition'. Pattern Analysis and Machine Intelligence, IEEE Transactions on, Vol. 29, No 2, p. 368, February 2007.
Yujian, L. (2007). An analytic solution for estimating two-dimensional hidden Markov models. Applied Mathematics and Computation, Vol. 185, No 2, pp. 810-822, February 2007.

Chapter 2
GMM vs SVM for Face Recognition and Face Verification
Jesus Olivares-Mercado, Gualberto Aguilar-Torres, Karina Toscano-Medina, Mariko Nakano-Miyatake and Hector Perez-Meana
National Polytechnic Institute Mexico
1. Introduction
Security is an area of active research in which the identification and identity verification of persons is one of the most fundamental concerns today. Face recognition is emerging as one of the most suitable solutions to the demands of people recognition. Face verification has been a task of active research with many applications since the 1980s. It is perhaps the easiest biometric method to understand, and it is non-invasive, because the face is our most direct way of identifying people and because data acquisition consists basically of taking a picture, which makes this recognition method very popular among most users of biometric systems. Several face recognition algorithms have been proposed which achieve recognition rates higher than 90% under favorable conditions (Chellapa et al., 2010; Hazem & Mastorakis, 2009; Jain et al., 2004; Zhao et al., 2003). Recognition is a very complex task for the human brain without a concrete explanation: we can recognize thousands of faces learned throughout our lives and identify familiar faces at first sight, even after several years of separation. For this reason, face recognition is an active field of research with different applications. There are several reasons for the recent increased interest in face recognition, including rising public concern for security, the need for identity verification in the digital world, and the need for face analysis and modeling techniques in multimedia data management and computer entertainment. Recent advances in automated face analysis, pattern recognition and machine learning have made it possible to develop automatic face recognition systems to address these applications (Duda et al., 2001).
This chapter presents a performance evaluation of two widely used classifiers, the Gaussian Mixture Model (GMM) and the Support Vector Machine (SVM), for the classification task in a face recognition system. Before explaining the classification stage, however, it is necessary to describe in detail the different stages that make up a face recognition system in general, because the stages that precede the classifier are very important for the proper operation of any type of classifier.
1.1 Face recognition system

To illustrate the general steps of a face recognition system consider the system shown in Fig. 1, which consists of 4 stages:


Fig. 1. General Structure of a face recognition system.
1.1.1 Capture

This stage is simple because it only needs a camera to take the face image to be processed. It is not necessary to have a camera with special features: current cell phones have high-resolution cameras which would serve, and a conventional camera would be more than enough, because the image can be pre-processed prior to extracting the image features. Obviously, if the camera has a better resolution, clearer images can be obtained for processing.
1.1.2 Pre-processing

In this stage some kind of cropping, filtering or other image-processing method is applied, such as normalization, histogram equalization or histogram specification, among others. The aim is to obtain a better image for processing, either by eliminating information that is not useful (in the case of cropping) or by improving the quality of the image (as with equalization). The pre-processing of the image is very important because it is intended to improve the quality of the images, making the system more robust to different scenarios such as lighting changes and possible noise caused by the background, among others.
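As an illustration of one such pre-processing step, the sketch below implements global histogram equalization for 8-bit grayscale images; this is our own minimal NumPy implementation, not the code used by any system described in this chapter:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image: map each
    gray level through the normalized cumulative histogram so that
    the output levels are spread over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # normalize the CDF to [0, 255], ignoring empty leading bins
    cdf_min = cdf[np.nonzero(cdf)[0][0]]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]  # apply the lookup table pixel-wise

# A hypothetical low-contrast image confined to levels 100-130:
img = np.random.default_rng(0).integers(100, 131, size=(64, 64),
                                        dtype=np.uint8)
out = equalize_histogram(img)
print(out.min(), out.max())  # 0 255
```

After equalization the narrow band of input levels is stretched across the full dynamic range, which is exactly the contrast improvement this stage aims for.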
1.1.3 Feature extraction

The feature extraction stage is one of the most important stages in recognition systems, because at this stage facial features are extracted with the correct shape and size to give a good representation of the characteristic information of the person, which will serve for good training of the classification models. Today a great diversity of feature extraction algorithms exists; some of them are listed below:
• Fisherfaces (Alvarado et al., 2006).
• Eigenfaces (Alvarado et al., 2006).
• Discrete Walsh Transform (Yoshida et al., 2003).
• Gabor Filters (Olivares et al., 2007).
• Discrete Wavelet Transform (Bai-Ling et al., 2004).
• Eigenphases (Savvides et al., 2004).
1.1.4 Classifiers

The goal of a classifier is to assign a name to a set of data for a particular object or entity. A training set is defined as a set of elements, each formed by a sequence of data for a specific object. A classifier is an algorithm that defines a model for each class (specific object), so that the class to which an element belongs can be computed from the data values that define the object. Therefore, the practical goal of a classifier is to assign, as accurately as possible, a class to new elements not previously seen. Usually a test set is also considered, which allows the accuracy of the model to be measured: the class of each test element is known and is used to validate the model. Currently there are different ways of learning for classifiers, among which are supervised and unsupervised learning.


In supervised learning, a teacher provides a category label or cost for each pattern in a training set, and the goal is to reduce the sum of the costs for these patterns. In unsupervised learning, or clustering, there is no explicit teacher, and the system forms clusters or "natural groupings" of the input patterns. "Natural" is always defined explicitly or implicitly in the clustering system itself, and for a given set of patterns or cost function, different clustering algorithms lead to different clusters. It is also necessary to clarify the concepts of identification and verification. In identification, the system does not know who the person is whose characteristics it has captured (the human face in this case), and the system has to say who owns the data just processed. In verification, the person tells the system their identity, either by presenting an identification card or by typing a password; the system captures the characteristic of the person (the human face in this case) and processes it to create an electronic representation called the live model. Finally, the classifier compares the live model with the reference model of the person they claimed to be. If the match between the live model and the reference model exceeds a threshold, verification is successful; if not, verification is unsuccessful.

1.1.4.1 Classifier types

Different types of classifiers can be used for a recognition system, and the choice among them depends on the application in which they will be used. The selection of the classifier is very important to keep in mind, because the results of the system will depend on it. The following describes some of the different types of classifiers that exist. Nearest neighbor. In nearest-neighbor classification a local decision rule is constructed using the k data points nearest the estimation point. The k-nearest-neighbors decision rule classifies an object based on the class of the k data points nearest to the estimation point x0.
The output is given by the most represented class within the k nearest neighbors. Nearness is most commonly measured using the Euclidean distance metric in x-space (Davies E. R., 1997; Vladimir & Filip, 1998). Bayes' decision. Bayesian decision theory is a fundamental statistical approach to the problem of pattern recognition. This approach is based on quantifying the tradeoffs between various classification decisions using probability and the costs that accompany such decisions. It makes the assumption that the decision problem is posed in probabilistic terms, and that all of the relevant probability values are known (Duda et al., 2001). Neural Networks. Artificial neural networks are an attempt at modeling the information processing capabilities of nervous systems. Some parameters modify the capabilities of the network and it is our task to find the best combination for the solution of a given problem. The adjustment of the parameters is done through a learning algorithm, i.e., not through explicit programming but through an automatic adaptive method (Rojas R., 1996). Gaussian Mixture Model. A Gaussian Mixture Model (GMM) is a parametric probability density function represented as a weighted sum of Gaussian component densities. GMMs are commonly used as a parametric model of the probability distribution of continuous measurements or features in a biometric system, such as vocal-tract related spectral features in a speaker recognition system. GMM parameters are estimated from training data using the iterative Expectation-Maximization (EM) algorithm or Maximum A Posteriori (MAP) estimation from a well-trained prior model (Reynolds D. A., 2008). Support Vector Machine. The Support Vector Machine (SVM) is a universal constructive learning procedure based on statistical learning theory. The term "universal" means
that the SVM can be used to learn a variety of representations, such as neural nets (with the usual sigmoid activation), radial basis functions, splines and polynomial estimators. In a more general sense the SVM provides a new form of parameterization of functions, and hence it can be applied outside predictive learning as well (Vladimir & Filip, 1998). This chapter presents only two classifiers, the Gaussian Mixture Model (GMM) and the Support Vector Machine (SVM), as these are two of the most frequently used classifiers in different pattern recognition systems; a detailed explanation and evaluation of the operation of these two classifiers therefore follows.
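Before turning to these two classifiers, the nearest-neighbor rule described above can be made concrete with a short sketch; the feature vectors and labels below are hypothetical:

```python
import numpy as np

def knn_classify(x, X_train, y_train, k=3):
    """k-nearest-neighbor rule: label x with the class that is most
    represented among the k training points closest to x in
    Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
    nearest = y_train[np.argsort(d)[:k]]      # classes of the k closest points
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]          # majority vote

# Hypothetical 2-D feature vectors for two subjects (classes 0 and 1):
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_classify(np.array([0.15, 0.1]), X, y))   # 0
print(knn_classify(np.array([1.05, 0.95]), X, y))  # 1
```

In a real face recognition system the rows of X would be the feature vectors produced by the extraction stage, with one class per enrolled subject.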

2. Gaussian Mixture Model
2.1 Introduction.

Gaussian Mixture Models can be used to represent complex probability density functions, via the marginalization of the joint distribution between observed variables and hidden variables. The Gaussian mixture model is based on the fact that a significant number of probability distributions can be approximated by a weighted sum of Gaussian functions, as shown in Fig. 2. This classifier has excelled in speaker recognition, with very good results (Reynolds & Rose, 1995; Reynolds D. A., 2008).

Fig. 2. Approximation of a probability distribution function by a weighted sum of Gaussian functions.

To carry out the development of the Gaussian Mixture Model, three very important points must be considered:
• Model initialization.
• Model development.
• Model evaluation.
2.1.1 Model initialization

Gaussian mixture models allow data to be grouped. The K-means algorithm corresponds to a particular non-probabilistic limit of maximum likelihood estimation applied to Gaussian mixtures.


The problem is to identify data groups in a multidimensional space. It involves a set x1, . . . , xN of D-dimensional random vectors in a Euclidean space. A group can be thought of as a data set whose internal distances are small compared to the distances to points outside the group. We introduce a set of D-dimensional vectors μk, with k = 1, 2, . . . , K, where μk is the prototype associated with the k-th group. The goal is to find an assignment of the observed data to the groups, as well as a set of vectors μk, so as to minimize the sum of the squared distances between each point and its nearest vector μk. For example, initially select the first M feature vectors as the initial centers, as shown in Figure 3, i.e.:

μi = Xi, i = 1, 2, . . . , M   (1)

Fig. 3. Illustration of the K-means algorithm for M = 3.

Then one more vector is added and the distance between the new vector and the M centers is computed, determining that the new vector belongs to the center to which its distance is lowest. Subsequently the new center is calculated by averaging the items belonging to that center. Thus, denoting by Xk,j the characteristic vectors belonging to the center μk, the new center is given by:

μk = (1/Nk) Σ_{j=1}^{Nk} Xk,j   (2)
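A minimal sketch of this initialization is given below; it is a batch variant of the sequential procedure described in the text, with hypothetical data and tolerances:

```python
import numpy as np

def kmeans_init(X, M, tol=1e-4, max_iter=100):
    """K-means initialization for a GMM: the first M feature vectors
    are the initial centers (Eq. 1); vectors are assigned to their
    nearest center and centers are recomputed as the mean of their
    members (Eq. 2) until the centers stop moving; finally a
    per-center variance is computed (cf. Eq. 3)."""
    centers = X[:M].astype(float).copy()
    for _ in range(max_iter):
        # assign each vector to the nearest center (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        new_centers = np.array([X[labels == k].mean(axis=0)
                                for k in range(M)])
        moved = np.linalg.norm(new_centers - centers)
        centers = new_centers
        if moved < tol:
            break
    variances = np.array([((X[labels == k] - centers[k]) ** 2).mean()
                          for k in range(M)])
    return centers, variances, labels

# Hypothetical data: two well-separated groups, with one point from
# each group placed first so that the initial centers are sensible.
rng = np.random.default_rng(1)
X = np.vstack([np.array([[0.0, 0.0], [3.0, 3.0]]),
               rng.normal(0.0, 0.1, size=(20, 2)),
               rng.normal(3.0, 0.1, size=(20, 2))])
centers, variances, labels = kmeans_init(X, 2)
```

The returned centers and variances are exactly the quantities used below to seed the Gaussian mixture model.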

This process is repeated until the distance between the k-th center at iterations n and n + 1 is less than a given constant. Figure 3 shows that the first three vectors are used as initial centers. Then the fourth vector is inserted, which has the shortest distance to the center X. Subsequently the new center is
calculated by averaging the two vectors belonging to the center X. Then vector 5 is analyzed, which has a shorter distance to the center O, which is modified using vectors 1 and 5, as shown in the second iteration in Figure 3. Then vector 6 is analyzed, which has a minimum distance to the center O; here the center O is modified using vectors 1, 5 and 6. The process continues until the ninth iteration, where the center O is calculated using vectors 1, 5, 6, 9 and 12; the center X is calculated using vectors 2, 4, 8 and 11, while the center Y is obtained from vectors 3, 7 and 10. After obtaining the centers, the variance of each center is obtained using the relationship:

σk = (1/Nk) Σ_{j=1}^{Nk} (μk − Xk,j)²   (3)

2.1.2 Model development

Gaussian Mixture Models (GMM) are statistical modeling methods in which a model is defined as a mixture of a certain number of Gaussian functions of the feature vectors (Jin et al., 2004). A Gaussian mixture density is a weighted sum of M component densities, as shown in Figure 4, and is given by the following equation:

p(x|λ) = Σ_{i=1}^{M} pi bi(x)    (4)

where x is an N-dimensional vector, bi(x), i = 1, 2, ..., M, are the component densities and pi, i = 1, 2, ..., M, are the mixture weights. Each component density is a D-variate Gaussian of the form:

bi(x) = (1 / ((2π)^{D/2} |σi|^{1/2})) exp(−(1/2) (x − μi)' σi⁻¹ (x − μi))    (5)

where (') denotes the vector transpose, μi denotes the N-dimensional mean vector, σi the covariance matrix (taken to be diagonal), and pi the mixture weights, which satisfy the relationship:

Σ_{i=1}^{M} pi = 1    (6)

The distribution model is thus determined by the mean vectors, covariance matrices and mixture weights, so that the model is represented as:

λ = {pi, μi, σi},  i = 1, 2, ..., M    (7)
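For illustration, the mixture density of equations 4-6 with diagonal covariances can be evaluated as follows (a pure-Python sketch, not the chapter's implementation; function names are ours):

```python
# Illustrative sketch: evaluating the mixture density of eqs. (4)-(5) for
# a model lambda = {p_i, mu_i, sigma_i} with diagonal covariances.
import math

def component_density(x, mu, var):
    """Diagonal-covariance Gaussian b_i(x), eq. (5); var holds the diagonal."""
    d = len(x)
    det = math.prod(var)                     # |sigma_i| of a diagonal matrix
    quad = sum((xj - mj) ** 2 / vj for xj, mj, vj in zip(x, mu, var))
    return math.exp(-0.5 * quad) / math.sqrt((2 * math.pi) ** d * det)

def mixture_density(x, weights, means, variances):
    """p(x | lambda) = sum_i p_i b_i(x), eq. (4)."""
    return sum(p * component_density(x, mu, var)
               for p, mu, var in zip(weights, means, variances))
```

Note that the weights passed in must sum to one, as required by equation 6.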

The estimation of the system parameters using the Maximum Likelihood (ML) criterion seeks the parameters that best approximate the distribution of the features of the face under analysis, that is, the parameters λ that maximize the likelihood. For a sequence of T training vectors X = {x1, ..., xT}, the GMM likelihood can be written as:

p(X|λ) = Π_{t=1}^{T} p(Xt|λ)    (8)

Unfortunately, Equation 8 is nonlinear in the parameters λ, so it cannot be maximized directly; an iterative algorithm known as Baum-Welch must be used instead. The Baum-Welch


algorithm is used by the HMM algorithm to estimate its parameters and shares the same basic principle as the Expectation-Maximization (EM) algorithm: starting from an initial set of parameters λ(r − 1), a new model λ(r) is estimated, where r denotes the r-th iteration, such that:

p(X|λ(r)) ≥ p(X|λ(r − 1))    (9)

Thus this new model λ(r) becomes the initial model for the next iteration. Every T elements, the model parameters are updated as follows:

Mixture weights:
pi = (1/T) Σ_{t=1}^{T} p(i|Xt+k, λ)    (10)

Means:
μi = [Σ_{t=1}^{T} p(i|Xt+k, λ) Xt+k] / [Σ_{t=1}^{T} p(i|Xt+k, λ)]    (11)

Variances:
σi² = [Σ_{t=1}^{T} p(i|Xt+k, λ) (Xt+k − μi)²] / [Σ_{t=1}^{T} p(i|Xt+k, λ)]    (12)

Fig. 4. Gaussian Mixture Model (GMM)
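As an illustration, the re-estimation formulas (10)-(12) can be sketched for a one-dimensional GMM as follows (an assumed simplification with k = 0, not the authors' implementation; the posterior p(i|Xt, λ) is computed as the normalized weighted component density):

```python
# Simplified sketch of one EM re-estimation pass for a 1-D GMM,
# implementing updates (10)-(12) with diagonal (scalar) variances.
import math

def gaussian(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def em_step(xs, weights, means, variances):
    T, M = len(xs), len(weights)
    # responsibilities r[t][i] = p(i | x_t, lambda)
    r = []
    for x in xs:
        num = [weights[i] * gaussian(x, means[i], variances[i]) for i in range(M)]
        z = sum(num)
        r.append([n / z for n in num])
    n_i = [sum(r[t][i] for t in range(T)) for i in range(M)]
    new_w = [n_i[i] / T for i in range(M)]                          # eq. (10)
    new_mu = [sum(r[t][i] * xs[t] for t in range(T)) / n_i[i]
              for i in range(M)]                                    # eq. (11)
    new_var = [sum(r[t][i] * (xs[t] - new_mu[i]) ** 2 for t in range(T)) / n_i[i]
               for i in range(M)]                                   # eq. (12)
    return new_w, new_mu, new_var
```

Iterating this step monotonically increases the likelihood, which is the guarantee expressed by equation 9.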

The posterior probability is obtained by:

p(i|Xt+k, λ) = pi bi(Xt+k) / Σ_{j=1}^{M} pj bj(Xt+k)    (13)

2.1.3 Model evaluation

In order to carry out the evaluation of the model, consider that the system will be used to identify R people, represented by the models λ1, λ2, λ3, ..., λR. The aim is then to find the model with maximum posterior probability for a given observation sequence. Formally, the identified person is the one that satisfies the relation:

R̂ = arg max_{1≤k≤R} Pr(λk|X)    (14)

Using Bayes' theorem, equation 14 can be expressed as:

R̂ = arg max_{1≤k≤R} [p(X|λk) Pr(λk) / p(X)]    (15)

Assuming that each person is equally likely, Pr(λk) = 1/R, and taking into account that p(X) is the same for all models, equation 15 simplifies to:

R̂ = arg max_{1≤k≤R} p(X|λk)    (16)

Replacing

p(X|λ) = Π_{t=1}^{T} p(Xt|λ)    (17)

in equation 16 yields:

R̂ = arg max_{1≤k≤R} Π_{t=1}^{T} p(Xt|λk)    (18)

Finally, using logarithms:

R̂ = arg max_{1≤k≤R} Σ_{t=1}^{T} log10(p(Xt|λk))    (19)

where p(Xt|λk) is given by equation 4, that is, by the output of the system shown in Figure 4.
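The decision rule of equation 19 can be sketched as follows (illustrative pure Python; the toy single-Gaussian "models" and person identifiers are hypothetical):

```python
# Sketch of the identification rule of eq. (19): the person whose model
# gives the highest accumulated log-likelihood over the observation
# sequence is selected. Each "model" is a density function p(x | lambda_k)
# for a 1-D feature.
import math

def identify(observations, models):
    """models: dict person_id -> density function p(x | lambda_k)."""
    def score(density):
        return sum(math.log10(density(x)) for x in observations)  # eq. (19)
    return max(models, key=lambda k: score(models[k]))

# Toy example with two single-Gaussian "models" centred at 0 and 5.
def g(mu):
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

models = {"person_A": g(0.0), "person_B": g(5.0)}
```

Summing log-likelihoods rather than multiplying raw likelihoods avoids numerical underflow for long observation sequences, which is the practical reason for the logarithm in equation 19.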

3. Support Vector Machine
The Support Vector Machine (SVM) (Vladimir & Filip, 1998) is a universal constructive learning procedure based on statistical learning theory. Unlike conventional statistical and neural network methods, the SVM approach does not attempt to control model complexity by keeping the number of features small. Instead, with the SVM the dimensionality of the z-space can be very large, because model complexity is controlled independently of the dimensionality. The SVM design overcomes two problems. The conceptual problem is how to control the complexity of the set of approximating functions in a high-dimensional space in order to provide good generalization ability; it is solved by using penalized linear estimators with a large number of basis functions. The computational problem is how to perform numerical optimization in a high-dimensional space; it is solved by taking advantage of the dual kernel representation of linear functions. The SVM combines four distinct concepts:
1. A new implementation of the SRM inductive principle. The SVM uses a special structure that keeps the value of the empirical risk fixed for all approximating functions but minimizes the confidence interval.
2. Input samples mapped onto a very high-dimensional space using a set of nonlinear basis functions defined a priori. It is common in pattern recognition applications to map the input vectors into a set of new variables which are selected according to a priori assumptions about the


learning problem. For the support vector machine, complexity is controlled independently of the dimensionality of the feature space (z-space).
3. Linear functions with constraints on complexity used to approximate or discriminate the input samples in the high-dimensional space. The support vector machine uses linear estimators to perform approximation. Nonlinear estimators can potentially provide a more compact representation of the approximating function; however, they suffer from two serious drawbacks: the lack of complexity measures and the lack of optimization approaches that provide a globally optimal solution.
4. Duality theory of optimization, used to make the estimation of model parameters in a high-dimensional feature space computationally tractable. For the SVM, a quadratic optimization problem must be solved to determine the parameters of a linear basis function expansion. For high-dimensional feature spaces, the large number of parameters makes this problem intractable. However, in its dual form this problem is practical to solve, since it scales in size with the number of training samples. The linear approximating function corresponding to the solution of the dual is given in the kernel representation rather than in the typical basis function representation. The solution in the kernel representation is written as a weighted sum of the support vectors.
3.1 Optimal separating hyperplane

A separating hyperplane is a linear function that is capable of separating the training data without error (see Fig. 5). Suppose that the training data consist of n samples (x1, y1), ..., (xn, yn), x ∈ ℝ^d, y ∈ {+1, −1}, which can be separated by the hyperplane decision function

D(x) = (w · x) + w0    (20)

Fig. 5. Classification (linearly separable case)

with appropriate coefficients w and w0. A separating hyperplane satisfies the constraints that define the separation of the data samples:

(w · xi) + w0 ≥ +1  if yi = +1
(w · xi) + w0 ≤ −1  if yi = −1,  i = 1, ..., n    (21)


For a given training data set, all possible separating hyperplanes can be represented in the form of equation 21. The minimal distance from the separating hyperplane to the closest data point is called the margin and will be denoted by τ. A separating hyperplane is called optimal if the margin is of maximum size. The distance between the separating hyperplane and a sample x is |D(x)|/||w||; assuming that a margin τ exists, all training patterns obey the inequality

yk D(xk) / ||w|| ≥ τ,  k = 1, ..., n    (22)

where yk ∈ {−1, 1}. The problem of finding the optimal hyperplane is that of finding the w that maximizes the margin τ. Note that there are an infinite number of solutions that differ only in the scaling of w. To limit the solutions, fix the scale through the product of τ and the norm of w:

τ ||w|| = 1    (23)

Thus maximizing the margin τ is equivalent to minimizing the norm of w. An optimal separating hyperplane is one that satisfies condition (21) above and additionally minimizes

η(w) = ||w||²    (24)

with respect to both w and w0. The margin relates directly to the generalization ability of the separating hyperplane. The data points that lie on the margin are called the support vectors (Fig. 5). Since the support vectors are the data points closest to the decision surface, conceptually they are the samples that are the most difficult to classify and therefore define the location of the decision surface. The generalization ability of the optimal separating hyperplane can be directly related to the number of support vectors:

En[Error rate] ≤ En[Number of support vectors] / n    (25)

The operator En denotes expectation over all training sets of size n. This bound is independent of the dimensionality of the space. Since the hyperplane will be employed to develop the support vector machine, its VC-dimension must be determined in order to build a nested structure of approximating functions. For the hyperplane functions (21) satisfying the constraint ||w||² ≤ c, the VC-dimension is bounded by

h ≤ min(r²c, d) + 1    (26)

where r is the radius of the smallest sphere that contains the training input vectors (x1, ..., xn). The factor r provides a scale for c in terms of the training data. With this measure of the VC-dimension, it is now possible to construct a structure on the set of hyperplanes according to increasing complexity by controlling the norm of the weights ||w||²:

Sk = {(w · x) + w0 : ||w||² ≤ ck},  c1 < c2 < c3 < ...    (27)

The structural risk minimization principle prescribes that the function that minimizes the guaranteed risk should be selected in order to provide good generalization ability. By definition, the separating hyperplane always has zero empirical risk, so the guaranteed risk is minimized by minimizing the confidence interval. The confidence interval is minimized


by minimizing the VC-dimension h, which according to (26) corresponds to minimizing the norm of the weights ||w||². Finding an optimal hyperplane for the separable case is a quadratic optimization problem with linear constraints, as formally stated next. Given the training data (xi, yi), i = 1, ..., n, x ∈ ℝ^d, determine the w and w0 that minimize the functional

η(w) = (1/2) ||w||²    (28)

subject to the constraints

yi[(w · xi) + w0] ≥ 1,  i = 1, ..., n    (29)

The solution to this problem consists of d + 1 parameters. For data of moderate dimension d, this problem can be solved using quadratic programming. For training data that cannot be separated without error, it would be desirable to separate the data with a minimal number of errors. In the hyperplane formulation, a data point is nonseparable when it does not satisfy equation (21). This corresponds to a data point that falls within the margin or on the wrong side of the decision boundary. Positive slack variables ξi, i = 1, ..., n, can be introduced to quantify the nonseparable data in the defining condition of the hyperplane:

yi[(w · xi) + w0] ≥ 1 − ξi    (30)

For a training sample xi, the slack variable ξi is the deviation from the margin border corresponding to the class of yi (see Fig. 6). According to this definition, slack variables greater than zero correspond to misclassified samples. Therefore the number of nonseparable samples is

Q(w) = Σ_{i=1}^{n} I(ξi > 0)    (31)

Numerically minimizing this functional is a difficult combinatorial optimization problem because of the nonlinear indicator function. However, minimizing (31) is equivalent to minimizing the functional

Q(ξ) = Σ_{i=1}^{n} ξi^p    (32)

where p is a small positive constant. In general, this minimization problem is NP-complete. To make the problem tractable, p will be set to one.

3.2 Inner product kernel

The inner product kernel H is known a priori and is used to form a set of approximating functions; it is determined by the sum

H(x, x') = Σ_{j=1}^{m} gj(x) gj(x')    (33)

where m may be infinite. Notice that in the form (33), the evaluation of the inner products between the feature vectors in a high-dimensional feature space is done indirectly via the evaluation of the kernel H between support vectors and vectors in the input space. The selection of the type of kernel function


Fig. 6. Nonseparable case

corresponds to the selection of the class of functions used for feature construction. The general expression for an inner product in Hilbert space is

(z · z') = H(x, x')    (34)

where the vectors z and z' are the images in the m-dimensional feature space of the vectors x and x' in the input space. Below are several common classes of multivariate approximating functions and their inner product kernels. Polynomials of degree q have the inner product kernel

H(x, x') = [(x · x') + 1]^q    (35)

Radial basis functions of the form

f(x) = sign( Σ_{i=1}^{n} αi exp(−|x − xi|² / σ²) )    (36)

where σ defines the width, have the inner product kernel

H(x, x') = exp(−|x − x'|² / σ²)    (37)

The Fourier expansion

f(x) = v0 + Σ_{j=1}^{q} (vj cos(jx) + wj sin(jx))    (38)

has the kernel

H(x, x') = sin((q + 1/2)(x − x')) / sin((x − x')/2)    (39)


4. Evaluation results
This section presents some results with both classifiers, GMM and SVM, combined with some of the feature extraction methods mentioned above: Gabor, Wavelet and Eigenphases. The experiments use "The AR Face Database" (Martinez, 1998), which has a total of 9,360 face images of 120 people (65 men and 55 women) and includes face images with several different illuminations, facial expressions and partially occluded face images with sunglasses and scarf. Two different training sets are used. The first one consists of images without occlusion, in which only illumination and expression variations are included. The second set consists of images with and without occlusions, as well as illumination and expression variations; here the occlusions result from wearing sunglasses and a scarf. These image sets and the remaining images of the AR face database are used for testing. Tables 1 and 2 show the recognition performance using the GMM as a classifier. The recognition performance obtained using the Gabor filter-based, wavelet transform-based and eigenphases feature extraction methods is shown for comparison. Table 1 shows that when training set 1 is used, with a GMM as classifier, the identification performance decreases in comparison with the performance obtained using training set 2. This is because training set 1 consists only of images without occlusion, so the system cannot identify several images with occlusion due to the lack of information about the occlusion effects. When training set 2 is used, the performance of all methods increases, because the identification system already has information about the occlusion effects.

              Image set 1   Image set 2
Gabor           71.43 %       91.53 %
Wavelet         71.30 %       92.51 %
Eigenphases     60.63 %       87.24 %

Table 1. Recognition using GMM

Average        Image set 1                        Image set 2
               False acceptance   False reject    False acceptance   False reject
Gabor            4.74 %             7.26 %          1.98 %             4.13 %
Wavelet          6.69 %             6.64 %          1.92 %             5.25 %
Eigenphases     37.70 %            14.83 %         21.39 %            21.46 %

Table 2. Verification using GMM

Tables 3 and 4 show the results obtained with the Gabor filter, Wavelet and Eigenphases feature extraction methods in combination with the SVM for the identification and verification tasks. They show the same behaviour as the GMM when training set 1 and training set 2 are used for training. Figs. 7 and 8 show the ranking performance evaluation of the Gabor, Wavelet and Eigenphases feature extraction methods using the GMM for identification, and Figs. 9 and 10 show the ranking performance evaluation of Gabor, Wavelet and Eigenphases with the Support Vector Machine for identification. Figs. 11-13 show the evaluation of the GMM as verifier using different acceptance thresholds; these graphs plot both the false acceptance and the false rejection rates, showing the point at which both have the same percentage. Depending on the needs, an appropriate threshold will have to be chosen.
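The threshold sweep behind these verification curves can be sketched as follows (hypothetical scores and function names; not the chapter's code):

```python
# Hypothetical sketch of the threshold analysis behind Figs. 11-16:
# sweep an acceptance threshold over verification scores and report the
# false acceptance and false rejection rates, locating the threshold
# where the two rates are closest (the equal-error region).
def far_frr(genuine, impostor, threshold):
    fa = sum(s >= threshold for s in impostor) / len(impostor)   # false acceptance
    fr = sum(s < threshold for s in genuine) / len(genuine)      # false rejection
    return fa, fr

def equal_error_threshold(genuine, impostor, thresholds):
    return min(thresholds, key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                             - far_frr(genuine, impostor, t)[1]))
```

In a deployed system the operating threshold would typically be set away from the equal-error point, trading a higher false rejection rate for the lower false acceptance rate that security applications demand.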


Fig. 7. Ranking performance evaluation using GMM and Training set 1.

Fig. 8. Ranking performance evaluation using GMM and Training set 2.


Fig. 9. Ranking performance evaluation using SVM and Training set 1.

Fig. 10. Ranking performance evaluation using SVM and Training set 2.


                    Image set 1   Image set 2
Gabor                 75.98 %       94.90 %
Wavelet               79.54 %       97.29 %
Local 3               84.44 %       97.67 %
Local 6               81.05 %       97.29 %
Local fourier 3       85.92 %       97.92 %
Local fourier 6       85.59 %       97.85 %
Eigenphases           80.63 %       96.28 %

Table 3. Recognition using SVM

Average        Image set 1                        Image set 2
               False acceptance   False reject    False acceptance   False reject
Gabor            0.43 %            22.65 %          0.12 %             8.38 %
Wavelet          0.17 %            22.27 %          0.04 %             4.64 %
Eigenphases      0.001 %           34.74 %          0.002 %           16.04 %

Table 4. Verification using SVM

In Figs. 14-16 the evaluation of the SVM as verifier is shown, also using different thresholds.

Fig. 11. Verification performance of Gabor-based feature extraction method, for several threshold values using GMM.

5. Conclusion
This chapter presented two classifiers that can be used for face recognition, together with evaluation results in which the GMM and SVM are used for identification and verification tasks. Two different image sets were used for training. One contains images with occlusion and the


Fig. 12. Verification performance of Wavelet-based feature extraction method, for several threshold values using GMM.

Fig. 13. Verification performance of Eigenphases feature extraction method, for several threshold values using GMM.


Fig. 14. Verification performance of Gabor-based feature extraction method, for several threshold values using SVM.

Fig. 15. Verification performance of Wavelet-based feature extraction method, for several threshold values using SVM.


Fig. 16. Verification performance of Eigenphases feature extraction method, for several threshold values using SVM.

other one contains images without occlusions. The performance of these classifiers was evaluated using the Gabor-based, Wavelet-based and Eigenphases methods for feature extraction. It is important to mention that in the verification task it is essential to keep the false acceptance rate as low as possible without greatly increasing the false rejection rate. To find a compromise between both errors, evaluation results for both error rates at different thresholds are provided. To evaluate the performance of the proposed schemes in an identification task, the rank(N) evaluation was also estimated. Evaluation results show that, in general, the SVM performs better than the GMM, especially when the training set is relatively small. This is because the SVM uses a supervised training algorithm and therefore requires fewer training patterns to estimate a good model of the person under analysis. However, it requires jointly estimating the models of all persons in the database, so when a new person is added, all previously estimated models must be computed again. This may be an important limitation when the database changes over time, as well as when huge databases must be used, as in banking applications. On the other hand, because the GMM uses an unsupervised training algorithm, it requires a larger number of training patterns to achieve a good estimate of the person under analysis, and its convergence is therefore slower than that of the SVM; however, the GMM estimates the model of each person independently of the other persons in the database. This is a very important feature when a large number of persons must be identified and the number of persons grows with time because, using the GMM, when a new person is added, only the model of the new person must be estimated, leaving the previously estimated ones unchanged. Thus the GMM is suitable for applications where large databases must be handled and they change over time, as in banking operations. In summary, the SVM is more suitable when the size of the database under analysis is almost constant and not too large, while the GMM is more suitable for applications in which the database is large and changes over time.


6. References
Alvarado G.; Pedrycz W.; Reformat M. & Kwak K. C. (2006). Deterioration of visual information in face classification using Eigenfaces and Fisherfaces. Machine Vision and Applications, Vol. 17, No. 1, April 2006, pp. 68-82, ISSN: 0932-8092.
Bai-Ling Z.; Haihong Z. & Shuzhi S. G. (2004). Face recognition by applying wavelet subband representation and kernel associative memory. Neural Networks, IEEE Transactions on, Vol. 15, Issue 1, January 2004, pp. 166-177, ISSN: 1045-9227.
Chellapa R.; Sinha P. & Phillips P. J. (2010). Face recognition by computers and humans. Computer Magazine, Vol. 43, Issue 2, February 2010, pp. 46-55, ISSN: 0018-9162.
Davies E. R. (1997). Machine Vision: Theory, Algorithms, Practicalities, Academic Press, ISBN: 0-12-206092-X, San Diego, California, USA.
Duda O. R.; Hart E. P. & Stork G. D. (2001). Pattern Classification, Wiley-Interscience, ISBN: 0-471-05669-3, United States of America.
Hazem M. El-Bakry & Mastorakis N. (2009). Personal identification through biometric technology, AIC'09 Proceedings of the 9th WSEAS International Conference on Applied Informatics and Communications, pp. 325-340, ISBN: 978-960-474-107-6, World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA.
Jain A. K.; Ross A. & Prabhakar S. (2004). An introduction to biometric recognition. IEEE Trans. on Circuits and Systems for Video Technology, Vol. 14, Jan. 2004, pp. 4-20, ISSN: 1051-8215.
Jin Y. K.; Dae Y. K. & Seung Y. N. (2004). Implementation and enhancement of GMM face recognition systems using flatness measure. IEEE Robot and Human Interactive Communication, Sept. 2004, pp. 247-251, ISBN: 0-7803-8570-5.
Martinez A. M. & Benavente R. (1998). The AR Face Database. CVC Technical Report No. 24, June 1998.
Olivares M. J.; Sanchez P. G.; Nakano M. M. & Perez M. H. (2007). Feature Extraction and Face Verification Using Gabor and Gaussian Mixture Models. MICAI 2007: Advances in Artificial Intelligence, Gelbukh A. & Kuri M. A., pp. 769-778, Springer Berlin / Heidelberg.
Reynolds D. A. & Rose R. C. (1995). Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing, Vol. 3, Issue 1, Jan. 1995, pp. 72-83, ISSN: 1063-6676.
Reynolds D. A. (2008). Gaussian Mixture Models, Encyclopedia of Biometric Recognition, Springer, Feb. 2008, ISBN: 978-0-387-73002-8.
Rojas R. (1995). Neural networks: a systematic introduction, Springer-Verlag, ISBN: 3-540-60505-3, New York.
Savvides M.; Kumar B. V. K. V. & Khosla P. K. (2004). Eigenphases vs eigenfaces. Proceedings of the 17th International Conference on Pattern Recognition, Vol. 3, Aug. 2004, pp. 810-813, ISSN: 1051-4651.
Vladimir C. & Filip M. (1998). Learning From Data: Concepts, Theory, and Methods, Wiley Inter-Science, ISBN: 0-471-15493-8, USA.
Yoshida M.; Kamio T. & Asai H. (2003). Face Image Recognition by 2-Dimensional Discrete Walsh Transform and Multi-Layer Neural Network, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E86-A, No. 10, October 2003, pp. 2623-2627, ISSN: 0916-8508.
Zhao W.; Chellappa R.; Phillips P. J. & Rosenfeld A. (2003). Face recognition: A literature survey. ACM Computing Surveys (CSUR), Vol. 35, Issue 4, December 2003, pp. 399-459.

3

New Principles in Algorithm Design for Problems of Face Recognition

Vitaliy Tayanov
Vyacheslav Chornovil State Institute of Modern Technologies and Management of Lviv, Ukraine

1. Introduction
This chapter is devoted to two main problems in pattern recognition. The first problem concerns the methodology of estimating classification quality and stability, also known as classification reliability estimation. We consider the general problem statement in the design of classification algorithms, classification reliability estimation and the modern methods for solving this problem. We then propose our own methods for its solution. In general this can be done using different kinds of indicators of classification (classifier) quality and stability, summarizing earlier work together with our latest results. The second part of the chapter is devoted to a new approach that makes it possible to design a classifier that is not sensitive to the learning set but still belongs to a class of learning algorithms. Let us consider the recognition process as a complex recognition system built from the following main algorithms: algorithms for feature generation, algorithms for feature selection, and classification algorithms realizing the decision-making procedure. It is very important to have good algorithms for feature generation and selection. Methods for feature generation and selection are well developed for many kinds of objects to be recognized. For facial images the most popular algorithms use 3D graph models, morphological models, the selection of some geometrical features using special nets, etc. On the other hand, very popular algorithms for feature generation and selection are those which use Principal Component Analysis (PCA) or Independent Component Analysis (ICA). PCA and ICA are good enough for a number of practical cases. However, there are many deficiencies in classifier building. For classifiers that use learning, the most essential gap is that all such classifiers can work quite well at the learning stage and rather badly on some test cases.
They can also work well enough if the classes are linearly separable, e.g. Support Vector Machines (SVM) in the linear case; in the non-linear case they have a number of disadvantages. That is why it is important to develop approaches to algorithm design that are not sensitive to the kind of sample or to the complexity of the classification task. All classification algorithms built to date can be divided into roughly five groups: algorithms built on statistical principles; algorithms built on the basis of potential functions and non-parametric estimation; algorithms using similarity functions (all kinds of metric classifiers such as 1NN and kNN classifiers); algorithms using logical principles such as decision lists, trees, etc.; and hierarchical and combined algorithms. Many recognition systems use a number of algorithms or algorithm compositions. For their optimization and tuning


one uses special algorithms such as boosting. These compositions can be linear or non-linear as well. To build an effective classifier we need algorithms that allow us to measure the reliability of classification. Having such algorithms, we can find estimates of the optimal values of the classifier parameters. The accuracy of these estimates allows us to build reliable and effective classification algorithms; they perform the role of indicators for measuring different characteristics and parameters of the classifier. We propose a new approach to object classification that is independent of the learning set but belongs to a class of learning algorithms. The methods used in the new classification approach belong to the field of combining results in the design of classification algorithms. One of the most progressive directions in this area assumes the use of consensus among the recognition results produced by different classifiers. The idea of the consensus approach is the following. All objects to be recognized are divided into three groups: objects located near the separating hyperplane (ambiguous objects); objects located deep inside the second class while belonging to the first one (misclassified objects); and objects that are recognized correctly with a large enough reliability index (easy objects). The group of ambiguous objects is the largest one that can cause errors during recognition, due to their instability, so it is extremely important to detect such objects. The next step is to detect the true class of each ambiguous object. For this it is planned to use the apparatus of cellular automata and Markov models. It is important to note that such an approach allows us to reduce the overestimation effect for different recognition tasks, which is one of the principal reasons for using this kind of algorithm. The practical application of the consensus approach to the task of face recognition can be realized in the following way. If we use one of the following classifiers, e.g. 1NN, kNN, a classifier built on the basis of potential functions, SVM, etc., we can obtain a fiducial interval of the most likely candidates. The fiducial interval in general is the list of candidates that are the most similar to the target object (person). If we use a decision-making support system controlled by an operator, the result of the system's work can be one or two candidates given to the operator for the final decision or expertise. If we use an autonomous system, the final decision has to be made by the system itself, which has to select one of the most likely candidates using special verification algorithms that analyse the dynamics of the behaviour of the object in the fiducial interval under varying external conditions and algorithm parameters.
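A minimal sketch of building such a fiducial interval with a simple nearest-neighbour ranking might look as follows (hypothetical, pure Python; not the author's implementation):

```python
# Hypothetical sketch: gallery faces are ranked by distance between their
# feature vectors and the probe's feature vector, and the closest few
# form the candidate list ("fiducial interval") passed on for
# verification or operator review.
import math

def fiducial_interval(probe, gallery, size=3):
    """gallery: dict person_id -> feature vector; returns ranked id list."""
    ranked = sorted(gallery, key=lambda pid: math.dist(probe, gallery[pid]))
    return ranked[:size]
```

The verification stage described in the text would then operate only on this short ranked list rather than on the whole gallery.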

2. Estimations building in pattern recognition
2.1 Some important tasks of machine learning
The modern theory of machine learning has two vital problems: obtaining precise upper-bound estimates of overtraining (overfitting) and finding ways of overcoming it. At present even the most precise known estimates are still very overrated, so the problem remains open. The main reasons for the overestimation have been determined experimentally. In order of decreasing influence they are as follows [Vorontsov, 2004]:
1. The neglect of the stratification effect, or the effect of localization of the algorithm composition. The problem is caused by the fact that not all of the composition actually works, but only the part of it relevant to the task. The overestimation coefficient is from several tens to hundreds of thousands;

New Principles in Algorithm Design for Problems of Face Recognition


2. The neglect of the similarity of the algorithms. The overestimation coefficient for this factor is from several hundreds to tens of thousands. This factor is always essential and depends less on the task than the first one; 3. The exponential approximation of the tail area of the distribution. In this case the overestimation coefficient can be several tens; 4. The representation of the upper-bound estimate of the variety profile by a single scalar variety coefficient. The overestimation coefficient can often be taken as one, but sometimes it reaches several tens.
The overtraining effect is conditioned by the use of the algorithm with the minimal number of errors on the training set. This means that we realize a one-sided tuning of the algorithms: the more algorithms are tried, the stronger the overtraining will be. This is true for algorithms taken from the distribution randomly and independently. In the case of algorithm dependence (and in reality they usually are dependent) it is expected that the overtraining will be reduced. Overtraining can appear even if we use only one algorithm from a composition of two algorithms. Stratification of the algorithms by the error number and increasing their similarity reduce the overtraining probability.
Let us consider an algorithm-set pair. Every algorithm can cover a definite number of objects from the training set. If one uses internal criteria [Kapustii et al., 2007; Kapustii et al., 2008] (for example, in the case of metrical classifiers) there is the possibility to estimate the stability of such a coverage, and we can reduce the number of covered objects according to the stability level. To cover more objects we need more algorithms; these algorithms should be similar and have different error rates. There is also the interesting task of reducing redundant information. For this task it is important to find the average class size that guarantees the minimal error rate.
A further reason for such a procedure is the decrease of the class size by removing the objects that interfere with recognition at the training phase. The estimation of the training-set reduction makes it possible to define the data structure (the relationship between etalon objects and objects that are spikes or non-informative ones). Moreover, the smaller the class size, the less time is needed for the decision-making procedure. But the most important benefit of this approach is the possibility to study precisely and to understand much more deeply the phenomenon of algorithm overtraining.
In this paper we consider metrical classifiers. Among all metrical classifiers the most widely applied and simple are the kNN classifiers. They have been used to build practical recognition systems in different areas of human activity, and the results of such classification can be easily interpreted. One of the most appropriate applications of metrical classifiers (or classifiers using a distance function) concerns biometric recognition systems, including face recognition systems.

2.2 Probabilistic approach to parametrical optimization of the kNN classifiers
The most advanced methods for optimizing an algorithm composition, selecting an informative training set and selecting features are bagging, boosting and the random subspace method (RSM). These methods try to use the information contained in the learning sample as much as they can. Let us consider metrical classifier optimization in feature space using different metrics. The most general representation of a measure between the feature vectors x and y is realized through the Manhattan measure, as a simple linear measure with weighted coefficients a_i [Moon & Stirling, 2000]:
Reviews, Refinements and New Ideas in Face Recognition

d(x, y) = \sum_{i=1}^{n} a_i |x_i - y_i|,   (1)

where d(x, y) is an arbitrary measure between the vectors x and y.

The Minkowski measure, as the most general measure in pattern recognition theory, can be presented in the form

d(x, y) = \left(\sum_{i=1}^{n} |x_i - y_i|^{p}\right)^{1/p} = \left(\sum_{i=1}^{n} a_i |x_i - y_i|\right)^{1/p} = C(p) \sum_{i=1}^{n} a_i |x_i - y_i|,   (2)

where the parametric multiplier C(p) is presented in the form

C(p) = \left(\sum_{i=1}^{n} a_i |x_i - y_i|\right)^{\frac{1-p}{p}}; \quad a_i = |x_i - y_i|^{p-1}; \quad p > 0.   (3)

One can make the following conclusions. An arbitrary measure acts as a filter in feature space: it determines the weights of the features. A weight must be proportional to the increase of one of the quality indexes when the feature is added to the general feature set used for the class discrimination procedure. Such indexes are: the correct recognition probability, the average class size, the divergence between classes, and the Fisher discriminant [Bishop, 2006]. Other indexes can be used, but the way of using them should be similar. If a feature does not increase the index (or worsens it), its weight should be set to zero. So, by the additional decrease of the feature number, one can accelerate the recognition process while retaining the qualitative characteristics. The feature optimization problem and the measure selection are thus solved jointly; this procedure is realized using weighted features and a linear measure with weighted coefficients. The feature selection task is solved at the same time, at least partially. First, a feature subset of the general set is determined by some algorithm (for example, by a number of orthogonal transforms). Such an algorithm should satisfy definite conditions, such as class entropy minimization or divergence maximization between different classes; these conditions are provided by Principal Component Analysis [Moon & Stirling, 2000]. The last parameter used in the model is the decision function, or decision rule. Decision functions can be divided into functions working in the feature space and functions based on distance calculation. For example, the Bayes classifier, the linear Fisher discriminant, the support vector machine, etc., work in the feature space. With such decision rules the decision-making procedure is rather complex in a multidimensional feature space. This circumstance is especially harmful for a continuous recognition process, where series of patterns have to be recognized.
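The weighted measures (1)-(3) are straightforward to prototype. The following sketch (our own illustration with made-up vectors) computes the weighted Manhattan measure of eq. (1) and checks numerically that the weights a_i = |x_i - y_i|^(p-1) of eq. (3) turn the Minkowski measure of eq. (2) into a weighted linear sum:

```python
def weighted_manhattan(x, y, a):
    # eq. (1): d(x, y) = sum_i a_i * |x_i - y_i|
    return sum(ai * abs(xi - yi) for ai, xi, yi in zip(a, x, y))

def minkowski(x, y, p):
    # eq. (2), left-hand side: (sum_i |x_i - y_i|^p)^(1/p)
    return sum(abs(xi - yi) ** p for xi, yi in zip(x, y)) ** (1.0 / p)

x, y, p = [1.0, 2.0, 4.0], [0.0, 4.0, 1.0], 3
# eq. (3): a_i = |x_i - y_i|^(p-1) turns the Minkowski sum into the
# weighted linear sum S = sum_i a_i |x_i - y_i| = sum_i |x_i - y_i|^p
a = [abs(xi - yi) ** (p - 1) for xi, yi in zip(x, y)]
S = weighted_manhattan(x, y, a)
assert abs(minkowski(x, y, p) - S ** (1.0 / p)) < 1e-12
```

With unit weights the measure reduces to the ordinary L1 (city-block) distance, which is the simplest practical choice for a metrical classifier.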
Thus, when realizing a practical recognition system with large databases, one uses classifiers based on a distance function. The simplest such classifier is 1NN, but it is characterized by the lowest probability indexes; therefore one should use a kNN classifier. The task then consists in selecting the k value that is optimal for the decision-making procedure within the bounds of the fiducial interval, which corresponds to the list of possible candidates. Unlike the classical approach, here the k value has an upper bound given by the class size; in the classical approach the nearest-neighbour number should be taken rather large, approximating the Bayes classifier.


Let us consider a recognition system (RS) with training. The calculation and analysis of the parameters of such a system is carried out on the basis of the learning set. Let there exist a feature distribution in a linear multidimensional space, or a one-dimensional distribution of distances; we are going to analyse the type of such a distribution. The recognition error probability for μ = 0 can be presented as

\int_{|x| \ge \theta} p(x)\,dx,

where θ is the threshold. According to the Chebyshev inequality [Moon & Stirling, 2000] we obtain

\int_{|x| \ge \theta} p(x)\,dx \le \frac{\sigma^{2}}{\theta^{2}}.

Let us consider the case when the mean and the mode of the distribution p(x) coincide. The upper bound for single-mode distributions with mode \mu_0, obtained with the help of the Gauss inequality [Weinstein, 2011], is

P(|x - \mu_0| \ge \lambda\tau) \le \frac{4}{9\lambda^{2}},   (4)

where \tau^{2} \equiv \sigma^{2} + (\mu - \mu_0)^{2}. Let μ = μ₀ = 0, and hence τ ≡ σ. Then the threshold θ is θ = λτ = λσ, so that λ = θ/σ. Thus the Gauss inequality for the threshold θ can be presented in the form

\int_{|x| \ge \theta} p(x)\,dx \le \frac{4\sigma^{2}}{9\theta^{2}}.   (5)

As seen from (5), the Gauss upper-bound estimate for a single-mode distribution is 2.25 times better than for an arbitrary distribution, so the influence of the distribution type on the error probability is significant. The normal distribution has equal values of mode, mean and median, and it is the most common one in practice. On the other hand, the normal distribution is characterized by the maximum entropy among distributions with equal variance. This means that we obtain the minimal value of classification error probability for normally distributed classes. For the algorithm optimization one should realize the following steps:
• calculate the distance vector between objects for the given metric;
• carry out a non-parametric estimation of the distance distribution in this vector by the Parzen window method or by support vector machines;
• estimate the mean and variance of the distribution;
• on the basis of the estimated values, standardize the distribution (μ = 0, σ = 1);
• build the distributions both for the theoretical case and for the one estimated by non-parametric methods;
• calculate the mean-square deviation between the distributions;
• find the parameter region where the deviation between the distributions is less than a given level δ.
2.2.1 Probability estimation for some types of probability density functions
Let us consider some probability density functions (pdfs) that have a certain type of form (presence of an extremum, right or left asymmetry). If a pdf has none of these types of


structure, one can use a non-parametric estimation. As the result of such an estimation we get a continuous curve describing the pdf; this function can be differentiated and integrated by definition. Because the Gaussians are characterized by the minimal classification error for a given threshold θ, and this error does not exceed \frac{4\sigma^{2}}{9\theta^{2}} (see eq. (5)) for a unimodal and symmetric pdf or a pdf with right asymmetry, the double-sided inequality for the given value of the recognition error ε can be presented in the form

0.5\left(1 - \operatorname{erf}\!\left(\frac{\theta}{\sigma}\right)\right) \le \varepsilon \le \frac{4\sigma^{2}}{9\theta^{2}},   (6)

where μ = 0.
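A quick numeric check of the bounds in (5)-(6) is easy to run (our own sketch; θ and σ are arbitrary test values, and the lower bound is taken exactly as written in eq. (6)):

```python
import math

def chebyshev_bound(theta, sigma):
    # arbitrary distribution: P(|x| >= theta) <= sigma^2 / theta^2
    return sigma ** 2 / theta ** 2

def gauss_bound(theta, sigma):
    # single-mode distribution, eq. (5): <= 4 sigma^2 / (9 theta^2)
    return 4.0 * sigma ** 2 / (9.0 * theta ** 2)

def normal_lower(theta, sigma):
    # lower bound of eq. (6) as written in the text
    return 0.5 * (1.0 - math.erf(theta / sigma))

theta, sigma = 2.0, 1.0
assert normal_lower(theta, sigma) <= gauss_bound(theta, sigma) <= chebyshev_bound(theta, sigma)
# the Gauss bound is exactly 2.25 times tighter than the Chebyshev one
assert abs(chebyshev_bound(theta, sigma) / gauss_bound(theta, sigma) - 2.25) < 1e-12
```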

Fig. 1. Right asymmetry of pdf

Fig. 2. Left asymmetry of pdf


Let us analyse the form of the potentially generated pdfs of distances between objects. All of these distributions have an extremum, which is conditioned by the following facts: all the pdfs are determined on the interval [0, ∞), and the density near zero and for large distances is not high, because such values are mostly unlikely. Right asymmetry is much more likely, because the pdf of distances is limited by zero on one side and has no strictly determined limitation on the other.
Let us consider the widespread problem of classification under the condition of two classes. We denote the sizes of the classes as s_1 and s_2 correspondingly. If the probability of replacement of an object of the class of size s_1 within a fiducial interval equals \varepsilon_1, then the probability that no object of this class is replaced by objects from the class of size s_2 in this interval equals (1 - \varepsilon_1)^{s_2}, under the condition of independence of the objects [Kapustii et al., 2008; Kyrgyzov, 2008; Tayanov & Lutsyk, 2009]. For the other class, with the corresponding changes, this probability equals (1 - \varepsilon_2)^{s_1}. If one now selects some virtual class and admits that the replacement of any object of this class by objects from the two mentioned classes is a certain event, it is possible to write down the following equation:

\gamma\left((1 - \varepsilon_1)^{s_2} + (1 - \varepsilon_2)^{s_1}\right) = 1,   (7)

where the proportionality multiplier γ is calculated trivially.
Sometimes there are situations when distances between objects are equal to 0, so that the non-parametrically estimated distribution of one of the classes has a maximum at the point corresponding to zero distance. Let the densities of the distributions at the zero point be p_1(0) and p_2(0). The estimation of the relation between the probabilities can be set in the form \frac{p_1(0)^{s_2}}{p_2(0)^{s_1}} or \ln\frac{p_1(0)^{s_2}}{p_2(0)^{s_1}}. Here it is necessary to make the boundary transition from the cumulative distribution function (cdf) to the pdf, as they are connected with each other by the differentiation operation. The relation \ln\frac{p_1(0)^{s_2}}{p_2(0)^{s_1}} (or, generally, \ln\frac{p_2(0)^{s_1}}{p_1(0)^{s_2}}) can be used for the construction of the following classifier:

\ln\frac{p_1(\theta)^{s_2}}{p_2(\theta)^{s_1}} > \gamma_1; \quad \ln\frac{p_1(\theta)^{s_2}}{p_2(\theta)^{s_1}} < \gamma_1, \qquad \text{or} \qquad \ln\frac{p_2(\theta)^{s_1}}{p_1(\theta)^{s_2}} > \gamma_2; \quad \ln\frac{p_2(\theta)^{s_1}}{p_1(\theta)^{s_2}} < \gamma_2,   (8)

where the values \ln\frac{p_1(\theta)^{s_2}}{p_2(\theta)^{s_1}} = 0 or \ln\frac{p_2(\theta)^{s_1}}{p_1(\theta)^{s_2}} = 0 have no influence on the classification results, and the decision can be accepted in favour of either class. In the case of non-parametric estimation the probability of such a value is almost equal to 0. This approach is especially useful for recognition tasks with similar objects, i.e. objects that are weakly separated in the feature space. It should be noted that this type of algorithm is oriented on tasks with


a high level of class overlapping. Face recognition belongs to such tasks, since it has quite a lot of objects that cannot be separated easily.
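A minimal sketch of the classifier (8) can look as follows (our own illustration; the densities are estimated with a simple Gaussian Parzen window, the threshold γ is set to 0, and the samples, the bandwidth h and the function names are assumptions):

```python
import math

def parzen_pdf(sample, x, h=0.5):
    # Parzen window estimate of a 1-D density with Gaussian kernels
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - s) / h) for s in sample) / (len(sample) * h)

def classify(theta, class1, class2, gamma=0.0):
    # eq. (8): compare ln(p1(theta)^s2 / p2(theta)^s1) with gamma,
    # i.e. s2*ln p1(theta) - s1*ln p2(theta), accounting for class sizes
    s1, s2 = len(class1), len(class2)
    score = s2 * math.log(parzen_pdf(class1, theta)) - \
            s1 * math.log(parzen_pdf(class2, theta))
    return 1 if score > gamma else 2

# toy one-dimensional distance samples for two classes
c1 = [0.1, 0.3, 0.4, 0.5]
c2 = [1.8, 2.0, 2.2, 2.5]
print(classify(0.35, c1, c2), classify(2.1, c1, c2))  # 1 2
```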
2.3 Combinatorial approach
Let us present the recognition results for the kNN classifier in the form of a binary sequence:

\underbrace{1\ldots1}_{l_1}\,\underbrace{0\ldots0}_{m_1}\,\underbrace{1\ldots1}_{l_2}\,\underbrace{0\ldots0}_{m_2}\,\underbrace{1\ldots1}_{l_3}\,\underbrace{0\ldots0}_{m_3}\ldots

Fig. 1. The recognition results in the form of a binary sequence for the kNN classifier

When the kNN classifier is used, it is important that the positive objects have either a relative or an absolute majority among the k nearest neighbours. Let us consider the simpler case, the relative majority. The correct work of the kNN classifier means that for the k nearest neighbours the following condition has to be satisfied:

\sum_i \tilde{l}_i > \sum_i \tilde{m}_i, \quad i = 1, 2, 3, \ldots,   (9)

where \tilde{l}_i, \tilde{m}_i are the groups that appear after the class-size decrease. By a group one understands a homogeneous subsequence of elements. In such a sequence (see Fig. 1) there are patterns of all classes, although in the general case there is no direct correspondence between the group number and the class number. Let us consider only the case of an odd k value in the kNN classifier; this means an unambiguous classification, whereas for an even k value the unambiguity can disappear when the votes for different classes are equal. Let us estimate the effect of class-size reduction in the case of the kNN classifier. Note that the reduced class sizes are assumed equal to each other and equal to s^{*}. Let us consider the condition of correct work of the kNN classifier: \mathrm{ENT}(k/2) + 1 \le s^{*}. In contradistinction to the 1NN classifier, the first nearest patterns of the true class are not so important here; thus all such sequences can be denoted as \tilde{l}_i. Let us determine, by the combinatorial approach, the probabilities that s^{*} patterns will be selected from the true class. These probabilities have a fiducial sense: they mean that, for the given part of positive objects, there will be no selections among the patterns of the false classes by the corresponding combinatorial way. The product of these two probabilities determines the probability of correct work of the kNN classifier. Let us assign q_j as the recognition error probability for the corresponding groups \tilde{m}_i:
\begin{aligned}
q_1 &= P\left(\inf_i\{\tilde{m}_i\} \ge \mathrm{ENT}(k/2) + 1\right); \\
q_2 &= P\left(\inf_i\{\tilde{m}_i\} + \tilde{m}_{i+1} \ge \mathrm{ENT}(k/2) + 1\right); \\
q_3 &= P\left(\inf_i\{\tilde{m}_i\} + \tilde{m}_{i+1} + \tilde{m}_{i+2} \ge \mathrm{ENT}(k/2) + 1\right); \;\ldots \\
q_j &= P\left(\inf_i\{\tilde{m}_i\} + \sum_j \tilde{m}_{i+j-1} \ge \mathrm{ENT}(k/2) + 1\right); \;\ldots
\end{aligned}   (10)

The combinatorial expression for the probability q_j can be written in the form

q_j = \sum_{j=\mathrm{ENT}(k/2)+1}^{s^{*}} \frac{C_{\sum_{i,j}\tilde{m}_{i+j-1}}^{\,j}\, C_{s - \sum_{i,j}\tilde{m}_{i+j-1}}^{\,s^{*}-j}}{C_{s}^{\,s^{*}}}, \qquad \sum_{i,j}\tilde{m}_{i+j-1} \ge \mathrm{ENT}(k/2) + 1.   (11)

The fiducial probability for an arbitrary sequence of true patterns is

Pq_j = \sum_{j=\mathrm{ENT}(k/2)+1}^{s^{*}} \frac{C_{\sum_{i}\tilde{l}_i}^{\,j}\, C_{s - \sum_{i}\tilde{l}_i}^{\,s^{*}-j}}{C_{s}^{\,s^{*}}}.   (12)

Thus the correct recognition probability for the kNN classifier is determined by the probability (12) and the complement of the probability (11):

P_j = Pq_j\,(1 - q_j) = \sum_{j=\mathrm{ENT}(k/2)+1}^{s^{*}} \frac{C_{\sum_{i}\tilde{l}_i}^{\,j}\, C_{s - \sum_{i}\tilde{l}_i}^{\,s^{*}-j}}{C_{s}^{\,s^{*}}} \left(1 - \sum_{j=\mathrm{ENT}(k/2)+1}^{s^{*}} \frac{C_{\sum_{i,j}\tilde{m}_{i+j-1}}^{\,j}\, C_{s - \sum_{i,j}\tilde{m}_{i+j-1}}^{\,s^{*}-j}}{C_{s}^{\,s^{*}}}\right),   (13)

so that the common denominator of the expanded expression is (C_{s}^{\,s^{*}})^{2}.

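The sums in (11)-(13) are hypergeometric-style tail probabilities. A minimal sketch (our own simplification: here L is the total number of true patterns and M the total number of false ones in the interval, rather than the group-wise sums of the text) computes the probability that the true class holds the majority among the k nearest neighbours:

```python
from math import comb

def majority_prob(L, M, k):
    """Probability that at least ENT(k/2)+1 of k patterns drawn
    without replacement from L true and M false patterns are true
    (a simplified stand-in for the sums in eqs. (11)-(12))."""
    need = k // 2 + 1                      # ENT(k/2) + 1
    total = comb(L + M, k)
    return sum(comb(L, j) * comb(M, k - j)
               for j in range(need, min(L, k) + 1)) / total

# a larger share of true patterns raises the majority probability
p_small = majority_prob(6, 12, 5)
p_big = majority_prob(12, 6, 5)
assert p_big > p_small
```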
We have modelled the recognition process with different sequences of patterns of true and false classes for the 1NN and kNN classifiers in the case of absolute majority. For the modelling a face recognition system has been taken. The class size (training set) has been taken as 18,


according to the database that was used. Fig. 2 presents the results of modelling the influence of the training-set decrease on the recognition results for the 1NN classifier. Fig. 3 presents similar results for the kNN classifier under the condition \mathrm{ENT}(k/2) + 1 = s^{*}.

Fig. 2. The probability of correct recognition as a function of the training set (x axis) and the number of true/false objects in the target sequence (y axis) for the 1NN classifier

Fig. 3. The probability of correct recognition as a function of the training set (x axis) and the number of true/false objects in the target sequence (y axis) for the kNN classifier

In Figs 2 and 3 the x axis means the size of the training set, and the y axis means the size of the true-pattern sequence (left picture) and of the sequence of both true and false patterns (right


picture). The y axis has been formed in the following way: we organized two cycles in which we changed the number of true and false patterns, and for every combination of these patterns and different class sizes we calculated the probability of correct recognition.

Fig. 4. The probability of correct recognition as a function of the training set (x axis) and the \mathrm{ENT}(k/2) + 1 value (y axis)

Figs 4 and 5 present the results of kNN classifier modelling; here the condition \mathrm{ENT}(k/2) + 1 \le s^{*} is satisfied. Fig. 5 shows the fiducial probability as a function of the training-set size (x axis) and the \mathrm{ENT}(k/2) + 1 value (y axis).
The probabilistic part of the proposed approach is based on the following idea. In contrast to the combinatorial approach, where the recognition results were determined precisely, here we define only the probability of existence of the initial sequence. Due to the low probability of an arbitrary sequence existing (especially for large sequences), we determine the probability of existence of homogeneous sequences of the type {0} or {1}. This probability is determined, on the basis of the last object in the given sequence, as the probability of replacement of this object (an object from the true class {1}) by other objects of the false classes from the


database. This means that the size of the homogeneous sequence is determined by the "weakest" object in the homogeneous pattern sequence. The probability of existence of non-homogeneous sequences is inversely proportional to the value 2^{|l+m|}, where |l+m| is the sequence size. This procedure can be realized using the distribution function of the distances between the objects. This approach has been developed for metrical classifiers and classifiers based on a distance function in [Kapustii et al., 2008; Tayanov & Lutsyk, 2009]. Thus we need to calculate the probability of existence of a sequence of true patterns of a definite size or, for a given probability rate, to calculate the maximal size of the sequence that satisfies this probability. For a binary sequence the sum of the weights of the lower-order bits is always less than the next most significant bit.

k Fig. 5. The probability of correct recognition as function of ENT   + 1 ( x axis) and 2 number of true/false objects in the target sequence ( y axis)


The difference is equal to 1. This means that the replacement of an arbitrary pattern of the true class in the fiducial interval is equivalent to the alternate replacement of all the previous ones. The minimal whole base of a scale of notation that has such a peculiarity is equal to 2. Thus we need to calculate the weights of the positions of the true patterns and compare them with binary digits. Such a representation of the model allows us to simplify the calculation of the probability of replacement of patterns from the true sequences by patterns of false classes. On the other side, arbitrary weights can be expressed through powers of the number 2, which also simplifies the presentation and calculation of these probabilities. So the probability of existence of a homogeneous sequence of true patterns is calculated on the basis of the distance distribution function and is a function of the algorithm parameters. We should select the sequence of the size that is provided by the corresponding probability. After that we apply the combinatorial approach, which allows us to calculate the influence of the class-size decrease on the recognition probability rate. Thus the probabilistic part of the given approach is determined by the recognition algorithm parameters, and the integration of both the probabilistic and the combinatorial parts allows us to define more precisely the influence of the effect of the training-set reduction. Let us consider, step by step, an example of fast computing of the probability of replacement of a true pattern from a sequence where the relation between the weights of the objects is a whole power of the number 2. The weights can be presented, for example, in the following way: w = \{2^{9}, 2^{6}, 2^{4}, 2^{3}, 2^{2}, 2^{1}, 2^{0}\}. As known, the probability of replacement of a true object from the sequence by a false one, when it is known that the replacement is a certain event, is inversely proportional to the weights of these objects.
Let us define the probability of replacement of the object having the 2^{9} weight compared to the object with the 2^{6} weight. As far as we do not know which object has been replaced, the total weight of the event that the objects with the 2^{6} weight and lower will not be replaced equals 2^{6} + 2^{4} + 2^{3} + 2^{2} + 2^{1} + 2^{0} = 95. This weight can be expressed through the 2^{6} weight, accurate to within 1, in the following way: 2^{6}(1 + 0.5) = 1.5 \cdot 2^{6}. In the case of large sequences this correction has a weak influence on the accuracy. The relation between 2^{9} and 2^{6} is equal to 8. In the case of a divisible group of events we obtain the equation 8\lambda + 1.5\lambda = 1, where the proportionality coefficient λ is approximately equal to 0.105. So the probability of non-replacement of the object with the 2^{9} weight equals 8 \cdot 0.105 \approx 0.84, and the object with the 2^{6} weight has the corresponding probability 1 - 0.84 = 0.16. Since we know exactly that the replacement is a certain event and the last object has a weight equal to 1, the accuracy correction equal to 1 makes the appropriate correction of the probability calculation.
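The arithmetic above is easy to verify (a sketch using the example weights from the text; variable names are ours):

```python
# weights of the true patterns' positions (example from the text)
w = [2**9, 2**6, 2**4, 2**3, 2**2, 2**1, 2**0]

tail = sum(w[1:])           # 95, approximated in the text by 1.5 * 2**6 = 96
ratio = w[0] / w[1]         # 2**9 / 2**6 = 8
lam = 1.0 / (ratio + 1.5)   # 8*lam + 1.5*lam = 1  ->  lam ~ 0.105
p_keep = ratio * lam        # probability the 2**9 object is not replaced

assert tail == 95 and ratio == 8
assert abs(lam - 0.105) < 0.001
assert abs(p_keep - 0.842) < 0.001
```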

3. Classification on the basis of division of objects into functional groups
Decision-making algorithms are used in such tasks of pattern recognition as supervised pattern recognition and unsupervised pattern recognition. Clustering tasks belong to unsupervised pattern recognition; they are related to the problems of cluster analysis. Tasks where the operator's intervention in the recognition process is provided belong to learning theory, or machine learning theory. A wide direction in the theory of machine learning is called statistical machine learning. It was founded by V. Vapnik and A. Ya. Chervonenkis in the sixties and seventies of the last century, was continued in the nineties, and bears the name of Vapnik-Chervonenkis theory (VC theory) [Vapnik, 2000].


It should be noted that classification algorithms built on the basis of training sets are mostly unstable, because the learning set is, in general, not regular. That is why the idea appeared of developing algorithms that partially use statistical machine learning but are essentially less sensitive to the irregularity of the training sets. This chapter focuses on tasks that partially use learning, or machine learning. According to the general concept of machine learning, a set is divided into training and test (control) subsets. For the training subset one assumes that the class labels are known for every object; using the test subsets one verifies the reliability of the classification. The reliability of the algorithms is tested by methods of cross-validation [Kohavi, 1995; Mullin, 2000]. Depending on the complexity of the classification, all objects can be divided into three groups: objects that are stable and are classified with high reliability ("easy" objects), objects belonging to the borderline area between classes ("ambiguous" objects) and objects belonging to one class but deeply immersed inside another one ("misclassified" objects). Among the objects that may cause an error, the largest part consists of the borderline objects; therefore it is important to develop an algorithm that allows one to determine the largest number of borderline objects. The principal idea of this approach consists in a pre-classification of the objects by dividing them into three functional groups. Because of this it is possible to achieve much more reliable classification results, by applying appropriate algorithms to every one of the obtained groups of objects.
3.1 The most stable objects determination
The idea of the model building is as follows. The general set of objects to be classified is divided into three functional groups. Into the first group the algorithm selects the objects with a high level of classification reliability. A high level of reliability means that the objects are classified correctly under the strongest (maximal) deviations of the parameters from the optimal ones; from the point of view of classification complexity these objects belong to the group of so-called "easy" objects. The second group includes the objects on which there is no consensus. If one selects two algorithms in a composition, they should be as dissimilar as possible, so that they are not always in consensus. If one uses a larger number of algorithms, an object belongs to the second group if there is no consensus among all the algorithms. If the consensus building uses intermediate algorithms, whose parameters lie within the intervals between the parameters of the two most dissimilar algorithms, this makes it impossible to allocate a larger number of objects on which there is no consensus. Dissimilarity between algorithms is determined on the basis of the Hamming distance between the results of two algorithms, defined as binary sequences [Kyrgyzov, 2008; Vorontsov, 2008]. In practice this also means that, in general, no new objects will be detected if one builds the consensus on a composition of more than two algorithms. The third group consists of those objects on which both algorithms err while being in consensus. The error caused by these objects cannot be reduced at all; thus the total error cannot be less than the value determined by the relative amount of objects from the third group. The next step is the reclassification of the second group of objects: a special procedure that allows us to determine the true class to which a particular object belongs.
When reclassifying the second group of objects we can also have some level of error. This error, together with the error caused by the third group, gives the total error of the whole proposed algorithm.
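The three-group split described above can be sketched as follows (our own illustration; `alg1` and `alg2` stand for the label predictions of the two most dissimilar algorithms, and the true labels are used only to identify the third group on labelled data):

```python
def split_into_groups(alg1, alg2, labels):
    """Divide objects into 'easy' (correct consensus),
    'ambiguous' (no consensus) and 'misclassified'
    (consensus on the wrong class) using two classifiers."""
    easy, ambiguous, misclassified = [], [], []
    for i, (a, b, y) in enumerate(zip(alg1, alg2, labels)):
        if a != b:
            ambiguous.append(i)      # no consensus -> reclassify later
        elif a == y:
            easy.append(i)           # correct consensus
        else:
            misclassified.append(i)  # consensus, but wrong: irreducible error
    return easy, ambiguous, misclassified

a1 = [1, 1, 2, 2, 1]
a2 = [1, 2, 2, 2, 1]
y  = [1, 1, 2, 1, 2]
print(split_into_groups(a1, a2, y))  # ([0, 2], [1], [3, 4])
```

Only the ambiguous group is passed on to the reclassification stage; the misclassified group sets the lower bound on the achievable error.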


The research carried out in this paper concerns the analysis of the statistical characteristics of the results of consensus generation by two algorithms. The objective of the analysis is the statistical regularity of the characteristics of various subsets taken by dividing the general set into blocks of different size. The probability distributions related to the consensus for the three groups of objects have been estimated non-parametrically using a Parzen window with Gauss kernels.
3.1.1 Experimental results
Figs 6-11 show the non-parametrically estimated pdfs of the probability of a correct consensus, the probability of an incorrect consensus, and the probability that consensus will not be reached.

Fig. 6. Task ”pima” from the UCI repository: non-parametric estimation of the pdf of the correct consensus of two algorithms (the solid line corresponds to blocks of 200 objects and the dotted line to blocks of 30 objects)

Fig. 7. Task ”pima” from the UCI repository: non-parametric estimation of the pdf of the incorrect consensus of two algorithms (the solid line corresponds to blocks of 200 objects and the dotted line to blocks of 30 objects)


Fig. 8. Task ”pima” from the UCI repository: non-parametric estimation of the pdf of no consensus between two algorithms (the solid line corresponds to blocks of 200 objects and the dotted line to blocks of 30 objects)

As can be seen from the figures, these distributions can be represented using one-component, two-component or multicomponent Gauss mixture models (GMM). In a multicomponent GMM the weights are determined according to the impact factors of the components. The distribution parameters (mean and variance) and the impact weights of the model are estimated using the EM algorithm. The estimation of the corresponding probability values was carried out by blocks with minimal sizes of Q = 30 and Q = 200 elements. The size of these blocks has been driven by the small-sample size, which according to various criteria ranges from 30 to 200 items. According to the standard definition of a small sample, a sample is assumed small when it is characterized by irregular statistical characteristics. As seen from all the figures, the estimates obtained by blocks with a minimal size of 30 elements (or slightly more) are irregular; this means that for these tasks a sub-sample size of 30 items or slightly more is small, as indicated by the long tails in the corresponding probability distributions. The maximum at the zero point for the two-component model is explained by a large number of zero probabilities, which is possible when there are no mistakes in the consensus of the two algorithms. Estimates of the probabilities based on average values and on the corresponding maxima of the probability distributions (maximum likelihood estimation, MLE) do not differ much, which gives an additional guarantee for the corresponding probability estimates. The obtained estimates of the probabilities of correct consensus, of incorrect consensus and of the probability that consensus will not be achieved provide an estimate of the classification complexity.
Problems and algorithms for the complexity estimation of a classification task are discussed in [Basu, 2006]. For example, the tasks "pima" and "bupa" are of about the same level of complexity, because the values of the three probabilities are approximately equal. Tab. 1 shows that all the algorithms except the proposed new one are rather sensitive to which of these two tasks, equal in classification complexity, they work with. The mathematical analysis of building compositions of algorithms is considered in detail in [Zhuravlev, 1978].


Fig. 9. Task ”bupa” from the UCI repository: non-parametric estimation of the pdf of the correct consensus of two algorithms (the solid line corresponds to blocks of 200 objects and the dotted line to blocks of 30 objects)

Figs 6-11 show graphic dependencies of the consensus results for problems taken from the UCI repository, formed at the University of California, Irvine. The data structure of the test tasks from this repository is as follows. Each task is written as a text file where the columns are the attributes of an object and each row contains the values of these attributes for one object. Thus the number of rows corresponds to the number of objects and the number of columns corresponds to the number of attributes of each object; a separate column consists of the class labels that mark each object. A lot of data within this repository is related to biology and medicine. All these tasks can also be divided according to their classification complexity: in the repository database there exists a number of tasks with strongly overlapping classes, and some of them are used in this research.

Fig. 10. Task "bupa" from UCI repository: non-parametric estimation of the pdf of the incorrect consensus of two algorithms (solid line for the set of 200 objects, dotted line for the set of 30 objects)


Reviews, Refinements and New Ideas in Face Recognition

Fig. 11. Task "bupa" from UCI repository: non-parametric estimation of the pdf of no consensus between two algorithms (solid line for the set of 200 objects, dotted line for the set of 30 objects)

Tab. 1 gives the probabilities of error obtained on the test data for different classifiers and classifier compositions. All these algorithms were verified on two tasks that are difficult from the classification point of view. For the proposed algorithm the minimal and maximal errors obtainable on the given test data are reported. The minimal error equals the consensus error of the proposed algorithm. The maximal error is calculated as the sum of the minimal error and half of the relative number of objects on which there is no consensus (the fifty-fifty principle). As seen from the table, even the maximal error is much smaller than the smallest error of all the other algorithms on the two UCI tasks. In comparison with some algorithms in the table, the minimal error of the proposed algorithm is approximately 10 times smaller. The proposed algorithm is also characterized by much greater stability of the classification error than the other algorithms, as can be seen by comparing the errors across the two UCI tasks.

Algorithm                    | bupa        | pima
-----------------------------|-------------|------------
Monotone (SVM)               | 0.313       | 0.236
Monotone (Parzen)            | 0.327       | 0.302
AdaBoost (SVM)               | 0.307       | 0.227
AdaBoost (Parzen)            | 0.330       | 0.290
SVM                          | 0.422       | 0.230
Parzen                       | 0.338       | 0.307
RVM                          | 0.333       | —
Proposed algorithm (min/max) | 0.040/0.212 | 0.041/0.203

Table 1. Error of classification for different algorithms
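The fifty-fifty construction of the maximal error can be checked directly against the reported numbers (minimal errors from Table 1, no-consensus probabilities from Tables 2 and 3):

```python
def max_error(min_error, p_no_consensus):
    """Fifty-fifty principle: objects without consensus are assumed
    to be classified correctly half of the time."""
    return min_error + 0.5 * p_no_consensus

print(round(max_error(0.041, 0.324), 3))  # pima -> 0.203
print(round(max_error(0.040, 0.344), 3))  # bupa -> 0.212
```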


         Q = 200          Q = 30
       μ        σ       μ        σ
Pc   0.635    0.024   0.611    0.064
Pe   0.041    0.006   0.046    0.013
P̄c   0.324    0.019   0.344    0.052

Table 2. Task "pima" from UCI repository

         Q = 200          Q = 30
       μ        σ       μ        σ
Pc   0.635    0.024   0.611    0.064
Pe   0.041    0.006   0.046    0.013
P̄c   0.324    0.019   0.344    0.052

Table 3. Task "bupa" from UCI repository

Tables 2 and 3 give, for the tasks from the UCI repository, the estimated probability of each object belonging to each of three functional groups. Objects on which a consensus of the most dissimilar algorithms exists ( Pc ) belong to the class of so-called "easy" objects. Objects on which both algorithms in the consensus make errors ( Pe ) belong to the class of objects that cause an uncorrectable error, which cannot be reduced at all. The last class consists of objects on which there is no consensus of the most dissimilar algorithms ( P̄c ); these also belong to the class of border objects. The tables also give the variances of the corresponding probabilities. The minimal size of the blocks, on which the estimates are built using cross-validation algorithms, changes from 30 to 200.
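Given ground-truth labels and the predictions of the two most dissimilar classifiers, the three group probabilities can be estimated as simple fractions; a minimal sketch with illustrative names:

```python
import numpy as np

def consensus_probabilities(pred_a, pred_b, y):
    """Fractions of objects with correct consensus (Pc), incorrect
    consensus (Pe) and no consensus between two classifiers."""
    agree = pred_a == pred_b
    p_c = np.mean(agree & (pred_a == y))   # correct consensus
    p_e = np.mean(agree & (pred_a != y))   # incorrect consensus
    p_n = np.mean(~agree)                  # no consensus
    return p_c, p_e, p_n

y      = np.array([0, 0, 1, 1, 1, 0])
pred_a = np.array([0, 0, 1, 0, 1, 1])
pred_b = np.array([0, 1, 1, 0, 1, 0])
p_c, p_e, p_n = consensus_probabilities(pred_a, pred_b, y)
# The three fractions always sum to 1.
```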
3.1.2 Case of three classifiers in the consensus composition
In the previous case we analysed the classifier composition consisting of the two most dissimilar algorithms. Now we build a classifier composition consisting of three algorithms. The third algorithm is chosen according to the following requirement: it has to be exactly in the middle of the two most dissimilar algorithms. This means that the Hamming distance between the third algorithm and one of the most dissimilar algorithms is equal to the distance between this "middle" algorithm and the second algorithm of the two-algorithm consensus composition. Tabs 4 and 5 give the results of comparing the two consensus compositions, the first consisting of two algorithms and the second of three algorithms. As in the previous case, the "pima" and "bupa" test tasks from the UCI repository were used.
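Selecting the "middle" third algorithm by Hamming distance can be sketched as follows (an illustrative sketch over prediction vectors, not the authors' implementation):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two classifiers' prediction vectors."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def pick_middle(preds, i, j):
    """Pick the algorithm whose Hamming distances to the two most
    dissimilar algorithms i and j are as close to equal as possible."""
    best, best_gap = None, None
    for k in range(len(preds)):
        if k in (i, j):
            continue
        gap = abs(hamming(preds[k], preds[i]) - hamming(preds[k], preds[j]))
        if best_gap is None or gap < best_gap:
            best, best_gap = k, gap
    return best

preds = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
middle = pick_middle(preds, 0, 1)  # algorithm 2 is equidistant from 0 and 1
```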


     consensus of two classifiers   consensus of three classifiers
         μ        σ                     μ         σ
Pc     0.635    0.024                 0.607     0.021
Pe     0.041    0.006                 0.0347    0.006
P̄c     0.324    0.019                 0.358     0.017

Table 4. Task "pima" from UCI repository

As seen from both tables, there is no big difference between the two cases. The consensus of two algorithms detects a slightly larger quantity of correctly classified objects, which means a slightly more reliable detection of correctly classified objects. The consensus of three algorithms detects a slightly larger quantity of objects on which there is no consensus (the third group of objects). But if the "fifty-fifty" principle is used for the objects of the third group, the overall classification error remains the same. We can also note that the variances of the two consensus compositions do not differ much from each other.

     consensus of two classifiers   consensus of three classifiers
         μ        σ                     μ         σ
Pc     0.616    0.008                 0.586     0.012
Pe     0.040    0.002                 0.037     0.002
P̄c     0.344    0.008                 0.377     0.013

Table 5. Task "bupa" from UCI repository

Fig. 12. Task "bupa" from UCI repository: non-parametric estimation of the pdf of correct consensus between two (solid line) and three (dotted line) algorithms


Fig. 13. Task "bupa" from UCI repository: non-parametric estimation of the pdf of incorrect consensus between two (solid line) and three (dotted line) algorithms

Fig. 14. Task "bupa" from UCI repository: non-parametric estimation of the pdf of no consensus between two (solid line) and three (dotted line) algorithms

Figs 12-17 give the results of consensus building for three algorithms, using the same two tasks from the UCI repository as in the case of two algorithms. From the figures corresponding to the task "bupa" we can draw the following conclusions. In comparison with the case of two algorithms, the pdfs of the pre-classified groups of objects are merely shifted, and the shapes of the curves remain approximately the same. The relative value of the shift is rather small (about 5% for the pdf of the correct-consensus probability); this shift is almost entirely conditioned by the statistical error of determining the most dissimilar algorithms. From the figures corresponding to the task "pima" we note that the differences in the shapes of the pdfs are more essential than for the previous task. This circumstance could be used for


comparison of task complexity using the value of overtraining as stability with respect to learning. Using such an approach it is possible to obtain much more precise and informative estimates of complexity from the learning-process point of view.

Fig. 15. Task "pima" from UCI repository: non-parametric estimation of the pdf of correct consensus between two (solid line) and three (dotted line) algorithms

Fig. 16. Task "pima" from UCI repository: non-parametric estimation of the pdf of incorrect consensus between two (solid line) and three (dotted line) algorithms


Fig. 17. Task "pima" from UCI repository: non-parametric estimation of the pdf of no consensus between two (solid line) and three (dotted line) algorithms

4. Specifics of using the proposed approach for problems of face recognition
Face recognition is one of the principal tasks of a large project connected with determining human behaviour and with psychoanalysis of a person based on facial expression and body movement. Systems of this type belong to the class of contactless systems. Unlike recognition systems based on fingerprints or iris images, they do not require a person to place a finger or an eye near (or on) a scanner. This is also important from the legal point of view: it is impossible to force people to put their finger on a scanner if they do not want to and if there is no criminal case, and the same holds for iris recognition systems. Taking a picture of somebody, however, is not forbidden, and a person may not even be aware that a picture of his or her face has been taken. This is really important when creating the training and test databases. Face recognition systems can be combined with hidden video cameras installed in shops, supermarkets, banks and other public places, where it is important to hide the fact of video surveillance; this can be done only with contactless recognition systems. On the other hand, facial information and mimicry can be used to determine the behaviour and psychophysical state of a person, which is important for predicting and preventing acts of terrorism. Here, information about the dynamics of facial expression and the movement of separate parts of the face is very important. Although face recognition systems have larger errors of both types than fingerprint recognition systems, iris recognition systems and others, they find many different applications because of their flexibility of installation, training and testing. In this situation it is very important to carry out research in the field of recognition probability estimation, overtraining estimation, model parameter estimation, etc., in order to find the optimal parameters of face recognition systems. To build highly reliable recognition systems it is important to use the proposed approach, which allows us to build hierarchical recognition on the basis of the division of objects into functional groups and thereby exploit the effect of preclassification.


For the decision making procedure we propose to use the notion of a fiducial interval. By the fiducial interval we understand the list of possible candidates for the classification. The fiducial interval is very useful for decision making support systems operated in the presence of a human operator: the output of the system is the list of candidates most similar to the object to be recognized, and the final decision about the object is made by the operator. The system can also work completely autonomously, using the fiducial interval for decision making. The fiducial interval contains several groups of objects, each belonging to its own class. The task is to find the group of objects that corresponds to the object to be recognised, or to decide that there is no corresponding object in the fiducial interval. The idea of the fiducial interval is as follows. The size of the fiducial interval (the number of possible candidates) has to be large enough to ensure that, if corresponding objects are in the database of the recognition system, they fall into this interval. The size of the fiducial interval corresponds to the fiducial probability: the larger the fiducial interval, the larger the fiducial probability. That is why it is convenient to link the notion of the fiducial interval with the probability that the corresponding objects fall into the list of possible candidates. The second paragraph of this chapter was devoted to the problems of forming some types of fiducial intervals.
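The fiducial interval as a list of the k most similar candidates can be sketched as follows (the similarity scores and the function name are illustrative):

```python
import numpy as np

def fiducial_interval(similarities, k):
    """Return the indices of the k most similar gallery candidates;
    this candidate list plays the role of the fiducial interval."""
    order = np.argsort(similarities)[::-1]  # descending similarity
    return order[:k].tolist()

# Similarity of a probe face to five gallery identities:
scores = np.array([0.12, 0.85, 0.40, 0.77, 0.05])
candidates = fiducial_interval(scores, 3)  # -> [1, 3, 2]
```

Increasing k enlarges the fiducial interval and hence the fiducial probability that the correct identity is in the list.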

5. Discussion and future work
In this chapter we briefly considered some approaches to the solution of such important problems as recognition reliability estimation and advanced classification on the basis of the division of objects into three functional groups. In the domain of reliability estimation there are two principal problems. The first concerns the statistical estimation of the probability of correct recognition, especially for small training sets. This is very important when we cannot obtain additional objects quickly enough to make the training set more representative, as happens when working with data that change slowly in time. Another important problem concerns the effect of overestimation in pattern classification. The value of overestimation can be found as the difference between the recognition results on the training and test sets. At the beginning of the chapter we mentioned the main problems of statistical learning theory, with overestimation as one of the principal ones. We did not pay attention to this problem in this chapter, but plan to do so in future research. Attention was paid to the problems of recognition reliability estimation. This chapter presented the results of both combinatorial and probabilistic approaches to recognition reliability estimation. As seen from the figures, an advanced analysis and estimation of the recognition results was realized for decreasing training sets, so we can forecast the recognition probability for reduced training sets using the combinatorial approach; the reliability of this forecast can be provided on the basis of the probabilistic approach. Some methods of reliability estimation were considered for certain types of classifiers, which belong to the group of so-called metric classifiers, i.e. classifiers based on dissimilarity or distance functions.
It will be interesting to extend the proposed methods to other types of classifiers, e.g. classifiers using separating hyperplanes, classifiers built on logic functions and others. It will also be interesting to consider how to express one classifier through another, or to build relations between different types of classifiers. All this could give us the possibility of using one approach to reliability estimation for any type of classifier.


In the second part of the chapter we considered the probability of each object belonging to one of three groups: a group of "easy" objects, on which the correct consensus of two algorithms is reached; a group of objects on which the two most dissimilar algorithms have an incorrect consensus; and a group of objects on which consensus is not achieved. The analysis shows that the probability distributions of these data can be presented as multi-component models, including GMM. This makes it possible to analyze the proposed algorithms by means of mathematical statistics and probability theory. From the figures and tables one can see that the probability estimates obtained by cross-validation with averaged blocks of minimal size 30 and 200 elements differ little from each other, which allows us to conclude that this method of consensus building, where the consensus consists of the most dissimilar algorithms, is quite regular and does not have the sensitivity to the samples exhibited by other algorithms that use training. As seen from the corresponding tables, the minimal classification error is almost an order of magnitude smaller than the error of the best existing algorithms, and the maximal error is 1.5 to 2 times smaller than for the other algorithms. The corresponding errors are also much more stable, both with respect to the task on which the algorithm is tested and across the series of given algorithms, whose error values have significantly larger variance. Moreover, since the minimal value of the error is quite small and stable, it guarantees stable correct classification of the objects on which consensus is reached by the most dissimilar algorithms; with other algorithms such confidence cannot be achieved. Indeed, an error value of 30-40% (as compared to 4%) gives no confidence in the results of classification.
The fact that the number of ambiguous objects selected by the two most dissimilar algorithms is smaller than the number selected by three algorithms is conditioned by the overtraining of the two most dissimilar algorithms. Future research in this domain should therefore be devoted to the problem of overtraining of the ensemble of the two most dissimilar algorithms: reducing the overtraining of the preclassification would allow us to gradually reduce the classification error and thereby achieve much more reliable classification.

6. References
Basu, M. & Ho, T. (2006). Data complexity in pattern recognition, Springer-Verlag, ISBN 1-84628-171-7, London
Bishop, C. (2006). Pattern recognition and machine learning, Springer-Verlag, ISBN 0-387-31073-8, New York
Kapustii, B.; Rusyn, B. & Tayanov, V. (2007). Mathematical model of recognition systems with small databases. Journal of Automation and Information Sciences, vol. 39, No. 10, pp. 70-80, ISSN 1064-2315
Kapustii, B.; Rusyn, B. & Tayanov, V. (2008). Features in the Design of Optimal Recognition Systems. Automatic Control and Computer Sciences, vol. 42, No. 2, pp. 64-70, ISSN 0146-4116
Kapustii, B.; Rusyn, B. & Tayanov, V. (2008). Estimation of the Influence of Information Class Coverage on Generalized Ability of the k-Nearest-Neighbors Classifier. Automatic Control and Computer Sciences, vol. 42, No. 6, pp. 283-287, ISSN 0146-4116
Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection, Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 1137-1145, Palais des Congres, Montreal, Quebec, Canada
Kyrgyzov, I. (2008). Recherche dans les bases de donnees satellitaires des paysages et application au milieu urbain: clustering, consensus et categorisation. Ph.D. thesis, Ecole Nationale Superieure des Telecommunications, Paris
Moon, T. & Stirling, S. (2000). Mathematical methods and algorithms for signal processing, Prentice-Hall, ISBN 0-201-36186-8, New Jersey
Mullin, M. & Sukthankar, R. (2000). Complete cross-validation for nearest neighbour classifiers, Proceedings of the International Conference on Machine Learning, pp. 639-646
Tayanov, V. & Lutsyk, O. (2009). Classifier Quality Definition on the Basis of the Estimation Calculation Approach. Computers & Simulations in Modern Science, Mathematical and Computers in Science and Engineering, A Series of Reference Books and Textbooks, pp. 166-171, ISSN 1790-2769
Vapnik, V. (2000). The Nature of Statistical Learning Theory (2nd ed.), Springer-Verlag, ISBN 0-387-98780-0, New York
Vorontsov, K. (2010). Exact combinatorial bounds on the probability of overfitting for empirical risk minimization. Pattern Recognition and Image Analysis, vol. 20, No. 3, pp. 269-285, ISSN 1054-6618
Vorontsov, K. (2008). On the influence of similarity of classifiers on the probability of overfitting, Proceedings of the International Conference on Pattern Recognition and Image Analysis: New Information Technologies (PRIA-9), vol. 2, Nizhni Novgorod, Russian Federation, pp. 303-306
Weinstein, E. (2011). Gauss's Inequality, In: A Wolfram Web Resource, 16.03.2011, Available from http://mathworld.wolfram.com
Zhuravlev, J. (1978). An Algebraic Approach to Recognition and Classification Problems. Problems of Cybernetics, vol. 33, pp. 5-68 (in Russian)

4
A MANOVA of LBP Features for Face Recognition
Yuchun Fang, Jie Luo, Gong Cheng, Ying Tan and Wang Dai
School of Computer Engineering and Science, Shanghai University, China

1. Introduction
Face recognition is one of the most broadly researched subjects in pattern recognition, and feature extraction is a key step in it. As an effective texture description operator, the Local Binary Pattern (LBP) feature was first introduced into face recognition by Ahonen et al. Because of its simplicity and efficiency, the LBP feature is widely applied and has become one of the benchmark features for face recognition. The basic idea of the LBP feature is to calculate the binary relation between a central pixel and its local neighborhood; the images are then described with a multi-regional histogram sequence of the LBP-coded pixels. Since most LBP patterns of images are uniform patterns, Ojala et al, 2002 proposed the Uniform Local Binary Pattern (ULBP). By discarding the direction information of the LBP feature, they further proposed the Rotation Invariant Uniform Local Binary Pattern (RIU-LBP) feature. The Uniform LBP feature partly reduces the dimension while retaining most of the image information. RIU-LBP greatly reduces the dimension of the feature, but its face recognition performance decreases drastically. This chapter mainly discusses the major factors of the ULBP and RIU-LBP features and introduces an improved RIU-LBP feature based on factor analysis. Many previous works have also endeavored to modify the LBP features.
Zhang and Shan et al, 2006 proposed the Histogram Sequence of Local Gabor Binary Pattern (HSLGBP), whose basic idea is to perform LBP coding in multiple resolutions and scales of the image, thereby enhancing robustness to variations of expression and illumination. Jin et al, 2004 handled the center pixel value as the last bin of the binary sequence; the resulting new LBP operator can effectively describe the local shape of the face and its texture information. Zhang and Liao et al, 2007a, 2007b proposed the multi-block LBP algorithm (MB-LBP), in which the mean of the pixels in the center block is compared with the mean of the pixels in each neighborhood block. Zhao & Gao, 2008 proposed the multi-directional binary pattern (MBP) algorithm, which performs LBP coding from four different directions. Yan et al, 2007 improved robustness by fusing multi-radius LBP features. He et al, 2005 argued that every sub-block contains different information and proposed an enhanced LBP feature: the original image is decomposed into four spectral images on which the Uniform LBP codes are calculated, and a waterfall model is then used to combine them into the final feature. In order to effectively extract the global and local features of face images, Wang Wei et al, 2009 proposed the LBP pyramid algorithm. Through multi-scale analysis, the algorithm


first constructs the pyramid of face images and then concatenates the histogram sequences in a hierarchical way to form the final feature. No matter how the ULBP features are modified, the blocking number, the sampling density, the sampling radius and the image resolution dominantly control the performance of the algorithms; they drastically affect the memory consumption and computational efficiency of the final feature. However, the values of these factors have to be pre-selected, and in most previous work they are decided from experience, which ignores the degree of influence of each factor; such experience values are hard to generalize to other databases. In order to seek a general conclusion, in this chapter we use the statistical method of multivariate analysis of variance (MANOVA) to study the contribution of the four factors to face recognition based on both the ULBP and RIU-LBP features. Besides, we study the correlation of the factors and explore which factors play a key role in face recognition. We also analyze the characteristics of the factors and discuss how their influence changes for different LBP features. Based on this factor analysis, we propose a modified RIU-LBP feature. The chapter is organized as follows. In Section 2, we introduce the LBP operators, the LBP features and the four major factors. In Section 3, we illustrate how MANOVA is applied to explore the importance of the four factors and present the results obtained for the two types of LBP features. Based on these analysis results, an improved RIU-LBP algorithm is introduced in Section 4, which is a fusion of multi-directional RIU-LBP features. We summarize the chapter with several key conclusions in Section 5.

2. LBP features and factors
The LBP feature is a sequence of histograms of blocked sub-images of a face image coded with an LBP operator. The image is divided into rectangular regions and histograms of the LBP codes are calculated over each of them. Finally, the histograms of all regions are concatenated into a single one that represents the face image.

2.1 Three LBP operators
Depending on the variant of the LBP operator, the obtained LBP features are of different computational complexity. Three types of LBP operators are compared in this chapter. The basic LBP operator is formed by thresholding the neighborhood pixels into binary code 0 or 1 in comparison with the gray value of the center pixel; the central pixel is then coded with these sequential binary values. Such coding, denoted LBP_{P,R}, is determined by the radius of the neighborhood R and the sampling density P. With various values of R and P, the general LBP operator can adapt to different scales of texture features, as shown in Figure 1. The order of the binary code preserves the direction information of the texture around each pixel, with 2^P variations. When there are at most 2 transitions from 0 to 1 or from 1 to 0, the binary pattern is called a uniform pattern. The Uniform LBP operator LBP_{P,R}^{u2} codes the pixel with uniform patterns and denotes all non-uniform patterns with one shared value. Its coding complexity is P² − P + 2.
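The uniform-pattern criterion (at most 2 transitions of 0/1 in the circular binary code) and the resulting count of P² − P + 2 uniform patterns can be checked with a short sketch (helper names are illustrative):

```python
def transitions(code, P):
    """Number of 0/1 transitions in the circular P-bit binary code."""
    bits = [(code >> p) & 1 for p in range(P)]
    return sum(bits[p] != bits[(p + 1) % P] for p in range(P))

def is_uniform(code, P):
    """A pattern is uniform when it has at most 2 transitions."""
    return transitions(code, P) <= 2

# For P = 8 there are P^2 - P + 2 = 58 uniform patterns:
n_uniform = sum(is_uniform(c, 8) for c in range(256))
print(n_uniform)  # 58
```

The Uniform LBP operator thus needs 58 codes for the uniform patterns plus one shared code for all the non-uniform ones.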

The Rotation Invariant Uniform (RIU) LBP operator LBP_{P,R}^{riu2} is another very popular texture operator. It neglects the order of the binary coding and codes the center pixel by simply counting the number of 1s in the neighborhood, as defined in Equation (1):


LBP_{P,R}^{riu2} = { Σ_{p=0}^{P−1} s( g(p) − g(c) ),  for uniform patterns
                   { P + 1,                           otherwise            (1)

where c is the center pixel, g(·) denotes the gray level of a pixel and s(·) is the sign function. The coding complexity of the RIU-LBP operator is P + 2.
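Equation (1) can be sketched directly in code. The function below is an illustrative implementation, taking s(x) as 1 for x ≥ 0 and assuming the P neighbor gray levels are given in circular order:

```python
def riu2_code(neighbors, center, P):
    """RIU-LBP code of one pixel (Equation 1): for uniform patterns,
    count the neighbors not darker than the center; otherwise P + 1."""
    bits = [1 if g >= center else 0 for g in neighbors]
    t = sum(bits[p] != bits[(p + 1) % P] for p in range(P))  # transitions
    return sum(bits) if t <= 2 else P + 1

# A uniform neighborhood (2 transitions) and a non-uniform one:
print(riu2_code([10, 12, 15, 9, 8, 7, 6, 20], 9, 8))   # 5
print(riu2_code([10, 5, 10, 5, 10, 5, 10, 5], 9, 8))   # 9  (= P + 1)
```

The possible codes are 0..P for uniform patterns plus the single value P + 1, giving the P + 2 coding complexity stated above.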

Fig. 1. The general LBP operator for different neighborhoods: LBP(1,4), LBP(2,4) and LBP(2,8)

2.2 Three LBP features
After the original face image is transformed into an LBP image with one of the LBP operators, the LBP image is blocked into M-by-N squares (see examples in Figure 2) to preserve the spatial structure of the face; the LBP histogram is then calculated for each square to statistically reflect the edge sharpness, flatness of the region, existence of special points and other attributes of a local region. The LBP feature is the concatenation of all M-by-N LBP histograms. Hence, the LBP feature is intrinsically a statistical texture description of the image consisting of a sequence of histograms of blocked sub-images. The blocking number and the sampling density determine the feature dimension. For the three LBP operators introduced in Section 2.1, the corresponding LBP features are denoted as LBP_{P,R}(M,N), LBP_{P,R}^{u2}(M,N) and LBP_{P,R}^{riu2}(M,N) respectively, taking the blocking parameters into consideration. Due to their different coding complexities, these three LBP features are of different dimensions, as shown in Equations (2) to (4) respectively. For the former two types of LBP feature, increasing the sampling density results in an explosion of the dimension. Examples of dimension comparison are listed in Table 1. The blocking number and sampling density are the two major factors affecting the dimension of the LBP feature.

D = (M × N) × 2^P                 (2)
D = (M × N) × (P² − P + 2 + 1)    (3)
D = (M × N) × (P + 2)             (4)


M×N / P                               | 7×8 / 8 | 7×8 / 16 | 14×16 / 8
--------------------------------------|---------|----------|----------
The general LBP: M × N × 2^P          | 14336   | 28672    | 57344
Uniform LBP: M × N × (P² − P + 2 + 1) | 3304    | 13608    | 13216
RIU-LBP: M × N × (P + 2)              | 560     | 1008     | 2240

Table 1. Dimension comparison of three LBP features (M × N denotes the blocking number, P denotes the sampling density)
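The dimension formulas (2)-(4) can be cross-checked against several cells of Table 1 in a few lines (the function name is illustrative):

```python
def lbp_dim(M, N, P, variant):
    """Feature dimension of the three LBP features (Equations 2-4)."""
    blocks = M * N
    if variant == "basic":
        return blocks * 2 ** P               # Eq. (2)
    if variant == "u2":
        return blocks * (P * P - P + 2 + 1)  # Eq. (3)
    if variant == "riu2":
        return blocks * (P + 2)              # Eq. (4)
    raise ValueError(variant)

# Reproducing some entries of Table 1:
print(lbp_dim(7, 8, 8, "basic"))    # 14336
print(lbp_dim(7, 8, 8, "u2"))       # 3304
print(lbp_dim(7, 8, 16, "u2"))      # 13608
print(lbp_dim(14, 16, 8, "riu2"))   # 2240
```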
2.3 The four factors of the LBP feature
The blocking number, the sampling density, the sampling radius and the image resolution are the four factors that determine an LBP feature. The blocking number and sampling density are two important initial parameters affecting the dimension and have attracted more attention in previous research. In addition, the blocking number and the image resolution determine the number of pixels of each sub-image, i.e. how much local information of the face each sub-image contains. If the image resolution is H × W and the blocking number is M × N, then each sub-image contains [H/M] × [W/N] pixels, where [·] denotes rounding. For example, when the image resolution is 140*160 and the blocking numbers are 3 × 4, 7 × 8, 14 × 16 and 21 × 24, the sub-images contain 1880, 400, 100 and 49 pixels respectively, as shown in Figure 2. The positions of the neighbour points of the LBP operator are decided by the sampling radius, so the value of the radius also directly affects the LBP feature.
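The sub-image size formula is easy to verify against the example values (the function name is illustrative):

```python
def subimage_pixels(H, W, M, N):
    """Pixels per sub-image for resolution H x W and blocking M x N
    ([.] is rounding, as in Section 2.3)."""
    return round(H / M) * round(W / N)

# 140*160 resolution with the blocking numbers from Figure 2:
sizes = [subimage_pixels(140, 160, M, N)
         for M, N in [(3, 4), (7, 8), (14, 16), (21, 24)]]
print(sizes)  # [1880, 400, 100, 49]
```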

Fig. 2. Comparison of the blocking number of sub-images

Among the four factors, the blocking number and the sampling density are the two factors deciding the dimension of the LBP features. For the example shown in Table 1, with different sampling densities the dimension of the LBP_{8,1}^{u2}(8,7) feature is D = 3304; when P doubles, D = 13608 for LBP_{16,1}^{u2}(8,7). Figure 2 shows that the larger the blocking number is, the higher the dimension of the features is. Such a feature inevitably costs a huge amount of memory and lowers the computation speed. Is it worth spending so much memory to


improve precision by merely a few percent? We performed some preliminary experiments to find the answer. For different values of the four factors, we compare the face recognition rate on a face database containing 2398 face images (1199 persons, 2 images each) selected from the FERET database. Experimental results are evaluated with the curve of rank versus CMS (Cumulative Matching Score), i.e. the rate of correct matches at or below a certain rank. The closer this curve is to the line CMS = 1, the better the performance of the corresponding algorithm. Figure 3 compares the recognition rates of the three LBP features. It can be observed that the Uniform LBP has very close performance to the basic LBP but much lower dimension, while the recognition rate of the RIU-LBP feature decreases significantly due to the loss of direction information. The results indicate that it is sufficient to adopt ULBP instead of the basic LBP feature, and that the direction information has a major impact on the recognition rate.
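The CMS evaluation described above can be sketched as follows (a simplified illustration, assuming the rank of the correct match is known for each probe image):

```python
import numpy as np

def cms_curve(ranks, max_rank):
    """Cumulative Matching Score: fraction of probes whose correct
    match appears at or below each rank (ranks are 1-based)."""
    ranks = np.asarray(ranks)
    return [np.mean(ranks <= r) for r in range(1, max_rank + 1)]

# Ranks of the correct match for 5 hypothetical probe images:
curve = cms_curve([1, 1, 2, 3, 5], 5)
print(curve)  # [0.4, 0.6, 0.8, 0.8, 1.0]
```

A curve that reaches 1.0 at low ranks corresponds to the "closer to the line CMS = 1" criterion used in Figures 3-5.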

Fig. 3. Comparison of recognition rate for three LBP features (blocking number 7 × 8, sampling density 8, sampling radius 2, image resolution 70*80)

Figure 4 compares the performance of the same kind of LBP feature for various values of the blocking number and sampling density. The comparison shows that higher sampling density and larger blocking numbers result in better performance. It demonstrates that these two factors affect not only the feature dimension but also the face recognition rate. Besides, the sampling radius determines the sampling neighborhood of the sub-blocks, and the image resolution and the blocking number determine the number of pixels of each sub-block.


Fig. 4. Comparison of recognition rate for different blocking numbers and sampling densities for RIU-LBP (sampling radius 2, image resolution 70*80; 7 × 8 / 4 denotes blocking number 7 × 8 and sampling density 4)


Figure 5 compares the performance of the same LBP feature under various sampling radii and image resolutions. The results show that a higher resolution and a larger sampling radius are better for face recognition. Although these two factors do not affect the feature dimension, they have a non-negligible impact on the recognition rate.


Fig. 5. Comparison of recognition rate for different sampling radii and image resolutions for RIU-LBP (with blocking number 7 × 8 and sampling density 8; 140*160/1 denotes image resolution 140*160 and sampling radius 1)

3. The importance of the four factors

As described above, when we use LBP features in face recognition, the blocking number, the sampling density, the sampling radius and the image resolution affect the recognition rate to varying degrees. In most existing research, the values of these factors are selected empirically. Ahonen et al. (2004) compared different levels of the blocking number, the sampling density and the sampling radius based on the Uniform LBP feature. Under an image resolution of 130*150, they selected blocking number 7 × 7, sampling density 8 and radius 2 as the best combination to balance recognition rate against feature dimension. Moreover, they noted that dropping the sampling density from 16 to 8 substantially reduces the feature dimension while lowering the recognition rate by only 3.1%. Later, Ahonen et al. (2006) also analyzed the effect of the blocking number on the recognition rate through several experiments, concluding that blocking number 6 × 6 is better than 4 × 4 when there is less noise, and vice versa. Chen (2008) added decision-level fusion to the LBP feature extraction method, selecting sampling density 8, radius 2, blocking number 4 × 4 and image resolution 128*128. Xie et al. (2009) proposed the LLGP algorithm, selecting image resolution 80*88 and blocking number 8 × 11 as the initial parameters. Zhang et al. (2006) proposed the HSLGBP algorithm and also discussed the size of sub-images and its relationship with the recognition rate. Wang et al. (2008) used multi-scale LBP features to describe the face image; on this basis, they discussed the relationship between the blocking number and the recognition rate, and concluded that blocks that are too large or too small hurt the recognition rate. In some other papers, the researchers fixed the size of the sub-image, which is determined jointly by the blocking number and the image resolution.
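To make concrete how the four factors enter the feature extraction, the sketch below computes a blocked LBP histogram in pure NumPy. For brevity it uses a basic 8-neighbour operator on a square ring of radius R with 256 histogram bins, standing in for the circular, interpolated operator of the literature; all function and variable names are ours:

```python
import numpy as np

def lbp_codes(img, R=1):
    """Basic 8-neighbour LBP with integer radius R. Neighbours are taken on
    a square ring (no bilinear interpolation), which keeps the sketch short."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    center = img[R:H - R, R:W - R]
    # 8 neighbours, clockwise from the top-left corner of the ring
    offsets = [(-R, -R), (-R, 0), (-R, R), (0, R),
               (R, R), (R, 0), (R, -R), (0, -R)]
    code = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[R + dy:H - R + dy, R + dx:W - R + dx]
        code |= (nb >= center).astype(np.int32) << bit
    return code

def blocked_histogram(codes, blocks=(7, 8), n_bins=256):
    """Concatenate per-block normalized histograms of the LBP codes."""
    h, w = codes.shape
    hs, ws = h // blocks[0], w // blocks[1]
    feats = []
    for by in range(blocks[0]):
        for bx in range(blocks[1]):
            patch = codes[by * hs:(by + 1) * hs, bx * ws:(bx + 1) * ws]
            hist, _ = np.histogram(patch, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# a random stand-in for a 70*80 face image
img = np.random.default_rng(0).random((70, 80))
feature = blocked_histogram(lbp_codes(img, R=2), blocks=(7, 8))
print(feature.shape)  # (14336,) = 7*8 blocks x 256 bins
```

The four factors map directly onto the parameters: image resolution is `img.shape`, the sampling radius is `R`, the blocking number is `blocks`, and a circular operator would add the sampling density as the neighbour count.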

However, several open problems cannot be answered by these empirically chosen values. How large is the impact of each of the four factors? Do they contribute equally? Are there interactions between pairs of factors? How do the parameters affect recognition? How can the performance of different LBP features be compared? To address these questions, we compare the four major factors for the two most typical LBP features, i.e. the ULBP and the RIU-LBP feature, with MANOVA. Our purpose is to explore the contribution of each factor to recognition and the correlations among the factors. The results of these studies provide important guidance for the improvement of LBP features.
3.1 MANOVA

MANOVA (multivariate analysis of variance) is an extensively applied statistical tool. For our problem, the four independent variables to be explored are the resolution of images, the blocking number, the sampling density and the sampling radius of the LBP operator in face recognition tasks. With MANOVA, we can identify whether the independent variables have notable effects and whether there exist notable interactions among them [12]. We denote the four factors as follows:

I - resolution of images
B - blocking number
P - sampling density of the LBP operator
R - sampling radius of the LBP operator

Taking the face recognition rate as the dependent variable, the total sum of squared deviations S_T is decomposed as in Equation (5):

S_T = S_B + S_P + S_R + S_I + S_{B×P} + S_{B×R} + S_{B×I} + S_{P×R} + S_{P×I} + S_{R×I} + S_E    (5)

where S_{B×P} + S_{B×R} + S_{B×I} + S_{P×R} + S_{P×I} + S_{R×I} is the sum of the interaction terms and S_E is the sum of squares of the errors. MANOVA is based on the F-test, in which a larger F value and a smaller P value indicate a more significant independent variable. Hence, the significance of each factor is evaluated by checking and comparing its F value and P value. If the P value is less than a given threshold, the factor has a significant effect (or, for a pair of factors, a notable interaction exists between them). The factor with the largest F value has the most important effect.
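A two-factor special case of this decomposition can be written out directly (a NumPy sketch assuming a balanced design; Equation (5) extends the same idea to four factors and all pairwise interactions):

```python
import numpy as np

def two_way_anova(y):
    """y has shape (a, b, n): the dependent variable (e.g. recognition rate)
    for a levels of factor A, b levels of factor B, n replicates per cell."""
    a, b, n = y.shape
    grand = y.mean()
    mean_a = y.mean(axis=(1, 2))               # level means of factor A
    mean_b = y.mean(axis=(0, 2))               # level means of factor B
    mean_cell = y.mean(axis=2)                 # cell means
    ss_a = b * n * ((mean_a - grand) ** 2).sum()
    ss_b = a * n * ((mean_b - grand) ** 2).sum()
    ss_ab = n * ((mean_cell - mean_a[:, None]
                  - mean_b[None, :] + grand) ** 2).sum()
    ss_e = ((y - mean_cell[:, :, None]) ** 2).sum()
    ss_t = ((y - grand) ** 2).sum()            # = ss_a + ss_b + ss_ab + ss_e
    mse = ss_e / (a * b * (n - 1))
    f_a = (ss_a / (a - 1)) / mse               # compare to F(a-1, ab(n-1))
    f_b = (ss_b / (b - 1)) / mse
    f_ab = (ss_ab / ((a - 1) * (b - 1))) / mse
    return ss_t, (ss_a, ss_b, ss_ab, ss_e), (f_a, f_b, f_ab)
```

The returned F statistics are exactly the quantities whose magnitudes (and associated P values) rank the significance of the factors and their interaction.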
3.2 Experiment design of factors

We use the same face database as described in Section 2.3 for MANOVA. As analyzed in Section 2.3, the basic LBP feature has a much higher dimension than the ULBP feature but similar performance, so we conduct the experiments on the ULBP and RIU-LBP features only. We set three or four levels for each factor, as shown in Table 2. With the RIU-LBP feature, 108 sets of experimental data were obtained; with the ULBP feature, 81 sets were obtained (the blocking-number level 21 × 24 is omitted because of its excessive computational complexity).

         B         P    R    I
Level1   3 × 4     4    1    35*40
Level2   7 × 8     8    2    70*80
Level3   14 × 16   16   3    140*160
Level4   21 × 24   —    —    —

Table 2. Different levels of the four factors in the experiments
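The feature dimensions implied by the levels in Table 2 follow directly from the blocking number and the per-block histogram size, using the standard bin counts of P(P-1)+3 uniform patterns for ULBP and P+2 for RIU-LBP (the helper names below are ours):

```python
def ulbp_dim(blocks, P=8):
    # uniform patterns: P*(P-1) + 3 bins per block (59 for P = 8)
    return blocks[0] * blocks[1] * (P * (P - 1) + 3)

def riu_lbp_dim(blocks, P=8):
    # rotation-invariant uniform patterns: P + 2 bins per block
    return blocks[0] * blocks[1] * (P + 2)

print(ulbp_dim((7, 8)))     # 56 blocks x 59 bins = 3304
print(riu_lbp_dim((7, 8)))  # 56 blocks x 10 bins = 560
```

This is also why the 21 × 24 blocking level becomes expensive for ULBP (504 blocks × 59 bins = 29736 dimensions) while staying moderate for RIU-LBP.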
3.3 Analysis of the factors of RIU-LBP
3.3.1 The significance and interaction

We first analyze the independent influence of the four factors and the significance of their interactions. Table 3 shows the results based on the RIU-LBP feature; its rows are sorted in descending order of F value. The first part of Table 3 shows the independent effects of the four factors. All P values are less than 0.05, meaning that all four factors have significant effects on the recognition rate. Ranked from largest to smallest impact, they are the blocking number, the sampling radius, the image resolution and the sampling density. In particular, the F value of the blocking number is far greater than those of the other three factors, which reflects the importance of the blocking number in face recognition, while the F value of the sampling density is much smaller than the others, indicating the weakest influence.

Source          Df    Sum Sq   Mean Sq   F value   P value
B                3    1.509    0.503     900.683
R                2    0.464    0.232
I                2    0.381    0.190
P                2    0.048    0.024
(interaction)    6    0.067    0.011
(interaction)    4    0.043    0.010
(interaction)    6    0.021    0.004
(interaction)    6    0.011    0.003
(interaction)    6    0.007    0.002
(interaction)    6    0.006    0.001
Error           68    0.038
Total          107    2.594
