In this paper, we examine the role of feature selection in face recognition from the perspective of sparse representation. We cast the recognition problem as finding a sparse representation of the test image features with respect to the training set. The sparse representation can be accurately and efficiently computed by ℓ¹-minimization. The proposed simple algorithm generalizes conventional face recognition classifiers such as nearest-neighbor and nearest-subspace. Using face recognition under varying illumination and expression as an example, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficient and whether the sparse representation is correctly found. We conduct extensive experiments to verify this new approach using the Extended Yale B database and the AR database. Our thorough evaluation shows that the proposed algorithm achieves accurate recognition on face images with varying illumination and expression, using conventional features such as Eigenfaces and facial parts. Furthermore, unconventional features such as severely downsampled images and random projections perform almost equally well as the feature dimension increases. The differences in performance between different features become insignificant once the feature-space dimension is sufficiently large.
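The classification scheme the abstract describes can be sketched in a few lines: stack the training feature vectors as columns of a matrix, recover a sparse coefficient vector for the test sample by ℓ¹-minimization, and assign the class whose training columns best reconstruct the sample. The sketch below is illustrative only, not the paper's implementation; it uses iterative soft-thresholding (ISTA) as a simple stand-in ℓ¹ solver, and the `lam` and iteration-count parameters are assumptions chosen for the toy example.

```python
import numpy as np

def ista_l1(A, y, lam=0.01, n_iter=500):
    """Approximately solve min_x 0.5*||Ax - y||^2 + lam*||x||_1
    via iterative soft-thresholding (a stand-in l1 solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

def src_classify(A, labels, y, lam=0.01):
    """Sparse-representation classification: pick the class whose
    training columns best reconstruct y from the sparse code."""
    x = ista_l1(A, y, lam)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)  # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)

# Toy demo: two classes drawn from different low-dimensional subspaces.
rng = np.random.default_rng(0)
d, n_per = 30, 10
A0 = rng.normal(size=(d, 3)) @ rng.normal(size=(3, n_per))  # class-0 samples
A1 = rng.normal(size=(d, 3)) @ rng.normal(size=(3, n_per))  # class-1 samples
A = np.hstack([A0, A1])
A /= np.linalg.norm(A, axis=0)             # unit-norm columns
labels = np.array([0] * n_per + [1] * n_per)
y = 0.7 * A[:, 0] + 0.5 * A[:, 3]          # test sample from class 0's span
pred = src_classify(A, labels, y)
```

Note that the residual-based decision rule is what lets the method subsume nearest-neighbor (sparse code concentrated on one column) and nearest-subspace (code spread over one class's columns) as special cases.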



