FACE RECOGNITION BASED ACCESS CONTROL SYSTEM

ABSTRACT
We present an approach to controlling access to a strong room based on face recognition. The
automatic recognition of human faces presents a significant challenge to the pattern recognition community: faces are very similar to one another, with only minor differences from person to person, and variations in lighting, facial expression, and pose make face recognition one of the more difficult problems in pattern analysis. Our approach treats face recognition as a two-dimensional recognition
problem, taking advantage of the fact that faces are normally upright and may therefore be described by a small set of 2D characteristic views. In the proposed system, when a passive infrared sensor located at the door detects a person, a web camera captures an image of the person's face. This test image is projected into the feature space of the stored database of training images, called the eigenspace. The face space is defined by the eigenfaces, which are the set of eigenvectors of the face images. The projected test image is compared with the stored, eigenspace-projected feature database: the L2 norm between the test image and each database image is computed and compared against a threshold. If the person's face matches a database image, access is granted by opening the door; otherwise access is denied and an alarm is generated. When the recognized person leaves the strong room, a second passive infrared sensor, located inside the room, senses the exit and the door is locked.

INTRODUCTION
Biometric identification systems use pattern recognition techniques to identify people by characteristics such as fingerprints, retinal patterns, and iris patterns. However, these techniques are not always convenient to use, for example in bank transactions and entry into secure areas, because they are intrusive both physically and socially: the user must position his or her body relative to the sensor and then pause for a moment to be recognized. Machine recognition of faces has several applications, ranging from static matching of controlled photographs, as in mug-shot matching and credit-card verification, to surveillance video. These applications have different constraints in terms of processing complexity and thus present a wide range of technical challenges. The most
common use of such a system is at the entrance of places such as army bases, banks, and other institutions for security purposes. Face recognition from video and voice recognition have a natural place in next-generation smart environments: they are unobtrusive, usually passive, do not restrict user movement, and are low-power and inexpensive. Developing a computational model of face recognition is quite difficult, because faces are complex, multidimensional, and meaningful visual stimuli; they form a natural class of objects. Face recognition is a very high-level task for which computational approaches can only suggest broad constraints on the corresponding neural activity.
The method used here is a low-dimensional procedure for the characterization of human faces based on Principal
Component Analysis (PCA), also known as the Karhunen-Loeve (KL) transform or eigenspace projection, which seeks the directions in the input space along which most of the image variation lies. The eigenface method is a practical approach to face recognition that greatly reduces the dimensionality of an image in a short time.
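As a minimal sketch of the PCA characterization described above (illustrative NumPy code, not the system's actual implementation), `pca_basis` finds the directions of greatest image variation directly from the pixel-space covariance:

```python
import numpy as np

def pca_basis(images, p):
    """Top-p principal directions (eigenfaces) of a set of faces.

    images: array of shape (n_images, n_pixels), one flattened
    face per row. Returns the mean face and a (n_pixels, p)
    matrix whose columns are the orthonormal eigenfaces.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # Pixel-space covariance; its leading eigenvectors are the
    # directions along which most of the image variation lies.
    cov = centered.T @ centered / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending order
    order = np.argsort(eigvals)[::-1][:p]    # keep the p largest
    return mean_face, eigvecs[:, order]
```

Note that this direct method forms an n_pixels × n_pixels covariance matrix, which becomes expensive for high-resolution images; that cost is what the snapshot method mentioned later avoids.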
II. System Block Diagram
Fig. 1 shows the block diagram of the system, which consists of the following units.
1. PIR 1:
This passive infrared sensor (PIR1), located at the entrance of the strong room, senses the far-infrared radiation emitted by the human body (peak wavelength around 9-10 micrometres) and, on detecting the presence of a person, generates a digitally compatible signal for the parallel port of the personal computer.
2. Web cam:
On receiving the appropriate signal from the PIR1 sensor, the computer activates the web camera to capture a picture of the individual's face. This image serves as the test image in the face recognition algorithm.
3. Personal computer:
This is the main coordinating and controlling unit of the system. The computer monitors the status of the PIR1 and PIR2 sensors at the parallel port. When a person is detected, the web camera captures his or her image, and the computer runs the face recognition algorithm on the picture. Based on the result, it takes the necessary action: activating the stepper motor to open the door on a match, or sounding the alarm on no match, preventing the entry of unauthorized persons.
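The decision logic the computer applies on each detection can be sketched as below. This is a hypothetical outline: the recognizer, door motor, and alarm I/O are abstracted behind stand-in callables, none of whose names come from the report.

```python
def handle_detection(test_image, recognize, open_door, sound_alarm):
    """On a PIR1 trigger: run recognition on the captured face,
    then either open the door (match) or sound the alarm (no match).

    recognize/open_door/sound_alarm are stand-ins for the real
    parallel-port, stepper-motor, and alarm routines.
    """
    if recognize(test_image):
        open_door()
        return "opened"
    sound_alarm()
    return "alarm"
```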
4. PIR 2:
This is also a passive infrared sensor; it detects the exit of the individual from the strong room and signals the personal computer, which in turn drives the stepper motor to lock the door and turns off the lighting system.
5. Stepper motor:
The stepper motor opens and closes the door. When the person is recognized as one of the database images, the door is opened to allow entry. When a person leaves the room, the stepper motor rotates in the anticlockwise direction to close the door.

Fig. 1: Block diagram of the access control system.
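A door-drive sketch under assumed wiring: a four-coil full-step sequence, run forward to open the door and in reverse (anticlockwise) to close it. The coil order shown is illustrative; the actual order depends on the motor and driver used.

```python
# One bit per motor coil; this particular coil order is an assumption.
FULL_STEP = [0b1000, 0b0100, 0b0010, 0b0001]

def step_sequence(n_steps, clockwise=True):
    """Coil patterns for n_steps full steps: clockwise to open
    the door, anticlockwise (reversed sequence) to close it."""
    seq = FULL_STEP if clockwise else FULL_STEP[::-1]
    return [seq[i % len(seq)] for i in range(n_steps)]
```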

III. Face Recognition Algorithm
The present investigation is concerned with the general problem of characterizing, identifying, and distinguishing individual patterns drawn from a well-defined class of patterns. The treatment presented here is
based on a method known as the Karhunen-Loeve expansion in pattern recognition, and as factor or principal component analysis in the statistical literature. The applications of this procedure, especially in the analysis of signals in the time domain, are extensive, and no attempt is made to cite those studies. We demonstrate that any particular face can be economically represented in terms of a best coordinate system of so-called eigenpictures: the eigenfunctions of the averaged covariance of the ensemble of faces. This is an information-theoretic approach to coding and decoding face images, emphasizing the significant local and global features.
Such features may or may not be directly related to our intuitive notion of facial features. The original pixel space of an image is just one of infinitely many spaces in which the image can be examined; our specific subspace is the one spanned by the eigenvectors of the covariance matrix of the training data, which maximizes the variance captured among the images. Although some details may vary, there is a basic algorithm for identifying images by projecting them into this subspace. First the eigenspace is created and all the training images are projected onto it; the basis vectors are called eigenfaces. Each test image is then projected onto this subspace and compared with all the training images by a similarity or distance measure, and the training image found to be most similar (closest) to the test image identifies it. To give some idea of the data compression gained from this procedure: a fairly acceptable picture of a face requires the specification of gray levels at 2^14 pixel locations, whereas we show, through actual reconstruction, that roughly 40 numbers giving the admixture of eigenpictures characterize a face to within 3% error.
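The compression claim above can be made concrete with a small sketch (illustrative code, assuming an orthonormal eigenface basis is already available): a face is reduced to its admixture weights and then reconstructed from those weights alone.

```python
import numpy as np

def compress(face, mean_face, eigenfaces):
    """Represent a flattened face by its admixture of eigenfaces
    (e.g. ~40 weights instead of 2^14 gray levels) and reconstruct
    an approximation from those weights alone.

    eigenfaces: (n_pixels, p) matrix with orthonormal columns.
    """
    weights = eigenfaces.T @ (face - mean_face)      # projection
    reconstruction = mean_face + eigenfaces @ weights
    return weights, reconstruction
```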
Eigen space projection, also known as Karhunen- Loeve (KL) transform or principal component analysis (PCA), projects images into a subspace such that the first orthogonal dimension of this subspace captures the greatest amount of variance among the images and the last dimension of this subspace captures the least amount of variance among the images.
Two methods of creating the eigenspace are examined: the original method, and a method designed for high-resolution images, known as the snapshot method, which reduces the computational complexity.
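A sketch of the snapshot method (standard PCA algebra, not code from the report): instead of diagonalizing the huge n_pixels × n_pixels covariance, diagonalize the small n_images × n_images matrix A A^T and lift its eigenvectors back to pixel space.

```python
import numpy as np

def snapshot_eigenfaces(images, p):
    """Snapshot method, useful when n_images << n_pixels.

    If A holds the centered images as rows, the eigenvectors v of
    the small matrix A A^T (n x n) yield the eigenfaces of the
    large covariance A^T A (d x d) as A^T v, after normalization.
    """
    mean_face = images.mean(axis=0)
    A = images - mean_face                  # shape (n, d)
    small = A @ A.T                         # n x n, cheap to diagonalize
    vals, vecs = np.linalg.eigh(small)      # ascending order
    order = np.argsort(vals)[::-1][:p]      # keep the p largest
    eigenfaces = A.T @ vecs[:, order]       # lift to pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    return mean_face, eigenfaces
```

The lifted vectors are mutually orthogonal because (A^T v_i) · (A^T v_j) = v_i^T (A A^T) v_j, which vanishes for distinct eigenvectors of A A^T.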
This approach to face recognition involves the following initialization operations.
1. Acquire an initial set of face images (the training set).
2. Calculate the eigenfaces from the training set, keeping only the P eigenfaces that correspond to the highest eigenvalues. These P eigenfaces define the face space. As new faces are encountered, the eigenfaces can be updated or recalculated.
3. Calculate the corresponding distribution in P-dimensional weight space for each known individual, by projecting his or her face images onto the "face space".
Having initialized the system, the following steps are then used to recognize new faces.
1. Calculate a set of weights based on the input image and the P eigenfaces by projecting the input image onto each of the eigenfaces.
2. Determine if the image is a face at all by checking to see if the image is sufficiently close to “face space”.
3. If it is a face, classify the weight pattern as either a known person or as unknown.
4. (Optional) Update the eigenfaces and/or weight pattern.
5. (Optional) If the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate it into the set of known faces.
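Steps 1-3 above can be sketched as follows (an illustrative implementation; both threshold values are tuning parameters that the report does not specify):

```python
import numpy as np

def classify(face, mean_face, eigenfaces, known_weights,
             face_threshold, identity_threshold):
    """Project the input onto the eigenfaces, test its distance to
    face space, then find the nearest known weight pattern (L2 norm).

    eigenfaces: (n_pixels, p), orthonormal columns.
    known_weights: (n_known, p), one weight pattern per individual.
    """
    phi = face - mean_face
    w = eigenfaces.T @ phi                        # step 1: weights
    residual = phi - eigenfaces @ w               # distance to face space
    if np.linalg.norm(residual) > face_threshold:
        return "not a face"                       # step 2
    dists = np.linalg.norm(known_weights - w, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] < identity_threshold else "unknown"  # step 3
```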