Face Recognition is the ability of a computer to scan, store, and recognize human faces in order to identify people. One of its main goals is to understand the complex human visual system and how humans represent faces in order to discriminate between different identities with high accuracy. This is made possible by complex algorithms that compare observed faces with a database. It is an increasingly popular area of research in computer vision and is known as one of the most successful applications of image analysis and understanding.
When a strong demand arose for user-friendly systems that secured people's assets and protected their privacy without losing their identity in a sea of numbers, scientists turned their attention and studies toward biometrics. Biometrics is an automatic process that identifies a person based on behavioral and physiological characteristics. There are many different biometric systems, such as signature, fingerprint, and voice. Of all of these, facial recognition has become the most universal, collectable, and accessible.
In 1964 and 1965, Woody Bledsoe, Helen Chan Wolf, and Charles Bisson, the pioneers of automated facial recognition, created the first semi-automated system. It was still considered a man-machine system because a human had to extract the coordinates of a set of features (such as the eyes, ears, nose, or mouth) from a photograph before the computer calculated distances and ratios to a common reference point. From these coordinates, a list of 20 different distances, such as pupil to pupil, was computed, compared to a database previously configured and stored in the computer, and the closest matches were returned. Although the system could compare 40 pictures per hour, it failed because it is highly unlikely that any two pictures will match in head rotation, tilt, lean, and scale.

In the 1970s, Goldstein, Harmon, and Lesk attempted to automate the recognition using a system of 21 subjective markers such as hair color and lip thickness. This proved even harder to automate, because many of the measurements still had to be made manually.

In 1973, Fischler and Elschlager measured the features from the previous experiments using templates of different pieces of the face, which were then mapped onto a global template. This approach also failed: continued research found that these features do not contain enough unique data to represent an adult face.

The "connectionist" approach sought to classify the human face using a combination of a range of gestures and a set of identifying markers. It was implemented using two-dimensional pattern recognition and neural-net principles (a neural net is a real or virtual device, modeled after the human brain, in which several interconnected elements process information simultaneously, adapting and learning from past patterns).
Although this seemed promising, it required a very large number of training images to achieve decent accuracy and has yet to be implemented on a large scale.

In 1988, at Brown University, Kirby and Sirovich, the pioneers of the eigenface approach, applied Principal Component Analysis, a standard linear-algebra technique. This was something of a milestone because it showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image. Many researchers have since built and expanded on this basic idea.

In 1991, Turk and Pentland discovered that, when using the eigenface technique, the residual error could be used to detect faces in images. This discovery enabled reliable real-time automated face recognition systems. Although the process was somewhat constrained by environmental factors, it created a tremendous amount of interest in furthering the development of automated face recognition technologies.

At the Super Bowl in January 2001, the technology first captured the public's attention when surveillance images were compared to a database of digital mugshots in the name of increased security. All of these events have made face recognition systems one of the most popular and fastest-rising technologies around.
Face Detection is the first stage, and it is necessary for Face Recognition to work. Given any image, Face Detection determines whether there is a human face and, if so, where the face or faces are. Face Recognition is the second stage and is divided into two main parts:

Face Identification: Given a face image that belongs to a person in a database, the system tells whose image it is.

Face Verification: The system checks whether the face image really belongs to the person it is claimed to be in the database.

The difference between Detection and Recognition is that Detection is a two-class classification (face vs. non-face) while Recognition is a multi-class classification (one person vs. all the others).
The Face Recognition Process is accomplished in five steps, no matter what algorithm is used. The first three steps belong to the Face Detection stage, while the last two deal with Face Recognition.

The first step is Image Acquisition. An existing photograph is digitally scanned, or a live picture of the subject is captured.

The second step is Image Preprocessing. The image is enhanced to bring out detail that is obscured, or to highlight certain features of interest, which improves the recognition performance of the system. Histogram equalization is used to accomplish this: it enhances important features by modifying the contrast of the image and reducing noise, resulting in an improved-quality image.

The third step is Face Detection. This stage determines the sizes and locations of human faces in the image. It detects facial features and ignores everything else, such as the surroundings. This is possible because the device's software contains generalized patterns of what a face looks like and uses them to locate faces.

The fourth step is Feature Extraction. This step composes a feature vector that represents the face image well; the main goal is to extract the relevant data from the specific image. The different methods for facial recognition, discussed later, are used here to extract the identifying features of a face. The step is divided into two categories:

Holistic feature category: deals with the input face image as a whole.

Local features category: tries to automatically locate specific facial features (eyes, nose, mouth) based on the known distances between them.

A template is then composed and is ready to be compared to the database.

Finally, the fifth step is Declaring a Match. The template generated in the previous step is compared to the templates in a database of known faces.
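The histogram equalization used in the preprocessing step can be sketched in a few lines of numpy. This is a minimal illustration, not a production implementation; the function name and the synthetic test image are my own for demonstration.

```python
import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image.

    Spreads the pixel intensities over the full 0-255 range,
    boosting contrast in under- or over-exposed face images.
    """
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()                 # cumulative distribution of intensities
    cdf_min = cdf[cdf > 0][0]           # first nonzero CDF value
    total = image.size
    # Map each intensity through the normalized CDF (a lookup table).
    lut = np.clip(np.round((cdf - cdf_min) / (total - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[image]

# A tiny low-contrast "image" with intensities confined to [100, 110):
img = np.random.default_rng(0).integers(100, 110, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
```

After equalization the darkest pixels map to 0 and the brightest to 255, so the narrow intensity band is stretched across the whole range.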
The main purpose of these methods is to reduce a facial image containing thousands of pixels before making comparisons. This is done by taking a face image and transforming it into a space that is spanned by basis image functions.
In Direct Correlation, two images are superimposed and the correlation between corresponding pixels is computed for different alignments.
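For a single alignment, the per-pixel correlation described above reduces to the normalized correlation coefficient of the two images treated as vectors. A minimal sketch (the function name and toy images are assumptions for illustration; a full system would repeat this over shifted alignments):

```python
import numpy as np

def correlation_score(a, b):
    """Normalized correlation between two equal-size grayscale images.

    Both images are flattened, mean-centered, and compared as vectors.
    Returns a value in [-1, 1]; 1 means a perfect linear match.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
face = rng.random((32, 32))
same = face + 0.01 * rng.random((32, 32))   # nearly identical image
other = rng.random((32, 32))                # unrelated image
```

A near-duplicate image scores close to 1, while an unrelated image scores near 0, which is the basis for declaring a match.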
PCA is a data-reduction method that finds an alternative set of parameters for a set of raw data or features such that most of the variability in the data is compressed into the first few parameters. The transformed PCA parameters are orthogonal (uncorrelated, lying at right angles to one another). PCA diagonalizes the covariance matrix, and the resulting diagonal elements are the variances of the transformed PCA parameters. A face image defines a point in a high-dimensional image space, but different face images share a number of similarities with each other and can be described by a relatively low-dimensional subspace. Images are therefore projected into an appropriately chosen subspace of eigenfaces, and classification is performed by similarity computation (distance).
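The eigenface projection can be sketched with numpy's SVD, which yields the same principal directions as diagonalizing the covariance matrix. This is a toy illustration under assumed names and random stand-in data, not real face images:

```python
import numpy as np

def eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix, one flattened face per row.

    Returns the mean face and the top-k principal components
    ("eigenfaces"), computed via SVD of the mean-centered data.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are orthonormal directions of decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Compress a face image to k coefficients in eigenface space."""
    return components @ (face - mean)

rng = np.random.default_rng(2)
faces = rng.random((20, 64))        # 20 stand-in "faces" of 64 pixels each
mean, comps = eigenfaces(faces, k=5)
codes = np.array([project(f, mean, comps) for f in faces])
```

Each face is now described by 5 numbers instead of 64 pixels; recognition compares these short codes by distance, which is exactly the data reduction the text describes.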
In Elastic Graph Matching, each face is represented by a set of feature vectors positioned on the nodes of a coarse 2D grid placed over the face. Each of these vectors comprises the responses of a set of 2D Gabor wavelets differing in orientation and scale. Two faces are then compared by matching and adapting the grid of a test image to the grid of a reference image, where both grids have the same number of nodes. The elasticity of the test grid allows it to accommodate, to some extent, face distortions and changes in viewpoint. The quality of the resulting match is evaluated using a distance function.
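The node feature vectors can be illustrated with a small Gabor wavelet bank in numpy. This is a simplified sketch (real systems use complex-valued wavelets at several scales; the function names and parameter values here are assumptions for illustration):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2D Gabor wavelet: a sinusoid of the given wavelength and
    orientation theta, windowed by a Gaussian envelope.

    A bank of these kernels, varying theta and wavelength, produces
    the feature vector ("jet") attached to each grid node.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate the carrier axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_rot / wavelength)
    return envelope * carrier

def jet(patch, kernels):
    """Response of every kernel to one image patch: the node's feature vector."""
    return np.array([float((patch * k).sum()) for k in kernels])

# A bank of 4 orientations at one scale, and a random stand-in patch:
kernels = [gabor_kernel(15, wavelength=6, theta=t, sigma=4)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
patch = np.random.default_rng(3).random((15, 15))
v = jet(patch, kernels)
```

Comparing two faces then amounts to comparing the jets at corresponding grid nodes with a distance function, while allowing the node positions to shift elastically.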
The purpose of Face Geometry is to model a human face in terms of particular face features (such as eyes, mouth, etc.) and the geometry of the layout of these features. Face Recognition is then a matter of matching feature constellations.
In this method, the three-dimensional geometry of the human face is used. 3D face recognition can achieve significantly higher accuracy because it measures the geometry of the rigid features of the face. This helps avoid 2D pitfalls such as lighting, changes in facial expression, make-up, or head orientation. Another benefit of the 3D method is the ability to transform the head into a known view. The limitation, however, is that acquiring 3D images requires a range camera, or else a 3D model must be created with significant post-processing.
There are many advantages to facial recognition that create benefits for everyone. Face recognition systems are the least intrusive from a biometric-sampling point of view because they require no contact with, and no awareness by, the subject. With the thousands of face photos from passports and driver's licenses, the databases are always growing, and because of the possession authentication protocol there is wide public acceptance. This biometric works with photograph databases, videos, and other image sources, and with screening, unwanted individuals can be located in an area. With facial recognition, security improves and people feel a sense of safety. The disadvantages include the fact that, in automated face recognition systems, a face needs to be well lit by controlled light sources, along with many other technical challenges.
Researchers at Carnegie-Mellon University warn that it is possible to identify strangers and gain their personal information, perhaps even their social security numbers, by using face recognition software and social media profiles. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image for social networks like Facebook and Google Plus or something more official from a company website or a college athletic portrait.
Face Recognition By: Deirdre Keane and Kathleen Reilly
Face Recognition: "the ability of a computer to scan, store, and recognize human faces for use in identifying people." http://dictionary.reference.com/browse/face+recognition
Biometrics
"Biometrics consists of methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits." Examples of biometric systems are:
• Signature
• Retina
• Fingerprints
• Hand Geometry
• Voice
• Iris
• Ear Geometry
• Face
The History of Face Recognition
• 1964-1965: Bledsoe, Wolf, and Bisson created the first semi-automated system
• 1970s: Goldstein, Harmon, and Lesk used markers to automate the recognition
• 1973: Fischler and Elschlager took a face template and mapped it onto a global template
• "Connectionist" approach: uses gestures and identifying markers
• 1988: Kirby and Sirovich pioneered the eigenface approach
• 1991: Turk and Pentland enabled a reliable real-time automated face recognition system
• 2001: Face recognition first captured the public's attention
Face Detection → Face Recognition (Face Identification and Face Verification)
Face Recognition Process: Image Acquisition → Image Preprocessing → Face Detection → Feature Extraction → Declaring a Match
Face Recognition Methods
• Function-Based Methods: Direct Correlation, Principal Component Analysis, Elastic Graph Matching
• Geometry-Based Methods: Face Geometry, Three-Dimensional Face Recognition
Benefits
1. Checks for criminal records
2. Enhances security by using surveillance cameras
3. Finding lost children via public cameras
4. Knowing in advance if a VIP is entering
5. Detection of a criminal at a certain place
6. Pattern recognition
7. Used in science to compare an entity with a set of entities

Limitations
1. Quality of database images
2. Quality of captured images
3. Database sizes
4. Ineffectiveness in public places
5. False identifications
6. Ethical issues surrounding facial recognition technology
Question: Name something face recognition is used for in our generation today.
Face recognition in marketing
"Once the stuff of science fiction and high-tech crime fighting, facial recognition technology has become one of the newest tools in marketing, even though privacy concerns abound." http://www.youtube.com/watch?v=OXz6fr5HPBk
Carnegie-Mellon Facebook study
Goal: to show "that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person." The results were "made possible by the convergence of face recognition, social networks, data mining and cloud computing - that [is referred] to as augmented reality."
Tests performed:
• On-campus identification via collected Facebook pictures
• Dating-site profiles matched to Facebook profiles
In the future, facial recognition could be used to:
• Identify strangers
• Get social security numbers
• Gain personal information
Facial Recognition's Major Risk: Privacy
Handing stalkers their material: "and its new face-recognition feature could become the latest example of a seemingly innocuous development morphing into a serious threat to the privacy of our (visual) data."
Cloud-Based Facial Recognition
PittPatt relies on finding publicly available pictures of you online, whether it's a profile image for social networks (Facebook, Google Plus), a company website, or a college athletic portrait. PittPatt was just recently acquired by Google.
Facial Recognition in Smartphones: Augmented ID
All of this data can be accessed just by aiming your mobile phone at someone's face.
Questions
1) What application has Google consistently used face recognition on since 2009? Picasa
2) This local place has the same kind of photo database as Facebook, and some of them have wisely incorporated facial recognition technology to prevent identity theft (and identity creation)... Your local DMV
3) When was face recognition introduced to the world?