Biometric Security
Lecture 8: Traditional Uni-Modal Face Recognition
Biometrics Research Centre (UGC/CRC)

OUTLINE
• Introduction
• Face Recognition System
• Face Detection & Location
• Face Normalization
• Feature Extraction & Recognition
• Face Recognition Application
• Face Recognition Problems

Introduction (1): Current State
• Face is the most common biometric. Using the whole face for automatic identification is a complex task because its appearance is constantly changing.
• One effective approach may employ rule-based logic and a neural network for the image classification process. The first face system was introduced in 1992.
• Feature set: facial geometry - size of the eyes, distance from eye to mouth, middle of mouth to chin, side of eye to cheek, size of the mouth, radius vectors and feature points.

Introduction (2): Why Face Recognition?
• Non-intrusive
  - More natural: it does not restrict user movement, so it is socially more acceptable
  - This is how human beings recognize each other
• Less expensive to set up
  - Hardware is getting cheaper
  - Many legacy uses/databases of face images are available
  - Easy to construct a new facial image with or without the consent of the people
• Fight terrorism
  - Increasing need after the September 11 events; spot terrorists in public
  - Requires an automated face detection system for suspects in sensitive areas, e.g. airports, military facilities
• FR analyzes facial characteristics. It requires a digital (web) camera (low quality is enough). This technique has attracted considerable interest.
Introduction (3)
• Uses distinctive features of the human face to verify or identify individuals.
• Accuracy: the best performance had a 90% verification rate at a FAR of 1%. (However, when the face is captured outdoors, at the same 1% FAR the verification rate drops to only 50%!)
• Face Recognition Vendor Tests (FRVT): http://www.frvt.org/
Introduction (4)
• Face recognition is the identification or verification of a person solely from the facial appearance.
• Sources of face images:
  - Still image
  - Video
  - Color or black and white
  - Non-visible wavelengths: facial thermogram
  - 3D techniques: stereo, structured light

Introduction (5): Basic Notions
• Facial recognition analyzes the characteristics of a person's face images input through a digital video camera.
• It measures the overall facial structure, including the distances between the eyes, nose, mouth, and jaw edges.
• These measurements are retained in a database and used as a comparison when a user stands before the camera.
• This biometric has been widely, and perhaps wildly, touted as a fantastic system for recognizing potential threats (whether terrorist, scam artist, or known criminal), but so far it has been unproven in high-level usage.

Introduction (6): How it Works
• The user faces the camera, standing about two feet from it.
• The system locates the user's face and performs matches against the claimed identity or the facial database.
• The user may need to move and reattempt the verification, depending on his facial position.
• The system usually comes to a decision in less than 5 seconds.
• To prevent a fake face or mold from faking out the system, many systems now require the user to smile, blink, or otherwise move in a way that is human before verifying.

Introduction (7): Current Situation
• A Facial Recognition System (FRS) is usually used in combination with a general surveillance system for security control.
• After 9/11, the US deployed FRS in airports to prevent terrorism and used it to capture suspects in public areas.
• But some US government departments have already abandoned the use of FRS, since it was found to have high false positive and false negative rates.
• An FRS can easily be abused by the operator.
• Revenue: from $34.4m (2002) to $429.1m (2007)
• Market share: ~10% of the entire biometrics market (2007)

Face Recognition System

FR System (1): Overview
• 1-to-1 authentication: face detection and location → feature extraction & face recognition → name
• [Flowchart of the face recognition system; input: face images or image sequences]
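The overview pipeline (detect and locate the face, extract features, match against enrolled templates) can be sketched as a toy program. Every function below is an illustrative placeholder, not a real detector or matcher:

```python
import numpy as np

def detect_face(image):
    """Toy detector: return the top-left corner of the brightest 2x2
    region, as a stand-in for a real face detection/location step."""
    h, w = image.shape
    best, box = -1.0, (0, 0)
    for r in range(h - 1):
        for c in range(w - 1):
            s = image[r:r+2, c:c+2].sum()
            if s > best:
                best, box = s, (r, c)
    return box

def extract_features(image, box):
    """Toy feature vector: the flattened 2x2 patch at the detected box."""
    r, c = box
    return image[r:r+2, c:c+2].ravel()

def match(features, database):
    """1-to-many identification: nearest enrolled template by
    Euclidean distance."""
    names = list(database)
    dists = [np.linalg.norm(features - database[n]) for n in names]
    return names[int(np.argmin(dists))]
```

A real system replaces each placeholder with the detection, normalization, and recognition techniques covered in the following slides.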
FR System (2): Two Stages
• Face detection and location
  1. Detect whether the input images or image sequences include faces
  2. If they do include faces, figure out the position of the faces
  3. Segment each face from the background
• Feature extraction and face recognition
  1. Look for face features which distinguish individuals
  2. Judge whether the person in the image is the given person, or is in the database

FR System (3): General Steps
• Face detection
  - In general: locate the face in a given image and separate it from the scene
  - Different approaches: motion detection and head tracking; "face space" distance
• Face normalization - adjustment for expression, rotation, lighting, scale, head tilt and eye location
• Face identification
  - Feature extraction and face recognition

Face Detection & Location

Face Detection & Location (1): Two Kinds of Methods
• Statistics-based methods
  1. Subspace method
  2. Neural network method (classification into face & non-face classes)
• Knowledge-based methods
  3. Gray-value distribution rules (e.g. gray values of the eyes' area)
  4. Contour rules
  5. Color information
  6. Movement information
  7. Symmetry information

Face Detection & Location (2): Subspace Method
• Find the subspace of face images which shows the common features of faces, giving a good representation of a face.
• This can be done using the Karhunen-Loeve transformation, an image-gray-value-based method; the image gray values have to be normalized first.
• Each face image is considered as a high-dimensional vector.
• Calculate the covariance matrix of the specimen images.
• Find the eigenvalues (λ1, λ2, …, λd) and corresponding eigenvectors (ϕ1, ϕ2, …, ϕd) of the covariance matrix.
• Face images can then be represented by fewer basis vectors, the "eigenfaces".
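A minimal sketch of these steps, assuming the specimen images are already gray-value normalized. The small-matrix eigendecomposition shortcut (diagonalizing the n x n matrix A Aᵀ instead of the full pixel-space covariance) is a standard trick, not something prescribed by the slides:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute k eigenfaces from a stack of equally sized face images.
    images: array of shape (n_images, height, width).
    Returns (mean_face, eigenfaces) with eigenfaces of shape (k, h*w)."""
    n = images.shape[0]
    X = images.reshape(n, -1).astype(float)   # each face as a vector
    mean = X.mean(axis=0)
    A = X - mean                              # difference images
    # eig(A A^T) shares its nonzero eigenvalues with the full
    # pixel-space covariance eig(A^T A), but is only n x n.
    small_cov = A @ A.T / n
    vals, vecs = np.linalg.eigh(small_cov)
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    ef = (A.T @ vecs[:, order]).T             # map back to image space
    ef /= np.linalg.norm(ef, axis=1, keepdims=True)
    return mean, ef
```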
Face Detection & Location (2): Subspace Method (cont.)
• [Figure: detection rate vs. the number of principal components]

Face Detection & Location (3): Neural Network Method
• Two-class classification problem: face class and non-face class.
• The neural network must be trained with face and non-face image specimens.
• Problem: there are many kinds of non-face images which are never collected.
• Slow: lots of specimens and input nodes.

Face Detection & Location (4): Gray-Value Distribution Rules
• Detect faces using the nearly universal distribution rules of the gray values of faces under normal lighting conditions.
• Mosaic method:
  - Divide the image area into 4x4 image blocks.
  - If it is a face area, it satisfies certain distribution rules of gray values.
  - Further divide these areas into 8x8 image blocks and repeat the process.
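The mosaic method can be illustrated with a toy rule. The specific "eye band darker than cheek band" check below is a hypothetical stand-in for the real set of gray-value distribution rules:

```python
import numpy as np

def block_means(image, n):
    """Average gray value of each cell in an n x n mosaic of the image."""
    h, w = image.shape
    bh, bw = h // n, w // n
    return np.array([[image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for j in range(n)] for i in range(n)])

def mosaic_face_rule(image):
    """Illustrative 4x4 rule: the 'eye' row of blocks (second row)
    should be darker on average than the 'cheek' row below it.
    A real system combines many such rules and then refines the
    candidates on an 8x8 mosaic; this single rule is only an example."""
    m = block_means(image, 4)
    return m[1].mean() < m[2].mean()
```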
Face Detection & Location (5): Contour Rules
• Contour is an important feature of a face; detect and extract the face contour with edge detection algorithms.
• The face contour is modeled as an ellipse: two straight lines (cheeks) and two arcs of an ellipse.
• Use snake techniques to get the face contour.

Face Detection & Location (6): Color Information
• Detect faces using the color information of the face, as the color of faces usually differs from the background color in an image.
• Skin colors are usually different from background colors.
• Face colors within the same race are similar.
• The pixels in face areas are clustered in a small region of the color space.
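A minimal sketch of how skin-color clustering can be exploited, assuming a normalized red/green chromaticity space. The thresholds are illustrative, not tuned values from any real system:

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin by thresholding normalized red/green
    chromaticity: skin tones cluster in a small region of this color
    space. rgb: float array of shape (h, w, 3). The threshold values
    below are illustrative assumptions."""
    total = rgb.sum(axis=2) + 1e-8        # avoid division by zero
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    return (0.35 < r) & (r < 0.55) & (0.25 < g) & (g < 0.40)
```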
Face Detection & Location (7): Movement Information
• Sequences of images showing people moving relative to the background are used as input to the system (e.g. a video surveillance system).
• Movement information can be used to segment the face from the background.

Face Detection & Location (8): Symmetry Information
• A face is symmetrical in general.
• Symmetrical objects in a face can be used.

Face Normalization
1. The image is rotated to align the eyes (eye coordinates must be known).
2. The image is scaled to make the distance between the eyes constant. The image is also cropped to a smaller size that is nearly just the face.
3. A mask is applied that zeros out pixels not in an oval that contains the typical face. The oval is generated analytically.
4. Histogram equalization is used to smooth the distribution of gray values for the non-masked pixels.
5. The image is normalized so the non-masked pixels have mean zero and standard deviation one.

Feature Extraction & Recognition (1)
• Principal Component Analysis (PCA), i.e. Eigenfaces
• Local Feature Analysis
• Linear Discriminant Analysis
• Probabilistic Principal Component Analysis (PPCA)
• Geometry-feature-based methods (e.g. positions of eyes, nose, mouth & chin)
• Deformation models
• Automatic Face Processing (AFP)
• Neural network methods
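Steps 3-5 of the face normalization above can be sketched as follows, assuming the image has already been rotated, scaled and cropped per steps 1-2:

```python
import numpy as np

def oval_mask(h, w):
    """Step 3: boolean mask keeping only an analytically generated
    oval that covers the typical face region."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return ((y - cy) / (h / 2))**2 + ((x - cx) / (w / 2))**2 <= 1.0

def equalize(values, levels=256):
    """Step 4: histogram equalization of integer gray values."""
    hist = np.bincount(values, minlength=levels)
    cdf = hist.cumsum() / values.size
    return (cdf[values] * (levels - 1)).astype(int)

def normalize_face(image):
    """Steps 3-5 on an already rotated/scaled/cropped gray image:
    mask to an oval, equalize, then shift the non-masked pixels to
    mean zero and standard deviation one."""
    mask = oval_mask(*image.shape)
    out = np.zeros(image.shape, dtype=float)
    eq = equalize(image[mask].astype(int))
    out[mask] = (eq - eq.mean()) / (eq.std() + 1e-8)
    return out, mask
```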
Feature Extraction (2): Eigenface
• Principal components are obtained in a training step; each image in the training set is projected into the eigenface subspace.
• Also called Principal Component Analysis (PCA); patented at MIT and currently used by Viisage's face recognition software.
• "Eigenface" roughly translates as "one's own face".
• Takes advantage of the redundancy in the training set to represent it in a more compact and meaningful way.
• Variations of eigenface are frequently used as the basis of other face recognition methods.

Feature Extraction (2): Eigenface (Cont.)
• Training set: global grayscale face images.
• Find the principal components of the distribution of faces, i.e. select the k eigenvectors that have the largest eigenvalues to represent the most significant variation within the image set; these are called eigenfaces.
• These k eigenfaces span a k-dimensional subspace, called the "face space".
• Each image in the training set can be represented as a linear combination of the eigenvectors.

Feature Extraction (2): Eigenface (Cont.)
Initialization:
1. Acquire and align an initial set of face images (the training set) - rotate, scale and translate so that the eyes are located at the same coordinates.
2. Compute the average face image; compute the difference image for each image in the training set; compute the covariance matrix of this set of difference images; compute the eigenvectors of the covariance matrix. Get the eigenfaces from the training set, keeping only the k images that correspond to the highest eigenvalues. These k images define the face space.
As new faces are experienced, the eigenfaces can be updated or recalculated.
3. Calculate the corresponding distribution in k-dimensional weight space for each known individual by projecting their face images onto the "face space."

Feature Extraction (2): Eigenface (Cont.)
Recognition:
1. Calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
2. Determine if the image is a face at all by checking whether the image is sufficiently close to "face space."
3. If it is a face, classify the weight pattern as either a known person or as unknown.
4. (Optional) Update the eigenfaces and/or weight patterns.

• Each training image can be represented by a k-dimensional vector.
• For 1-to-many identification, project the image in question into the face space to get a k-dimensional vector, the 'live' template.
• A distance measure is used to compare the similarity between the 'live' template and the training vectors.
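The 1-to-many identification described above can be sketched as follows, assuming the mean face and eigenfaces have already been computed. The threshold for rejecting faces as "unknown" is an assumed parameter:

```python
import numpy as np

def project(image_vec, mean, eigenfaces):
    """Represent a face as its k weights in the face space."""
    return eigenfaces @ (image_vec - mean)

def identify(live, mean, eigenfaces, templates, threshold):
    """Project the live image, then compare by Euclidean distance to
    each enrolled k-dimensional training vector. Returns the best
    name, or None ('unknown') when even the closest match is beyond
    the threshold."""
    w = project(live, mean, eigenfaces)
    best_name, best_d = None, float("inf")
    for name, t in templates.items():
        d = np.linalg.norm(w - t)
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= threshold else None
```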
Feature Extraction (2): Eigenface (Cont.)
• [Figure: training images and the resulting eigenfaces]
• Problems:
  - Recognition performance decreases quickly as the head size, or scale, is misjudged. The head size in the input image must be close to that of the eigenfaces for the system to work well.
  - In the case where every face image is classified as known, a sample system achieved approximately 96% correct classification averaged over lighting variation, 85% correct averaged over orientation variation, and 64% correct averaged over size variation.

Eigenfaces Developer (Viisage Technology)
• 128 archetypes on record.
• Differences/similarities with the models on record.
• Uses an eigenface-based recognition algorithm.
• Maps the characteristics of a person's face into a multi-dimensional face space.
• Used in conjunction with identification cards (e.g. driver's licenses and similar government ID cards) in many US states.
• http://www.viisage.com/facialrecog.htm

Feature Extraction (3): Geometry-Feature-Based Method
• Uses geometric information about different parts of the face, such as the eyes, nose, mouth, chin and cheekbones, as features of the face - for instance, the distance between the eyes, the width of the nose, etc.
• The positional relationships between face parts such as the eyes, nose, mouth and chin, and their shapes and sizes, contribute strongly to classifying faces.
• Problem: geometry features cannot be calculated accurately, which directly affects recognition capacity.

Feature Extraction (3): Geometry-Feature (Cont.)
• [Figure: geometric facial features]
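A toy geometric feature vector, assuming landmark coordinates are already available. The landmark names and the particular ratios are illustrative assumptions; normalizing by the eye distance makes the features independent of image scale:

```python
import numpy as np

def geometry_features(landmarks):
    """Illustrative geometric feature vector from a few landmark
    coordinates (dict of (x, y) points; the landmark names are
    assumptions for this sketch, not a standard)."""
    d = lambda a, b: float(np.hypot(landmarks[a][0] - landmarks[b][0],
                                    landmarks[a][1] - landmarks[b][1]))
    eye_dist = d("left_eye", "right_eye")
    return np.array([
        d("nose", "mouth") / eye_dist,     # nose-to-mouth vs eye spacing
        d("mouth", "chin") / eye_dist,     # mouth-to-chin vs eye spacing
        d("left_eye", "nose") / eye_dist,  # eye-to-nose vs eye spacing
    ])
```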
Feature Extraction (4): Local Feature Analysis (LFA)
• Currently used by Visionics' FaceIt software.
• Related to Eigenface, but more capable of accommodating changes in appearance or facial aspect.
• Utilizes features from different regions of the face, and incorporates the relative locations of these features.
• Represents facial images in terms of local, statistically derived building blocks:
  - Scan the shapes / search for facial features like the nose and eyes.
  - Analyze pixels and facial protrusions such as the nose and cheekbones.
  - Choose dots (anchor points).
  - Connect the dots to make a triangle net.
  - Measure the angles of the net.
  - Encode the result as a long number (key): 672 1's and 0's.
LFA Developer (Visionics' FaceIt)
• Represents facial images in terms of local, statistically derived building blocks.
• Identifies 80 nodal points on a face:
  - Distance between the eyes
  - Width of the nose
  - Depth of the eye sockets
  - Cheekbones
  - Jaw line
  - Chin
• Uses local feature analysis (a geometric-feature-based method).

LFA Developer: Visionics' FaceIt Software
• Nodal points are measured to generate a number, called a faceprint, 84 bytes in size.
• A faceprint can be matched or compared with others.
• The faceprint is resistant to changes in lighting and facial expression, and is robust with respect to pose variations of up to 35 degrees.
• Being incorporated into a closed-circuit television anti-crime system in the UK.
• Visionics Corporation merged with Identix Incorporated on 26 June 2002.
• http://www.identix.com/products/pro_security_bnp_argus.html

Feature Extraction (5): Local Representation - Gabor Jets
• Used in Elastic Bunch Graph methods (von der Malsburg et al.).
• Representation of small face regions using Gabor filter responses: ~4 scales x 8 orientations.
• Feature points on a regular lattice, or chosen to be salient points; the features are "self-localizing".
• Feature points are compared pairwise; the aggregate score gives the total similarity.
• Elastic bunch graph matching involves the displacement between features as well as the feature comparison.

Feature Extraction (6): Handling Lighting and Pose
• Control the lighting or pose.
• Simple normalization (e.g. mean subtraction).
• Capture lighting variability.
• Enroll multiple views.
• Create a 3D model and use pose correction, e.g.
FRVT uses Blanz & Vetter's 3D morphable models as a preprocessing step for a variety of algorithms to "…substantially improve the ability to recognize non-frontal faces."
• Model the lighting (Belhumeur).

Feature Extraction (7): Deformation Models
• This model considers the distortion characteristics of faces; e.g. the face image may vary in size and angle, and varies when the person smiles.
• It recognizes distortion-invariant objects by expressing them as a sparse graph whose vertices are labeled with a multi-resolution description of the local energy spectrum, and whose edges show the topological relations between vertices and carry a distance property.
• A face in normal conditions can be expressed by a uniform image.
• Face recognition is thus transformed into a graph matching problem.

Feature Extraction (7): Deformation Models (Cont.)
• [Figure: models for face parts in the deformation template method]
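The Gabor filter responses behind jets and elastic graph matching (see Feature Extraction (5)) can be sketched as follows. The kernel parameters and the 2-scale x 8-orientation jet are illustrative choices, smaller than the ~4 scales x 8 orientations mentioned earlier:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma=2.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta (radians).
    size should be odd so the kernel is centered."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def gabor_jet(patch, scales=(4, 8), n_orient=8):
    """A 'jet': filter responses of one small face region across
    several scales and orientations (here 2 x 8 = 16 values).
    The patch side length should be odd to match the kernel."""
    responses = []
    for wavelength in scales:
        for i in range(n_orient):
            k = gabor_kernel(patch.shape[0], np.pi * i / n_orient, wavelength)
            responses.append(float((patch * k).sum()))
    return np.array(responses)
```

In an elastic graph, one such jet is attached to each graph vertex, and matching compares jets pairwise while penalizing the displacement of the vertices.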
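The "simple normalization (e.g. mean subtraction)" mentioned under Handling Lighting and Pose is the easiest of these ideas to make concrete. A minimal sketch of generic photometric preprocessing, not any particular vendor's pipeline:

```python
import numpy as np

def photometric_normalize(image):
    """Zero-mean, unit-variance normalization of a grayscale image.

    Subtracting the mean removes the additive (overall brightness)
    component of illumination; dividing by the standard deviation
    removes the multiplicative (contrast) component."""
    image = image.astype(np.float64)
    return (image - image.mean()) / (image.std() + 1e-12)
```

Two captures of the same face under different global lighting map to (nearly) the same normalized image, so downstream feature extraction sees a more stable input.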
Feature Recognition (8): Neural Networks Method
• Train a neural network with samples to acquire an implicit expression of the rules of face recognition
• Two types:
  - Derive face features by other methods, then design a neural network classifier
  - Both feature derivation and face classification are completed within the neural network
• (Network structure: input layer, hidden layer, output layer)

Face Recognition Developer (Cognitec's FaceVACS)
• The technology used is believed to be neural nets
• Takes the user's face image with a video camera (or even a standard webcam)
• Extracts features using its image processing algorithm and compares them with the user's reference set stored in a database
• http://www.cognitecsystems.de/products-entry.htm

Face Recognition Developer (ZN-Face)
• Uses an extension of the Elastic Graph Matching Algorithm
• Can perform image acquisition, face localization and identification in 3.5 seconds
• Allows robust identification of previously stored persons
• Reliable rejection of unknown persons
• Used in areas such as air traffic, identity documents, forensic investigation, ID systems, access control and video surveillance
• http://www.zn-ag.com/content.en/face.htm

Extending Face Recognition
• Most systems acquire faces under controlled lighting and geometry
• The HID project seeks to extend that to greater distances
• How much can face recognition be improved by recognizing from video rather than from still images?
  - Choosing 'keyfaces'
  - A sequence of face images is not independent
  - Face images from a sequence may contain multiple views / lighting / expressions

Face Recognition Application

Face Applications
• Applications range from static mug-shot verification to dynamic, uncontrolled face identification and tracking in a cluttered background
• Smart card access control
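The first of the two neural-network types described above (features derived separately, then a neural classifier) can be sketched as a tiny two-layer network trained by gradient descent. The 8-dimensional "geometric feature" vectors below are synthetic stand-ins; commercial systems such as FaceVACS do not publish their actual network or features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for "features derived by other methods": each identity is
# a tight cluster of 8-D geometric feature vectors (eye distance, mouth
# width, ...), 50 samples per person.
n_people, dim = 3, 8
centers = rng.normal(size=(n_people, dim))
X = np.vstack([c + 0.05 * rng.normal(size=(50, dim)) for c in centers])
y = np.repeat(np.arange(n_people), 50)

# Two-layer network: input -> tanh hidden layer -> softmax over identities.
W1 = 0.5 * rng.normal(size=(dim, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, n_people)); b2 = np.zeros(n_people)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr = 0.5
onehot = np.eye(n_people)[y]
for _ in range(300):                      # plain full-batch gradient descent
    h, p = forward(X)
    g = (p - onehot) / len(X)             # softmax cross-entropy gradient
    gh = (g @ W2.T) * (1 - h**2)          # backprop through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
```

With well-separated identities the network fits the training set quickly; the hard part in practice is obtaining features that keep one person's vectors clustered across lighting, pose, and expression changes.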
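Watch-list surveillance, smart-card access control, and voter de-duplication all reduce to the same thresholded 1:N search over stored templates. A minimal sketch, assuming Euclidean distance over 84-dimensional vectors (a nod to FaceIt's 84-byte faceprint; the real matcher, metric, and threshold are proprietary, so these choices are illustrative):

```python
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """1:N search: return the best-matching enrolled identity, or None
    if the smallest distance exceeds the rejection threshold (so faces
    not in the gallery are rejected rather than force-matched)."""
    best_id, best_dist = None, np.inf
    for person_id, template in gallery.items():
        d = np.linalg.norm(probe - template)   # Euclidean distance
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist <= threshold else None
```

The rejection threshold trades false accepts against false rejects: in the voter scenario a probe that matches no template is a legitimate new registration, while in the watch-list scenario it simply means no alarm is raised.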
Current Applications

  Industry/Sector      Application                                  Where
  Airport              Passenger surveillance against hotlist       Keflavik Intl. Airport, Iceland
  Immigration          Rapid progression through customs            U.S./Mexico Border
  Military             Monitor movements                            Israel
  Police/Corrections   Digitalize mug shots, scan for criminals     Tampa, London
  Casinos/Gaming       Security, search for repeat cheats           Foxwoods, Trump Casino

Face Applications
• One of the world's leading face recognition systems: Phantomas, used for access control at an airport in Berlin and by police forces throughout Europe.

Airport Application to Fight Terrorism
1. Image is sent to the computer for manipulation. ("Hi, I am Dr. Larry Bliss")
2. Feature extraction.
3. Image is passed to the database for a possible match. ("WARNING!!!! HOLD FOR QUESTIONING")
4. Recognition result.

Application - Protect Public Security
• The Tampa Police Department (Florida) uses FRS to spot criminals in busy streets
• The images are captured from cameras positioned in different areas of downtown
• The operator keeps watching on the CCTV monitor
• Snapshots of faces are compared to a database of criminal mugshots

Application - Eliminate Vote Fraud
• It can ensure that people register to vote only once
• An image database is dynamically built for every voter
• For every incoming voter, the system grabs an image and matches it against the image database
• Used successfully in the parliamentary elections in Uganda (Africa)

Face Recognition Problems