The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy.
Liveness detection: Liveness detection is an anti-spoofing feature that checks whether a user is physically present in front of the camera. It helps prevent spoofing attacks that use a printed photo, a video replay, or a 3D mask of the user's face.
Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data; this ID is used in later operations to identify or verify faces.
Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not definitive classifications. Some attributes are useful for ensuring that your application captures high-quality face data when users add themselves to a Face service; for example, your application could advise users who are wearing sunglasses to take them off.
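A minimal sketch of handling a detection result is shown below. The field names (faceId, faceRectangle, faceAttributes) follow the documented Detect response shape, but the sample values and the `summarize_faces` helper are illustrative assumptions, not real service output.

```python
# Sketch: parsing a (hypothetical) Detect API JSON response.
# The field names mirror the documented response shape; the values are made up.
sample_response = [
    {
        "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",  # illustrative ID
        "faceRectangle": {"top": 131, "left": 177, "width": 162, "height": 162},
        "faceAttributes": {
            "glasses": "Sunglasses",
            "headPose": {"roll": 2.1, "yaw": -0.5, "pitch": 0.0},
        },
    }
]

def summarize_faces(detect_response):
    """Extract the face ID, bounding box, and a simple quality hint per face."""
    summaries = []
    for face in detect_response:
        rect = face["faceRectangle"]
        summaries.append({
            "faceId": face["faceId"],
            "box": (rect["left"], rect["top"], rect["width"], rect["height"]),
            # e.g. advise the user to remove sunglasses before enrollment
            "advise_remove_glasses":
                face.get("faceAttributes", {}).get("glasses") == "Sunglasses",
        })
    return summaries

print(summarize_faces(sample_response))
```

The face ID, not the image itself, is what later identification or verification calls consume.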
Face Liveness detection can be used to determine whether a face in an input video stream is real (live) or fake (spoof). This is a crucial building block in a biometric authentication system, preventing spoofing attacks from impostors who try to gain access using a photograph, video, mask, or other means of impersonating another person.
Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
Verification is "one-to-one" matching of a face in an image against a single face from a secure repository or photo to confirm that they're the same individual. Verification can be used for access control, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID. It can also be used as a final check on the results of an Identification API call.
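A one-to-one check of this kind can be sketched as interpreting a verification result. The "isIdentical" and "confidence" fields follow the documented Verify response shape; the sample values and the banking-style acceptance policy are illustrative assumptions.

```python
# Sketch: interpreting a (hypothetical) Verify API response for a one-to-one check.
# Field names mirror the documented response shape; the values are made up.
sample_verify_response = {"isIdentical": True, "confidence": 0.92}

def allow_remote_account_opening(verify_response, min_confidence=0.85):
    """Accept only when the faces match AND the confidence clears a policy bar."""
    return (verify_response["isIdentical"]
            and verify_response["confidence"] >= min_confidence)

print(allow_remote_account_opening(sample_verify_response))
```

Keeping a separate application-level confidence bar lets a high-stakes flow (opening an account) demand more certainty than the service's default match decision.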
The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
The service supports two working modes, matchPerson and matchFace. The matchPerson mode returns similar faces after filtering for the same person by using the Verify API. The matchFace mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
For example, when asked to find four similar faces where only candidates A and B show the same person as the target, matchPerson mode returns A and B, while matchFace mode returns A, B, C, and D: exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the Facial recognition concepts guide or the Find Similar API reference documentation.
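The difference between the two modes can be sketched with the A/B/C/D candidates. The similarity scores and same-person flags below are invented; in the real service the same-person filter comes from the Verify API, not a precomputed flag.

```python
# Sketch of the two Find Similar modes. Candidate data is invented;
# the real service determines "same person" via the Verify API.
candidates = [
    {"id": "A", "same_person": True,  "similarity": 0.95},
    {"id": "B", "same_person": True,  "similarity": 0.91},
    {"id": "C", "same_person": False, "similarity": 0.60},
    {"id": "D", "same_person": False, "similarity": 0.42},
]

def find_similar(cands, mode="matchPerson", max_results=4):
    ranked = sorted(cands, key=lambda c: c["similarity"], reverse=True)
    if mode == "matchPerson":  # keep only faces verified as the same person
        ranked = [c for c in ranked if c["same_person"]]
    return [c["id"] for c in ranked[:max_results]]

print(find_similar(candidates, "matchPerson"))  # same-person candidates only
print(find_similar(candidates, "matchFace"))    # top candidates regardless of identity
```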
The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the Facial recognition concepts guide or the Group API reference documentation.
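The grouping behavior can be sketched with a greedy pass over pairwise similarities. The similarity table below is invented, and the real Group API computes similarity from stored face data server-side; this only illustrates how disjoint groups and a messyGroup of unmatched faces fall out.

```python
# Sketch: grouping unknown faces by pairwise similarity, with a "messyGroup"
# for faces that match nothing else. The similarity pairs are invented.
similar_pairs = {
    ("f1", "f2"), ("f2", "f3"),   # f1..f3 look alike
    ("f4", "f5"),                 # f4 and f5 look alike
}
faces = ["f1", "f2", "f3", "f4", "f5", "f6"]

def group_faces(face_ids, pairs):
    pair_set = {frozenset(p) for p in pairs}
    groups = []
    for face in face_ids:
        for group in groups:
            if any(frozenset((face, member)) in pair_set for member in group):
                group.append(face)
                break
        else:
            groups.append([face])  # no similar group found: start a new one
    messy_group = [g[0] for g in groups if len(g) == 1]  # unmatched faces
    groups = [g for g in groups if len(g) > 1]
    return groups, messy_group

print(group_faces(faces, similar_pairs))
```

Singleton groups collapse into the messyGroup, mirroring how the service reports face IDs for which no similarities were found.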
Important: You will need to enable cloud sync on your device if you want to use face or touch unlock to sign in to Login.gov across devices. Your device may show you other authentication features, such as signing in by scanning a QR code.
Face or touch unlock works only on the original device you set it up with, unless you set it up on a newer device and browser that supports passkeys and has Bluetooth and/or cloud sync between your accounts enabled.
On November 30, 2020, NIST published NISTIR 8331, Ongoing FRVT Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms, the second in a series of reports aimed at quantifying face recognition accuracy for people wearing masks. This report adds 1) 65 new algorithms submitted to FRVT 1:1 since mid-March 2020 (and includes cumulative results for the 152 algorithms evaluated to date) and 2) assessment of when both the enrollment and verification images are masked (in addition to when only the verification image is masked). Our initial approach has been to apply masks to faces digitally (i.e., using software to apply a synthetic mask), which allowed us to leverage large datasets that we already have. This report quantifies the effect of masks on both false negative and false positive match rates. For more information, visit the FRVT Face Mask Effects webpage.
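The two error rates the report quantifies can be computed directly from comparison scores at a decision threshold. The score lists and threshold below are toy values for illustration, not data from the report.

```python
# Sketch: false non-match rate (FNMR) and false match rate (FMR) at a threshold.
# Scores are invented; real evaluations use millions of comparisons.
genuine_scores  = [0.91, 0.88, 0.45, 0.97]   # same-person comparisons
impostor_scores = [0.12, 0.30, 0.55, 0.08]   # different-person comparisons

def error_rates(genuine, impostor, threshold):
    fnmr = sum(s < threshold for s in genuine) / len(genuine)    # missed matches
    fmr = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    return fnmr, fmr

print(error_rates(genuine_scores, impostor_scores, threshold=0.5))
```

Masks tend to depress genuine scores, so at a fixed threshold the visible effect is primarily a higher false negative (non-match) rate.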
NIST describes and quantifies demographic differentials for contemporary face recognition algorithms in this report, NISTIR 8280. NIST has conducted tests to quantify demographic differences for nearly 200 face recognition algorithms from nearly 100 developers, using four collections of photographs with more than 18 million images of more than 8 million people.
The FRVT Ongoing activity is conducted on a continuing basis and will remain open indefinitely such that developers may submit their algorithms to NIST whenever they are ready. This approach more closely aligns evaluation with development schedules. The evaluation will use very large sets of facial imagery to measure the performance of face recognition algorithms developed in commercial and academic communities worldwide. Multiple evaluation tracks relevant to face recognition will be conducted under this test. For more information, visit the FRVT Ongoing webpage.
The FRVT 1:N 2018 will measure advancements in the accuracy and speed of one-to-many face identification algorithms searching enrolled galleries containing at least 10 million identities. The evaluation will primarily use standardized portrait images, and will quantify how accuracy depends on subject-specific demographics and image-specific quality factors. For more information, visit the FRVT 1:N 2018 webpage.
Facial morphing and the ability to detect it is an area of high interest to a number of photo-credential issuance agencies and those employing face recognition for identity verification. The FRVT MORPH test will provide ongoing independent testing of prototype facial morph detection technologies.
NIST is establishing an evaluation of face image quality assessment algorithms. NIST will run quality assessment algorithms on large sets of images and relate their outputs to face recognition outcomes.
While not part of the FRVT series, the Face-in-Video-Evaluation (FIVE), conducted from 2015 to 2016, will be of interest to the FRVT audience. The FIVE activity assessed face recognition capability in video sequences. The outcomes of FIVE were published in NIST Interagency Report 8173.
The Face Recognition Algorithm Independent Evaluation (CHEXIA-FACE) was conducted to assess the capability of face detection and recognition algorithms to correctly detect and recognize children's faces appearing in unconstrained imagery.
FRVT 2013 tested state-of-the-art face recognition performance. It used very large sets of facial imagery to measure the accuracy and computational efficiency of face recognition algorithms developed in commercial and academic communities worldwide. The test itself ran from July 2012 to the end of 2013. The detailed plans, procedures and outcomes of the test are documented on the FRVT 2013 homepage.
Under the name MBE 2010, 2D face recognition algorithms were evaluated, yielding two reports. First, NIST Interagency Report 7709 gave results for both verification and identification algorithms. Second, the NIST Interagency Report 7830 surveyed compression and resolution parameters for storing face images on identity credentials.
FRVT 2000 consisted of two components: the Recognition Performance Test and the Product Usability Test. The Recognition Performance Test was a technology evaluation. The goal of the Recognition Performance Test was to compare competing techniques for performing facial recognition. All systems were tested on a standardized database. The standard database ensured all systems were evaluated using the same images, which allowed for comparison of the core face recognition technology. The product usability test examined system properties for performing access control.
The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties. The task of the sponsored research was to develop face recognition algorithms. The FERET database was collected to support the sponsored research and the FERET evaluations. The FERET evaluations were performed to measure progress in algorithm development and identify future research directions.
Through collaboration with DePuy Synthes, the orthopedics company of Johnson &amp; Johnson, and Materialise, state-of-the-art technology played a pivotal role in both presurgical planning and the actual surgery. Cutting-edge three-dimensional (3D) computer surgical planning, along with patient-specific 3D cutting guides, enabled precise alignment of bones and optimal placement of implantable plates and screws. This meticulous approach allowed the grafted partial face and whole left eye to be fitted onto James.