The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy.
Face service access is limited based on eligibility and usage criteria to support our Responsible AI principles. The Face service is available only to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.
Liveness detection: an anti-spoofing feature that checks whether a user is physically present in front of the camera. It's used to prevent spoofing attacks that use a printed photo, a video, or a 3D mask of the user's face. See the Liveness tutorial.
On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces.
Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful for ensuring that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to remove their sunglasses before adding their face.
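The following is a minimal sketch of a detection request against the Face REST API in Python. The endpoint, key, image URL, and attribute list are placeholder assumptions, not values from this article; check the Detect API reference for the current API version and supported parameters, and note that returning face IDs may also require the limited-access approval described above.

```python
# Sketch of a Detect call: returns face rectangles, face IDs, and a few attributes.
# ENDPOINT, KEY, and the image URL are placeholders.
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={
        "returnFaceId": "true",                         # ID reused later by Verify/Identify
        "returnFaceAttributes": "headPose,glasses,occlusion",  # assumed attribute subset
        "recognitionModel": "recognition_04",
    },
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},      # placeholder image URL
)
response.raise_for_status()

for face in response.json():
    rect = face["faceRectangle"]
    print(face["faceId"], rect["left"], rect["top"], rect["width"], rect["height"])
```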
Microsoft has retired facial recognition capabilities that can be used to try to infer emotional states and identity attributes which, if misused, can subject people to stereotyping, discrimination, or unfair denial of services. These include capabilities that predict emotion, gender, age, smile, facial hair, hair, and makeup. Read more about this decision here.
The Face client SDKs for liveness are a gated feature. You must request access to the liveness feature by filling out the Face Recognition intake form. When your Azure subscription is granted access, you can download the Face liveness SDK.
Face Liveness detection can be used to determine if a face in an input video stream is real (live) or fake (spoof). This is a crucial building block in a biometric authentication system to prevent spoofing attacks from imposters trying to gain access to the system using a photograph, video, mask, or other means to impersonate another person.
The goal of liveness detection is to ensure that the system is interacting with a physically present live person at the time of authentication. Such systems have become increasingly important with the rise of digital finance, remote access control, and online identity verification processes.
The liveness detection solution successfully defends against a variety of spoof types, ranging from paper printouts and 2D/3D masks to spoof presentations on phones and laptops. Liveness detection is an active area of research, and continuous improvements are rolled out to both the client and service components over time as the overall solution becomes more robust to new and increasingly sophisticated attack types.
Modern enterprises and apps can use Face recognition technologies, including Face verification ("one-to-one" matching) and Face identification ("one-to-many" matching), to confirm that a user is who they claim to be.
If you are using Microsoft products or services to process Biometric Data, you are responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the Biometric Data, all as appropriate and required under applicable Data Protection Requirements. "Biometric Data" will have the meaning set forth in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see Data and Privacy for Face.
Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
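Below is a hedged sketch of a one-to-many identification call using the Face REST API's Identify operation. It assumes a large person group (the name "employees" is hypothetical) has already been created, populated with persons, and trained; the face ID, endpoint, and key are placeholders.

```python
# Sketch of an Identify call: matches a detected face against a trained large person group.
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/identify",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={
        "faceIds": ["<face-id-from-detect>"],   # placeholder, returned by a Detect call
        "largePersonGroupId": "employees",      # hypothetical, previously trained group
        "maxNumOfCandidatesReturned": 5,
        "confidenceThreshold": 0.7,
    },
)
resp.raise_for_status()

for result in resp.json():
    for candidate in result["candidates"]:
        print(candidate["personId"], candidate["confidence"])
```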
Verification is a "one-to-one" matching of a face in an image against a single face from a secure repository or photo, to verify that they're the same individual. Verification can be used for access control, such as a banking app that lets users open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID. It can also be used as a final check on the results of an Identification API call.
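A minimal sketch of a verification call follows, comparing two face IDs returned by earlier Detect calls. All IDs, the endpoint, and the key are placeholders; consult the Verify API reference for the current parameter names.

```python
# Sketch of a Verify call: checks whether two detected faces belong to the same person.
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/verify",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={
        "faceId1": "<face-id-from-selfie>",    # placeholder
        "faceId2": "<face-id-from-photo-id>",  # placeholder
    },
)
resp.raise_for_status()

result = resp.json()
print(result["isIdentical"], result["confidence"])  # boolean match plus a confidence score
```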
The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
The service supports two working modes, matchPerson and matchFace. The matchPerson mode returns similar faces after filtering for the same person by using the Verify API. The matchFace mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
For example, given a target face and candidate faces A, B, C, and D, a request for four similar faces in matchPerson mode returns only A and B, which show the same person as the target face. The same request in matchFace mode returns A, B, C, and D: exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the Facial recognition concepts guide or the Find Similar API reference documentation.
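The sketch below shows how the working mode might be selected on a Find Similar request. It assumes a large face list (the name "candidate-faces" is hypothetical) has already been created and populated; the target face ID, endpoint, and key are placeholders.

```python
# Sketch of Find Similar calls in both working modes against a large face list.
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

def find_similar(mode: str):
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/findsimilars",
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
        json={
            "faceId": "<target-face-id>",          # placeholder, from a Detect call
            "largeFaceListId": "candidate-faces",  # hypothetical, previously populated list
            "maxNumOfCandidatesReturned": 4,
            "mode": mode,                          # "matchPerson" or "matchFace"
        },
    )
    resp.raise_for_status()
    return resp.json()

print(find_similar("matchPerson"))  # only candidates judged to be the same person
print(find_similar("matchFace"))    # top 4 by similarity, regardless of identity
```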
The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression. For more information, see the Facial recognition concepts guide or the Group API reference documentation.
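The following sketch illustrates a Group request over a set of previously detected face IDs. The IDs, endpoint, and key are placeholders; see the Group API reference for the current request and response shapes.

```python
# Sketch of a Group call: clusters face IDs by similarity and reports ungrouped faces.
import requests

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

face_ids = ["<face-id-1>", "<face-id-2>", "<face-id-3>"]  # placeholders from Detect calls

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/group",
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={"faceIds": face_ids},
)
resp.raise_for_status()

result = resp.json()
print("groups:", result["groups"])         # lists of face IDs judged to be the same person
print("ungrouped:", result["messyGroup"])  # face IDs with no similar matches
```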
As with all of the Azure AI services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the Azure AI services page on the Microsoft Trust Center.
The Microsoft Face API uses state-of-the-art cloud-based face algorithms to detect and recognize human faces in images. Capabilities include face detection, face verification, and face grouping, which organizes faces into groups based on their visual similarity.
US government entities are eligible to purchase Azure Government services from a licensing solution provider with no upfront financial commitment, or directly through a pay-as-you-go online subscription.
Hi Community,
Depending on the time of day and the corresponding lighting situation in my office, my face frequently appears very dark.
a) How can I access the camera properties menu during (!) a call? Note: I know the properties menu, but it isn't accessible during a call.
b) Any current smartphone recognizes a face in a video stream and brightens the picture so the face is lit properly. I'd like to suggest that Teams should have the same functionality.
Best
Ted_MUC
@Ed Woodrick Thank you for the suggestions. Unfortunately I cannot move the camera closer. I've thought about zoom, but again, the zoom settings aren't available during the call. I've already installed indirect lighting, but it cannot compete with the bright daylight in the background, and direct light is too bright for my eyes. By the way, taking a selfie from the camera's location with my smartphone results in a perfectly lit image.
Yes, there are ways to compensate for MS Teams providing terrible camera auto-adjustments. That does not address the chief issue: MS Teams provides terrible camera auto-adjustments. Same position seated, same lighting, same everything, but open Skype or Slack or a myriad of competitors instead of Teams, and the lighting is just fine. Teams is the problem here.
@Ed Woodrick I actually tried using the smartphone when I learned that I can transfer calls from the phone (Teams app) to the PC and vice versa, and even use them in parallel. Unfortunately, due to the design of the integration, I can't use only the smartphone's camera and keep everything else on the big screen.
I just found that if you have the Google font installed locally (e.g., if you've been doing a mockup), Edge will not display the web font version. I did a lot of reading around to find the root of the issue and did not see anyone mention this.
The procedure I followed to install all the necessary formats was to find which font weight I needed from each font and then download it from Google Fonts. Then, using a @font-face generator, I downloaded all the formats along with the CSS code. Then I incorporated them all into my CSS and HTML.