The database is publicly available for non-commercial use. In order for us to track those using it, please fill in the following form. Let us know if you have any problems. (NOTE: This works only with Netscape 4.75 (and below) under Windows or Linux/Unix. It will NOT work with Internet Explorer or Netscape 6.)
The database contains 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). For every subject in a particular pose, an image with ambient (background) illumination was also captured. Hence, the total number of images is in fact 5760 + 90 = 5850. The total size of the compressed database is about 1GB.
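The counts quoted above follow directly from the capture setup; a quick arithmetic check:

```python
# The arithmetic behind the image counts described above.
subjects, poses, illuminations = 10, 9, 64

per_subject = poses * illuminations    # 576 viewing conditions per subject
strobe_total = subjects * per_subject  # 5760 single-light-source images
ambient_total = subjects * poses       # 90 ambient images (one per subject per pose)

print(strobe_total + ambient_total)    # 5850 images in total
```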
The 65 (64 illuminations + 1 ambient) images of a subject in a particular pose have been "tarred" and "gzipped" into a single file. There are 90 (10 subjects x 9 poses) '*.tar.gz' files. Each '*.tar.gz' file is about 11MB. All filenames begin with the base name 'yaleB' followed by a two digit number signifying the subject number (01 - 10). The two digit number after '_P' signifies the pose number (00 - 08). (See below for the relative pose positions.) The images in each '*.tar.gz' file can be unpacked (under Unix) with 'gunzip' followed by 'tar xvf'.
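The naming convention above fully determines the 90 archive names; a minimal sketch (the exact list is implied by the convention, not copied from the database):

```python
# Enumerate the 90 archive names implied by the naming convention:
# base name 'yaleB', two-digit subject number (01-10), '_P', two-digit pose (00-08).
archives = [
    f"yaleB{subject:02d}_P{pose:02d}.tar.gz"
    for subject in range(1, 11)
    for pose in range(9)
]

print(len(archives))  # 90 files
print(archives[0])    # yaleB01_P00.tar.gz
print(archives[-1])   # yaleB10_P08.tar.gz
```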
The coordinates of faces in each set (e.g., 'yaleB01_P00.tar') can be found here. For the set 'yaleB01_P00.tar', for example, the coordinates are in the file 'yaleB01_P00.crop'. Each 'yaleB**_P**.crop' file contains two columns corresponding to the x- and the y-coordinates. For all the sets in the frontal pose (i.e., for the files 'yaleB**_P00.tar') the coordinates of the left eye, right eye, and mouth in each image have been appended on top of each other into two columns of length 195. The top 65 rows are for the left eye, the next 65 are for the right eye, and the rest are for the mouth centers. Files other than for the frontal pose (e.g., 'yaleB01_P07.crop') contain only the coordinates of the face centers (i.e., columns have a length of 65). As a final note, each of the 65 rows in the 'yaleB**_P0*.crop' files corresponds (in the same order) to the images whose filenames appear in the file 'yaleB**_P**.info'. This '*.info' file is unpacked together with the images in 'yaleB**_P0*.tar'.
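Splitting a frontal-pose '.crop' file back into its three landmark groups can be sketched as follows (a minimal example; the whitespace-separated two-column layout is an assumption based on the description above):

```python
# Split a frontal-pose '.crop' file (195 rows of x, y) into the three
# landmark groups described above: left eye, right eye, mouth center.
def read_frontal_crop(path):
    with open(path) as f:
        coords = [tuple(map(float, line.split())) for line in f if line.strip()]
    assert len(coords) == 195, "frontal-pose files have 3 x 65 rows"
    return {
        "left_eye": coords[:65],
        "right_eye": coords[65:130],
        "mouth": coords[130:],
    }
```

For the non-frontal poses the file holds only 65 face-center rows, so no splitting is needed.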
Now, a word about the naming of each image: The first part of the filename of an image follows the same convention as the filename of one of the "tarred" (and "gzipped") files. It begins with the base name 'yaleB' and is followed by the two digit number signifying the subject number and then by the two digit number signifying the pose. The rest of the filename encodes the azimuth and elevation of the single light source direction. For example, the image with the filename 'yaleB01_P00A+005E+10.pgm' was captured with the light source at 5 degrees azimuth and 10 degrees elevation relative to the camera axis.
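Assuming the standard Extended Yale B token format ('A' and 'E' introducing signed azimuth and elevation in degrees, e.g. 'yaleB01_P00A+005E+10.pgm' -- the exact layout is an assumption here), the fields can be parsed like this:

```python
import re

# Parse a Yale B image filename into its fields. The 'A<azimuth>E<elevation>'
# token format (signed degrees) is assumed from the naming described above.
PATTERN = re.compile(
    r"yaleB(?P<subject>\d{2})_P(?P<pose>\d{2})"
    r"A(?P<azimuth>[+-]\d+)E(?P<elevation>[+-]\d+)"
)

def parse_name(filename):
    m = PATTERN.match(filename)
    if m is None:
        return None  # e.g. an ambient image, which carries no light-source angles
    return {
        "subject": int(m.group("subject")),
        "pose": int(m.group("pose")),
        "azimuth": int(m.group("azimuth")),
        "elevation": int(m.group("elevation")),
    }
```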
The images in the database were captured using a purpose-built illumination rig. This rig is fitted with 64 computer-controlled strobes. The 64 images of a subject in a particular pose were acquired at camera frame rate (30 frames/second) in about 2 seconds, so there is only small change in head pose and facial expression for those 64 (+1 ambient) images. The image with ambient illumination was captured without a strobe going off. The positions of the strobes in spherical coordinates are shown in this postscript file. (This postscript file also shows four rings containing the positions of the strobes corresponding to the images of four subsets with increasing extremity in illumination. These subsets were used in the recognition experiments reported in the above-mentioned paper.)
Poses 1, 2, 3, 4, and 5 were about 12 degrees from the camera optical axis (i.e., from Pose 0), while poses 6, 7, and 8 were about 24 degrees. Here you can find a sample image per subject per pose with frontal illumination. Note that the position of a face in an image varies from pose to pose but is fairly constant within the images of a face seen in one of the 9 poses, since the 64 (+1 ambient) images were captured in about 2 seconds.
The Extended Yale B database contains 2414 frontal-face images of size 192 x 168 pixels over 38 subjects, with about 64 images per subject. The images were captured under different lighting conditions and with various facial expressions.
The following is a directory of databases containing face stimulus sets available for use in behavioral studies. Please read the rights, permissions, and licensing information on each database's webpage before proceeding with use. Make sure to obtain the permissions required and credit/cite as requested by the creators.
This database contains 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision and psychology attributes, and landmark point annotations. The face photographs are JPEGs with 72 pixels/in resolution and 256-pixel height.
Citation: Bainbridge, W.A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General, 142(4), 1323-1334.
The American Multiracial Face Database contains 110 faces of individuals with mixed-race heritage (smiling and neutral expression poses), along with ratings of those faces by naive observers, freely available to academic researchers. The faces were rated on attractiveness, emotional expression, racial ambiguity, masculinity, racial group membership(s), gender group membership(s), warmth, competence, dominance, and trustworthiness.
Our Database of Faces, (formerly 'The ORL Database of Faces'), contains a set of face images taken between April 1992 and April 1994 at the lab. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).
The Basel Face Database (BFD) is built upon portrait photographs of forty different individuals. All these photographs have been manipulated to appear more or less agentic and communal (the Big Two personality dimensions) as well as open to experience, conscientious, extraverted, agreeable, and neurotic (the Big Five personality dimensions). Thus, the database consists of forty photographs of different individuals and 14 variations of each of them signaling different personalities. Using this database therefore allows researchers to investigate the impact of perceived personality on different outcome variables in a very systematic way.
The Bogazici Face Database is a database of Turkish undergraduate student targets. High-resolution standardized photographs were taken and supported by the following materials: (a) basic demographic and appearance-related information, (b) two types of landmark configurations (for Webmorph and geometric morphometrics (GM)), (c) facial width-to-height ratio (fWHR) measurement, (d) information on photography parameters, (e) perceptual norms provided by raters.
The dataset contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low-resolution faces are not labeled.
The Chicago Face Database was developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. The CFD is intended for use in scientific research. It provides high-resolution, standardized photographs of male and female faces of varying ethnicity, aged 17 to 65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) and subjective ratings by independent judges (e.g., attractiveness). The database consists of a main image set and several extension sets.
A novel emotional database that contains movie clips / dynamic images of 12 ethnically diverse children. This unique database contains spontaneous / natural facial expressions of children in diverse settings and recording scenarios, showing the six universal or prototypic emotional expressions (happiness, sadness, anger, surprise, disgust, and fear). Children were recorded in a constraint-free environment (no restriction on head movement, no restriction on hand movement, free sitting, no restriction of any sort) while they watched specially built / selected stimuli. This constraint-free environment allowed us to record spontaneous / natural expressions of children as they occur.
The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 viewpoints and 19 illumination conditions while displaying a range of facial expressions.
Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units (e.g., AU 12, or lip corners pulled obliquely) and action unit combinations (e.g., AU 1+2, or inner and outer brows raised). Each display begins from a neutral or nearly neutral face. For each display, an experimenter described and modeled the target expression. Six were based on descriptions of prototypic emotions (i.e., joy, surprise, anger, fear, disgust, and sadness).
Citation: Kanade, T., Cohn, J. F., & Tian, Y. (2000, March). Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580) (pp. 46-53). IEEE.
The Face Database consists of 575 individual faces ranging from ages 18 to 93. Our database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. The resulting database has faces of 218 adults age 18-29, 76 adults age 30-49, 123 adults age 50-69, and 158 adults age 70 and older.