Face databases

List of used and downloaded databases available on lab StorWis

  1. List of databases
  2. Used databases
  3. Emotions
  4. Paid databases
  5. Audiovisual databases
  6. Databases with artificially created faces
  7. Tools to create artificial faces

List of databases

Places where to look for more databases:

Used databases

Karolinska

The Averaged Karolinska Directed Emotional Faces dataset (Lundqvist, D., & Litton, J. E. (1998). The Averaged Karolinska Directed Emotional Faces, the homepage)

Reference

Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces—KDEF (CD ROM). Stockholm: Karolinska Institute, Department of Clinical Neuroscience, Psychology Section.

Conditions

They state here that you have to cite the database but can use it freely (although their points B and C appear to contradict each other).

Stimuli can be published.

Ratings of the stimuli

The paper is https://www.frontiersin.org/articles/10.3389/fpsyg.2017.02181/full and the data https://osf.io/35ga8/.

Garrido, M. V., & Prada, M. (2017). KDEF-PT: Valence, Emotional Intensity, Familiarity and Attractiveness Ratings of Angry, Neutral, and Happy Faces. Front. Psychol., 19 December 2017. https://doi.org/10.3389/fpsyg.2017.02181

SPOS - Spontaneous vs. Posed Facial Expression Database

Reference

Pfister, T.; Xiaobai Li; Guoying Zhao; Pietikainen, M.; "Differentiating spontaneous from posed facial expressions within a generic facial expression recognition framework". Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference, vol., no., pp.868-875, 6-13 Nov. 2011

Conditions

The user will send an electronic copy of all papers that reference the database to: xiaobai.li@oulu.fi.

PICS -- Psychological Image Collection at Stirling, University of Stirling

This is a collection of images useful for conducting experiments in psychology, primarily faces, though other submissions are welcome. They are free for research use.

http://pics.psych.stir.ac.uk/2D_face_sets.htm

There are several datasets:

  • Aberdeen:
    687 Colour faces from Ian Craw at Aberdeen. Between 1 and 18 images of 90 individuals. Some variations in lighting, 8 have varied viewpoint. Resolution: varied: 336x480 to 624x544

  • Iranian women:
    369 images, 34 women, mostly with smile and neutral in each of five orientations. Resolution: 1200x900 colour

  • Nottingham scans (not that good):
    50 males and 50 females in neutral, frontal pose. Resolution: 358x463 to 468x536 monochrome

  • Stirling faces (not that good):
    312 images: 35 identities (18 female, 17 male), in 3 poses and 3 expressions. Resolution: 269x369 monochrome

  • Pain expressions:
    599 images; posed expressions, usually 2 of each of the six basic emotions plus 10 of pain, also 45 degree and profile neutral. 13 women, 10 men. Resolution: 720x576 colour

  • Utrecht
    131 images, 49 men, 20 women, usually a neutral and smile of each, collected at the European Conference on Visual Perception in Utrecht, 2008. Some more to come, and 3D versions of these images in preparation. Resolution: 900x1200 colour

  • Mooney
    2 sets of 24 Mooney faces, coded as male/female or looking left/right. BMP format, 575x580 pixels

Reference

To cite PICS in a paper, please quote this URL, pics.stir.ac.uk.

Aberdeen -- I think this is the same as the PICS Aberdeen set above

https://www.researchgate.net/post/Im-looking-for-a-data-base-with-lots-of-pictures-of-faces-of-different-ethnicity-with-neutral-expressions-Any-suggestions

Oslo database

https://sirileknes.com/oslo-face-database/

The folder on Dropbox contains the images, face ratings, luminance matching and some other masks. However, it may not be accessible (I'm not the owner).

Reference

Chelnokova O, Laeng B, Eikemo M, Riegels J, Løseth G, Maurud H, Willoch F, Leknes S (2014) Rewards of Beauty: The Opioid System Mediates Social Motivation in Humans. Mol Psychiatry 19:746-747. article

Conditions

There is an agreement pdf file (and they have a form on a webpage), which I obtained from Arik Shkolnikov, a member of the lab who had it. Four pictures (M071, M023, F074, M048) can be published; they're listed in the pdf.

CFD -- Chicago face database

https://chicagofaces.org/default/

There are two main databases:

  1. CFD
    The main CFD set consists of images of 597 unique individuals. They include self-identified Asian, Black, Latino, and White female and male models, recruited in the United States. All models are represented with neutral facial expressions. A subset of the models is also available with happy (open mouth), happy (closed mouth), angry, and fearful expressions. Norming data are available for all neutral expression images. Subjective rating norms are based on a U.S. rater sample.

  2. CFD-MR
    The CFD-MR extension set includes images of 88 unique individuals, who self-reported multiracial ancestry. All models were recruited in the United States. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database. Norming data include the standard set of CFD objective and subjective image norms as well as the models’ self-reported ancestry. Subjective norms are based on a U.S. rater sample.

  3. CFD-INDIA
    The CFD-India extension set includes images of 142 unique individuals, recruited in Delhi, India. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database. Norming data include the standard set of CFD objective and subjective image norms and self-reported background information on the models (e.g., ancestry, home state, caste). Subjective norms are available for a U.S. rater sample and for a sample of raters recruited in India. In addition, an extended set of self-reported model background data (e.g., ancestry, home state, caste) is available upon request.

NOTE: The additional background data have not been requested yet.

Reference

Use of the database materials should be acknowledged as follows:

  1. CFD: Ma, Correll, & Wittenbrink (2015). The Chicago Face Database: A Free Stimulus Set of Faces and Norming Data. Behavior Research Methods, 47, 1122-1135.
  2. CFD-MR: Ma, D.S., Kantner, J. & Wittenbrink, B. (2020). Chicago Face Database: Multiracial Expansion. Behavior Research Methods. https://doi.org/10.3758/s13428-020-01482-5.
  3. CFD-INDIA: Lakshmi, Wittenbrink, Correll, & Ma (2021). The India Face Set: International and Cultural Boundaries Impact Face Impressions and Perceptions of Category Membership. Frontiers in Psychology, 12, 161. https://doi.org/10.3389/fpsyg.2021.627678.

Conditions

The CFD and its expansion sets are a free resource for the scientific community. The database photographs and their accompanying information may be used free of charge for non-commercial research purposes only. The database materials cannot be re-distributed or published without written consent from the copyright holder, the University of Chicago, Center for Decision Research.

A form had to be submitted after version 3.0 (i.e., after the India addition).

3DSK face set with webmorph templates

https://osf.io/vrd4e/

The folder above contains a readme file discussing ratings of the stimuli (and a number of CSV files with ratings). It is an open set of 50 male and 50 female face photographs with webmorph templates.

Last Updated: 2020-02-06 12:46 PM

Reference

Identifier: DOI 10.17605/OSF.IO/A3947

Conditions

License: CC-By Attribution 4.0 International

ADFES -- Amsterdam Dynamic Facial Expression Set

https://aice.uva.nl/research-tools/adfes-stimulus-set/adfes-stimulus-set.html

Reference

Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. J. Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES), Emotion. https://pubmed.ncbi.nlm.nih.gov/21859206/

https://www.researchgate.net/profile/Agneta_Fischer/publication/51587980_Moving_Faces_Looking_Places_Validation_of_the_Amsterdam_Dynamic_Facial_Expression_Set_ADFES/links/0fcfd510b913c9f849000000/Moving-Faces-Looking-Places-Validation-of-the-Amsterdam-Dynamic-Facial-Expression-Set-ADFES.pdf

Conditions

The researchers involved in producing this stimulus set permit its use for scientific research, free of charge, on the basis of the following agreement with the user:

  1. No copies will be made or distributed without explicit consent of the producers. Publishing examples of the stimuli in scientific papers is allowed after obtaining written consent from the producers.
  2. Any commercial use of the stimulus set is strictly prohibited.
  3. The user will provide the Department of Social Psychology at the University of Amsterdam with copies of any publications in which the set has been used (for information only).
  4. The user will cite the stimulus set with the following reference: Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. J. (in press). Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES), Emotion.
  5. The user will confirm agreement by ticking the box below, by providing a valid e-mail address and by declaring affiliation with an accredited research institution.

London Set

https://figshare.com/articles/dataset/Face_Research_Lab_London_Set/5047666

Reference

A citation can be generated on the figshare page for any given journal. DeBruine, Lisa; Jones, Benedict (2017): Face Research Lab London Set. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5047666.v3


Conditions

All individuals gave signed consent for their images to be "used in lab-based and web-based studies in their original or altered forms and to illustrate research (e.g., in scientific journals, news media or presentations)."

Wilma Bainbridge

http://www.wilmabainbridge.com/facememorability2.html

This database contains 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision and psychology attributes, and landmark point annotations. The face photographs are JPEGs with 72 pixels/in resolution and 256-pixel height. The attribute data are stored in either MATLAB or Excel files. Landmark annotations are stored in TXT files.

All the files have been downloaded; there are a lot of readme and mat/xlsx/html files, so check them.
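
A minimal sketch, assuming Python, of how the downloaded attribute and landmark files might be read; the file names below are hypothetical placeholders and the landmark files are assumed to be plain whitespace-delimited coordinates, so check the readme files in the download for the real names and formats.

    # Hypothetical file names; replace with the actual names from the download.
    from scipy.io import loadmat
    import pandas as pd
    import numpy as np

    attrs_mat = loadmat("face_attributes.mat")          # MATLAB attribute file
    attrs_xlsx = pd.read_excel("face_attributes.xlsx")  # Excel version of the attributes
    landmarks = np.loadtxt("face_0001_landmarks.txt")   # landmark points for one face

    print(list(attrs_mat.keys()))   # variables stored in the .mat file
    print(attrs_xlsx.head())        # first rows of the attribute table
    print(landmarks.shape)          # number of landmark points x coordinates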

Reference

Main citation: Bainbridge, W.A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General, 142(4), 1323-1334.

Related citation: Khosla, A., Bainbridge, W.A., Torralba, A., & Oliva, A. (2013). Modifying the memorability of face photographs. Proceedings of the International Conference on Computer Vision (ICCV), Sydney, Australia. Please cite in addition if you use the annotations.

Conditions

This one has a few conditions; the authors need to be contacted (it takes quite long, but Michal can help). Based on our emails:

Keep in mind that you CANNOT publish faces from the individuals in the 10k-image dataset, only faces from the 49-image dataset. Also keep in mind that the database cannot be used for profit.

we would appreciate it if you update us on who has access to the database if it changes, just so we can keep our access records up to date. A quick email with the names of people who have access will work!

BFDB -- Basel face database

https://bfd.unibas.ch/en/

The Basel Face Database (BFD) is built upon portrait photographs of forty different individuals. All these photographs have been manipulated to appear more or less agentic and communal (Big Two personality dimensions) as well as open to experience, conscientious, extraverted, agreeable, and neurotic (Big Five personality dimensions). Thus, the database consists of forty photographs of different individuals and 14 variations of each of them signaling different personalities. Using this database therefore allows researchers to investigate the impact of personality on different outcome variables in a very systematic way.

The BFD consists of photographs of 18 male and 22 female undergraduate students from the University of Basel with an average age of 23 years. The individuals look directly towards the camera with a neutral, relaxed facial expression.

All 40 photographs have been systematically manipulated to show reduced or enhanced values on the Big Two (agency and communion) and on the Big Five (openness, conscientiousness, extraversion, agreeableness, and neuroticism) personality dimensions. This results in a total of 15 images per individual, and 600 images in total.

The validation file contains means and standard deviations of the Big Two and Big Five personality judgments as well as naturalness judgements for all 600 images.

Downloaded but not used; requested from mirella.walker@unibas.ch, but rebecca.goetsch@stud.unibas.ch replied.

Reference

Walker, M., Schönborn, S., Greifeneder, R., & Vetter, T. (2018). The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions. PloS one, 13(3). doi: https://doi.org/10.1371/journal.pone.0193190

Conditions

The BFD has been developed by Mirella Walker and colleagues at the University of Basel, Switzerland. The BFD can be used freely for non-commercial scientific research.

The only problem is that they state that the images may not be modified (point 4).

Bogazici face database (Turkish faces)

The university page and the paper contain a form for requesting the database.

Face database of Turkish undergraduate student targets. High-resolution standardized photographs were taken and supported by the following materials: (a) basic demographic and appearance-related information, (b) two types of landmark configurations (for Webmorph and geometric morphometrics (GM)), (c) facial width-to-height ratio (fWHR) measurement, (d) information on photography parameters, (e) perceptual norms provided by raters
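
As a side note on the fWHR measurement in point (c), here is a minimal sketch using a common definition (bizygomatic width divided by upper-face height, i.e. the distance from the upper lip to the brow); the landmark coordinates are made-up numbers and the database's own measurement protocol may differ.

    def fwhr(left_zygion, right_zygion, upper_lip, brow):
        """Facial width-to-height ratio from (x, y) landmark coordinates in pixels."""
        width = abs(right_zygion[0] - left_zygion[0])   # bizygomatic width
        height = abs(brow[1] - upper_lip[1])            # upper-face height
        return width / height

    # Toy example: width 140 px, height 100 px -> fWHR = 1.4
    print(fwhr(left_zygion=(30, 200), right_zygion=(170, 200),
               upper_lip=(100, 260), brow=(100, 160)))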

Reference

Saribay, S. A., Biten, A. F., Meral, E. O., Aldan, P., Třebický, V., & Kleisner, K. (2018). The Bogazici face database: Standardized photographs of Turkish faces with supporting materials. PLoS ONE, 13(2): e0192018. doi:10.1371/journal.pone.0192018

Conditions

Data are available from Bogazici University, Department of Psychology for researchers who meet the criteria for access to confidential data. As stated within the paper, an e-mail request must be sent to psy@boun.edu.tr and an agreement form must be signed, scanned, and sent back, upon which data are electronically transferred.

Email sent; it took them quite a long time to answer (15 days), but the outcome was good.

RADIATE Emotional Face Stimulus Set

http://fablab.yale.edu/page/assays-tools

The racially diverse affective expression (RADIATE) face stimulus set is designed to provide an open-access set of 1,721 facial expressions of Black, White, Hispanic and Asian adult models. Psychometric results are provided describing the initial validity and reliability of the stimuli based on judgments of the emotional expressions.

There are 3 links to download from:

Reference

The paper and the reference:

Conley, M. I., Dellarco, D. V., Rubien-Thomas, E., Cohen, A. O., Cervera, A., Tottenham, N., & Casey, B. J. (2018). The racially diverse affective expression (RADIATE) face stimulus set. Psychiatry research.

They also ask users to cite the NimStim paper (it is not clear why): Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., Marcus, D. J., Westerlund, A., Casey, B. J., & Nelson, C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168, 242-249.

Conditions

By downloading these stimuli, I agree to use them solely for approved institutional research or educational purposes and to not use them in any way to deliberately or inadvertently identify the individuals in the pictures. When referring to these stimuli in publications or talks, please cite the following references (see above).

FEI Face Database

https://fei.edu.br/~cet/facedatabase.html

The FEI face database is a Brazilian face database containing face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil. There are 14 images for each of 200 individuals, a total of 2800 images. All images are in colour and taken against a white homogeneous background in an upright frontal position, with profile rotation of up to about 180 degrees. Scale may vary by about 10%, and the original size of each image is 640x480 pixels. The faces are mainly those of students and staff at FEI, between 19 and 40 years old, with distinct appearance, hairstyle, and adornments. The numbers of male and female subjects are equal (100 each).

Reference

Probably this, but I'm unsure: Thomaz, C.E.; Giraldi, G.A. A new ranking method for Principal Components Analysis and its application to face image analysis. Image Vision Comput. 2010, 28, 902–913.

It is not the database paper per se, but it is cited on the database page.

Conditions

The FEI face database is available for research purposes only. Permission to use but not distribute or reproduce this database is granted to all researchers.

Wits database, WFD -- African database (not used, not downloaded, not contacted)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7986986/

sample http://wiredspace.wits.ac.za/handle/10539/29924

This paper aims to review the available databases and describe the development of a high resolution, standardised facial photograph and CCTV recording database of male Africans. The database is composed of a total of 6220 standardised and uncontrolled suboptimal facial photographs of 622 matching individuals in five different views, as well as corresponding CCTV footage of 334 individuals recorded under different realistic conditions.

Reference

Bacci N, Davimes J, Steyn M and Briers N. Development of the Wits Face Database: an African database of high-resolution facial photographs and multimodal closed-circuit television (CCTV) recordings [version 1; peer review: 2 approved]. F1000Research 2021, 10:131 (https://doi.org/10.12688/f1000research.50887.1).

Conditions

The WFD is stored on the Wits Institutional Repository environment on DSpace (WIReDSpace) and published under the following unique identifier: http://doi.org/10.17605/OSF.IO/WMA4C (this registration also contains the PhD study protocol that led to the development of the WFD and an addendum to the protocol registration highlighting the major changes to the methodological approach of the original protocol). A sample of the dataset is freely accessible at https://hdl.handle.net/10539/29924.

The database is an open access resource for use in strictly non-commercial research. In order to access the WFD, prospective users will have to apply for access to the Institutional Review Board overseeing ethical and scientific use of the database in order to safeguard the privacy and decency of the database’s participants. Once approved, a researcher may use the database free of charge. Database access is restricted and limited to following the above-mentioned procedure, due to the nature of the data including potentially identifying information (facial physiognomy) of participants. In addition, strict limitations were imposed by the Human Research Ethics Committee (Medical) as well as the consent permissions agreed upon with the participants, which assign responsibility to the School of Anatomical Sciences to review access applications on ethical and scientific merit in order to exclusively conduct research. The access procedures and limitations are governed by a legally binding Conditions of Use document available at https://hdl.handle.net/10539/29924 in conjunction with the freely accessible sample. Data will be made available to successful applicants under a temporary restricted licence guided by the aforementioned conditions of use document.

Wild Faces Database, WFD (not used, not downloaded)

https://doi.org/10.1038/s41598-023-32659-5

sample https://osf.io/6p4r7/

Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as ‘happy’ and ‘angry’). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and real world in studies of expression recognition.

Reference

Long, H., Peluso, N., Baker, C.I. et al. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 13, 5383 (2023). https://doi.org/10.1038/s41598-023-32659-5

Conditions

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Mac Brain (not working)

https://www.macbrain.org/resources.htm

Not downloaded, contacted but with no response.

Conditions

By downloading the set, you agree to use the images for research purposes only and agree to the following terms: only models #01, 03, 18, 28, 40, & 45 may be published, and these may only be published in scientific journals. Researchers may not publish images from the remaining models anywhere.

Radboud Faces Database (not working)

https://www.ru.nl/bsi/research/facilities/radboud-faces-database/

The database is validated in the paper by Langner et al. (2010), but I am not able to download it from www.rafd.nl nor from www.socsci.ru.nl › RaFD2 › RaFD (the pages do not load).

Reference

Oliver Langner, Ron Dotsch, Gijsbert Bijlstra, Daniel H. J. Wigboldus, Skyler T. Hawk & Ad van Knippenberg (2010) Presentation and validation of the Radboud Faces Database, Cognition and Emotion, 24:8, 1377-1388, DOI: 10.1080/02699930903485076


Emotions

FACES -- A database of facial expressions in younger, middle-aged, and older women and men

https://faces.mpdl.mpg.de/imeji/

FACES is a set of images of naturalistic faces of 171 young (n = 58), middle-aged (n = 56), and older (n = 57) women and men displaying each of six facial expressions: neutrality, sadness, disgust, fear, anger, and happiness. The FACES database was developed between 2005 and 2007 by Natalie C. Ebner, Michaela Riediger, and Ulman Lindenberger at the Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany.

The database comprises two sets of pictures per person and per facial expression (a vs. b set), resulting in a total of 2,052 images. A subset of 72 pictures is available without having to register for a personal account. Research-related publication and display of the publicly available FACES objects are permitted for the purpose of illustrating research methodology. However, if you plan to use the corresponding objects for such purposes, please register for FACES and send the FACES Platform Release Agreement (with a short description of how you plan to use the publicly available objects) to the FACES Technical Agent.

For detailed information about the development and validation of the database see Ebner, N., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior research Methods, 42, 351-362. doi:10.3758/BRM.42.1.351. Please always refer to this publication whenever you use objects from the FACES platform.

After development of the face stimuli, in a subsequent validation study, a total of 154 young (n = 52), middle-aged (n = 51), and older (n = 51) women and men rated the faces in terms of facial expression, perceived age, attractiveness, and distinctiveness. These picture-specific normative ratings can be downloaded here:

rating results for facial expression
rating results for perceived age
rating results for attractiveness
rating results for distinctiveness

The first two dimensions (facial expression and perceived age) are described in Ebner, N., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior research Methods, 42, 351-362. doi:10.3758/BRM.42.1.351. More detailed descriptions for the latter two dimensions are available in Ebner, N. C., Luedicke, J., Voelkle, M. C., Riediger, M., Lin, T., & Lindenberger, U. (2018). An adult developmental approach to perceived facial attractiveness and distinctiveness. Frontiers in Psychology, 9:561. doi:10.3389/fpsyg.2018.00561.

Dynamic FACES

Dynamic FACES is an extension of the original FACES database. It is a database of morphed videos (n = 1,026) of young, middle-aged, and older adults displaying six naturalistic emotional facial expressions including neutrality, sadness, disgust, fear, anger, and happiness. Static images used for morphing came from the original FACES database. Videos were created by transitioning from a static neutral image to a target emotion. Videos are available in 384 x 480 pixels as .mp4 files or in the original size of 1280 x 1600 as .mov files.

For further information about Dynamic FACES see:

Holland, C. A. C., Ebner, N. C., Lin, T., & Samanez-Larkin, G. R. (2019). Emotion identification across adulthood using the Dynamic FACES database of emotional expressions in younger, middle aged, and older adults. Cognition and Emotion, 33, 245-257. doi:10.1080/02699931.2018.1445981.

Scrambled FACES

All 2,052 images from the original FACES database were scrambled using MATLAB. With the randblock function, original FACES files were treated as 800x1000x3 matrices – the third dimension denoting specific RGB values – and partitioned into non-overlapping 2x2x3 blocks. The matrices were then randomly shuffled by these smaller blocks, providing final images that matched the dimensions of the original image and were composed of the same individual pixels, although arranged differently. All scrambled images are 800x1000 jpeg files (96 dpi).
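
A minimal numpy sketch of the block-scrambling idea described above (not the original MATLAB/randblock code): split the image into non-overlapping 2x2 blocks, shuffle the blocks, and reassemble an image with the original dimensions.

    import numpy as np

    def scramble_blocks(img, block=2, seed=None):
        """Shuffle non-overlapping block x block patches of an H x W x 3 image."""
        h, w, c = img.shape
        assert h % block == 0 and w % block == 0, "image size must be divisible by block"
        # Cut the image into a stack of (block, block, c) patches
        patches = (img.reshape(h // block, block, w // block, block, c)
                      .transpose(0, 2, 1, 3, 4)
                      .reshape(-1, block, block, c))
        np.random.default_rng(seed).shuffle(patches)   # shuffle along the patch axis
        # Reassemble the shuffled patches into an image of the original size
        return (patches.reshape(h // block, w // block, block, block, c)
                       .transpose(0, 2, 1, 3, 4)
                       .reshape(h, w, c))

    # Example with a random stand-in image of the FACES dimensions (800x1000x3)
    img = np.random.randint(0, 256, (800, 1000, 3), dtype=np.uint8)
    scrambled = scramble_blocks(img, block=2, seed=0)
    assert scrambled.shape == img.shape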

Reference

There are many references on their webpage; this is the main one, and I believe we do not need the rest (some are about age, the dynamic database, ...). https://link.springer.com/article/10.3758/BRM.42.1.351

Ebner, N., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior research Methods, 42, 351-362. doi:10.3758/BRM.42.1.351

Conditions

FACES is freely available for usage in scientific research.

Without a user account, only the pictures of six exemplary persons (72 pictures) can be viewed. Full access to this online service and all its objects is possible after registration and log in.

Researchers can apply for an account on a case-by-case (i.e., person-by-person and study-by-study) basis.

-> Applied, took ~ 15 days to get an answer after sending them the agreement and a mail to Natalie.

NimStim

https://danlab.psychology.columbia.edu/content/nimstim-set-facial-expressions

Paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3474329/

This set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented. The results lend empirical support for the validity and reliability of this set of facial expressions as determined by accurate identification of expressions and high intra-participant agreement across two testing sessions, respectively.

Info: The dataset comprises 672 naturally posed photographs of 43 professional actors (18 female, 25 male) aged 21 to 30 years. Actors from a diverse sample were chosen to portray emotional expressions: African-American (N = 10), Asian-American (N = 6), European-American (N = 25), and Latino-American (N = 2). The images include eight emotional expressions, namely: neutral, angry, disgust, surprise, sad, calm, happy, and afraid. Both open- and closed-mouth versions are provided for all expressions, with the exception of surprise (only open mouth provided) and happy (an additional high-arousal open mouth/exuberant version provided). The face stimuli can be accessed for free at: http://www.macbrain.org/resources.htm.

Checked with them, not possible to use online at all!

Reference

N. Tottenham, J.W. Tanaka, A.C. Leon, et al., The NimStim set of facial expressions: judgments from untrained research participants, Psychiatry Res, 168 (2009), pp. 242-249

Conditions

They list the following conditions for download, which might be problematic:

a) research institution where the laboratory resides (including address and phone number associated with the institution)

b) all of the names and emails of the Principal Investigators (i.e. laboratory heads)

c) a description of the research study

d) confirm that the data will be collected in a laboratory (not online, in a classroom, or other public medium)

Padova Emotional Dataset of Facial Expressions (PEDFE)

A unique dataset of genuine and posed emotional facial expressions

Paper: https://doi.org/10.3758/s13428-022-01914-4

This dataset tries to fill this gap, providing a considerable amount (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants’ body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset has been validated by 122 human raters, while the modified dataset has been validated by 280 human raters. Hit rates for emotion and genuineness, as well as the mean, standard deviation of genuineness, and intensity perception, are provided for each clip to allow future users to select the most appropriate clips needed to answer their scientific questions.

Reference

Miolla, A., Cardaioli, M. & Scarpazza, C. Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behav Res (2022). https://doi.org/10.3758/s13428-022-01914-4

Conditions

The PEDFE stimuli as well as the Supplementary Materials are made freely available to the research community (https://osf.io/cynsx/). An End User License Agreement (EULA) needs to be produced for accessing the database.

Paid databases

Pictures of Facial Affect (POFA), The Ekman 60 Faces Test

https://www.paulekman.com/product/pictures-of-facial-affect-pofa/

Costs $399. POFA has been the most widely used and validated series of photographs in facial expression research. The Ekman 60 Faces Test can be used to assess recognition of facial expressions of basic emotions. The maximum test score, indicating best performance, is 60 across all six emotions (happiness, sadness, disgust, fear, surprise and anger) and 10 for each basic emotion.

The POFA collection consists of 110 photographs of facial expressions that have been widely used in cross-cultural studies, and more recently, in neuropsychological research. All images are black and white. A brochure providing norms is included with the collection. It is important to note that these images are not identical in intensity or facial configuration.

Reference

P. Ekman, W. Friesen, Pictures of facial affect, Consulting Psychologists Press, Palo Alto, CA (1976)

Conditions

Permission is given to the person who has made the purchase, who may use these images solely for the research project originally intended. Permission must be requested separately to reprint the images within the paper associated with this project.

Any subsequent project(s) or paper(s) will require you to seek new permission.


Audiovisual databases

The Sabancı University Dynamic Face Database (SUDFace)

The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total number of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.

Reference

https://link.springer.com/article/10.3758/s13428-022-01951-z

Şentürk, Y.D., Tavacioglu, E.E., Duymaz, İ. et al. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res (2022). https://doi.org/10.3758/s13428-022-01951-z

Conditions

Freely accessible for research purposes:

The SUDFace database generated in the development study can be accessed in the “Sabancı University Dynamic Face Database” folder from drive.google.com/drive/folders/1xzxLbza4qiI3XkkHIPycAKyypFspu_vb?usp=sharing. Also, datasets generated during and analyzed during the validation study are available in the “Dataset” folder, https://osf.io/b4vju/?view_only=1dd006d0d4504a2982d31968d9c360f6.


Databases with artificially created faces

One Million Impressions (OMI)

The diversity of human faces and the contexts in which they appear gives rise to an expansive stimulus space over which people infer psychological traits (e.g., trustworthiness or alertness) and other attributes (e.g., age or adiposity). Machine learning methods, in particular deep neural networks, provide expressive feature representations of face stimuli, but the correspondence between these representations and various human attribute inferences is difficult to determine because the former are high-dimensional vectors produced via black-box optimization algorithms. Here we combine deep generative image models with over 1 million judgments to model inferences of more than 30 attributes over a comprehensive latent face space. The predictive accuracy of our model approaches human interrater reliability, which simulations suggest would not have been possible with fewer faces, fewer judgments, or lower-dimensional feature representations. Our model can be used to predict and manipulate inferences with respect to arbitrary face photographs or to generate synthetic photorealistic face stimuli that evoke impressions tuned along the modeled attributes.
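
A conceptual sketch of the modeling idea, not the authors' code: fit a simple linear predictor from a generative model's latent face vectors to mean attribute ratings, then move a latent vector along the learned attribute direction to manipulate the impression. Synthetic numbers stand in for the real latents and judgments.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_faces, latent_dim = 1000, 512                      # StyleGAN2-like latent size (assumed)
    Z = rng.normal(size=(n_faces, latent_dim))           # latent vectors (synthetic stand-in)
    w_true = rng.normal(size=latent_dim)
    ratings = Z @ w_true + rng.normal(scale=5.0, size=n_faces)  # e.g. mean "trustworthiness"

    Z_tr, Z_te, y_tr, y_te = train_test_split(Z, ratings, random_state=0)
    model = Ridge(alpha=10.0).fit(Z_tr, y_tr)
    print("held-out R^2:", model.score(Z_te, y_te))

    # Manipulating an impression then amounts to moving a latent vector along the
    # learned attribute direction before rendering it with the generator.
    direction = model.coef_ / np.linalg.norm(model.coef_)
    z_more_of_attribute = Z[0] + 2.0 * direction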

Reference

https://www.pnas.org/doi/abs/10.1073/pnas.2115228119

Github: https://github.com/jcpeterson/omi

Peterson, J. C., Uddenberg, S., Griffiths, T. L., Todorov, A., & Suchow, J. W. (2022). Deep models of superficial face judgments. Proceedings of the National Academy of Sciences (PNAS), 119(17), e2115228119. doi:10.1073/pnas.2115228119

Conditions

The data is distributed under the Creative Commons BY-NC-SA 4.0 license. If you intend to use it, please see LICENSE.txt for more information:

The One Million Impressions (OMI) face dataset is a dataset of both
synthetic human face stimuli and corresponding attribute ratings
from a large-scale behavioral experiment corresponding to the
following publication:
  
    Peterson, J. C., Uddenberg, S., Griffiths, T., Todorov, A., & Suchow,
    J. W. (2022). Deep models of superficial face judgments. Proceedings 
    of the National Academy of Sciences (PNAS).
    
It is the joint creation of the authors listed above.

The OMI face dataset (including any metadata, scripts, and
documentation) is made available under Creative Commons BY-NC-SA 4.0 license.
You can use, redistribute, and adapt it for non-commercial purposes, 
as long as you (a) give appropriate credit by citing our paper, 
(b) indicate any changes that you've made, and
(c) distribute any derivative works under the same license.

    https://creativecommons.org/licenses/by-nc-sa/4.0/
    
The face dataset used to create the underlying models that in
turn were used to generate our synthetic faces can be found at:

https://github.com/NVlabs/ffhq-dataset
https://github.com/NVlabs/stylegan2

Tools to create artificial faces

Makehuman and FaReT

The reference for the first is this website: http://www.makehumancommunity.org/. The second is a paper and a package.

Face Research Toolkit: A free and open-source toolkit of three-dimensional models and software to study face perception.

A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a “morphing” sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D “anti-faces” and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features, and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
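
A toy illustration of the interpolation/extrapolation idea behind points (1) and (2), not FaReT's actual API: morph sequences come from linear interpolation between two model parameter vectors, while caricatures and anti-faces come from extrapolating beyond them.

    import numpy as np

    def interpolate_models(params_a, params_b, n_steps=10, extrapolate=0.0):
        """Sequence of parameter vectors from A to B (extended past both ends if extrapolate > 0)."""
        weights = np.linspace(0.0 - extrapolate, 1.0 + extrapolate, n_steps)
        return [(1 - w) * params_a + w * params_b for w in weights]

    # Toy "shape parameter" vectors standing in for two identity models
    identity_a = np.random.default_rng(1).uniform(-1, 1, size=50)
    identity_b = np.random.default_rng(2).uniform(-1, 1, size=50)
    morph_sequence = interpolate_models(identity_a, identity_b, n_steps=20)
    print(len(morph_sequence), morph_sequence[0].shape)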

Reference

https://link.springer.com/article/10.3758/s13428-020-01421-4 or https://osf.io/preprints/psyarxiv/jb53v (preprint)

Github: https://github.com/fsotoc/FaReT

Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622.
