The question of whether animals have emotions and respond to the emotional expressions of others has become a focus of research in the last decade [1-9]. However, to date, no study has convincingly shown that animals discriminate between emotional expressions of heterospecifics, excluding the possibility that they respond to simple cues. Here, we show that dogs use the emotion of a heterospecific as a discriminative cue. After learning to discriminate between happy and angry human faces in 15 picture pairs, whereby for one group only the upper halves of the faces were shown and for the other group only the lower halves of the faces were shown, dogs were tested with four types of probe trials: (1) the same half of the faces as in the training but of novel faces, (2) the other half of the faces used in training, (3) the other half of novel faces, and (4) the left half of the faces used in training. We found that dogs for which the happy faces were rewarded learned the discrimination more quickly than dogs for which the angry faces were rewarded. This would be predicted if the dogs recognized an angry face as an aversive stimulus. Furthermore, the dogs performed significantly above chance level in all four probe conditions and thus transferred the training contingency to novel stimuli that shared with the training set only the emotional expression as a distinguishing feature. We conclude that the dogs used their memories of real emotional human faces to accomplish the discrimination task.
Abstract: Human facial emotion detection is a challenging task in computer vision. Owing to high inter-class variance, it is hard for machine learning models to predict facial emotions accurately, and the fact that a single person can display several facial emotions adds to the diversity and complexity of the classification problem. In this paper, we propose a novel and intelligent approach to the classification of human facial emotions: a customized ResNet18, adapted via transfer learning and trained with a triplet loss function (TLF), followed by an SVM classification model. The proposed pipeline consists of a face detector, which locates and refines the face bounding box, and a classifier, which identifies the facial expression class of each detected face. RetinaFace extracts the detected face regions from the source image, the ResNet18 model trained on the cropped face images with triplet loss produces deep features, and an SVM classifier categorizes the facial expression based on those features. The proposed method achieves better performance than state-of-the-art (SoTA) methods on the JAFFE and MMI datasets, with accuracies of 98.44% and 99.02%, respectively, on seven emotions; its performance on the FER2013 and AFFECTNET datasets still needs to be fine-tuned. Keywords: emotion classification; SVM; triplet loss; transfer learning; ResNet18
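The two-stage idea described in the abstract — metric-learned deep features followed by an SVM — can be sketched in miniature. The snippet below is an illustration only: it computes the standard triplet loss on toy embedding vectors (stand-ins for the features a trained ResNet18 would emit; no actual network or RetinaFace detector is involved) and then fits an SVM on those embeddings, mirroring the paper's second stage.

```python
import numpy as np
from sklearn.svm import SVC

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull same-class embeddings together,
    push different-class embeddings at least `margin` further apart."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy stand-ins for deep embeddings of two expression classes.
rng = np.random.default_rng(0)
happy = rng.normal(0.0, 0.1, size=(20, 8))   # cluster around 0
angry = rng.normal(1.0, 0.1, size=(20, 8))   # cluster around 1

# Well-separated clusters already satisfy the margin, so the loss is zero.
loss = triplet_loss(happy[0], happy[1], angry[0])

# Stage 2: an SVM classifies expressions from the embeddings.
X = np.vstack([happy, angry])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(loss, clf.score(X, y))
```

In the actual pipeline the embeddings would come from the penultimate layer of the fine-tuned ResNet18; the SVM stage is unchanged in spirit.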
This study investigated the behavioral and neural indices of dogs' detection of familiarity and emotion in human faces. Awake canine fMRI was used to evaluate dogs' neural responses to pictures and videos of familiar and unfamiliar human faces containing positive, neutral, and negative emotional expressions. The dog-human relationship was characterized behaviorally, out of the scanner, using an unsolvable task. The caudate, hippocampus, and amygdala, implicated mainly in reward, familiarity, and emotion processing, respectively, were activated in dogs viewing familiar and emotionally salient human faces. Further, the magnitude of activation in these regions correlated with the duration for which dogs showed human-oriented behavior towards a familiar (as opposed to unfamiliar) person in the unsolvable task. These findings provide a bio-behavioral basis for the underlying markers and functions of human-dog interaction as they relate to familiarity and emotion in human faces.
Recent literature shows that dogs process human faces similarly to humans. They are able to discriminate familiar human faces using the global visual information of both the face and the head (Huber, Racca, Scaf, Virányi, & Range, 2013), scanning all the facial features systematically (e.g. eyes, nose and mouth; Somppi et al., 2016) and relying on configural elaboration (Pitteri, Mongillo, Carnier, Marinelli, & Huber, 2014). Moreover, dogs, like humans, focus their attention mainly on the eye region, showing face identification impairments when it is masked (Pitteri et al., 2014; Somppi et al., 2016). Interestingly, their gaze pattern across the informative regions of faces varies according to the emotion expressed. Dogs tend to look more at the forehead region of positive emotional expressions and at the mouth and eyes of negative facial expressions (Barber, Randi, Müller, & Huber, 2016), but they avert their gaze from angry eyes (Somppi et al., 2016). The attentional bias shown toward the informative regions of human emotional faces therefore suggests that dogs use facial cues to encode human emotions. Furthermore, in exploring human faces (but not conspecific ones), dogs, like humans, rely more on information contained in their left visual field (Barber et al., 2016; Guo, Meints, Hall, Hall, & Mills, 2009; Ley & Bryden, 1979; Racca, Guo, Meints, & Mills, 2012). Although nominally symmetric, the two sides of human faces differ in emotional expressivity. Previous studies employing mirrored chimeric pictures (i.e. composite pictures made up of the normal and mirror-reversed hemiface images, obtained by splitting the face down the midline) and 3-D rotated pictures of faces reported that people perceive the left hemiface as displaying stronger emotions than the right one (Lindell, 2013; Nicholls, Ellis, Clement, & Yoshino, 2004), especially for negative emotions (Borod, Haywood, & Koff, 1997; Nicholls et al., 2004; Ulrich, 1993).
Considering that the muscles of the left side of the face are mainly controlled by the contralateral hemisphere, such a difference in displayed emotional intensity suggests a dominant role of the right hemisphere in expressing emotions (Dimberg & Petterson, 2000). Moreover, in humans, the right hemisphere also plays a crucial role in the processing of emotions, since individuals with right-hemisphere lesions showed impairments in their ability to recognize others' emotions (Bowers, Bauer, Coslett, & Heilman, 1985). A right-hemispheric asymmetry in processing human faces has also been found in dogs, which showed a left gaze bias when attending to neutral human faces (Barber et al., 2016; Guo et al., 2009; Racca et al., 2012). Nevertheless, the results on dogs' looking bias for emotional faces are inconsistent. Whilst a left gaze bias was shown in response to all human faces regardless of the emotion expressed (Barber et al., 2016), Racca et al. (2012) observed this preference only for neutral and negative emotions, but not for positive ones. Thus, the possibility that such a preference depends on the valence of the emotion conveyed, and subsequently perceived, cannot be excluded. Furthermore, it remains unclear whether dogs understand the emotional message conveyed by human facial expressions and which significance and valence they attribute to it.
All the facial emotional expressions were captured using a full HD digital camera (Sony Alpha 7 II ILCE-7M2K) positioned on a tripod and placed centrally in front of the subject at a distance of about 2 m. Before being portrayed, subjects were informed about the aim of the study and the procedure to be followed. They had to avoid make-up (except mascara) and to take off glasses, piercings, and earrings that dogs could use as cues to discriminate the different expressions. Furthermore, an experimenter showed them a picture of the emotional facial expressions used by Schmidt and Cohn (2001) as a general reference for the expressive characteristics required. Subjects were then asked, upon oral command, to pose the different emotional facial expressions with the greatest possible intensity. The order of the oral commands was randomized.
All 56 visual stimuli (two pictures × seven emotions × four subjects) were then presented to four women and four men, between 23 and 62 years of age, in order to select the most significant ones. The pictures were shown as a PowerPoint slideshow in full-screen mode on a monitor (Asus VG248QE), in an order randomized between subjects. Each volunteer sat in front of the screen and had to rate, on a 6-point scale (ranging between 0 and 5), the intensity of neutrality, happiness, disgust, fear, anger, surprise, and sadness perceived for each facial expression shown. According to the questionnaire results, the pictures of one man and one woman were selected for the final test (see Fig. 1).
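The selection step described above — keeping, per emotion, the picture that raters perceived as most intense — can be sketched as follows. The ratings here are invented for illustration; they are not the study's data, and the model names are hypothetical.

```python
# Hypothetical ratings: eight raters score each picture's target emotion
# on the study's 0-5 intensity scale. Keys are (model, emotion).
ratings = {
    ("model_A", "happiness"): [5, 4, 5, 5, 4, 5, 5, 4],
    ("model_B", "happiness"): [3, 2, 4, 3, 3, 2, 3, 3],
}

# For each emotion, keep the picture with the highest mean perceived intensity.
best = {}
for (model, emotion), scores in ratings.items():
    mean = sum(scores) / len(scores)
    if emotion not in best or mean > best[emotion][1]:
        best[emotion] = (model, mean)

print(best["happiness"])  # the higher-rated of the two candidate pictures
```

In the study, the same kind of aggregation over the eight volunteers' questionnaire scores would be run across all 56 stimuli to pick the final male and female pictures.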