We present a multimodal dataset for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute-long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimulus selection was used, combining retrieval by affective tags from the
last.fm website, video highlight detection and an online assessment tool.
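As an illustration of how per-participant recordings of this kind can be organised and sliced, here is a minimal sketch using synthetic data. The shapes, field names and the 128 Hz sampling rate are illustrative assumptions loosely modelled on the description above, not the official file format:

```python
import numpy as np

# Hypothetical per-participant structure (shapes are illustrative):
#   data:   trials x channels x samples
#   labels: trials x ratings (valence, arousal, dominance, liking)
rng = np.random.default_rng(0)
n_trials, n_channels, fs, seconds = 40, 40, 128, 63
participant = {
    "data": rng.standard_normal((n_trials, n_channels, fs * seconds)),
    "labels": rng.uniform(1, 9, size=(n_trials, 4)),  # 1-9 rating scale
}

# Binarise valence around the scale midpoint for a two-class task
valence = participant["labels"][:, 0]
high_valence = valence > 5.0

# Slice one trial's signals and compute a crude per-channel power estimate
trial0 = participant["data"][0]           # channels x samples
band_power = (trial0 ** 2).mean(axis=1)

print(participant["data"].shape, band_power.shape)
```

Real analyses would of course load the released files and apply proper preprocessing; this only shows the trial/channel/rating indexing pattern.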
If you are interested in using this dataset, you will have to print, sign and scan an EULA (End User License Agreement) and upload it via the dataset request form. We will then supply you with a username and password to download the data. Please head on over to the downloads page for more details.
First and foremost we'd like to thank the 32 participants in this study for having the patience and goodwill to let us record their data.
This dataset was collected by a crack squad of dedicated researchers.
AMIGOS: A database for research on affect, personality traits and mood by means of neuro-physiological signals. We elicited reactions using both short and long videos in two configurations, one with a single viewer at a time and one with viewers in groups of four.
Zhiyi Cheng, Xiatian Zhu and Shaogang Gong, Computer Vision Group, School of Electronic Engineering and Computer Science, Queen Mary University of London.

To facilitate further studies on face recognition methods that are effective and robust against low-resolution surveillance facial images, a new Surveillance Face Recognition challenge, QMUL-SurvFace, is introduced. To the best of our knowledge, this is the largest and, more importantly, the only true surveillance face recognition benchmark, in which the low-resolution face images are native rather than synthesised by artificially down-sampling high-resolution images. The challenge contains 463,507 face images of 15,573 distinct identities captured in real-world uncooperative surveillance scenes across wide space and time. Face recognition is generally more difficult in an open-set setting, which is typical of surveillance person-search scenarios, owing to the arbitrarily large number of non-target people (distractors) appearing over open space and unconstrained time.
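To see why the open-set protocol is harder, here is a minimal sketch on synthetic embeddings (the similarity threshold and noise level are hypothetical, not the benchmark's actual protocol): a probe is matched to the gallery only if its best similarity clears a rejection threshold, so distractors that happen to score highly produce false alarms.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Synthetic gallery of 50 identities; 30 enrolled probes, 30 distractors
dim = 64
gallery = l2_normalise(rng.standard_normal((50, dim)))
enrolled_ids = rng.integers(0, 50, size=30)
enrolled_probes = l2_normalise(
    gallery[enrolled_ids] + 0.05 * rng.standard_normal((30, dim)))
distractors = l2_normalise(rng.standard_normal((30, dim)))

def identify(probes, gallery, threshold):
    """Return best-match gallery index, or -1 (reject) if below threshold."""
    sims = probes @ gallery.T                 # cosine similarity
    best = sims.argmax(axis=1)
    accept = sims.max(axis=1) >= threshold
    return np.where(accept, best, -1)

threshold = 0.5  # hypothetical operating point
pred_enrolled = identify(enrolled_probes, gallery, threshold)
pred_distract = identify(distractors, gallery, threshold)

tpir = (pred_enrolled == enrolled_ids).mean()  # true-positive identifications
fpir = (pred_distract != -1).mean()            # false alarms on distractors
print(f"TPIR={tpir:.2f}  FPIR={fpir:.2f}")
```

Sweeping the threshold trades correct identifications against false alarms, which is exactly the tension the distractor population creates in open-set surveillance search.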
Notice that the QMUL-SurvFace challenge is made available for research purposes. All the images were collected from the existing person re-identification datasets, and the copyright belongs to the original owners.
We present a database for research on affect, personality traits and mood by means of neuro-physiological signals. Unlike other databases, we elicited affect using both short and long videos in two configurations, one with individual viewers and one with groups of viewers. The database allows the multimodal study of the affective responses of individuals in relation to their personality and mood, and the analysis of how these responses are affected by (i) the individual/group configuration and (ii) the duration of the videos (short vs. long).

The data was collected in two experimental settings. In the first, 40 participants watched 16 short emotional videos while alone. In the second, the same participants watched 4 long videos, some alone and the rest in groups. In both settings, the participants' signals, namely electroencephalogram (EEG), electrocardiogram (ECG) and galvanic skin response (GSR), were recorded using wearable sensors. Frontal, full-body and depth videos were also recorded. Participants were profiled for personality using the Big Five personality traits, and for mood using the baseline Positive and Negative Affect Schedules. Participants' emotions were annotated with both self-assessments of the affective levels (valence, arousal, control, familiarity, like/dislike, and selection of basic emotion) felt during the first experiment, and external assessments of the participants' levels of valence and arousal for both experiments. We present a detailed correlation analysis that includes correlations between self-assessment and external assessment of affect, between the valence and arousal elicited by short and long videos on individuals and groups, and between personality, mood, social context and affect dimensions.
We also present baseline methods and results for single-trial classification of valence and arousal, and for single-trial classification of personality traits, mood and social context (alone vs. group), using EEG, GSR and ECG signals and the fusion of these modalities, for both experiments.
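To make the single-trial setup concrete, here is a minimal sketch of leave-one-trial-out classification on synthetic per-trial features. The feature dimensions, class-difference injection and nearest-centroid classifier are illustrative assumptions, not the database's actual baseline method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for per-trial physiological features
# (e.g. band powers): 16 trials x 10 features, binary valence labels.
X = rng.standard_normal((16, 10))
y = rng.integers(0, 2, size=16)
X[y == 1] += 0.8  # inject a weak class difference

def nearest_centroid_loo(X, y):
    """Leave-one-trial-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_loo(X, y)
print(f"LOO accuracy: {acc:.2f}")
```

Leave-one-trial-out evaluation is the natural choice here because each participant contributes only a handful of trials per condition.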
DEAP: A multimodal dataset for the analysis of human affective states.
Zhiyi Cheng, Xiatian Zhu and Shaogang Gong, Computer Vision Group, School of Electronic Engineering and Computer Science, Queen Mary University of London.

We create a large-scale face recognition benchmark, named TinyFace, to facilitate the investigation of native low-resolution face recognition (LRFR) at large scale (large gallery population sizes) in deep learning. The TinyFace dataset consists of 5,139 labelled facial identities given by 169,403 native low-resolution face images (20×16 pixels on average), designed for 1:N recognition testing. All the low-resolution faces in TinyFace were collected from public web data across a large variety of imaging scenarios, captured under uncontrolled viewing conditions in pose, illumination, occlusion and background. Going beyond the artificial down-sampling of high-resolution face images used in previous LRFR evaluations, this is, to the best of our knowledge, the first systematic study focusing specifically on face recognition of native low-resolution images.
RNA-seq data (BAM files) from the hypothalamus of 4 individuals with Prader-Willi syndrome and 4 age-matched control individuals. Detailed information about the study design, case-control matching and RNA-seq data processing is provided in the accompanying publication [Bochukova et al. (2018) Cell Reports].
Studies are experimental investigations of a particular phenomenon, e.g., case-control studies on a particular trait or cancer research projects reporting matching cancer normal genomes from patients.
This table displays only public information pertaining to the files in the dataset. If you wish to access this dataset, please submit a request. If you already have access to these data files, please consult the download documentation.
What is social data science? Social data science is concerned with the use of a wide range of computational and data science methods to analyse social phenomena through (often large-scale) datasets. It deals with the analysis of data generated by users in collaborative platforms such as social media or through devices that have direct interaction with the real world, such as the internet of things and mobile devices, giving insights into human behaviour as well as about the real world. Research in the field has a strong link to the humanities and social sciences, insofar as it is informed by scientific theory from fields including but not limited to sociology, psychology, linguistics, ethics, privacy and journalism.
Compression tools can significantly reduce the amount of disk space consumed by your data. In this article, we will look at the effectiveness of some compression tools on real-world data sets, make some recommendations, and perhaps persuade you that compression is worth the effort.
Lossless compression of files is a great way to save space, and therefore money, on storage costs. Not all compression tools are equal, and your experience will vary depending on which of the wide range of available compression tools you use. There is also a historical perception that compression and decompression are slow and time-consuming, introducing unnecessary delays into the workflow.
Other tools considered include lrzip and lz4. Some of these tools offer multi-threaded options, which have stunning results in conjunction with the QMUL HPC Cluster, where typically the Research Data resides.
The Human Genome reference file contains only the characters g, a, c, t, G, A, C, T and N, and is an example of a genome reference file commonly used in bioinformatics. Effective compression for bioinformatics is important due to the frequent use of large data files. The scope of this article is to investigate the efficacy of a range of generalist tools on a variety of datasets, while bearing in mind that other specialist tools may produce better compression for a narrow range of data types. For example, GeCo compressed the human genome file to around half the size achieved by the general tools. However, these specialist tools carry risk: some of the projects are abandoned, some produce proprietary formats, and some don't necessarily decompress to a file identical to the original! It's better to stick to popular open-source tools under active development.
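A miniature version of this comparison can be run with Python's standard-library codecs (gzip, bzip2 and LZMA/xz) on a synthetic genome-like file; the sequence here is random, so absolute ratios on real reference data will differ:

```python
import bz2
import gzip
import lzma
import random

# Synthetic genome-like data over the same small alphabet as the reference file
random.seed(0)
data = "".join(random.choice("gactGACTN") for _ in range(200_000)).encode()

compressors = {
    "gzip":  lambda b: gzip.compress(b, compresslevel=9),
    "bzip2": bz2.compress,
    "xz":    lzma.compress,
}

sizes = {name: len(fn(data)) for name, fn in compressors.items()}
for name, size in sizes.items():
    print(f"{name:>5}: {size:>7} bytes  ratio {len(data) / size:.2f}x")
```

Even uniformly random text over a nine-symbol alphabet compresses well below one byte per character, because each symbol carries only about 3.2 bits of entropy; real genome data, with its repeats, compresses further still.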