All 19 participants watched a 30-minute montage of images and videos of cute animals; heart rate and blood pressure were measured before and after the session, and the majority wore a heart rate monitor throughout. Most participants were students due to take an exam 90 minutes after the session; the remainder were academic support staff who had reported feeling stressed because of work.
It was clear that students were anxious ahead of their exams: heart rate and blood pressure were mildly elevated in most participants before the session, and markedly elevated in some individuals, indicating higher stress levels in those participants.
The psychological measure investigated was state anxiety, the anxiety provoked by a particular event such as an exam. The findings showed a significant drop in anxiety levels, in some individual cases by almost 50%, suggesting that watching cute animals can be a powerful stress reliever and mood enhancer.
According to the plea agreement, in September 2023 Homeland Security Investigations (HSI) in Jacksonville received information about a Jacksonville resident identified as an administrator of a social media group chat dedicated to the abuse, torture, and death of monkeys of various ages. The HSI investigation revealed that numerous members of the group exchanged hundreds of messages about the abuse and torture of monkeys, as well as videos depicting that abuse. The purpose of the group was to fund, view, distribute, and promote animal crush videos depicting the torture, murder, and sadistic mutilation of animals, specifically baby and adult monkeys. The co-conspirators agreed to create animal crush videos using videographers and animals in other countries, including Indonesia, which would then be sent to the United States. The group changed its name multiple times to innocuous names inconsistent with its goals and interests, apparently in an effort to avoid detection by law enforcement.
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
Copyright: 2021 Whiteway et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Code Availability: A Python/PyTorch implementation of the PS-VAE and MSPS-VAE is available through the Behavenet package. In addition to the (MS)PS-VAE, the Behavenet package also provides implementations of the VAE and β-TC-VAE models used in this paper; please see the Behavenet documentation for more details. A NeuroCAAS (Neuroscience Cloud Analysis As a Service) (Abe et al. 2020) implementation of the PS-VAE is also available. NeuroCAAS replaces the need for expensive computing infrastructure and technical expertise with inexpensive, pay-as-you-go cloud computing and a simple drag-and-drop interface. To fit the PS-VAE, the user uploads a video, a corresponding labels file, and configuration files specifying the desired model parameters; the NeuroCAAS analysis then automatically performs the hyperparameter search described above, parallelized across multiple GPUs. The output of this process is a downloadable collection of diagnostic plots and videos, as well as the trained models themselves.
Data Availability: We have publicly released the preprocessed single-session videos, labels, and trained PS-VAE models for this project. The Jupyter notebooks located at -vae guide users through downloading the data and models and performing some of the analyses presented in this paper.
- head-fixed (IBL) dataset: -vae_demo_head-fixed.zip
- moving mouse dataset: _recording_of_a_freely_moving_mouse/16441329/1
- mouse face dataset: _recording_of_a_mouse_face/13961471/1
- two-view dataset: _camera_recording_of_a_mouse/14036561/1
The raw data for the head-fixed sessions analyzed with the MSPS-VAE can be accessed through the IBL website. The Jupyter notebook located at -vae guides users through downloading and preprocessing the data into the format required by the Behavenet package.
- Session 1: -01-20/001/
- Session 2: -01-08/001/
- Session 3: -12-10/001/
- Session 4: _043/2020-09-21/001/
In this work we seek to combine the strengths of these two approaches by finding a low-dimensional, latent representation of animal behavior that is partitioned into two subspaces: a supervised subspace, or set of dimensions, that is required to directly reconstruct the labels obtained from pose estimation; and an orthogonal unsupervised subspace that captures additional variability in the video not accounted for by the labels. The resulting semi-supervised approach provides a richer and more interpretable representation of behavior than either approach alone.
We first apply the PS-VAE to a head-fixed mouse behavioral video [46]. We track paw positions and recover unsupervised dimensions that correspond to jaw position and local paw configuration. We then apply the PS-VAE to a video of a mouse freely moving around an open field arena. We track the ears, nose, back, and tail base, and recover unsupervised dimensions that correspond to more precise information about the pose of the body. We then demonstrate how the PS-VAE enables downstream analyses on two additional head-fixed mouse neuro-behavioral datasets. The first is a close up video of a mouse face (a similar setup to [47]), where we track pupil area and position, and recover unsupervised dimensions that separately encode information about the eyelid and the whisker pad. We then use this interpretable behavioral representation to construct separate saccade and whisking detectors. We also decode this behavioral representation with neural activity recorded from visual cortex using two-photon calcium imaging, and find that eye and whisker information are differentially decoded. The second dataset is a two camera video of a head-fixed mouse [22], where we track moving mechanical equipment and one visible paw. The PS-VAE recovers unsupervised dimensions that correspond to chest and jaw positions. We use this interpretable behavioral representation to separate animal and equipment movement, construct individual movement detectors for the paw and body, and decode the behavioral representation with neural activity recorded across dorsal cortex using widefield calcium imaging. Importantly, we also show how the uninterpretable latent representations provided by a standard VAE do not allow for the specificity of these analyses in both example datasets. These results demonstrate how the interpretable behavioral representations learned by the PS-VAE can enable targeted downstream behavioral and neural analyses using a single unified framework. 
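The movement detectors described above operate on individual interpretable latent traces. As a rough illustration of the idea (not the paper's actual method — the function name, threshold scheme, and defaults below are all assumptions for this sketch), one can flag movement bouts wherever the frame-to-frame velocity of a single 1-D latent exceeds a robust, data-driven threshold:

```python
import numpy as np

def detect_movement(latent, threshold=2.0, min_frames=5):
    """Flag bouts where a 1-D latent trace moves faster than `threshold`
    times a robust scale estimate of its frame-to-frame velocity.
    Returns a list of (start_frame, end_frame) tuples."""
    velocity = np.abs(np.diff(latent, prepend=latent[0]))
    # median absolute deviation, so the threshold adapts to the latent's units
    scale = np.median(np.abs(velocity - np.median(velocity))) + 1e-8
    moving = velocity > threshold * scale
    # keep only bouts lasting at least `min_frames` to suppress noise
    bouts, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= min_frames:
                bouts.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_frames:
        bouts.append((start, len(moving)))
    return bouts
```

Applied to, say, a whisker-pad latent, this yields candidate whisking bouts; the same logic applied to a pupil-position latent gives a crude saccade detector. The key point is that such detectors are only possible when each latent dimension has a known interpretation.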
Finally, we extend the PS-VAE framework to accommodate multiple videos from the same experimental setup by introducing a new subspace that captures variability in static background features across videos, while leaving the original subspaces (supervised and unsupervised) to capture dynamic behavioral features. We demonstrate this extension on multiple videos from the head-fixed mouse experimental setup [46]. A Python/PyTorch implementation of the PS-VAE is available on GitHub as well as on the NeuroCAAS cloud analysis platform [48], and we have made all datasets publicly available; see the Data Availability and Code Availability statements for more details.
The goal of the PS-VAE is to find an interpretable, low-dimensional latent representation of a behavioral video. Both the interpretability and low dimensionality of this representation make it useful for downstream modeling tasks such as learning the dynamics of behavior and connecting behavior to neural activity, as we show in subsequent sections. The PS-VAE makes this behavioral representation interpretable by partitioning it into two sets of latent variables: a set of supervised latents, and a separate set of unsupervised latents. The role of the supervised latents is to capture specific features of the video that users have previously labeled with pose estimation software, for example joint positions. To achieve this, we require the supervised latents to directly reconstruct a set of user-supplied labels. The role of the unsupervised subspace is to then capture behavioral features in the video that have not been previously labeled. To achieve this, we require the full set of supervised and unsupervised latents to reconstruct the original video frames. We briefly outline the mathematical formulation of the PS-VAE here; full details can be found in the Methods, and we draw connections to related work from the machine learning literature in S1 Appendix.
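To make this outline concrete, here is a deliberately miniature PyTorch sketch of the two reconstruction requirements: the first block of latents must reconstruct the user-supplied labels, while the full latent vector must reconstruct the frame. All layer sizes, names, and the simple weighted loss are illustrative assumptions; the actual PS-VAE objective (detailed in the Methods) additionally decomposes the KL term and enforces orthogonality between the subspaces, which this toy omits.

```python
import torch
import torch.nn as nn

class TinyPSVAE(nn.Module):
    """Toy model: the first `n_sup` latent dimensions must linearly
    reconstruct the pose-estimation labels; the full latent vector
    reconstructs the (flattened) frame. Sizes are illustrative only."""
    def __init__(self, n_pixels=64, n_sup=4, n_unsup=2):
        super().__init__()
        n_latent = n_sup + n_unsup
        self.n_sup = n_sup
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 32), nn.ReLU(),
                                     nn.Linear(32, 2 * n_latent))  # mean, log-var
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_pixels))
        self.label_head = nn.Linear(n_sup, n_sup)  # supervised latents -> labels

    def forward(self, frames):
        mu, logvar = self.encoder(frames).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(z), self.label_head(z[:, :self.n_sup]), mu, logvar

def psvae_loss(frames, labels, model, alpha=1000.0, beta=1.0):
    """Weighted sum of frame reconstruction, label reconstruction, and a
    standard-Gaussian KL penalty (a simplification of the full objective)."""
    recon, label_hat, mu, logvar = model(frames)
    frame_loss = ((recon - frames) ** 2).mean()
    label_loss = ((label_hat - labels) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    return frame_loss + alpha * label_loss + beta * kl
```

Training proceeds as for a standard VAE, with the label term pulling the supervised latents toward the tracked pose features so that the unsupervised latents are left to explain the residual video variability.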