Copyright: 2022 Matthis et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All raw and processed data from this manuscript is available on Figshare at the following DOI. High-resolution `.mp4` video files for each of the videos in this manuscript are available here. The Matlab code necessary to process the data and produce the figures, videos, and simulations used in this study is hosted on GitHub - _PLoS_Comp_Bio.

The goal of this paper is to measure eye, body, and head movements during natural locomotion and to use these data to investigate the resulting optic flow patterns. We first calculated the flow patterns relative to the head, as this reflects the way that the movement of the body during gait impacts instantaneous heading direction, providing an eye-movement-free representation of optic flow. We then combined these head-centered flow fields with measured eye position to estimate the retinal optic flow experienced during natural locomotion. By characterizing the optic flow stimulus experienced during natural locomotion, we may gain greater insight into the ways that the nervous system could exploit these signals for locomotor control.

(A) shows the subject walking in the Woodchips terrain wearing the Pupil Labs binocular eye tracker and the Motion Shadow motion capture system. Optometrist roll-up sunglasses were used to shade the eyes to improve eye tracker performance. (B) shows a sample of the data record, presented as a sample frame from S1 Video. On the right is the view of the scene from the head camera, with gaze location indicated by the crosshair. Below that are the horizontal and vertical eye-in-head records, with blinks/tracker losses denoted by vertical gray bars. The high-velocity regions (steep upward slope) show the saccades to the next fixation point, and the lower-velocity segments (shallow downward slope) show the component that stabilizes gaze on a particular location in the scene as the subject moves towards it, resulting in a characteristic saw-tooth appearance for the eye signal (without self-motion and the associated stabilizing mechanisms these saccades would exhibit a more square-wave-like structure). On the left, the stick figure shows the skeleton reconstructed from the Motion Shadow data. This is integrated with the eye signal, which is shown by the blue and pink lines. The representation of binocular gaze here shows the gaze vector from each eye converging on a single point (the mean of the two eyes). The original ground intersection of the right and left eye is shown as a magenta or cyan dot (respectively; more easily visible in S1 Video). The blue and red dots show the foot plants recorded by the motion capture system. The top left panel shows the scene image centered on the point of gaze, reconstructed from the head camera as described in the Methods and Materials section.

Walking over the rocky terrain (Rocks) was demanding enough that subjects were not given instructions other than to walk from the start to the end position at a comfortable pace. Note that these conditions were primarily selected because the behaviors they represent provide opportunities for later analysis. In most cases, data gathered in the various conditions were not notably different in the dimensions explored in this manuscript (e.g. Fig 2). Detailed analysis of behavioral differences between conditions is beyond the scope of this paper.

To measure the optic flow patterns induced by body motion independently of eye position, we first ran the videos from the head-mounted camera of the eye tracker through a computational optic flow estimation algorithm, DeepFlow [47], which provides an estimate of image motion for every pixel of each frame of the video (S2 Video). As an index of heading, we tracked the focus of expansion (FoE) within the resulting flow fields using a novel method inspired by computational fluid dynamics (see Methods and materials). This analysis provides an estimate of the FoE location in head-centered coordinates for each video frame (S3 Video).
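
DeepFlow itself is not shipped with MATLAB; as an illustrative stand-in that likewise yields a dense per-pixel motion estimate for every frame, the sketch below uses the Farneback estimator from MATLAB's Computer Vision Toolbox (the file name is hypothetical):

```matlab
% Illustrative stand-in only: the paper used DeepFlow [47]; Farneback
% dense flow also returns a per-pixel motion vector for every frame.
vid = VideoReader('head_camera.mp4');        % hypothetical file name
est = opticalFlowFarneback;

flows = {};                                  % one 2-D vector field per frame
while hasFrame(vid)
    frame = rgb2gray(readFrame(vid));
    f = estimateFlow(est, frame);            % object with Vx, Vy per pixel
    flows{end+1} = cat(3, f.Vx, f.Vy);       %#ok<AGROW>
end
```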

Focus of Expansion velocity across all conditions (black histogram), as well as split by condition (colored insets). The thick line shows the mean across subjects, and shaded regions show +/-1 standard error.

Stabilization of gaze during fixation nulls visual motion at the fovea, so the basic structure of retinal optic flow will always consist of outflowing motion centered on the point of fixation. The retinal motion results from the translation and rotation of the eye in space as it is carried by the body while the walker holds gaze on a point on the ground during forward movement. We found several features of the retinal flow patterns that provide powerful cues for the visual control of locomotion, which we describe below.
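
For concreteness, the relationship between eye translation, eye rotation, and retinal motion can be written with the standard motion-field equations for a translating and rotating eye (Longuet-Higgins and Prazdny). This is a generic textbook model, not the paper's own code; the MATLAB sketch below assumes normalized image coordinates and known depth:

```matlab
% Standard motion-field model for a translating + rotating eye.
% x,y: normalized image coordinates; Z: depth of each point;
% T = [Tx Ty Tz]: eye translation; W = [wx wy wz]: eye rotation.
function [u, v] = retinalFlow(x, y, Z, T, W)
    u = (-T(1) + x.*T(3))./Z ...                   % translational component
        + x.*y.*W(1) - (1 + x.^2).*W(2) + y.*W(3); % rotational component
    v = (-T(2) + y.*T(3))./Z ...
        + (1 + y.^2).*W(1) - x.*y.*W(2) - x.*W(3);
end
% During a fixation, the pursuit rotation W is whatever nulls (u,v) at the
% fovea (x = y = 0), leaving outward flow centered on the gaze point.
```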

In experiments that use a stationary observer and simulate heading on a computer monitor, the strong sense of illusory self-motion (vection) and accurate estimates of simulated heading indicate that humans are highly sensitive to full-field optic flow (e.g. [10, 73]). However, this does not necessarily mean that subjects use this information to control the direction of the body when heading towards a distant goal. The complex, phasic pattern of acceleration shown here derives from the basic biomechanics of locomotion [50]. In the absence of direct measurements of flow during locomotion, the magnitude of the effect of gait has not been obvious. Thus it may have been incorrectly assumed that the overall structure of optic flow during locomotion would be dominated by the effects of forward motion. Such a forward-motion-dominated signal might be derived from temporal integration of eye-movement-corrected, head-centered optic flow, but given the large and rapid variation in head velocity shown here, it is unclear whether simple temporal integration would be sufficient for accurate heading estimates.
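
As a rough illustration of what such an integration might involve (our sketch; the frame rate, gait period, and variable names are assumptions, not an analysis from the paper), one could average the per-frame head-centered FoE estimates over approximately one gait cycle:

```matlab
% Hypothetical illustration: smooth per-frame FoE estimates over roughly
% one gait cycle to ask whether simple temporal integration could recover
% a stable heading despite the phasic, gait-driven oscillations.
fps        = 120;                       % assumed camera frame rate
gaitPeriod = 1.0;                       % assumed ~1 s per gait cycle
win        = round(fps * gaitPeriod);
foeSmooth  = movmean(foeXY, win, 1);    % foeXY: nFrames x 2 FoE positions
```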

The act of steering towards a goal does not necessarily require the use of optic flow. [86] proposed that the perceived location (visual direction) of a target with respect to the body is used to guide locomotion, rendering optic flow unnecessary. Perhaps the strongest evidence for the role of optic flow in the control of steering towards a goal is the demonstration by [87], who pitted visual direction against the focus of expansion in a virtual environment where walkers generate the flow patterns typical of natural locomotion. They found that although visual direction was used to control walking paths when environments lacked visual structure (and thereby lacked a salient optic flow signal), optic flow had an increasing effect on paths as environments became more structured. The authors interpreted this result to mean that walkers use a combination of visual direction and optic flow to steer to a goal when the visual environment contains sufficient visual structure. This is puzzling in the context of our findings, since the [87] experiment used a fully ambulatory virtual environment, so the head-centered optic flow experienced by those subjects would have had the same instabilities described here. How then can we reconcile these results?

While many methods exist to compute instantaneous heading from the retinal flow field, considering these patterns relative to the gaze point through the gait cycle provides a different context for the way retinal flow information is used to control real-world, natural locomotion.

Subjects completed three out-and-back walks on the Woodchips path, for a total of 6 trials/walks on that terrain; there were two repetitions of each condition in the Woodchips (one per walking direction). Subjects completed 4 out-and-back walks on the Rocky path, for a total of 8 trials/walks. Because the Woodchips path was significantly longer than the Rocky path, a similar amount of data was collected in each condition.

We used a procedure analogous to the VOR-based calibration method developed in [34], with some alterations due to differences in the eye tracker. The Pupil Labs tracker used in this study estimates gaze for each eye using 3D spherical eye models generated within the coordinate frame of each eye camera. Using the procedure described below, the gaze estimates for each eye were rotated to align with the reference frame of the full-body kinematic estimates from the IMU-based motion capture system.
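
As a sketch of the alignment step (the exact procedure is described in the paper's Methods; the pairing of vectors and the function below are our assumptions), a least-squares rotation between paired direction vectors can be recovered with the standard Kabsch/Procrustes method:

```matlab
% Minimal sketch (assumed inputs, not the paper's exact code): find the
% rotation R that best maps gaze directions measured in the eye-camera
% frame onto the corresponding directions in the motion-capture frame.
% gazeEye, gazeMocap: N x 3 matrices of paired unit vectors.
function R = alignGazeFrames(gazeEye, gazeMocap)
    H = gazeEye' * gazeMocap;          % 3x3 cross-covariance
    [U, ~, V] = svd(H);
    d = sign(det(V * U'));             % guard against a reflection
    R = V * diag([1 1 d]) * U';        % R * gazeEye' ~ gazeMocap'
end
```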

We created a geometric simulation to provide a more nuanced picture of the way that the movement of the body shapes the visual motion experienced during natural locomotion. To estimate the flow experienced during various types of movements, a simulated eye model was generated using the following procedure. Most of the geometric calculations used in this model rely heavily on the Geom3D toolbox available on Mathworks.com [98].
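
A minimal sketch of such a simulated eye (our construction for illustration; the paper's actual simulation uses the Geom3D toolbox and is available in the GitHub repository cited above): a pinhole eye translates with a gait-like vertical oscillation while counter-rotating to hold fixation on a ground point, and flow is approximated by the frame-to-frame displacement of projected ground points.

```matlab
% Illustrative construction, with assumed walking speed, bob amplitude,
% and frame rate. Ground plane is y = 0; the eye fixates a ground point.
[gx, gz] = meshgrid(-3:0.5:3, 1:0.5:12);            % ground points
P      = [gx(:), zeros(numel(gx),1), gz(:)];        % world points, N x 3
fixPt  = [0 0 6];                                   % fixation point
dt     = 1/120;                                     % frame interval
eyePos = @(t) [0, 1.5 + 0.03*sin(2*pi*2*t), 1.4*t]; % 1.4 m/s walk, 2 Hz bob

proj = @(t) projectToEye(P, eyePos(t), fixPt);
flow = (proj(dt) - proj(0)) / dt;                   % approx. retinal velocity

function xy = projectToEye(P, c, fixPt)
    % Eye looks from c toward fixPt; rows of the matrix are the
    % right / up / forward axes of the eye frame.
    f  = (fixPt - c) / norm(fixPt - c);
    r  = cross([0 1 0], f);  r = r / norm(r);
    u  = cross(f, r);
    Pc = (P - c) * [r; u; f]';                      % world -> eye coords
    xy = Pc(:,1:2) ./ Pc(:,3);                      % pinhole projection
end
```

Because the simulated eye counter-rotates to hold fixation, the computed flow vanishes near the projected fixation point, reproducing the gaze-centered outflow described above.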

To estimate the location of the focus of expansion in each frame, each frame from the world camera was first processed by the DeepFlow optical flow algorithm described above. This method provides a motion estimate for each pixel of the video frame, yielding a 2-dimensional vector field with the same dimensions as the original video for each recorded frame. To track the focus of expansion (FoE) in each frame, this vector field was first negated (all vectors were multiplied by -1), which effectively transforms the FoE from a repellor node (vectors pointing away from the FoE) into an attractor node (vectors pointing towards the FoE). Then, a grid of particles was set to drift on this negated flow field using the stream2 function in Matlab. The paths traced by these particles provide information about the underlying structure of the optic flow in each frame, represented as purple/white lines in Fig 2 and S3 and S12 Videos. These streamlines are the integral curves of the optic flow vector field measured on each frame.
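
A minimal MATLAB sketch of this step (variable names and the seed-grid spacing are ours; the paper's actual code is in the GitHub repository cited above):

```matlab
% Sketch of the FoE-tracking step: negate the per-pixel flow so the focus
% of expansion becomes an attractor, then trace streamlines from a grid
% of seed particles.
[h, w, ~] = size(flowField);            % flowField: h x w x 2 (Vx, Vy)
u = -flowField(:,:,1);                  % negate: repellor -> attractor
v = -flowField(:,:,2);
[sx, sy] = meshgrid(1:40:w, 1:40:h);    % coarse grid of seed particles
XY = stream2(1:w, 1:h, u, v, sx, sy);   % integral curves of the flow
% The streamlines converge on the attractor; their common endpoint gives
% a per-frame FoE estimate in head-centered coordinates.
```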
