You see a crowd of point-light walkers while self-motion towards the crowd is simulated. Your task is to estimate your heading direction, i.e., the direction of the simulated self-motion relative to the walkers. The following video shows two example trials with neither motion parallax nor a visible ground:
heading.through.a.crowd.mp4
These scripts are optimized for MATLAB 2021b with Psychtoolbox (http://psychtoolbox.org/download.html) and the OpenGL add-on libraries that ship with Psychtoolbox. All you need to install on your computer are MATLAB and Psychtoolbox.
Download all the files and add them to your MATLAB folder. Within your MATLAB folder, create a subfolder named "functions" and move the script "getFrustum.m" into it, as sketched below.
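If you prefer to do this setup from the MATLAB command window, here is a minimal sketch (assuming the downloaded files sit in your current folder; adjust paths to your system):

```matlab
% Minimal setup sketch -- paths are placeholders, adjust to your system
mkdir('functions');                      % create the subfolder
movefile('getFrustum.m', 'functions');   % move the helper script into it
addpath('functions');                    % make it visible on the MATLAB path
```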
- github_path_heading_motion_parallax.m: the main script that builds the scene and runs the experiment.
- getFrustum.m: generates the frustum data the main script uses for its projection calculations. No need to adapt this script.
- sample_walker3: motion data for a point-light walker at normal walking speed.
- gravel.rgb.tiff: texture for the gravel ground.
- trajectory_heading_plots.m: recreates what your participants sketched and saves the images.
- github trajectory.R: loads and preprocesses the data to recreate the participants' responses and to analyse them.
Open the main script in MATLAB and click 'Run'. MATLAB then asks for your input in the command window: enter the participant ID, the session number, and further information about the scene (ground, motion parallax, walkers at different depths). When done, Psychtoolbox automatically opens a window and runs the experiment in that window.
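The exact prompts come from the main script itself; as a rough sketch of the kind of command-line input involved (variable names and prompt texts are illustrative, not the script's actual code):

```matlab
% Illustrative sketch only -- the actual prompts live in
% github_path_heading_motion_parallax.m
participant_id = input('Participant ID: ', 's');
session_nr     = input('Session number: ');
ground_on      = input('Ground visible? (0 = black, 1 = gravel): ');
parallax_on    = input('Walkers at different depths, i.e., motion parallax? (0/1): ');
```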
You will see the stimulus presentation. After each presentation, you are asked to estimate your heading direction by moving the mouse along the horizontal (heading direction) and vertical (curvature of your travelled path) axes. Press the right mouse button to invert the curvature direction, and confirm your answer by pressing the left mouse button. Subsequently, the next trial starts. The script finishes when all trials are done.
You want to see the true heading direction? Just change show_true_heading (line 17) in the script from false to true:
show_true_heading = true;
We apply point-light walkers to operationalize human motion. These walkers originate from the motion-tracking data of a single walking human (de Lussanet et al., 2008). Each walker consists of 12 points corresponding to the main joints of a human body (the ankle, knee, hip, hand, elbow, and shoulder joints on each side). The walkers face either collectively to the left (-90°) or right (90°).
To systematically explore the influence of the components of biological motion on heading perception from optic flow, we designed four conditions: static, natural locomotion, translation only, and articulation only. In the static condition, the walkers resemble static figures: they keep their posture at a fixed position. The natural locomotion condition presents the walkers naturally moving through the world while swinging their limbs; this condition combines both components of biological motion. The translation-only condition displays walkers sliding through the world without any limb motion, so the walkers resemble figure skaters gliding in the direction they face. Conversely, walkers in the articulation-only condition move their limbs without translating, imaginable as pedestrians on a treadmill. Note that these conditions are automatically displayed in randomized order, as sketched below.
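As an illustration of how such a randomized order can be produced (the trial count is an assumption; the main script handles this for you):

```matlab
% Sketch: build a shuffled trial list over the four conditions
% (trials_per_condition is an assumption, not the script's actual value)
conditions = {'static', 'natural_locomotion', 'translation_only', 'articulation_only'};
trials_per_condition = 10;
trial_list = repmat(1:numel(conditions), 1, trials_per_condition);
trial_list = trial_list(randperm(numel(trial_list)));   % randomized presentation order
```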
You can change the degree of depth information available in the scene. If motion parallax is selected, the walkers are placed at different depths in the room: half of the group is positioned between 7 and 9 m, the others twice as far away, i.e., at 14 to 18 m in depth. We adjust the walkers' size and point size according to their position in the environment. Due to this positioning of the walkers in space, the scene is designed to induce motion parallax cues (Gibson, 1950).
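A hedged sketch of how depths and perspective size scaling could be assigned under these constraints (crowd size, reference depth, and the scaling rule are assumptions, not the repository's actual values):

```matlab
% Sketch: half the crowd at 7-9 m, the other half twice as far (14-18 m)
n_walkers  = 8;                                   % assumption: crowd size
depth_near = 7  + 2 * rand(1, n_walkers/2);       % uniform in [7, 9] m
depth_far  = 14 + 4 * rand(1, n_walkers/2);       % uniform in [14, 18] m
depths     = [depth_near, depth_far];
% Perspective scaling: retinal size falls off with 1/depth
ref_depth  = 8;                                   % assumption: reference depth in m
point_size = 6 * ref_depth ./ depths;             % dot size in pixels (assumption)
```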
You can also add a grey gravel ground plane to the scene. The ground provides independent optic flow, and thus independent self-motion information. If no ground is visible, the points of the walkers are the only source of both biological motion and simulated self-motion. Here are some example stimuli with increasing depth and self-motion information:
The experimental world spans 20 m in scene depth. The simulated eye height above the visible ground is 1.60 m. The ground has a structured (gravel) appearance and provides optic flow from the simulated observer motion that is independent of the walkers. The ground is programmed as a blocking variable; in other words, you determine the ground appearance (black vs. gravel) for the whole stimulus presentation, and the next time you run the script you can change it.
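For illustration, one way a texture like gravel.rgb.tiff can be shown with Psychtoolbox is sketched below; the repository's actual rendering goes through the OpenGL add-on, so treat this as a simplified sketch, not the script's method:

```matlab
% Simplified sketch: load the gravel image and draw it as a PTB texture
window = Screen('OpenWindow', 0);          % open an onscreen window on screen 0
img = imread('gravel.rgb.tiff');           % read the ground texture image
tex = Screen('MakeTexture', window, img);  % upload it as a drawable texture
Screen('DrawTexture', window, tex);        % draw it (perspective warping omitted)
Screen('Flip', window);                    % show the result
```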
Observers encounter a crowd of point-light walkers oriented collectively to the left or right. The movements change from trial to trial: whether the walkers articulate their arms and legs and whether they translate varies throughout the experiment. The simulated self-motion approaching the group is always independent of the movement and direction of the group. The simulation lasts about 2500 ms. As soon as the last frame freezes, a path appears at the observers' feet. Their task is to report the perceived heading direction and adjust the pathway to best match their perception of approaching the walkers. Moving the computer mouse horizontally changes the path position, and moving it vertically modifies the curvature: upward movements stretch the pathway towards a straight line, whereas downward movements curve the trajectory. In each trial, the curve initially points randomly to the left or right; pushing the right mouse button inverts the curve direction. Subjects register their response by pressing the left mouse button, after which the next trial and its self-motion simulation start without delay.
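A hedged sketch of such a response loop using Psychtoolbox's GetMouse (gains and variable names are illustrative assumptions, not the repository's actual code):

```matlab
% Sketch: adjust heading (horizontal) and curvature (vertical) with the mouse
[xCenter, ~] = RectCenter(Screen('Rect', 0));   % horizontal screen center
gain_h = 0.05;    % assumption: degrees of heading per pixel
gain_c = 1e-3;    % assumption: curvature change per pixel
invert = 1;       % current curve direction; right click flips it
confirmed = false;
while ~confirmed
    [x, y, buttons] = GetMouse;                 % poll mouse position and buttons
    heading   = (x - xCenter) * gain_h;         % left/right -> heading direction
    curvature = invert * y * gain_c;            % down -> more curved, up -> straighter
    if buttons(3)                               % right button: invert curve direction
        invert = -invert;
        while any(buttons), [~, ~, buttons] = GetMouse; end   % wait for release
    elseif buttons(1)                           % left button: confirm response
        confirmed = true;
    end
end
```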
You can recreate your participants' (average) response per condition. The MATLAB script loads the preprocessed data from the R script, redraws the stimulus scene, indicates the true heading direction, and plots the sketched trajectory. Basic information about the participant ID and the walker facing can be added. The script automatically saves each image; note that this process can take some time. Here is an example of what the images could look like:
We recommend using the linear mixed model framework for two reasons. First, the dependent variables in our study exhibited non-normal distributions across conditions, violating the assumptions of ANOVA based on ordinary least squares regression. Linear mixed models can accommodate non-normal data, providing more robust estimates. Second, the mixed-modeling framework offers greater flexibility, accuracy, and power for repeated-measures data by accounting for both fixed and random effects, as well as accommodating varying variances, covariances, and distributions (Kristensen & Hansen, 2004; Jaeger, 2008).
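The analysis itself runs in R (github trajectory.R). Purely to illustrate the model family, an equivalent model could also be fit in MATLAB with fitlme (the file and variable names below are assumptions):

```matlab
% Illustrative only -- the actual analysis lives in github trajectory.R.
% Assumes a long-format table with one row per trial.
tbl = readtable('preprocessed_responses.csv');   % hypothetical export of the R script
tbl.condition   = categorical(tbl.condition);
tbl.participant = categorical(tbl.participant);
% Fixed effect of condition, random intercept per participant:
lme = fitlme(tbl, 'heading_error ~ condition + (1|participant)');
disp(lme)
```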