# QTM Low-latency Sonification

This project uses the Bela hardware device to sonify motion capture data from Qualisys Track Manager (QTM).

## Contents

- Harmony in Motion: Real-time Sonification Strategies for Joint Action Research
- Project Structure
- Usage
- Open Source Libraries

## Harmony in Motion: Real-time Sonification Strategies for Joint Action Research

Cognitive Science Bachelor's Project

The thesis investigates the effect of low-latency sonification on the synchrony of joint action tasks. The project was supervised by Anna Zamm (PhD, Assistant Professor, Aarhus University).
Joint actions involving high levels of coordination often require individuals to represent and monitor their own actions as well as their partner’s actions in parallel, but current research is unclear on how this occurs under various circumstances. Using different movement sonification mapping strategies, we enhance attention towards either individual or joint outcomes of actions, and separate them into experimental conditions. Five subject pairs participated in a pilot experiment investigating whether synchrony is optimized when focusing on self-other or joint outcome representations. In the experiment, blindfolded subjects moved sleds along a track, while attempting to remain as synchronous as possible. The sled movements were captured with a motion capture system which sent 3D positional data to a low-latency sonification pipeline to implement the mapping strategies. The results showed that there were significant differences between the two sonification strategies. Notably, the No Sonification control condition consistently outperformed both sonification conditions, possibly due to environmental auditory localization that may have been masked during the sonification conditions. This pilot experiment successfully implemented a novel paradigm for joint action research that can be used in further studies in the field.
The final thesis can be found in this repository in the `docs` directory in R Markdown, LaTeX, and PDF formats. The R Markdown file can be used to rerun the analyses and regenerate the LaTeX and PDF files.
## Project Structure

The project is structured as follows:

- `data`: Contains the anonymized raw data from the experiment
- `docs`: Contains the final thesis and other documentation
- `docs/templates`: Contains the LaTeX template used for the thesis, as well as the citation style
- `res`: Contains the resources for the project (audio files, images, etc.)
- `scripts`: Scripts used to prepare the data for analysis
- `src`: Contains the source code for the project
- `src/qsdk`: Contains the source code for the Qualisys SDK
- `src/res`: Sound files used in auditory stimulus generation
- `src/utils`: Various utility functions, organised by purpose (sound, spatial, etc.)
- `src/utils/config.h`: Nearly anything you would want to change is in here, including the experiment, label, and sonification options
- `src/utils/globals.h`: Global variables and constants. Defining buffers and state here, rather than allocating inside the main render loop, helps Bela performance by avoiding heap allocations in the audio callback
- `src/utils/latency_check.h`: If this file is included, the output of a round-trip QTM API call is saved to `/var/log/qtm_latency.log` on the Bela
- `src/render.cpp`: The main Bela sonification application
- `src/settings.json`: The Bela settings file used by default
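To illustrate the kind of mapping `src/render.cpp` implements, here is a hypothetical sketch: a marker's position along the track is normalized and mapped linearly onto an oscillator frequency. The names, ranges, and the linear mapping itself are assumptions for illustration, not the project's actual code.

```cpp
#include <algorithm>

// Hypothetical sketch of a position-to-pitch sonification mapping.
// None of these names or ranges come from the project; they only
// illustrate the general idea of mapping QTM 3D data to sound.

// Normalize a marker's y position (mm) to [0, 1] over the track length.
float normalizePosition(float yMm, float trackStartMm, float trackEndMm) {
    float t = (yMm - trackStartMm) / (trackEndMm - trackStartMm);
    return std::max(0.0f, std::min(1.0f, t));  // clamp to [0, 1] (C++14-safe)
}

// Map the normalized position linearly onto a frequency range (Hz).
float positionToFrequency(float t, float fMin = 220.0f, float fMax = 880.0f) {
    return fMin + t * (fMax - fMin);
}
```

On the Bela, the resulting frequency would drive an oscillator inside the audio callback; as noted for `src/utils/globals.h`, any buffers involved should be allocated up front rather than inside the callback.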
## Usage

- Connect the Bela via USB to the computer running Qualisys Track Manager (or a computer with network access to it).
- Clone / download the repository to your PC. Important note: if you would like to access the data in the `data` directory, you will need to install Git LFS and run `git lfs pull` after cloning.

  ```sh
  git clone https://github.com/zeyus/QTM_Bela_Sonification.git
  # optionally, with git lfs
  git lfs pull
  ```

- Start Qualisys Track Manager and open a project with at least 2 labelled markers.
- Edit `src/render.cpp` to change the marker names to match the ones in your project, and the IP address of the computer running QTM. (TBD: sonification scheme)
- Copy the `src` directory to the Bela project directory:

  ```sh
  scp -r QTM_Bela_Sonification/src root@192.168.2.6:/root/Bela/projects/QTM_Bela_Sonification
  ```

- Start the QTM recording / playback with real-time output enabled.
- Connect to the Bela board via SSH and run the project:

  ```sh
  ssh root@192.168.2.6
  /root/Bela/scripts/run_project.sh QTM_Bela_Sonification -c "--use-analog no --use-digital no --period 32 --high-performance-mode --stop-button-pin=-1 --disable-led"
  ```

Important note: when compiling, you must ensure that the compiler is in C++14 mode by using `CPPFLAGS=-std=c++14`.
### Data Preparation

This section is only relevant if you have collected your own data using the same setup as the thesis.

The subject information file will be read from the `data/raw` directory and used to generate the plots and condition information. It should be called `subject_information.tsv` and have the following required columns:
- id [string]: The subject ID
- trial_order [n|t|y]: n = no sonification, t = task based, y = sync based
- partner [string]: The partner ID
- door_or_window [d|w]: The door or window side (room specific, a way of distinguishing which of the two tracks a subject used)
- handedness [l|r|a|NA]: The handedness of the subject (left, right, ambidextrous, or no answer)
- age [range n|NA]: The age range bracket the subject reported, or no answer
- tone_deaf [y|n|NA]: Whether the subject reported being tone deaf, or no answer
- gender [string|NA]: The gender reported by the subject, or no answer
- years_formal_music_training [float|NA]: The number of years of formal music training reported by the subject, or no answer
- rating1
- rating2
- rating3
- rating4
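A minimal example of the header row of `subject_information.tsv`, matching the columns above (tab-separated; values would follow on subsequent rows):

```tsv
id	trial_order	partner	door_or_window	handedness	age	tone_deaf	gender	years_formal_music_training	rating1	rating2	rating3	rating4
```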
The raw data are expected to be 3D data exported from QTM as a TSV including the events/labels. The files should be placed in the `data/raw` directory and should be named as follows:

`qtm_capture_{subjecta}_{subjectb}{_suffix}.tsv`
Where:

- `subjecta` is one of the subject IDs (e.g. `azsw`). It can be anything valid in a filename, but parsing stops at the first `_` or `.`
- `subjectb` is the other subject ID (e.g. `wsza`), with the same constraints as `subjecta`
- `_suffix` is an optional suffix (e.g. `_test`). It can be anything valid in a filename, up to the `.tsv` file extension
Example (using the above):

`qtm_capture_azsw_wsza_test.tsv`
The data preparation can be replicated by running the R scripts in the `scripts` directory:

- `scripts/step_01_import_raw_data.R`: Imports the raw data from the `data/raw` directory and saves the data (compressed) in the `data/` directory. Note: this is only necessary if you are using your own data.
- `scripts/step_02_combine_data.R`: Combines the data from all of the subjects into a single data frame and saves it in the `data/combined_data.tsv.bz2` file.
- `scripts/step_03_remove_invalid_trials.R`: Labels and filters the data, saving the result in the `data/combined_data_labeled.tsv.bz2` file. In this step:
  - Invalid trials (e.g. due to missing data) are removed
  - Trial sequence numbers are added
  - Condition labels are added
  - Only data within trials are retained
- `scripts/step_04_long_format_data.R`: Organizes the data into long format and saves it in the `data/combined_data_long.tsv.bz2` file; this prepares it for processing with mousetrap.
- `scripts/step_05_align3D.R`: Aligns the y-axis trajectory starts and standardizes the data to be centered around 0 and scaled to lie between -1 and 1. Saves the data in the TSV file `data/standardized_trajectories.tsv.bz2` and the RData file `data/standardized_trajectories.Rda.bz2`.
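The standardization in step 05 ("centered around 0 and scaled to be between -1 and 1") can be read as a per-trajectory min–max rescaling. The project performs this step in R (`step_05_align3D.R`); the sketch below is an illustrative C++ rendering of that arithmetic only, with hypothetical names.

```cpp
#include <algorithm>
#include <vector>

// Min-max rescaling of a trajectory to [-1, 1], centered on the midrange.
// Illustrative only; the project does this in R (step_05_align3D.R).
std::vector<double> standardizeTrajectory(const std::vector<double>& xs) {
    if (xs.empty()) return {};
    auto mm = std::minmax_element(xs.begin(), xs.end());
    double lo = *mm.first;
    double range = *mm.second - lo;
    std::vector<double> out;
    out.reserve(xs.size());
    for (double x : xs) {
        // Map [min, max] -> [-1, 1]; a constant trajectory maps to 0.
        out.push_back(range == 0.0 ? 0.0 : 2.0 * (x - lo) / range - 1.0);
    }
    return out;
}
```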