# Eye-tracking control experiment {#appendix-eyetracking}
```{r setupAPPENDIXeyetracking, include = FALSE, message = FALSE, warning = FALSE, results = "hide"}
library(ggbeeswarm)
library(DiagrammeR)
library(tidyverse)
library(patchwork)
library(tidybayes)
library(ggforce)
library(sjstats)
library(plotly)
library(papaja)
library(GGally)
library(here)
library(BEST)
library(brms)
library(glue)
# setting seed for reproducibility
set.seed(666)
# setting up knitr options
knitr::opts_chunk$set(
  cache = TRUE, echo = FALSE, warning = FALSE, message = FALSE,
  out.width = "100%"
)
```
The purpose of this control experiment was to check whether the two motor tasks used in the main experiment presented in Chapter \@ref(chap6), namely finger tapping and articulatory suppression (mouthing), were equivalent in terms of task difficulty or general dual-task demand [@emerson_role_2003]. Participants performed a computer-based visual search task (i.e., finding a T among an array of Ls), adapted from the paradigm of @treisman_feature-integration_1980 (see below for details).^[This appendix is part of a submitted manuscript, reformatted for the needs of this thesis. Source: Nalborczyk, L., Perrone-Bertolotti, M., Baeyens, C., Grandchamp, R., Spinelli, E., Koster, E.H.W., \& L\oe venbruck, H. (*submitted*). Articulatory suppression effects on induced rumination. The pre-registered protocol, preprint, and data, as well as reproducible code and figures, are available at: [https://osf.io/3bh67/](https://osf.io/3bh67/).]
## Sample
Twenty-four participants (mean age = 19.46, SD = 1.18, range = 18-21, 21 females, 21 right-handed), drawn from the same population as the main experiment (i.e., undergraduate Psychology students at Univ. Grenoble Alpes), took part in this eye-tracking pretest.
## Sample size
As we aimed to compare four conditions (i.e., visual search alone, visual search + finger tapping, visual search + foot tapping, and visual search + mouth movements), we recruited 24 participants in order to have at least one participant per possible order of conditions in our randomised counter-balanced repeated-measures design ($n = k!$, where $n$ is the number of possible orders of $k$ conditions, so that $n = 4! = 24$).
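As an illustrative aside (not part of the original materials), this counting can be checked in R; the `gtools::permutations()` call below enumerates all orderings, using the condition labels from the analysis reported further down (the `gtools` dependency is illustrative and not loaded in the setup chunk).
```{r orderings, eval = FALSE}
# with four conditions, there are 4! = 24 possible presentation orders
factorial(4) # 24
# enumerating all 24 orderings of the four conditions
gtools::permutations(n = 4, r = 4, v = c("Control", "Finger", "Foot", "Mouth") )
```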
## Material
The experiment took place individually in a dark room. Participants were seated in front of a 22-inch Iyama Vision Master Pro 513-MA203DT CRT monitor (resolution: 1024x768 pixels, refresh rate: 85 Hz) driven by an NVIDIA GeForce 9800 GTX+ graphics processor. A camera-based eye-tracker (EyeLink\textregistered\ 1000 from SR Research), with a sampling rate of 250 Hz and a minimum accuracy of 0.5\textdegree, was used in pupil-corneal reflection tracking mode. Participants were positioned on a seat so as to keep the distance from the camera to the forehead target between 50 and 60 cm. A five-point calibration was completed before stimulus presentation, at the beginning of each condition.
## Procedure
On each trial, the target (the letter "T") was presented either on the right or on the left of the central vertical axis of the grid, which was a 6x6 array of items. Each stimulus was displayed until the participant's response (with a maximum duration of 5 seconds in case of no response). Each grid of letters was preceded by a central fixation circle, which was displayed for 500 ms after the participant moved their gaze towards it. In order to give their response ("left" or "right"), participants had to gaze towards a large filled grey circle situated either on the left or on the right side of the grid. Each participant went through each condition, in a random order. A general training session, using ten items that were not used subsequently in the four conditions, was administered at the beginning of the experiment. Each condition comprised 90 trials (45 left and 45 right), the first ten trials of each condition being considered as training trials and thus excluded from the analysis. All participants were filmed in order to ensure that they actually performed the motor activity.
Our measure of interest was the delay between the onset of the grid and the participant's response (i.e., the time at which their gaze reached the response circle), hereafter referred to as "response time" (RT).
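In other words, hypothetically denoting the two timestamps by `grid_onset` and `response_gaze` (illustrative names, not the actual EyeLink variables), the RT of each trial is simply their difference, as sketched below.
```{r rtSketch, eval = FALSE}
# hedged sketch of the RT definition: time from grid onset to the moment
# the gaze reaches the response circle (both timestamps in ms)
trials <- dplyr::mutate(trials, RT = response_gaze - grid_onset)
```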
## Data preprocessing
Raw data from the EyeLink\textregistered\ include on-screen gaze coordinates, pupil diameter, and the spatial coordinates of the forehead target, along with its distance from the camera. For this experiment, since only the RTs (in ms) of correct trials were of interest, invalid trials (i.e., trials in which no response was given) and trials with wrong responses were removed from the analysis.
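A minimal sketch of this exclusion step, assuming hypothetical `Response` and `Correct` columns (the actual variable names are not shown in this appendix):
```{r exclusionSketch, eval = FALSE}
# keeping only valid (answered) and correct trials
eye_track_data <- eye_track_data %>%
  dplyr::filter(!is.na(Response), Correct == 1)
```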
## Data analysis
Data were analysed with *Condition* (4 modalities) as a within-subject predictor and RT as the dependent variable, in a generalised multilevel model (MLM) with a lognormal distribution for the response. This model included a varying intercept for both *participant* and *item*, and was fitted using weakly informative priors and the `brms` package [@R-brms].
## Results
```{r model_control, results = "hide", warning = FALSE}
##############################
# Importing eyetracking data #
##############################
eye_track_data <- read.csv(
  here::here("data", "ch6", "eyetracking_control.csv"),
  header = TRUE, sep = ","
)[, -1] # dropping the first column
####################
# Fitting the model #
####################
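# lognormal model of RT with varying intercepts for participants and items,
# sampled with two chains of 5000 iterations each (including 2000 warmup)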
model1 <- brm(
RT ~ 1 + Condition + (1|Participant) + (1|Item),
family = lognormal(),
data = eye_track_data,
chains = 2, cores = parallel::detectCores(),
warmup = 2000, iter = 5000
)
```
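As a hedged aside (not part of the original script), since no priors are set explicitly in the call above, the priors actually used by the fitted model can be inspected with `brms`' `prior_summary()`:
```{r priorSummary, eval = FALSE}
# inspecting the priors used by the fitted model
prior_summary(model1)
```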
```{r RTestimates, results = "asis"}
########################
# pairwise comparisons #
########################
pairwise <-
  eye_track_data %>%
  modelr::data_grid(Condition) %>%
  add_fitted_draws(model1, re_formula = NA) %>%
  compare_levels(.value, by = Condition) %>%
  mean_hdi() %>%
  arrange(rev(Condition) ) %>%
  select(Contrast = Condition, Estimate = .value, Lower = .lower, Upper = .upper) %>%
  mutate_if(is.numeric, round, 3) %>%
  data.frame
apa_table(
  pairwise,
  placement = "H",
  align = c("l", "c", "c", "c"),
  caption = "Mean and 95\\% CrI of the posterior distribution of the difference between each pair of conditions.",
  note = NULL,
  small = TRUE,
  format.args = list(
    digits = c(3, 3, 3),
    margin = 2,
    decimal.mark = ".", big.mark = ""),
  escape = FALSE
)
```
Results of the MLM are reported in Table \@ref(tab:RTestimates) and Figure \@ref(fig:eyetrack), which report the estimated average RT (and its 95% CrI) by condition, as well as the difference between each pair of conditions. This analysis revealed that all dual tasks induced longer RTs than the control condition, with the finger-tapping condition inducing the greatest discrepancy relative to the control condition (`r glue_data(pairwise %>% filter(Contrast == "Finger - Control"), "$\\beta$ = {Estimate}, 95% CrI [{Lower}, {Upper}]")`), followed by the mouthing condition (`r glue_data(pairwise %>% filter(Contrast == "Mouth - Control"), "$\\beta$ = {Estimate}, 95% CrI [{Lower}, {Upper}]")`) and the foot-tapping condition (`r glue_data(pairwise %>% filter(Contrast == "Foot - Control"), "$\\beta$ = {Estimate}, 95% CrI [{Lower}, {Upper}]")`). Pairwise comparisons between the dual-task conditions revealed that the difference between the mouthing and the finger-tapping conditions was in the opposite direction and slightly smaller (`r glue_data(pairwise %>% filter(Contrast == "Mouth - Finger"), "$\\beta$ = {Estimate}, 95% CrI [{Lower}, {Upper}]")`) than the difference between the mouthing and the foot-tapping conditions (`r glue_data(pairwise %>% filter(Contrast == "Mouth - Foot"), "$\\beta$ = {Estimate}, 95% CrI [{Lower}, {Upper}]")`).
```{r eyetrack, fig.pos = "H", fig.width = 12, fig.height = 6, fig.cap = "Left panel: average RT by condition predicted by the model along with 95\\% CrIs. Underlying dots represent the average raw RT by participant. Right panel: posterior distribution of the difference between each pair of conditions, along with its mean and 90\\% and 95\\% CrI."}
#########################################
# plotting raw data + model predictions #
#########################################
p1 <-
  eye_track_data %>%
  group_by(Condition, Participant) %>%
  dplyr::summarise(RT = mean(RT) ) %>%
  ungroup %>%
  ggplot(aes(x = Condition, y = RT, fill = Condition) ) +
  # plotting raw data (participant-level mean RTs)
  geom_violin(show.legend = FALSE, alpha = 0.2, colour = "white") +
  geom_quasirandom(
    data = eye_track_data %>%
      group_by(Condition, Participant) %>%
      dplyr::summarise(RT = mean(RT) ) %>%
      ungroup,
    aes(x = Condition, y = RT, fill = Condition),
    size = 2, alpha = 0.8, shape = 21, width = 0.2,
    inherit.aes = FALSE, show.legend = FALSE
  ) +
  # adding model predictions
  geom_pointrange(
    data = marginal_effects(model1, points = TRUE)$Condition,
    aes(x = effect1__, fill = effect1__, y = estimate__, ymin = lower__, ymax = upper__),
    shape = 21, fatten = 3, size = 2, alpha = 1, show.legend = FALSE
  ) +
  theme_minimal(base_size = 12) +
  labs(y = "Response time (in ms)") +
  scale_colour_brewer(palette = "Dark2", direction = -1) +
  scale_fill_brewer(palette = "Dark2", direction = -1)
##################################
# tidybayes pair-wise comparison #
##################################
p2 <-
  eye_track_data %>%
  modelr::data_grid(Condition) %>%
  add_fitted_draws(model1, re_formula = NA) %>%
  compare_levels(.value, by = Condition) %>%
  ggplot(aes(x = .value, y = Condition) ) +
  geom_vline(xintercept = 0, linetype = 2) +
  geom_halfeyeh(alpha = 0.1, .width = c(.9, .95) ) +
  theme_minimal(base_size = 12) +
  theme(legend.position = "none") +
  labs(x = "Response time difference (in ms)", y = "")
##########################
# putting plots together #
##########################
p1 + p2
```
## Discussion
This control experiment shows that there is no apparent difference (or a negligible one) in terms of attentional demand between the two motor tasks used in the main experiment (i.e., finger tapping and mouthing), although performing a concurrent motor task (of any type) does seem costly, as indicated by the observed difference between the control condition and the mean of the three other conditions. These results are in line with those obtained by @cefidekhanie_interaction_2014 in their control experiment.
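As a hedged illustration (not part of the original analysis), the contrast mentioned above, between the control condition and the average of the three dual-task conditions, could be computed from the same posterior draws used for the pairwise comparisons; the condition labels (`Control`, `Finger`, `Foot`, `Mouth`) are those appearing in the contrasts reported above.
```{r controlVsDual, eval = FALSE}
# sketch: contrasting the control condition against the average of the
# three dual-task conditions, using the posterior draws of model1
eye_track_data %>%
  modelr::data_grid(Condition) %>%
  add_fitted_draws(model1, re_formula = NA) %>%
  ungroup() %>%
  select(.draw, Condition, .value) %>%
  tidyr::pivot_wider(names_from = Condition, values_from = .value) %>%
  mutate(dual_vs_control = (Finger + Foot + Mouth) / 3 - Control) %>%
  mean_hdi(dual_vs_control)
```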