# Articulatory suppression effects on induced rumination {#chap6}
```{r setupCH6, include = FALSE, message = FALSE, warning = FALSE, results = "hide"}
library(wordcountaddin)
library(ggbeeswarm)
library(DiagrammeR)
library(tidyverse)
library(patchwork)
library(tidybayes)
library(ggforce)
library(sjstats)
library(plotly)
library(papaja)
library(GGally)
library(here)
library(BEST)
library(brms)
library(glue)
# setting seed for reproducibility
set.seed(666)
# setting up knitr options
knitr::opts_chunk$set(
cache = TRUE, echo = FALSE, warning = FALSE, message = FALSE,
fig.pos = "ht",
fig.align = "center",
out.width = "100%"
)
```
<!-- NB: You can add comments using these tags -->
\initial{T}his study explores whether the speech motor system is involved in verbal rumination, a particular kind of inner speech. The motor simulation hypothesis considers inner speech as an action, accompanied by simulated speech percepts, that would as such involve the speech motor system. If so, we could expect verbal rumination to be disrupted by concurrent involvement of the speech apparatus. We recruited 106 healthy adults and measured their self-reported level of rumination before and after a rumination induction, as well as after five minutes of a subsequent motor task (either an articulatory suppression task, i.e., silent mouthing, or a finger-tapping control task). We also evaluated to what extent ruminative thoughts were experienced with a verbal quality or in another modality (e.g., visual images, non-speech sounds). Self-reported levels of rumination showed a decrease after both motor activities (silent mouthing and finger-tapping), with only a slightly stronger decrease after the articulatory suppression task than after the control task. This decrease was not moderated by the modality of the ruminative thoughts. We discuss these results within the framework of verbal rumination as simulated speech and suggest alternative ways to test the engagement of the speech motor system in verbal rumination.^[This experimental chapter is a submitted manuscript reformatted for the needs of this thesis. Source: Nalborczyk, L., Perrone-Bertolotti, M., Baeyens, C., Grandchamp, R., Spinelli, E., Koster, E.H.W., \& L\oe venbruck, H. (*submitted*). Articulatory suppression effects on induced rumination. Pre-registered protocol, preprint, data, as well as reproducible code and figures are available at: [https://osf.io/3bh67/](https://osf.io/3bh67/).]
## Introduction
A large part of our inner conscious experience involves verbal content, with internal monologues and conversations. Inner speech is considered as a major component of conscious experience and cognition [@hubbard_auditory_2010;@klinger_dimensions_1987;@Hurlburt2013]. An important issue concerns the format and nature of inner speech and whether it is better described as a mere evocation of abstract amodal verbal representations (i.e., without articulatory or auditory sensation) or as a concrete motor simulation of actual speech production [for reviews, see @alderson-day_inner_2015;@loevenbruck_cognitive_2018;@Perrone-Bertolotti2014]. In the first case, inner speech is seen as divorced from bodily experience, and includes, at most, faded auditory representations. In the second case, inner speech is considered as a physical process that unfolds over time, leading to an enactive re-creation of auditory percepts, via the simulation of articulatory actions. The latter hypothesis is interesting in the context of persistent negative and maladaptive forms of inner speech, such as rumination. If this hypothesis is correct, we could expect rumination --as a particular type of inner speech-- to be disrupted by concurrent involvement of the speech muscles. The present study aims at testing this specific idea.
Introspective explorations of the characteristics of inner speech have led to different views on the relative importance of its auditory and articulatory components, and on the involvement of motor processes. It has been suggested successively that speech motor representations would be purely motoric [@stricker_studien_1880], that they would be expressed dominantly in an auditory format [@egger_parole_1881], or that they would consist in a mix of these in the overall population [@ballet_langage_1886]. The intuitive distinction between auditory and motor phenomena is sometimes referred to in contemporary research by the terms of *inner ear* and *inner voice*, in line with Baddeley's classic model of working memory [e.g.,@baddeley_exploring_1984; see also @buchsbaum_role_2013]. Baddeley's model relies on a partnership between an *inner ear* (i.e., storage) and an *inner voice* (i.e., subvocal rehearsal), which can be assessed by selectively blocking either one of these components [e.g.,@smith_role_1995].
Empirical arguments supporting the crucial role of the inner voice in verbal working memory (subvocal articulatory rehearsal) can be found in studies using articulatory suppression, in which the *action* component (i.e., the *inner voice*) of inner speech is disrupted. Articulatory suppression usually refers to a task which requires participants to utter speech sounds (or to produce speech gestures without sound), so that this activity disrupts ongoing speech production processes. Articulatory suppression can be produced with different degrees of vocalisation, going from overt uttering to whispering, mouthing (i.e., silent articulation), and simple clamping of the speech articulators. Many studies have shown that articulatory suppression can be used to disrupt the subvocal rehearsal mechanism of verbal working memory and --as a consequence-- impair the recall of verbal material [e.g.,@baddeley_exploring_1984;@larsen_disruption_2003].
Based on the study of errors accompanying the covert production of tongue twisters, inner speech has also been suggested to be impoverished (as compared to overt speech) and to lack a full specification of articulatory features [e.g.,@oppenheim_inner_2008;@oppenheim_motor_2010]. More precisely, these studies have shown the phonemic similarity effect (the tendency, in overt speech, to exchange phonemes with similar articulatory features) to be absent in inner speech. In contrast to these results, however, @corley_error_2011 found the phonemic similarity effect to be present in inner speech, suggesting that inner speech may not necessarily be impoverished at the articulatory level.
In a study aiming at investigating the role of *covert enactment* in auditory imagery (defined as imagined speech, produced by oneself or another individual), @reisberg_enacted_1989 observed that the verbal transformation effect [@warren_auditory_1958], namely the alteration of speech percepts when certain speech sounds are uttered in a repetitive way, also occurred during inner speech (although the verbal transformation effect was smaller than during overt speech), but was suppressed by concurrent articulation (e.g., chewing) or clamping the articulators. The fact that the verbal transformation effect was observed during inner speech and that it was reduced by concurrent chewing, even in inner speech, speaks in favour of the view of inner speech as an enacted simulation of overt speech.
Another piece of evidence for the effect of articulatory suppression on inner speech comes from a recent study by @topolinski_motormouth_2009 on the mere exposure effect, namely the fact that repeated exposure to a stimulus influences the evaluation of this stimulus in a positive way [@zajonc_attitudinal_1968]. Topolinski and Strack’s study showed that the mere exposure effect for visually presented verbal material could be completely suppressed by blocking subvocal rehearsal (i.e., inner speech) when asking participants to chew a gum. The effect was preserved, however, when participants kneaded a soft ball with their hand [@topolinski_motormouth_2009]. This finding suggests that blocking speech motor simulation interfered with the inner rehearsal of the visually presented verbal stimuli, thereby destroying the positive exposure effect. It provides additional experimental support to the view that inner speech involves a motor component.
The occurrence of motor simulation during inner speech is further backed by several studies using physiological measures to evaluate inner speech production properties. Using electrodes inserted in the tongue tip or lips of five participants, @jacobson_electrical_1931 was able to detect electromyographic (EMG) activity during several tasks requiring inner speech. Similarly, @sokolov_inner_1972 recorded intense lip and tongue muscle activation when participants had to perform complex tasks that necessitated substantial inner speech production (e.g., problem solving). Another study using surface electromyography (sEMG) demonstrated an increase in activity of the lip muscles during silent recitation tasks compared to rest, but no increase during the non-linguistic visualisation task [@livesay_covert_1996]. An increase in the lip and forehead muscular activity has also been observed during induced rumination [@nalborczyk_orofacial_2017]. Furthermore, this last study also suggested that speech-related muscle relaxation was slightly more efficient in reducing subjective levels of rumination than non speech-related muscle relaxation, suggesting that relaxing or inhibiting the speech muscles could disrupt rumination.
Rumination is a "class of conscious thoughts that revolve around a common instrumental theme and that recur in the absence of immediate environmental demands requiring the thoughts" [@Martin]. Despite the fact that depressed patients report positive metacognitive beliefs about ruminating, which is often seen as a coping strategy in order to regulate mood [e.g.,@papageorgiou_metacognitive_2001], rumination is known to significantly worsen mood [e.g.,@Moberly2008;@nolen-hoeksema_effects_1993], to impair cognitive flexibility [e.g.,@Davis2000;@Lyubomirsky1998], and to lead to more pronounced social exclusion and interpersonal distress [@lam_response_2003]. Although partly visual, rumination is a predominantly verbal process [@goldwin_concreteness_2012;@mclaughlin_effects_2007] and can therefore be considered as a maladaptive type of inner speech. In a study on worry, another form of repetitive negative thinking, @rapee_utilisation_1993 observed a *tendency* for articulatory suppression, but not for visuo-spatial tasks, to produce some interference with worrying. He concluded that worry involves the phonological aspect of the central executive of working memory. We further add that, since repeating a word seems to reduce the ability to worry, this study suggests that articulatory aspects are at play during worry.
In this context, the question we addressed in this study is whether verbal rumination consists of purely abstract verbal representations or whether it is better described as a motor simulation of speech production, engaging the speech apparatus. If the latter hypothesis is correct, rumination experienced in verbal form (in contrast to other forms, such as pictorial representations) should be disrupted by mouthing (i.e., silent articulation), and should not be disrupted by a control task that does not involve speech muscles (e.g., finger-tapping). Specifically, we thus sought to test the hypotheses that rumination could be disrupted by articulatory suppression (but not by finger-tapping), and that this disruption would be more pronounced when rumination is experienced in a verbal form than in a non-verbal form.
## Methods
In the *Methods* and *Data analysis* sections, we report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study [@simmons_21_2012]. A pre-registered version of our protocol can be found on OSF: [https://osf.io/3bh67/](https://osf.io/3bh67/).
```{r, results = "hide", warning = FALSE}
####################
# loading raw data #
####################
DFtotal <-
read.csv(here::here("data", "ch6", "suppressiondata.csv"), header = TRUE, sep = ",") %>%
dplyr::rename(RUM = VAS)
# duplicating DFtotal for subsequent use
DF <- DFtotal
# showing excluded participants
# DFtotal %>%
# spread(key = Session, value = RUM) %>%
# # identify successful rumination induction
# group_by(Participant) %>%
# fill(Baseline) %>%
# fill(`Post-induction`, .direction = "up") %>%
# fill(`Post-motor`, .direction = "down") %>%
# fill(`Post-motor`, .direction = "up") %>% #fill(rumination, .direction = "down") %>%
# mutate(induction_success = ifelse(`Post-induction` > Baseline, 1, 0) ) %>%
# gather(key = Session, value = RUM, Baseline:`Post-motor`) %>%
# ungroup %>%
# ggplot(aes(x = Session, y = RUM, group = Participant, colour = as.factor(induction_success) ) ) +
# geom_line(show.legend = FALSE) +
# geom_point(fill = "white", shape = 21, show.legend = FALSE) +
# theme_minimal(base_size = 12) +
# scale_colour_brewer(palette = "Dark2", direction = -1)
# number of people that were higher than the CES-D threshold (not included in the study)
depressive <- 26
# removing participants showing no effects of induction
ppt_induc <- which(DF$RUM[DF$Session == "Post-induction"] <= DF$RUM[DF$Session == "Baseline"])
DF <- DF[!DF$Participant %in% ppt_induc, ]
# keeping the entire uncentered dataframe as "DFclean0" for future use
DFclean0 <- DF
# centering predictors
DF[, 5:11] <- lapply(DF[, 5:11], function(x) as.numeric(scale(x, scale = TRUE) ) )
# keeping the entire dataframe as "DFclean" for future use
DFclean <- DF
# filtering post-motor Session and contrast-coding Session
DF <-
DF %>%
filter(!Session == "Post-motor") %>%
mutate(Induction = ifelse(.$Session == "Baseline", -0.5, 0.5) )
```
### Sample
We originally planned for 128 participants to take part in the study. This sample size was set on the basis of results obtained by @topolinski_motormouth_2009, who observed an effect size around $\eta_{p}^{2}=.06$. We expected a similar effect size for the current rumination disruption, since rumination can be conceived of as a subtype of inner speech.^[In the original power calculations included in the OSF preregistration platform, we had inadequately specified the effect size in GPower, but we only realised this erroneous specification after the freezing of the preregistration on the OSF platform. Therefore, the current sample size slightly differs from the preregistered one.]
As we anticipated drop-out of participants due to our inclusion criteria (see below), a total of `r length(unique(DFtotal$Participant)) + depressive` undergraduate students in psychology from Univ. Grenoble Alpes took part in this experiment, in exchange for course credits. They were recruited via mailing lists, online student groups, and posters. This study was approved by the local ethics committee (CERNI N° 2016-05-31-9). To be eligible, participants had to be between 18 and 35 years of age, with no self-reported history of motor, neurological, psychiatric, or speech-development disorders. All participants spoke French as their mother tongue. After giving written consent, each participant completed the Center for Epidemiologic Studies - Depression scale [CES-D; @radloff_ces-d_1977]. The CES-D is a 12-item questionnaire, validated in French [@morin_psychometric_2011], aiming to assess the level of depressive symptoms in a subclinical population. Participants exceeding the threshold of clinical depressive symptoms [i.e., >23 for females and >17 for males; @radloff_ces-d_1977] were not included in the study for ethical reasons (N = `r depressive`). These participants were fully debriefed about the aims of the experiment and were given the necessary information concerning available psychological care on campus.
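The gender-specific screening rule described above can be sketched as follows (a minimal illustration with hypothetical scores; the data frame and column names are assumptions, not the study's actual code):

```r
# hypothetical screening step: flag participants exceeding the
# gender-specific CES-D cut-offs (>23 for females, >17 for males)
screening <- data.frame(
  Participant = 1:4,
  Gender = c("F", "M", "F", "M"),
  CESD = c(25, 10, 12, 19)
)
# vectorised comparison against the relevant threshold
screening$excluded <- ifelse(
  screening$Gender == "F",
  screening$CESD > 23,
  screening$CESD > 17
)
```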
To investigate articulatory suppression effects in the context of rumination, a successful induction of rumination is a prerequisite. Therefore, analyses were only conducted on participants who showed an effect of the rumination induction (i.e., strictly speaking, participants who reported more rumination after the induction than before). We thus discarded participants who did not show any increase in rumination level (N = `r length(unique(DFtotal$Participant) ) - length(unique(DF$Participant) )`, `r round((length(unique(DFtotal$Participant) ) - length(unique(DF$Participant) ) ) / length(unique(DFtotal$Participant) ) * 100, 2)`% of total sample). The final sample comprised `r length(unique(DF$Participant) )` participants (Mean age = `r round(mean(DF$Age), 2)`, SD = `r round(sd(DF$Age), 2)`, Min-Max = `r min(DF$Age)`-`r max(DF$Age)`, `r sum(DF$Gender=="F") / length(unique(DF$Session))` females).
### Material
The experiment was programmed with OpenSesame software [@mathot_opensesame_2012] and stimuli were displayed on a Dell Latitude E6500 computer screen.
#### Questionnaires
To control for confounding variables likely to be related to the intensity of the induction procedure, we administered, at baseline, the French version of the Positive and Negative Affect Schedule [PANAS; @watson_development_1988], adapted to French by @Gaudreau2006. This questionnaire includes 20 items, from which we can compute overall indices of both positive (by summing the scores on the 10 positive items, thereafter *PANASpos*) and negative affect (*PANASneg*). In order to evaluate trait rumination, at the end of the experiment participants completed the short version of the Ruminative Response Scale [RRS-R; @treynor_rumination_2003], validated in French (Douilliez, Guimpel, Baeyens, & Philippot, *in preparation*). From this questionnaire, scores on two dimensions were analysed (*RRSbrooding* and *RRSreflection*).
#### Measures
Measures of state rumination were recorded using a Visual Analogue Scale (VAS) previously used in @nalborczyk_orofacial_2017. This scale measured the degree of agreement with the sentence "At this moment, I am brooding on negative things" (translated from French), on a continuum between "Not at all" and "A lot" (afterwards coded between 0 and 100). This scale is subsequently referred to as the *RUM* scale. It was used three times in the experiment, at baseline (after training but before the experiment started), after rumination induction, and after a motor task.
Additionally, participants answered questions about the modality of the thoughts that occurred while performing the motor task. This last questionnaire consisted of one question evaluating the occurrence frequency of different modalities of inner thoughts (e.g., visual imagery, verbal thoughts, music). Then, a verbal/non-verbal ratio (i.e., the score on the verbal item divided by the mean of the score on the non-verbal items) was computed and used in the analyses, hereafter referred to as the *Verbality* continuous predictor (this scale is available online: [https://osf.io/3bh67/](https://osf.io/3bh67/)).^[We computed this ratio because we were interested in the proportion of verbal thoughts relative to all thoughts and not in the total amount of verbal thoughts per se.]
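As a minimal sketch, the *Verbality* index might be computed as below (item names and scores are hypothetical; the actual scale and items are available at the OSF link above):

```r
# hypothetical frequency ratings for each thought modality (higher = more frequent)
thoughts <- c(verbal = 4, visual = 2, music = 1, other = 3)

# Verbality index: verbal item score divided by the mean of the non-verbal items
verbality <- thoughts["verbal"] / mean(thoughts[c("visual", "music", "other")])
# here: 4 / 2 = 2, i.e., verbal thoughts reported twice as often as non-verbal ones
```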
#### Tasks
In the first part of the experiment, ruminative thoughts were induced using a classical induction procedure [@nolen-hoeksema_effects_1993]. Then a motor task was executed. Participants were randomly allocated to one of two conditions. In the *Mouthing* condition, the task consisted of repetitively making mouth opening-closing movements at a comfortable pace. This condition was selected as it is commonly used in articulatory suppression studies. As a control, a finger-tapping condition was used (the *Tapping* condition), that consisted of tapping on the desk with the index finger of the dominant hand at a comfortable pace.
Although finger-tapping tasks are generally considered as good control conditions when using speech motor tasks, since they are comparable in terms of general attentional demands, it may be that orofacial gestures are intrinsically more complex than manual gestures [i.e., more costly, @emerson_role_2003]. To rule out the possibility that orofacial gestures (related to the *Mouthing* condition) would be cognitively more demanding than manual ones (related to the *Tapping* condition), we designed a pre-test experiment to compare the two interference motor tasks used in the main experiment. Results of this control experiment showed no difference in reaction times during a visual search task between the two interference tasks (i.e., mouthing and finger-tapping). Full details are provided in Appendix \@ref(appendix-eyetracking).
### Procedure
The experiment took place individually in a quiet and dimmed room. The total duration of the session ranged between 35 and 40 minutes. Before starting the experiment, participants were asked to perform the motor task for 1 minute, while following a dot moving at a random pace on the screen in front of them. This task was designed to train the participants to perform the motor task adequately. Following this training and after describing the experiment, the experimenter left the room and each participant had to fill in a baseline questionnaire (adaptation of the PANAS, see above) presented on the computer screen. Baseline state rumination was then evaluated using the *RUM* scale. The whole experiment was video-monitored using a Sony HDR-CX240E video camera, in order to check that the participants effectively completed the task.
#### Rumination induction
Rumination induction consisted of two steps. The first step consisted of inducing a negative mood in order to enhance the effects of the subsequent rumination induction. Participants were asked to recall a significant personal failure experienced in the past five years. Then, participants were invited to evaluate the extent to which this memory was "intense for them" on a VAS between "Not at all" and "A lot", afterwards coded between 0 and 100, and referred to as *Vividness*.
The second step consisted of the rumination induction proper. We used a French translation of @nolen-hoeksema_effects_1993's rumination induction procedure. Participants had to read a list of 44 sentences related to the meaning, the causes, and the consequences of their current affective or physiological state. Each sentence was presented on a computer screen for 10 seconds, for a total duration of 7 minutes and 20 seconds. State rumination was then evaluated again using the same VAS as the one used at baseline (*RUM*).
#### Motor task {#proc_supp}
After the rumination induction, participants were asked to continue to think about "the meaning, causes, and consequences" of their feelings while either repetitively making mouth movements (for participants allocated in the "Mouthing" condition) or finger-tapping with the dominant hand for five minutes (for participants allocated in the "Tapping" condition). Afterwards, state rumination was again evaluated using the *RUM* scale.
In order to evaluate trait rumination, participants completed the short version of the RRS (see above). Then, they filled in the questionnaire on the modality of the thoughts that occurred while performing the motor task (see above). Figure \@ref(fig:diagramCH6) summarises the full procedure.
```{r, results = "hide", warning = FALSE}
######################
# MLMs for induction #
######################
priors_induction <- c(
prior(normal(0, 100), class = Intercept),
prior(normal(0, 10), class = b),
prior(exponential(0.1), class = sigma),
prior(exponential(0.1), class = sd)
)
M1_induction <- brm(
RUM ~ Induction + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M2_induction <- brm(
RUM ~ Induction + PANASpos + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M3_induction <- brm(
RUM ~ Induction + PANASneg + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M4_induction <- brm(
RUM ~ Induction + Induction:Vividness + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M5_induction <- brm(
RUM ~ Induction + PANASpos + Induction:Vividness + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M6_induction <- brm(
RUM ~ Induction + PANASneg + Induction:Vividness + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M7_induction <- brm(
RUM ~ Induction + PANASpos + PANASneg + Induction:Vividness + RRSreflection + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M8_induction <- brm(
RUM ~ Induction + PANASpos + PANASneg + Induction:Vividness + RRSbrooding + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M9_induction <- brm(
RUM ~ Induction + PANASpos + PANASneg + Induction:Vividness + RRSbrooding + RRSreflection + (1|Participant),
family = gaussian(),
prior = priors_induction,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
##########################
# Model comparison table #
##########################
source(here::here("code", "compare_waic.R") )
WAIC_table_induction <- compare_waic(
M1_induction, M2_induction, M3_induction, M4_induction, M5_induction,
M6_induction, M7_induction, M8_induction, M9_induction
)
WAIC_table_induction <- WAIC_table_induction@output[, 1:4]
bestmodel <- get(as.character(rownames(WAIC_table_induction[1] ) ) )
rownames(WAIC_table_induction) <- c(
"$Int+Ind+PANASpos+PANASneg+Ind:Viv+RRSbro$",
"$Int+Ind+PANASpos+PANASneg+Ind:Viv+RRSbro+RRSref$",
"$Int+Ind+PANASpos+PANASneg+Ind:Viv+RRSref$",
"$Int+Ind+PANASneg+Ind:Viv$",
"$Int+Ind+PANASpos+Ind:Viv$",
"$Int+Ind+PANASneg$",
"$Int+Ind+PANASpos$",
"$Int+Ind+Ind:Viv$",
"$Int+Ind$"
)
colnames(WAIC_table_induction) <- c("$WAIC$", "$pWAIC$", "$\\Delta_{WAIC}$", "$Weight$")
##############################################
# Preparing summary table for the best model #
##############################################
# parameters1 <-
# bestmodel %>%
# broom::tidy(effects = "fixed", conf.int = TRUE) %>% head
# dplyr::select(-statistic) %>% head
# `colnames<-`(c("", "Est", "SE", "Lower", "Upper") )
parameters1 <-
tidy_stan(bestmodel, prob = 0.95, typical = "mean", digits = 3) %>%
# removing ration and mcse
select(-ratio, -mcse) %>%
data.frame %>%
# rename columns
magrittr::set_colnames(
c("Term", "Estimate", "SE", "Lower", "Upper", "Rhat")
) %>%
# replace dots by two-dots
mutate(Term = str_replace(Term, "\\.", ":") ) %>%
mutate(Term = str_replace(Term, "b_", "") ) %>%
filter(Term != "sigma") %>%
# compute BF for each effect
mutate(
BF10 = (1 / hypothesis(bestmodel, glue("{Term} = 0") )$
hypothesis$Evid.Ratio) %>% round(., 3)
# Term = rep(
# c(
# "Inner Speech", "Listening", "Overt Speech",
# "Inner Speech x Class", "Listening x Class", "Overt Speech x Class"
# ),
# 2)
)
eff_size_bestmodel_induction <- bayes_R2(bestmodel)
###########################
# Preparing data for plot #
###########################
DF2 <-
DFclean %>%
group_by(Session, Condition) %>%
summarise(Lower = Rmisc::CI(RUM)[3], Mean = Rmisc::CI(RUM)[2], Upper = Rmisc::CI(RUM)[1]) %>%
data.frame
```
```{r diagramCH6, dev = "pdf", cache = TRUE, fig.align = "center", fig.width = 6, fig.height = 6, fig.cap = "Timeline of the experiment, from top to bottom."}
# graph LR (left-right) or graph TD (top-down)
g <- mermaid("
graph TD;
a0(<center>Motor training</center>) --> a;
a(<center>Baseline measures <br> PANAS & RUM</center>) -->|<i>Mood induction</i>\ |a1(<center>Vividness control </center>);
a1-->|<i>Rumination induction</i>\ |b(<center>Post-Induction measures <br> RUM</center>);
b --> c(<center>Mouthing</center>);
b --> d(<center>Tapping</center>);
c --> e(<center>Post-activity measures <br> RUM</center>);
b --> |<i>Session \ ' </i>\ |e;
d --> e(<center>Post-activity measures <br> RUM</center>);
e --> f(<center>Verbality & Trait rumination: RRS-R</center>);
f --> g(<center>Debriefing</center>);
style a0 fill:lightgrey, stroke:black, stroke-width:2px;
style a1 fill:lightgrey, stroke:black, stroke-width:2px;
style a fill:lightgrey, stroke:black, stroke-width:2px;
style b fill:lightgrey, stroke:black, stroke-width:2px;
style c fill:lightgrey, stroke:black, stroke-width:2px;
style d fill:lightgrey, stroke:black, stroke-width:2px;
style e fill:lightgrey, stroke:black, stroke-width:2px;
style f fill:lightgrey, stroke:black, stroke-width:2px;
style g fill:lightgrey, stroke:black, stroke-width:2px;
")
plotly::export(g, file = "mermaid.pdf")
```
### Data analysis
Statistical analyses were conducted using `R` version 3.5.3 [@R-base] and are reported with the `papaja` [@R-papaja] and `knitr` [@R-knitr] packages.
#### Rumination induction
We centred and standardised each predictor in order to facilitate the interpretation of parameters. To assess the effects of the rumination induction on self-reported state rumination, data were then analysed using *Induction* (2 modalities, before and after induction, contrast-coded) as a within-subject categorical predictor and *RUM* as a dependent variable in a Bayesian multilevel linear model (BMLM), using the `brms` package [@R-brms].^[An introduction to Bayesian statistics is outside the scope of this paper. However, the interested reader is referred to @nalborczyk_introduction_2019 for an introduction to Bayesian multilevel modelling using the `brms` package.] This model was compared with more complex models including effects of control variables, including baseline affect state (*PANAS* scores), trait rumination (*RRS* scores), the vividness of the memory chosen during the induction (*Vividness* score), or the degree of verbality of the ruminative thoughts (*Verbality* index).
Models were compared using the Widely Applicable Information Criterion [WAIC; @watanabe_asymptotic_2010] --a generalisation of the Akaike information criterion [@akaike_new_1974]-- and evidence ratios [@burnham_model_2002;@burnham_aic_2011;@hegyi_using_2011]. The WAIC provides a relative measure of predictive accuracy of the models (the WAIC is an approximation of the out-of-sample deviance of a model) and balances underfitting and overfitting by penalising models for their number of effective parameters. Evidence ratios (ERs) were computed as the ratios of weights: $ER_{ij} = \dfrac{w_{i}}{w_{j}}$, where $w_{i}$ and $w_{j}$ are the Akaike weights of models $i$ and $j$, respectively. These weights can be interpreted as the probability of each model being the best model in terms of out-of-sample prediction [@burnham_model_2002]. Whereas the use of WAIC is appropriate for model comparison and selection, it tells us nothing about the absolute fit of the model. To estimate this fit, we computed the Bayesian $R^2$ for MLMs using the `bayes_R2()` method in the `brms` package [@R-brms].
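As an illustration, the Akaike weights and evidence ratios described above can be computed from a set of WAIC values as follows (the WAIC values themselves are hypothetical, not taken from the models reported here):

```r
# hypothetical WAIC values for three candidate models
waic <- c(M1 = 502.3, M2 = 504.1, M3 = 509.8)

# Delta_WAIC: difference between each model and the best (lowest-WAIC) model
delta <- waic - min(waic)

# Akaike weights: probability of each model being the best
# in terms of out-of-sample prediction
weights <- exp(-0.5 * delta) / sum(exp(-0.5 * delta) )

# evidence ratio of model i over model j: ER_ij = w_i / w_j
ER_12 <- weights[["M1"]] / weights[["M2"]]
```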
Models were fitted using weakly informative priors (see the [supplementary materials](#suppCH6) for code details). Two Markov chain Monte Carlo (MCMC) chains were run for each model to approximate the posterior distribution, each including 5,000 iterations with a warmup of 2,000 iterations. Posterior convergence was assessed by examining trace plots as well as the Gelman-Rubin statistic $\hat{R}$. Constant effect estimates were summarised via their posterior mean and 95% credible interval (CrI), where a credible interval can be considered as the Bayesian analogue of a classical confidence interval. When applicable, we also report Bayes factors (BFs), computed using the Savage-Dickey method, which consists in taking the ratio of the posterior density at the point of interest divided by the prior density at that point. These BFs can be interpreted as an updating factor, from prior knowledge (what we knew before seeing the data) to posterior knowledge (what we know after seeing the data).
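As a minimal sketch of the Savage-Dickey method (using simulated posterior samples and a hypothetical Normal(0, 10) prior, not the actual fitted models):

```r
set.seed(666)

# hypothetical posterior samples for a slope parameter
posterior <- rnorm(1e5, mean = 1.5, sd = 0.5)

# posterior density at the point of interest (here, 0),
# estimated from the samples via a kernel density estimate
post_dens_at_0 <- approx(density(posterior), xout = 0)$y

# prior density at the same point, under a Normal(0, 10) prior
prior_dens_at_0 <- dnorm(0, mean = 0, sd = 10)

# Savage-Dickey: BF01 is the ratio of posterior to prior density at 0;
# its inverse, BF10, quantifies evidence for the presence of the effect
BF01 <- post_dens_at_0 / prior_dens_at_0
BF10 <- 1 / BF01
```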
#### Articulatory suppression effects
To assess the effects of articulatory suppression on self-reported state rumination, data were analysed in the same fashion as in the first part of the experiment, using *Session* (2 modalities, before and after motor activity, contrast-coded) as a within-subject categorical predictor, and *Condition* (2 modalities, Mouthing and Tapping) as a between-subject categorical predictor and *RUM* as a dependent variable. Moreover, effects of baseline affect state (PANAS and RRS scores), the vividness of the rumination induction memory and the verbality index were assessed by comparing models with and without these additional predictors.
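The contrast coding of the two categorical predictors described above can be sketched as follows (a hypothetical data frame layout mirroring the coding used in the analysis scripts):

```r
# all combinations of the two categorical predictors
d <- expand.grid(
    Session   = c("Post-induction", "Post-motor"),
    Condition = c("Mouthing", "Tapping")
    )

# contrast coding (-0.5 / +0.5): the intercept is then the grand mean,
# each main effect is the difference between the two levels of a factor,
# and the interaction coefficient is a difference of differences
d$Session_c   <- ifelse(d$Session   == "Post-induction", -0.5, 0.5)
d$Condition_c <- ifelse(d$Condition == "Mouthing",       -0.5, 0.5)
```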
## Results
The results section follows the data analysis workflow. More precisely, for each part of the experiment (i.e., first the analysis of the induction effects and then, the analysis of the impact of mouthing vs. finger-tapping), we first present the results of the model comparison stage in which we compare different models of increasing complexity. Subsequently, we report the estimates of the best model (the model with the lowest WAIC) and base our conclusions on this model.
Recall that, to assess rumination induction, the dependent variable is *RUM*, the main categorical predictor is *Induction* and additional continuous predictors are *PANAS*, *RRS*, *Vividness*, and *Verbality.* To assess articulatory suppression effects, the dependent variable is *RUM*, the main categorical predictors are *Session* (within-subject) and *Condition* (between-subject), and additional continuous predictors are *PANAS*, *RRS* and *Vividness.* Summary statistics (mean and standard deviation) for all these variables can be found in Table \@ref(tab:sumstatCH6).
```{r summary, results = "asis"}
summary_stat <-
DFclean0 %>%
mutate_if(is.factor, funs(as.character) ) %>%
select(Condition, Session, RUM:Age) %>%
group_by(Condition, Session) %>%
gather(Variable, Value, RUM:Age) %>%
group_by(Condition, Session, Variable) %>%
dplyr::summarise(
APA_descriptive_statistics = glue::glue("{mean} ({sd})",
mean = round(mean(Value), 2), sd = round(sd(Value), 2) )
) %>%
reshape2::dcast(
Condition + Session ~ Variable,
value.var = "APA_descriptive_statistics"
) %>%
select(Condition, Session, RUM, Age:Vividness) %>%
data.frame()
```
```{r sumstatCH6, results = "asis"}
summary_stat[summary_stat$Session %in% c("Post-induction", "Post-motor"), 4:11] <- "-"
summary_stat2 <-
summary_stat %>%
select(-Valence) %>%
t %>%
data.frame(stringsAsFactors = FALSE) %>%
rownames_to_column(., var = "Variables")
summary_stat3 <- summary_stat2[c(-1, -2), ]
summary_stat3 %>%
apa_table(
placement = "ht",
align = c("l", rep("c", 6) ),
caption = "Descriptive statistics (mean and standard deviation) of each recorded variable, for the final sample of participants that were included in the study.",
small = TRUE,
landscape = TRUE,
col_spanners = list(`Mouthing` = c(2, 4), `Tapping` = c(5, 7) ),
escape = FALSE,
row.names = FALSE,
col.names = c("Variables", "Baseline", "Post-induction", "Post-motor", "Baseline", "Post-induction", "Post-motor")
)
```
Figure \@ref(fig:plotexp1) shows the overall evolution of the mean *RUM* scores (i.e., self-reported state rumination) through the experiment according to each *Session* (Baseline, Post-induction, Post-motor) and *Condition* (Mouthing, Tapping). As displayed in this figure, considerable inter-individual variability was observed in all conditions. After the rumination induction, *RUM* scores increased in both groups and decreased after the motor task, with a stronger decrease in the *Mouthing* condition.
```{r plotexp1, fig.pos = "H", fig.width = 8, fig.height = 6, fig.cap = "Mean RUM score by Session and Condition, along with violin plots and individual data. Error bars represent 95\\% confidence intervals."}
pd <- position_dodge(0.9)
DFclean %>%
ggplot(aes(x = Session, y = RUM, colour = Condition, fill = Condition) ) +
# violin plots
geom_violin(
scale = "count", alpha = 0.15,
colour = "white",
position = pd,
#adjust = 0.8,
#draw_quantiles = 0.5,
show.legend = FALSE
) +
# plotting individual data points
geom_dotplot(
stackdir = "center",
binaxis = "y",
position = pd,
binwidth = 1,
dotsize = 1.5,
alpha = 0.2
) +
stat_summary(
fun.y = mean,
geom = "line", size = 1,
aes(group = Condition),
position = pd,
show.legend = FALSE
) +
# plotting means
stat_summary(
fun.y = "mean", geom = "point", shape = 16, size = 5,
position = pd,
show.legend = TRUE
) +
# plotting confidence intervals
stat_summary(
fun.data = mean_cl_normal,
geom = "errorbar", size = 1, width = 0,
fun.args = list(mult = 1.96),
show.legend = FALSE,
position = pd
) +
# grey scale
labs(x = "", y = "Self-reported levels of state rumination (RUM)") +
theme_minimal(base_size = 12) +
scale_colour_brewer(palette = "Dark2", direction = 1) +
scale_fill_brewer(palette = "Dark2", direction = 1)
```
### Correlation matrix between continuous predictors
```{r, results = "hide", warning = FALSE}
# keeping only one unique line per participant to compute correlations
DFcorr <- DFclean[DFclean$Session == "Baseline", ]
```
To prevent multicollinearity, we estimated the correlation between each pair of continuous predictors. Figure \@ref(fig:correxp1) displays these correlations along with the marginal distribution of each variable. The absence of strong correlations ($r > 0.8$) between any of these variables suggests that they can each be included as control variables in the following statistical models.
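A minimal sketch of this multicollinearity check (with simulated predictors standing in for the actual data) could look like:

```r
set.seed(666)

# simulated stand-ins for the continuous predictors
X <- data.frame(
    PANASpos  = rnorm(100),
    PANASneg  = rnorm(100),
    Vividness = rnorm(100)
    )

# Pearson correlation matrix, keeping each pair only once
r <- cor(X)
r[upper.tri(r, diag = TRUE)] <- NA

# flag pairs whose correlation exceeds the |r| > 0.8 threshold
collinear <- which(abs(r) > 0.8, arr.ind = TRUE)
```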
```{r correxp1, fig.pos = "H", fig.width = 10, fig.height = 10, fig.cap = "Diagonal: marginal distribution of each variable. Panels above the diagonal: Pearson's correlations between main continuous predictors, along with 95\\% CIs. The absolute size of the correlation coefficient is represented by the size of the text (lower coefficients appear as smaller). Panels below the diagonal: scatterplot of each pair of variables."}
######################
# correlation matrix #
######################
my_custom_cor <- function(data, mapping, color = I("grey50"), ...) {
# get the x and y data to use the other code
x <- GGally::eval_data_col(data, mapping$x)
y <- GGally::eval_data_col(data, mapping$y)
ct <- cor.test(x, y)
r <- unname(ct$estimate)
rt <- format(r, digits = 2)[1]
ci <- unname(ct$conf.int)
cit <- format(ci, digits = 2)
range <- max(abs(as.numeric(ci) ) ) - min(abs(as.numeric(ci) ) )
# helper function to calculate a useable size
percent_of_range <- function(percent, range) {
percent * diff(range) + min(range, na.rm = TRUE)
}
# plot the correlation value
ggally_text(
label = paste0(as.character(rt), "\n [", cit[1], ", ", cit[2], "]"),
mapping = aes(),
xP = 0.5, yP = 0.5,
size = I(percent_of_range(abs(r), c(4, 9) ) ),
color = "grey40",
...) +
# remove all the background stuff and wrap it with a dashed line
theme_classic() +
theme(
panel.background = element_rect(
color = color,
linetype = "longdash"
),
axis.line = element_blank(),
axis.ticks = element_blank(),
axis.text.y = element_blank(),
axis.text.x = element_blank()
)
}
DFcorr %>%
# selecting relevant variables
select(PANASpos, PANASneg, Vividness, RRSbrooding, RRSreflection, Verbality) %>%
# centering variables
mutate_all(funs(as.numeric(scale(.) ) ) ) %>%
# plotting the correlation matrix
GGally::ggpairs(
lower = list(continuous = GGally::wrap("points", color = "grey30", shape = 16) ),
upper = list(continuous = my_custom_cor)
) +
theme_bw(base_size = 12)
```
### Rumination induction
```{r effect_size1}
# computing the effect size (Cohen's d) of the induction
induction_d_av <-
effsize::cohen.d(
d = DF$RUM, f = factor(DF$Session),
paired = FALSE, pooled = TRUE
)
d_induction <- induction_d_av$estimate %>% as.numeric * ( - 1)
d_induction_lower <- induction_d_av$conf.int[2] %>% as.numeric * ( - 1)
d_induction_upper <- induction_d_av$conf.int[1] %>% as.numeric * ( - 1)
```
To examine the effectiveness of the induction procedure (i.e., the effect of *Induction*) while controlling for the other variables (i.e., *Vividness*, *RRSbrooding*, *RRSreflection*, *PANASpos*, and *PANASneg*),^[Note that we only included predictors that were theoretically relevant [as recommended, amongst others, by @burnham_model_2002;@burnham_multimodel_2004]. We did not blindly assess every combination of predictors.] we then compared the parsimony of models containing main constant effects and a varying intercept for *Participant*. Model comparison showed that the best model (i.e., the model with the lowest WAIC) was the model including *Induction*, *PANASpos*, *PANASneg*, *RRSbrooding*, and an interaction term between *Induction* and *Vividness* as predictors (see Table \@ref(tab:compexp1CH6)). Fit of the best model was moderate (`r glue_data(eff_size_bestmodel_induction %>% data.frame %>% round(., 3), "$R^{2}$ = {Estimate}, 95% CrI [{Q2.5}, {Q97.5}]")`).
```{r compexp1CH6, results = "asis"}
apa_table(
WAIC_table_induction,
placement = "ht",
align = c("l", "c", "c", "c", "c"),
caption = "Comparison of models, ordered by WAIC. The best model has the lowest WAIC.",
note = "$pWAIC$ is the number of (effective) parameters in the model. $Int$ = Intercept, $Ind$ = Induction, $Viv$ = Vividness, $RRSbro$ = RRSbrooding, $RRSref$ = RRSreflection. All models include a varying intercept for Participant.",
small = TRUE,
landscape = TRUE,
format.args = list(
digits = c(2, 2, 2, 3),
margin = 2,
decimal.mark = ".", big.mark = ""
),
escape = FALSE)
```
Constant effect estimates for the best model are reported in Table \@ref(tab:paramexp1CH6). Based on these values, it seems that *Induction* (i.e., the effects of the rumination induction) increased *RUM* scores by approximately `r round(parameters1[2, 2], 2)` points on average (\(d_{av} =\) `r round(d_induction, 3)`, 95% CI [`r round(d_induction_lower, 3)`, `r round(d_induction_upper, 3)`]). The main positive effect of *PANASneg* and the main negative effects of *PANASpos* indicate, respectively, that negative baseline mood was associated with higher levels of rumination while positive baseline mood was associated with lower levels of self-reported rumination.
```{r paramexp1CH6, results = "asis"}
#######################
# format BFs in table #
#######################
changeSciNot <- function (n) {
output <- formatC(n, format = "g", digits = 4)
output <- sub("e", "*10^", output) # replace e with 10^
output <- sub("\\+0?", "", output) # remove + sign and leading zeros on exponent, if > 1
output <- sub("-0?", "-", output) # keep - sign but remove leading zeros on exponent, if < 1
output
}
apa_table(
parameters1 %>% mutate(BF10 = changeSciNot(BF10) ),
# parameters1 %>% mutate(BF10 = formatC(BF10, format = "g", digits = 6) ),
placement = "ht",
align = c("l", "c", "c", "c", "c", "c", "c"),
caption = "Coefficient estimates, standard errors (SE), 95% CrI (Lower, Upper), Rhat and Bayes factor (BF10) for the best model.",
#note = "As all predictors were centered to the mean for analysis, these coefficients approximate coefficients from simpler models.",
small = TRUE,
format.args = list(
digits = c(3, 3, 3, 3, 2),
#format = c("f", "f", "f", "f", "f", "g"), # scientific notation
#width = c(3, 3, 3, 3, 3, 3, 3),
margin = 2
#decimal.mark = ".", big.mark = ""
),
escape = TRUE
)
```
Higher scores on *Vividness* were associated with higher increase in self-reported rumination after induction, as revealed by the positive coefficient of the interaction term. This suggests that participants who recalled a more vivid negative memory tended to show a higher increase in rumination after the induction procedure than participants with a less vivid memory.
### Articulatory suppression effects on induced rumination
```{r, results = "hide", warning = FALSE}
# extracting baseline RUM score by participant
rum_baseline <- DFclean %>% filter(Session == "Baseline") %>% select(RUM)
DF <-
DFclean %>%
mutate(RUMbaseline = rep(rum_baseline$RUM, each = n_distinct(.$Session) ) ) %>%
filter(!Session == "Baseline") %>%
mutate(
Session = ifelse(.$Session == "Post-induction", -0.5, 0.5),
Condition = ifelse(.$Condition == "Mouthing", -0.5, 0.5),
RUMbaseline = scale(RUMbaseline, scale = TRUE)
)
#########################
# MLMs for experiment 2 #
#########################
priors_null <- c(
prior(normal(0, 100), class = Intercept),
prior(exponential(0.1), class = sigma),
prior(exponential(0.1), class = sd)
)
priors_motor <- c(
prior(normal(0, 100), class = Intercept),
prior(normal(0, 10), class = b),
prior(exponential(0.1), class = sigma),
prior(exponential(0.1), class = sd)
)
M1_motor <- brm(
RUM ~ 1 + (1|Participant),
family = gaussian(),
prior = priors_null,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M2_motor <- brm(
RUM ~ 1 + Session + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M3_motor <- brm(
RUM ~ 1 + Session + Condition + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M4_motor <- brm(
RUM ~ 1 + Session + Condition + Session:Condition + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M5_motor <- brm(
RUM ~ 1 + Session + Condition + Session:Condition + Session:Condition:Verbality + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M6_motor <- brm(
RUM ~ 1 + Session + Condition + RUMbaseline + RRSbrooding + RRSreflection + PANASneg + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M7_motor <- brm(
RUM ~ 1 + Session + Condition + Session:Condition +
RUMbaseline + RRSbrooding + RRSreflection + PANASneg + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M8_motor <- brm(
RUM ~ 1 + Session + Condition + Session:Condition + Session:Condition:Verbality +
RUMbaseline + RRSbrooding + RRSreflection + PANASpos + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
M9_motor <- brm(
RUM ~ 1 + Session + Condition + Session:Condition + Session:Condition:Verbality +
RUMbaseline + RRSbrooding + RRSreflection + PANASneg + (1|Participant),
family = gaussian(),
prior = priors_motor,
data = DF,
sample_prior = TRUE,
cores = parallel::detectCores(),
chains = 2, warmup = 2000, iter = 5000,
control = list(adapt_delta = 0.95)
)
```
```{r, results = "hide", warning = FALSE}
##########################
# model comparison table #
##########################
source(here::here("code", "compare_waic.R") )
WAIC_table_motor <- compare_waic(
M1_motor, M2_motor, M3_motor, M4_motor, M5_motor, M6_motor, M7_motor, M8_motor, M9_motor
)
WAIC_table_motor <- WAIC_table_motor@output[, 1:4]
bestmodel_motor <- get(rownames(WAIC_table_motor)[1] )
rownames(WAIC_table_motor) <- c(
"$Session+Cond+Session:Cond+RUMb+PANASn+RRSb+RRSr$",
"$Session+Cond+RUMb+PANASn+RRSb+RRSr$",
"$Session+Cond+Session:Cond+Session:Cond:Verb+RUMb+PANASp+RRSb+RRSr$",
"$Session+Cond+Session:Cond+Session:Cond:Verb+RUMb+PANASn+RRSb+RRSr$",
"$Session+Cond+Session:Cond$",
"$Session+Cond$",
"$Session$",
"$Session+Cond+Session:Cond:Verb$",
"$Null\\ model$"
)
colnames(WAIC_table_motor) <- c("$WAIC$", "$pWAIC$", "$\\Delta_{WAIC}$", "$Weight$")
##############################################
# Preparing summary table for the best model #
##############################################
# bestmodel2 <- get(as.character(AICtab2[1, 1]) )
# bestmodel2 <- M7
#
# parameters2 <-
# tidy(bestmodel2, conf.int = TRUE) %>%
# dplyr::select(-statistic,-group) %>%
# `colnames<-`(c("", "Est", "SE", "Lower", "Upper") )
parameters2 <-
tidy_stan(bestmodel_motor, prob = 0.95, typical = "mean", digits = 3) %>%
# removing the ratio and mcse columns
select(-ratio, -mcse) %>%
data.frame %>%
# rename columns
magrittr::set_colnames(
c("Term", "Estimate", "SE", "Lower", "Upper", "Rhat")
) %>%
# replace dots by colons and strip the b_ prefix in term names
mutate(Term = str_replace(Term, "\\.", ":") ) %>%
mutate(Term = str_replace(Term, "b_", "") ) %>%
# remove sigma
filter(Term != "sigma") %>%
# compute BF for each effect
mutate(
BF10 = (1 / hypothesis(bestmodel_motor, glue("{Term} = 0") )$
hypothesis$Evid.Ratio) %>% round(., 3)
)
```
```{r, results = "hide", warning = FALSE}
# bayes factor for the interaction session:condition
hyp_interaction <- hypothesis(bestmodel_motor, "Session:Condition = 0")
# effect size for the best model
eff_size_bestmodel_motor <- bayes_R2(bestmodel_motor)
# computing the intra-class correlation for varying intercept
# ICC <- parameters2[9, 2]^2 / (parameters2[9, 2]^2 + parameters2[10, 2]^2)
ICC_temp <- icc(bestmodel_motor, prob = 0.95)
ICC <- data.frame(
Est = ICC_temp$ICC_decomposed,
Lower = ICC_temp$ICC_CI[1], #attributes(ICC_temp)$hdi.icc$icc_Participant[1],
Upper = ICC_temp$ICC_CI[2] #attributes(ICC_temp)$hdi.icc$icc_Participant[2]
) %>%
round(., 3)
#####################################################################
# computing the effect size (Cohen's d average) for each motor task #
#####################################################################
d_av_mouthing <-
effsize::cohen.d(
d = DF$RUM[DF$Condition == -0.5], f = factor(DF$Session[DF$Condition == -0.5]),
paired = FALSE, pooled = TRUE
)
d_mouthing <- d_av_mouthing$estimate %>% as.numeric * ( - 1)
d_mouthing_lower <- d_av_mouthing$conf.int[2] %>% as.numeric * ( - 1)
d_mouthing_upper <- d_av_mouthing$conf.int[1] %>% as.numeric * ( - 1)
d_av_tapping <-
effsize::cohen.d(
d = DF$RUM[DF$Condition == 0.5], f = factor(DF$Session[DF$Condition == 0.5]),
paired = FALSE, pooled = TRUE
)
d_tapping <- d_av_tapping$estimate %>% as.numeric * ( - 1)
d_tapping_lower <- d_av_tapping$conf.int[2] %>% as.numeric * ( - 1)
d_tapping_upper <- d_av_tapping$conf.int[1] %>% as.numeric * ( - 1)
```
In order to examine the effect of the two motor tasks (articulatory suppression and finger-tapping, *Condition* variable) on *RUM* while controlling for other variables (i.e., *Vividness*, *RRSbrooding*, *RRSreflection*, *Verbality*, *PANASpos*, and *PANASneg*), we compared the parsimony of several models, with or without these variables or their interaction. Given the group differences on *RUM* score at baseline (i.e., after training), we also included this score as a control variable in the models, as the *RUMb* variable ($M_{mouthing}$ = `r DFclean0 %>% filter(Condition == "Mouthing", Session == "Baseline") %>% pull(RUM) %>% mean %>% round(2)`, $M_{tapping}$ = `r DFclean0 %>% filter(Condition == "Tapping", Session == "Baseline") %>% pull(RUM) %>% mean %>% round(2)`). Based on our hypotheses, we examined the three-way interaction between *Session*, *Condition* and *Verbality*. More precisely, we expected that greater amounts of verbal thoughts would be associated with a greater difference in the effects of the motor task on self-reported state rumination (i.e., *RUM*) with respect to the group (i.e., mouthing vs. finger-tapping). Model comparison showed that the best model was the model including *Session*, *Condition*, an interaction term between *Session* and *Condition*, *RUMb*, *PANASneg*, *RRSbrooding* and *RRSreflection* as predictors (see Table \@ref(tab:compexp2CH6)). Absolute fit of the best model was moderate (`r glue_data(eff_size_bestmodel_motor %>% data.frame %>% round(., 3), "$R^{2}$ = {Estimate}, 95% CrI [{Q2.5}, {Q97.5}]")`). Therefore, contrary to our hypothesis, the best model did not include the three-way interaction between *Session*, *Condition* and *Verbality* as a constant effect. It did include an interaction between *Session* and *Condition*, however.
```{r compexp2CH6, results = "asis"}
apa_table(
WAIC_table_motor,
placement = "ht",
align = c("l", "c", "c", "c", "c"),
caption = "Comparison of models, ordered by WAIC. The best model has the lowest WAIC.",
note = "$pWAIC$ is the number of (effective) parameters in the model. $Int$ = Intercept, $Cond$ = Condition, $RUMb$ = RUM baseline score, $Verb$ = Verbality, $RRSb$ = RRSbrooding, $RRSr$ = RRSreflection. All models include a constant intercept and a varying intercept for Participant.",
small = TRUE,
landscape = TRUE,
format.args = list(
digits = c(2, 2, 2, 3),
margin = 2,
decimal.mark = ".", big.mark = ""
),
escape = FALSE
)
```
Parameter values of the best model for the second part of the experiment are reported in Table \@ref(tab:paramexp2CH6). Based on these values, it seems that self-reported rumination decreased after both motor tasks (the coefficient for *Session* is negative), but this decrease was substantially larger in the *Mouthing* condition (\(d_{av} =\) `r round(d_mouthing, 3)`, 95% CI [`r round(d_mouthing_lower, 3)`, `r round(d_mouthing_upper, 3)`]) than in the *Tapping* condition (\(d_{av} =\) `r round(d_tapping, 3)`, 95% CI [`r round(d_tapping_lower, 3)`, `r round(d_tapping_upper, 3)`]), as can be read from the coefficient of the interaction term between *Session* and *Condition* (*Est* = `r parameters2[8, 2]`, *SE* = `r parameters2[8, 3]`, *95% CrI* [`r parameters2[8, 4]`, `r parameters2[8, 5]`]). Importantly, the large uncertainty associated with this estimate (as expressed by the width of the credible interval) warrants careful interpretation: it should be considered as suggestive, rather than conclusive, evidence.
However, the Bayesian framework provides tools that permit richer inference. First, we can compare the relative weight of the best model (the model with the lowest WAIC) with a similar model without the interaction term (the second model in Table \@ref(tab:compexp2CH6)). This reveals that the model including an interaction term between *Session* and *Condition* is `r round(WAIC_table_motor[1, 4] / WAIC_table_motor[2, 4], 2)` times more *credible* than the model without the interaction term, which can be considered as weak but meaningful evidence [@burnham_model_2002].
Second, we can look at the BF for this particular parameter. As can be seen from Table \@ref(tab:paramexp2CH6), the BF for the interaction term is equal to `r round(1 / parameters2$BF10[parameters2$Term == "Session:Condition"], 2)`, which provides evidence for neither the presence nor the absence of an effect. However, this BF is computed using the Savage-Dickey method^[This method simply consists in taking the ratio of the posterior density at the point of interest divided by the prior density at that point [for a practical introduction, see @wagenmakers_bayesian_2010].] and as such is extremely sensitive to the prior choice. Thus, other priors (for instance a prior that is more peaked at zero) would provide stronger evidence for the interaction effect.
Third, and more interestingly, we can also directly look at the posterior distribution of the parameter of interest (the interaction term). Figure \@ref(fig:postdistCH6) shows this posterior distribution, its mean and 95% credible interval, as well as the proportion of the distribution which is above 0. This reveals that although the 95% credible interval largely encompasses 0, there is a `r round(mean(posterior_samples(bestmodel_motor)["b_Session:Condition"]>0), 3)` probability that the interaction between *Session* and *Condition* is positive (given the data and the priors). This suggests that the decrease in *RUM* score was indeed larger in the mouthing than in the tapping group.
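The probability reported above is simply the proportion of posterior samples that fall above zero; as a sketch with simulated samples standing in for the actual posterior:

```r
set.seed(666)

# hypothetical posterior samples for the Session:Condition interaction
post_interaction <- rnorm(1e4, mean = 0.8, sd = 0.6)

# posterior probability that the interaction is positive,
# given the data and the priors
p_positive <- mean(post_interaction > 0)
```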
```{r postdistCH6, fig.pos = "H", fig.width = 5, fig.height = 5, fig.cap = "Posterior distribution of the interaction parameter between Session (before vs. after the motor task) and Condition (mouthing vs. finger-tapping). The mean and the 95\\% credible interval are displayed at the top and the bottom of the histogram. The green text indicates the proportion of the distribution that is either below or above zero."}
post_interaction <- posterior_samples(bestmodel_motor)$`b_Session:Condition`
plotPost(
xlab = expression(beta[Session:Condition]),
post_interaction, compVal = 0,
col = as.character(bayesplot::color_scheme_get("blue")[2])
)
```
```{r paramexp2CH6, results = "asis"}
#######################
# format BFs in table #
#######################
changeSciNot <- function (n) {
output <- formatC(n, format = "g", digits = 4)
output <- sub("e", "*10^", output) # replace e with 10^
output <- sub("\\+0?", "", output) # remove + sign and leading zeros on exponent, if > 1
output <- sub("-0?", "-", output) # keep - sign but remove leading zeros on exponent, if < 1
output
}
apa_table(
parameters2 %>% mutate(BF10 = changeSciNot(BF10) ),
#parameters2 %>% mutate(BF10 = formatC(BF10, format = "g", digits = 6) ),
placement = "ht",
align = c("l", "c", "c", "c", "c", "c", "c"),
caption = "Coefficient estimates, standard errors (SE), 95% CrI (Lower, Upper), Rhat and Bayes factor (BF10) for the best model.",
#note = "As all predictors were centered to the mean for analysis, these coefficients approximate coefficients from simpler models.",
small = TRUE,
digits = 3,
escape = TRUE,
format.args = list(
digits = c(3, 3, 3, 3, 2),
#format = c("f", "f", "f", "f", "f", "g"), # scientific notation
#width = c(3, 3, 3, 3, 3, 3, 3),
margin = 2
#decimal.mark = ".", big.mark = ""
)
)
```
The large variation between participants can be appreciated by computing the *intra-class correlation* (ICC), expressed as $\sigma_{intercept}^{2}/(\sigma_{intercept}^{2}+\sigma_{residuals}^{2})$. For the best model, the ICC is equal to `r ICC$Est` (95% CrI [`r ICC$Lower`, `r ICC$Upper`]), indicating that `r round(100 * ICC$Est, 2)`% of the variance in the outcome that remains after accounting for the effect of the predictors is attributable to systematic inter-individual differences.
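With hypothetical variance components (not the estimates from the fitted model), the ICC formula above reduces to:

```r
# hypothetical standard deviations for the varying intercept and residuals
sd_intercept <- 8.5
sd_residual  <- 6.0

# intra-class correlation: share of the remaining variance attributable
# to systematic inter-individual differences
ICC <- sd_intercept^2 / (sd_intercept^2 + sd_residual^2)
```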
Figure \@ref(fig:plotverbal) shows the effects of *Verbality* on the relative change (i.e., after minus before) in self-reported rumination after both motor activities (i.e., *Mouthing* and *Tapping*). As *Verbality* was centred before analysis, its score cannot be interpreted in absolute terms. However, a high score on this index indicates more verbal than non-verbal (e.g., visual images, non-speech sounds) thoughts, whereas a low score indicates more non-verbal than verbal thoughts. Contrary to our predictions but consistent with the model comparison, this figure depicts a similar relationship between *Verbality* and the change in *RUM* score (from before to after the motor task) in both conditions. In the Mouthing condition, the change in *RUM* score decreased (i.e., became more negative) for participants with a higher self-reported degree of verbal content, suggesting that the more verbal the rumination is, the more it is affected by mouthing interference. However, contrary to our expectation, a similar (although perhaps weaker) trend was observed in the Tapping condition, suggesting that the more verbal the rumination is, the more it is affected by any motor task.
```{r plotverbal, fig.pos = "H", fig.width = 8, fig.height = 6, fig.cap = "Mean RUM relative change after motor activity, as a function of the degree of Verbality, in the mouthing (the green dots and regression line) and finger tapping (the orange dots and regression line) conditions."}
differential <- function(x) {
    # relative change in RUM (post-motor minus post-induction), repeated
    # so that it lines up with the two remaining sessions per participant
    rep(
        x$RUM[x$Session == "Post-motor"] - x$RUM[x$Session == "Post-induction"],
        each = 2
    )
}
DF4 <- DFclean[!DFclean$Session == "Baseline", ]
DF4$RUM <- differential(DF4)
DF4 <- DF4[!DF4$Session == "Post-induction", ]
DF4 %>%
ggplot(
aes(
x = Verbality, y = RUM, group = factor(Condition),
color = Condition, fill = Condition)
) +
geom_hline(
yintercept = 0, linetype = 3,
alpha = 1) +
geom_point(
size = 2, shape = 21,
color = "black", alpha = 0.8,
show.legend = TRUE) +
geom_smooth(
method = "lm",
se = FALSE,