\pagenumbering{arabic}
# General Introduction {.unnumbered}
It is said that humans are creatures of habit. But even habits are established and managed by a higher-order cognitive system, a human capacity expressed in innumerable situations that remains unmatched by any other species or artificial intelligence. My thesis aims to further our understanding of higher-order cognition. More specifically, I am interested in our ability to be goal-driven, which enables us to produce complex, meaningful, context-dependent behavior in uncertain environments, to inhibit prepotent responses, and to monitor and manage the cross-talk between conflicting tasks.
The role this ability plays in daily life is evident, for instance, when making pizza! We first need to plan a sequence of tasks: creating a shopping list, buying the ingredients, preheating the oven while proofing the dough, pausing the preparation of the toppings because the oven is beeping, and possibly multitasking to wash the dishes while cooking. Some skilled chefs can make great pizza on a stovetop burner rather than in an oven, demonstrating that their ability to make pizza can generalize and transfer from one environment to another. Tasks like making pizza are complex because they require a variety of cognitive functions, including planning, multitasking, task switching, attention, flexibility, monitoring, handling feedback, practice, and generalization, to name just a few. Yet most people can routinely perform such complex tasks.
Goal-driven higher cognition is of utmost importance to humans as it determines many aspects of our lives (e.g., academic and professional success, social relationships, health). Unfortunately, we do not yet fully understand how this type of higher-order cognition works or how to improve it for the benefit of individuals and society. There are, however, many ideas, theories, and lines of experimental work across multiple scientific fields that we can draw from.
Here, I will apply a multidisciplinary approach to clarify what this specific higher-order cognition is and how it operates in computational, quantitative terms. There are two primary motivations for focusing on computational/quantitative accounts. First, they may provide principled ways towards understanding and developing interventions to improve humans' goal-directed cognition; a large body of work indicates this is indeed possible, but we currently lack a clear theoretical framework to understand why and how those effects come about. Second, there have been important advances in artificial intelligence in recent years, and these may benefit our understanding of human cognition. Conversely, the study of human cognition may, as it has several times in the past, lead to insights that benefit new developments in artificial intelligence.
The scientific concept that best characterizes what I referred to as "goal-driven higher-order cognition" is "cognitive control", as articulated in @badre2020 and @cohen2017. In this context, cognitive control is an umbrella term for a set of processes that generate and monitor plans and actions in pursuit of evolving goals, often in noisy environments. Other related terms used (sometimes interchangeably) in the scientific literature include, for instance, "executive functions", "attentional control", "executive control", and "self-regulation". For consistency and simplicity, I will only refer to "cognitive control" in the remainder of this thesis, acknowledging, as have many before, that this is a complex and, to a large extent, ill-defined concept.
## Exploring cognitive control across cognitive science disciplines
This thesis is interdisciplinary and grounded in the cognitive sciences. In particular, it applies principles and techniques from cognitive psychology, neuroscience, and artificial intelligence, as these are key fields in which questions related to cognitive control have been extensively investigated. This interdisciplinarity offers synergies that support the systematic study of cognitive control using modern tooling, and the development of artificial agents that may benefit from human-like control abilities by aligning with human cognitive functioning [@russell2020].
### Cognitive Psychology and Neuroscience
In cognitive psychology, concepts that capture higher-order cognitive abilities such as cognitive control are difficult to define, and consequently also to quantify. This may in part be due to cognitive control being related to many other psychological constructs [see @cohen2017], and to its role in explaining task-dependent, contextual phenomena [Appendix A\; @ralph2014; @otto2013]. It may also be due to the more general limitation that psychological constructs are low-dimensional representations of distributed brain mechanisms [@jolly2019; @zink2021]. Nevertheless, to understand cognitive control, psychologists have devised a variety of theoretical constructs and cognitive tasks [see Chapter 1 and @baggetta2016], the relationships between which are not always clear. This lack of a cohesive understanding calls for conceptual and empirical clarification of what researchers mean by cognitive control and how to quantify it. Greater clarity and an integrated framework of cognitive control are required to advance the field.
In this regard, greater clarity may come from recent machine learning advances in natural language processing, which have made it possible to analyze large bodies of text in order to identify and connect underlying ideas [@angelov2020; @dieng2020; @beam2021]. Computational techniques such as ontologies and large language models can be leveraged to parse the ever-growing research on cognitive control and develop a cohesive framework offering a holistic and pragmatic view of how cognitive control is conceptualized and operationalized in the scientific literature. This type of integrative work seems critical to make sense of currently disparate research that comprises many psychological constructs and computational models, several brain mechanisms, and multiple cognitive tasks.
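The pipeline described above can be sketched in miniature. In the toy example below, a bag-of-words vector stands in for a large language model embedding, and the construct and task descriptions are invented for illustration; only the principle mirrors the approach, namely embedding texts and linking constructs to tasks through the similarity of their embeddings.

```python
import numpy as np
from collections import Counter

# Illustrative mini-corpus (invented text, not real abstracts): short
# descriptions of two constructs and two tasks.
construct_docs = {
    "inhibition": "suppressing a prepotent response under conflict",
    "task switching": "shifting between task sets with switch costs",
}
task_docs = {
    "stroop": "naming ink color under response conflict with prepotent reading",
    "task cuing": "cued shifting between task sets measuring switch costs",
}

def embed(text, vocab):
    """Toy bag-of-words embedding; a stand-in for an LLM encoder."""
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vocab = sorted({w for doc in (*construct_docs.values(), *task_docs.values())
                for w in doc.split()})

# Construct-by-task similarity matrix; thresholding it yields the edges
# of a joint construct-task knowledge graph.
similarity = {c: {t: cosine(embed(c_doc, vocab), embed(t_doc, vocab))
                  for t, t_doc in task_docs.items()}
              for c, c_doc in construct_docs.items()}
```

Even this crude embedding links "inhibition" most strongly to the Stroop task and "task switching" to the task-cuing task; with language-model embeddings over hundreds of thousands of abstracts, the same construction scales to a joint map of the literature.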
An integrated and formal account of cognitive control would be invaluable for programs aiming to improve cognitive control abilities in humans. Given the role of cognitive control in daily functioning, long-term achievements, and psychological health [@diamond2019; @moffitt2011], the possibility of improving cognitive control in a way that transfers to real life could have important implications across a wide range of use cases (e.g., rehabilitation, healthy aging, education, peak performance). The study of cognitive training and its consequences is also important from a theory perspective, as interventional methods (as in cognitive training regimes) offer a means to causally test computational theories of cognitive control.
Despite the ubiquity of cognitive training studies [@bediou2018], we currently lack a satisfactory theory of how training on specific tasks generalizes to new ones [@moreau2014; @oei2014a]. It is not entirely clear which interventions impact the cognitive system and how they do so, including what neural mechanisms in the brain enable cognitive control, how they are impacted by cognitive training, and how this impact causes the behavioral outcomes.
Currently, the main theories in this context revolve around one of two types of hypotheses. The first states that cognitive training interventions train multiple elementary cognitive processes; to the extent that new tasks rely on those same processes (or a subset of them), transfer effects will be observed on those new tasks [@oei2014a]. An alternative class of hypotheses states that cognitive training enhances domain-general abilities which are involved in virtually all cognitive tasks; among these domain-general abilities, cognitive and attentional control are the most prominent [@anguera2013; @green2008]. Which of these (if any) is true remains an open question, and part of the difficulty in making progress is the lack of theories that would allow predictions of how certain forms of training would or would not transfer to which other tasks.
The study of action video game training is of particular interest in cognitive control research. There is now a large body of research, including many training studies, establishing that playing action video games specifically causes improvements in performance across a broad range of cognitive tasks [@bediou2018], some of which generalize to real-life abilities [@franceschini2012], and there is also an increasing body of research investigating the neural mechanisms involved in video game play and its effects (see Chapter 4). These constitute fertile ground for building cognitive control theories and bridging the gap between experimental psychology, cognitive neuroscience, and the computational cognitive sciences. Brain function may, for instance, inspire new computational theories and behavioral experiments that involve cognitive control and generalization. In addition, action video games may offer cognitive neuroscientists a practical and safe means to causally study cognitive control, and may also provide new cognitive control assessment tools that may be more effective and valid than traditional batteries of tasks. Finally, the idea that effective cognitive training requires specific complex tasks, such as action video games, and is mostly ineffective when using simple cognitive tasks [@owen2010] seems to imply that as a field we need to study cognition within those complex tasks rather than focusing solely on standard cognitive tests, like the Stroop task. This calls for a paradigm shift in studying cognitive control, one which may benefit from modern technological advances in artificial intelligence [@zink2021; @perone2021; @doebel2020; @botvinick2022].
To sum up, cognitive neuroscience and psychology face two main challenges: (a) gaining greater clarity on the construct of cognitive control (what it is and how to measure it), and (b) understanding what features of the cognitive system (i.e., the agent) and what features of the task (i.e., the environment) determine cognitive control, its functioning, and its generalization in humans. Chapters 1, 2, and 3 aim to tackle these challenges.
### Artificial intelligence
The field of artificial intelligence provides a unique perspective on human cognition. Recent advances in machine learning have dramatically changed our ability to build accurate and scalable models of human cognition, which previously relied on minimal theoretical frameworks and limited data [@ho2022]. That is, modern cognitive science requires not only understanding cognitive control on a neural and psychological basis [@lindsay2020] but also understanding its computational mechanisms and building artificial agents that are aligned with and comparable to human cognition [@botvinick2022].
#### Control in artificial intelligence
Since the field's conception, artificial intelligence researchers have sought to develop computational models that mimic human intelligence. Unsurprisingly, then, cognitive control was investigated in artificial intelligence early on [@miller1960].
*What does cognitive control look like in AI?* Ideas in AI related to cognitive control have taken many forms. In its most abstract conception, control has been associated with optimizing the parameters of computational models so that they learn to perform a task and achieve a specific goal [@bensoussan2020]. This limited view of control can nevertheless be very powerful when it is implemented in advanced model architectures that allow for the emergence of complex behavior. Indeed, this approach has been very successful in designing generic artificial agents capable of performing many different, complex tasks [@reed2022; @yang2019].
There are, however, more elaborate views of cognitive control that have emerged over the past decade, inspired by research in computational cognitive science [@ho2022]. One such view holds that humans may simultaneously entertain two internal systems when performing a task: a model-free system and a model-based system [@daw2011]. In essence, the model-free system learns a policy (i.e., "how to act") that maps states (e.g., stimuli) to actions (i.e., "responses"). This system is fast but simple and task-specific, and it may thus generate errors and limit generalization. The other system is model-based, meaning that in the process of learning a policy, the system exploits its understanding of how the world works (e.g., by incorporating beliefs about state transitions into the decision-making process). This system is slower and more "effortful", but it may also be more flexible and lead to higher performance levels. What is interesting about this work is that it has been used to evaluate human behavior. The results show that not only do humans rely on both systems [@dolan2013], but the extent to which they do so depends on how many resources they have available [@otto2015]. For example, putting people in a stressful situation increases their reliance on the model-free system, presumably because internal resources are diverted towards addressing the stressor [@otto2013].
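The two-system distinction can be made concrete with a small, self-contained sketch (a toy illustration, not a reimplementation of the cited models). In a one-step task with a known transition structure, the model-free system caches action values from sampled experience, while the model-based system derives the same values by planning over an explicit model of the environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (all numbers illustrative): two actions lead probabilistically
# to one of two states; state 0 pays off, state 1 does not.
P = np.array([[0.7, 0.3],        # P[action, next_state]
              [0.3, 0.7]])
p_reward = np.array([1.0, 0.0])  # reward probability per state

def model_free_values(n_trials=5000, alpha=0.05):
    """Model-free system: cache action values via incremental TD updates,
    blind to the transition structure."""
    q = np.zeros(2)
    for _ in range(n_trials):
        a = rng.integers(2)                    # explore uniformly
        s = rng.choice(2, p=P[a])              # environment transition
        r = float(rng.random() < p_reward[s])  # Bernoulli reward
        q[a] += alpha * (r - q[a])             # TD(0)-style update
    return q

def model_based_values():
    """Model-based system: plan by taking an expectation over the known
    state-transition model."""
    return P @ p_reward

mf = model_free_values()   # noisy estimates, learned from experience
mb = model_based_values()  # exact values, computed from the world model
```

Both systems end up preferring the same action, but they get there differently: the model-free values are cheap cached statistics, whereas the model-based values update instantly if `P` or the rewards change, which is exactly the flexibility-versus-cost trade-off described above.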
Recent work shows that in addition to accounting for human phenomena, this idea of "two systems" may in fact be grounded in computational principles [@moskovitz2022]. More specifically, this framework posits the existence of two systems, where one system aims to perform a task well while the other additionally aims to simplify itself (by minimizing its description length), an idea that resonates in psychology with concepts like the automation of behavior, habit formation, and the reduction of effort with practice. A key motivation for implementing a system in this way is not only the long-term reduction of computational resources but also improved generalization to new tasks: a simpler model must discard minute, task-specific elements and may thus generalize better than the full model.
Other interesting ideas in this context include what we call "recycling" [or the active attempt to match what was previously learned to a new situation rather than starting from scratch\; @tomov2021] and "composition", the idea that complex behavior may emerge from models that are composed of computationally specific building blocks [@yang2019]. These are just a few of the many ideas that are relevant in this field and that offer new avenues for the study of cognitive control in both psychology and computer science.
### The value and challenges of interdisciplinary research
It is clear from the literature reviewed above that there is great scientific and practical value in aiming to bridge the gaps between the psychological and computer sciences; computational models can inform psychological theories and vice versa.
It is important to note that in both psychology and artificial intelligence, the concept of generalization is a major current scientific challenge. Humans are endowed with unique abilities to flexibly adapt their behavior and generalize what they have learned in one context to new, never-before-seen situations [@tenenbaum2011]. Playing action video games, for example, is thought to improve cognitive control abilities in ways that generalize to a broad set of tasks, ranging from visual contrast perception [@chopin2019] to reading [@franceschini2017]. The mechanisms underlying these human generalization abilities remain, however, largely unknown. Current artificial agents, on the other hand, have very limited generalization abilities despite their tremendous success in performing complex tasks well [@chollet2019]. To be more specific, these models are able to generalize from a training dataset to unseen test datasets that follow the same data distribution (e.g., a cat-dog classifier can classify new images of cats and dogs; i.e., these models are robust), but they cannot easily generalize to new tasks (e.g., a cat-dog classifier can't play chess; i.e., these models are not flexible). It appears then that there are great opportunities for psychology and artificial intelligence to join forces and develop new models of cognitive control that could help both better understand the human mind and develop the next generation of artificial agents.
A key step towards making this happen is to make it possible, and even easy, to compare human and artificial agents directly. There are many cases where this has been successfully done at the single-task level [e.g., @daw2011; @otto2013; @otto2015]. There is comparatively less work comparing human and artificial agents across multiple tasks [@yang2019; @mnih2015]. Yet, as stated by @yang2019: "The brain has the ability to flexibly perform many tasks, but the underlying mechanism cannot be elucidated in traditional experimental and modeling studies designed for one task at a time." A virtual environment allowing human and artificial agents to perform the exact same battery of tasks would be highly valuable and would support the integration of cognitive control theories across psychology and artificial intelligence. It may help ground cognition in computational terms [e.g., which types of tasks can be performed by a given computational architecture and which cannot\; @yang2019; @mnih2015], provide new insights and concepts to both psychology and computer science [@laird2017; @stocco2021; @christian2016], offer benchmarks for human and artificial agents as well as their comparison (relative performance profiles), lead to the development of new tasks (e.g., tasks that are diagnostic of types of artificial agents and that could be tested on humans), and perhaps yield new computational architectures that truly generalize [@chollet2019].
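A minimal sketch of what such a shared protocol could look like (all names and the task below are hypothetical, not the actual API of any existing environment): a task exposes `reset`/`step`, and any decision policy, whether it wraps an artificial agent or relays a human participant's key presses, interacts with the exact same task definition and yields data in the same format.

```python
import random

class GoNoGoTask:
    """Toy go/no-go task: respond (action 1) to a 'go' stimulus and
    withhold (action 0) on 'no-go' trials. A hypothetical example task."""

    def __init__(self, n_trials=100, seed=0):
        self.rng = random.Random(seed)
        self.n_trials = n_trials

    def reset(self):
        self.t = 0
        self.stimulus = self.rng.choice(["go", "nogo"])
        return self.stimulus

    def step(self, action):
        # Score the action against the stimulus the agent just saw.
        correct = (action == 1) == (self.stimulus == "go")
        self.t += 1
        done = self.t >= self.n_trials
        self.stimulus = self.rng.choice(["go", "nogo"])  # next trial
        return self.stimulus, float(correct), done

def run_episode(env, policy):
    """Run one episode; `policy` maps an observation to an action and can
    hide a neural network, a cognitive model, or a human interface."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

agent_rng = random.Random(1)
perfect_agent = lambda obs: 1 if obs == "go" else 0
random_agent = lambda obs: agent_rng.choice([0, 1])

score_perfect = run_episode(GoNoGoTask(), perfect_agent)
score_random = run_episode(GoNoGoTask(), random_agent)
```

Because both agents produce trial-by-trial records of the same shape, the same analysis code used for human data applies unchanged, which is the core requirement for direct human-machine comparison.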
## Current research
The main strategy in this thesis has been to establish a broader, interdisciplinary view of cognitive control that can be conceptually, computationally, and empirically studied and that integrates work within and across scientific fields. In line with this strategy, the current work explores a diverse set of approaches that together aim to better delineate the fuzzy concept of cognitive control.
The thesis comprises five research articles. Each of these articles is summarized in the following information sheets and discussed as a whole in the general discussion. Together, I hope, this work illustrates the benefits of the synergy between experimental psychology, neuroscience, and artificial intelligence in the study of cognitive control and opens up interesting future research perspectives.
\newpage
## Information sheets
\singlespacing
\small
+--------------+-------------------------------------------------------------------------------------------------------+
| **Title** | **Linking Theories and Methods in Cognitive Sciences via Joint Embedding of the Scientific** |
| | **Literature: The Example of Cognitive Control** |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Challenge**| Gain clarity on what is meant by cognitive control in the scientific literature and how |
| | it can be measured empirically. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Context** | Despite a large volume of publications, cognitive control remains a rather vague concept |
| | both theoretically and operationally [@baggetta2016]. Literature reviews by |
| | human domain experts have had limited success in bringing such clarity: they are not |
| | exhaustive, can't keep up with the rate of new publications, and may depict a biased, |
| | subjective perspective rather than an objective, quantitative view of the research field. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Why it**   | Greater clarity on cognitive control and its measurement is critical to advance the field             |
| **matters** | and integrate currently disparate research branches. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Method**   | We conducted automated text analysis on a large corpus of scientific abstracts (>500K)                |
| | downloaded from PubMed. We used a state-of-the-art language model (GPT-3) to encode |
| | scientific texts and create a joint view of cognitive control related constructs and |
| | tasks. This method allows the grounding of theoretical constructs on cognitive tasks |
| | (in the sense that tasks are used to measure the constructs) as well as the grounding of |
| | tasks on cognitive constructs (in the sense that constructs are used to theorize behavior |
| | in tasks). It also offers a unique holistic view of cognitive control constructs and tasks |
| | within a single knowledge graph. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Results** | The results confirm the complex nature of cognitive control, explain the difficulty of |
| | defining cognitive control and may lead to new theoretical and empirical insights. We |
| | conclude that cognitive control can't be assessed using a single task and should instead |
| | be measured using a battery of tasks (varying contexts and demands) or more complex tasks |
| | (e.g., video games). We also conclude that as a construct cognitive control may benefit |
| | from being decomposed into smaller, better defined constructs to make progress in the |
| | field. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Output** | The article was accepted as a conference paper for the CogSci2022 conference, the preprint |
| | is published on ArXiv [@ansarinia2022] and will be submitted for publication soon. The |
| | dataset is available on |
| | [huggingface.co/datasets/morteza/cogtext](https://huggingface.co/datasets/morteza/cogtext) |
|              | and the code is publicly available at                                                                 |
| | [github.com/morteza/CogText](https://github.com/morteza/CogText). |
| | |
| | The methods and implications are further described in Chapter 1. |
+--------------+-------------------------------------------------------------------------------------------------------+
: TL;DR -- Chapter 1 (CogText) {#tbl-tldr-cogtext}
\newpage
+--------------+-------------------------------------------------------------------------------------------------------+
| **Title** | **CogEnv: A Virtual Environment for Contrasting Human and Artificial Agents across Cognitive Tests** |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Challenge**| Modeling the environment: develop a virtual environment that allows the direct comparison of human |
| | versus artificial agents and thus supports the integration of cognitive control theories across |
| | psychology and artificial intelligence. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Context** | There have been important advances in artificial intelligence but those advances are not readily |
| | accessible to psychological scientists. Similarly, psychological scientists have developed tasks, |
| | concepts, and theories that might not be accessible or perceived as relevant by computer scientists. |
| | One impediment to a shared understanding is the lack of an interoperable environment in which both |
| | human and artificial agents can interact with the exact same tasks. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Why it** | Being able to record and directly compare behavior from both human and artificial agents opens up many|
| **matters** | new possibilities. It may help ground cognition in computational terms [e.g., which types of tasks |
| | can be performed by a given computational architecture and which can't\; @yang2019; @mnih2015], |
| | offer benchmarks for human and artificial agents as well as their comparison (relative performance |
| | profiles), lead to the development of new tasks (e.g., tasks that are diagnostic of types of |
|              | artificial agents and that could be tested on humans), and new computational models. It also allows   |
|              | one to train a given artificial agent on a battery of tasks and to study task correlations and        |
|              | transfer effects (i.e., training on one task leads to improved performance on other tasks depending   |
|              | on how "similar" the tasks are) that can be compared with and tested on human participants.           |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Method** | We developed CogEnv, a virtual environment that lets us interface both human and artificial agents to |
| | perform the exact same computerized battery of cognitive tasks. A wide range of artificial agents can |
| | be tested with this battery, provided they follow a common protocol (i.e., use pixels/symbols as |
|              | input, process reward signals, and emit actions). The data collected from these agents is in the same |
| | shape and format as human data and can thus be processed using the exact same data analysis code that |
| | is typical in experimental psychology (thus facilitating the direct comparison of human and artificial|
| | agents). As a proof of concept, we successfully trained baseline RL agents to perform a battery of |
| | cognitive tasks for which we also collected human data. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Results** | The overall framework is operational and appears very promising. A preliminary investigation |
| | illustrates the idea that the comparison of performance/error profiles of human versus baseline RL |
| | agents may reveal aspects of human cognitive control that are yet to be addressed by artificial |
| | agents. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Output** | The article was accepted and published as a conference paper for the CCN2022 conference. The code is |
| | available at [github.com/morteza/CogEnv](https://github.com/morteza/CogEnv). |
| | |
| | The method and implications of the proposed environment and expected performance profiles are further |
| | described in Chapter 2. |
+--------------+-------------------------------------------------------------------------------------------------------+
: TL;DR -- Chapter 2 (CogEnv) {#tbl-tldr-cogenv}
\newpage
+--------------+-------------------------------------------------------------------------------------------------------+
|**Title** | **CogPonder: Towards a Computational Framework of General Cognitive Control** |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Challenge** | Modeling the agent: developing a shared account of response times for human and artificial agents |
| | using a new type of computational model that functionally decouples control from controlled processes.|
+--------------+-------------------------------------------------------------------------------------------------------+
|**Context** | Computational models embody our theoretical understanding in an explicit and testable way. Current |
| | computational models of cognitive control are lacking in important ways. In psychology, cognitive |
| | control models tend to be designed for specific tasks (e.g., Stroop) which makes it hard to study |
| | cognitive control in general (e.g., across a battery of tasks, while playing video games or in |
| | real-life activities). Computer science, on the other hand, has recently been able to develop |
| | artificial agents that can perform complex tasks. However, computer scientists typically ignore |
| | resource limitations and how long it takes for an agent to make decisions and act (in some cases, |
| | the environment is "paused" for the agent computation to be completed). |
| | |
| | A defining (and measurable) property of human cognitive processing is that it takes time and that this|
| | amount of time varies depending on numerous factors in a meaningful way [i.e., response time\; see |
| | @ratcliff2013; @deboeck2019]. The exertion of cognitive control impacts response times and this impact|
| | is a major source of information in psychological research [e.g., "task-switching costs"\; |
| | @monsell2003]. |
| | What is missing then is a new type of computational model of cognitive control that is flexible enough|
| | to be used in combination with any model (hence being able to address more complex tasks), which |
| | decouples control from operation in a way that might be theoretically meaningful and which offers |
| | computational scientists a means to add control mechanisms to their computational models. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Why it** | The envisioned computational models would benefit psychology by offering a principled means to |
|**matters** | investigate cognitive control across a wide range of situations as well as the possibility to exploit |
| | innumerous complex models that have been developed in computer science. It would also benefit computer|
| | science by offering a principled and computationally practical (i.e., differentiable, modular) means |
| | to augment existing computational models with control abilities resulting in time varying responses. |
| | The comparison of response time profiles across human and artificial agents furthermore may offer |
| | insights benefitting both disciplines. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Method** | We propose a general deep learning framework that functionally decouples control (generating varying |
| | response times) from the decision making processes (making choices). The framework involves a |
| | controller that acts as a wrapper around any computational models (that "perceive" the environment |
| | and generate "actions" on that environment) and controls when the model should stop its processing |
| | and output a choice (this is known as the halting problem). |
| | |
| | This model is inspired by the Test-Operate-Test-Exit (TOTE) architecture [@miller1960] that conceives |
| | control as a recurrent mechanism that ultimately halts a computational process once a specific |
| | condition has been met. We instantiated TOTE using PonderNet, a recent deep learning framework for |
| | adaptive computing. By controlling halting, the framework continuously regulates how many resources |
| | are dedicated to the decision making agent, jointly affecting the choices (accuracy) and response |
| | speed of the system. |
| | |
| | We implemented CogPonder, a flexible, differentiable end-to-end deep learning model that can perform |
| | the same cognitive tests that are used in cognitive psychology to test humans. We then trained |
| | CogPonder to perform two cognitive control tasks (i.e., Stroop and N-back) while at the same time |
| | aligning it with human behavior. Next, we compared the behavior of CogPonder (i.e., accuracy and |
| | response time distributions) with that of humans. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Results** | CogPonder can be trained to perform cognitive tests and generates behavior that is similar to human |
| | behavior across multiple experimental conditions. CogPonder therefore provides a means for further |
| | investigating both human cognition and the computational models. |
| | |
| | The proposed model is very flexible (i.e., CogPonder can wrap around any deep learning model and is |
| | thus agnostic to specific model choices) and can be extended in many ways (e.g., using more advanced |
| | computational techniques to perform complex tasks). Most importantly, the proposed framework |
| | explicitly connects human behavior to artificial agents that produce human-like behaviors on a |
| | battery of cognitive control tasks. The framework thus provides interesting new insights and research |
| | opportunities for both psychology and computer science. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Output** | The manuscript will be submitted for publication soon. The code is available at |
| | [github.com/morteza/CogPonder](https://github.com/morteza/CogPonder). The method and results of the |
| | proposed computational model of response time are further described in Chapter 3. |
+--------------+-------------------------------------------------------------------------------------------------------+
: TL;DR -- Chapter 3 (CogPonder) {#tbl-tldr-cogponder}
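The halting mechanism at the heart of this framework can be illustrated with a minimal sketch. The snippet below assumes a fixed per-step halting probability and a step budget; both are illustrative simplifications (in PonderNet, and hence in CogPonder, the per-step halting probabilities are predicted by a network rather than fixed), and all names are hypothetical, not taken from the CogPonder codebase:

```python
# Minimal sketch of a PonderNet-style halting distribution, assuming a
# fixed per-step halting probability `lam` and a step budget `max_steps`.
# Illustrative only -- names are not from the CogPonder implementation.

def halting_distribution(lam, max_steps):
    """Probability p_n of halting exactly at step n (1-indexed).

    p_n = lam * (1 - lam)**(n - 1), with the remaining probability mass
    assigned to the final step so the distribution sums to 1.
    """
    probs = []
    survive = 1.0  # probability of not having halted before step n
    for n in range(1, max_steps + 1):
        p = lam * survive if n < max_steps else survive  # force halt at budget
        probs.append(p)
        survive -= p
    return probs

def expected_steps(probs):
    """Expected halting step -- a proxy for the model's response time."""
    return sum(n * p for n, p in enumerate(probs, start=1))

probs = halting_distribution(lam=0.3, max_steps=10)
print(round(sum(probs), 6))       # the halting probabilities sum to 1
print(round(expected_steps(probs), 3))
```

Tuning the halting probabilities thus shifts the expected number of computation steps, which is how halting jointly shapes accuracy (more steps, more processing) and response time.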
\newpage
+--------------+-------------------------------------------------------------------------------------------------------+
|**Title** | **Training Cognition with Video Games** |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Challenge** | Clarifying the relationship between training cognitive control with action video games and its |
| | transfer effects by reviewing behavioral and brain evidence. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Context** | Experience impacts brain functioning and structure and there is now considerable evidence that |
| | specific training regimes can improve cognitive control. In particular, playing action video games, |
| | as opposed to other kinds of games, has been shown to cause improvements across a broad range of |
| | cognitive abilities [@bediou2018]. Although there is no satisfactory explanation of these effects yet,|
| | one prominent view states that video games improve cognitive/attentional control abilities and that |
| | this improvement in cognitive control explains the transfer effects [@green2012a]. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Why it** | Training cognition in a way that transfers to real life has many practical implications (e.g., |
|**matters** | rehabilitation, healthy aging, education, peak performance). Understanding the underlying mechanisms |
| | would allow us to devise more effective interventions. The study of transfer effects is important |
| | because it offers a setting to test cognitive control theories in a non-trivial way. We currently have|
| | no satisfactory theory that could account for how training on one task would impact performance on a |
| | never seen before task. Understanding transfer requires developing computational models that can |
| | perform multiple tasks—this is a general goal that computational cognitive control models aim for. |
| | The study of training effects and their consequences is also important because they offer a means to |
| | causally test computational theories. Finally, the study of behavior during video game play poses |
| | interesting new questions to cognitive control scientists. Video games are complex interactive |
| | environments that engage cognitive systems in multiple, context dependent ways. Studying behavior |
| | during video game play may offer new insights on cognitive control that are relevant in the real |
| | world and that might not be apparent when using elementary cognitive tests. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Method** | This chapter reviews the behavioral and neuroimaging literature on the cognitive consequences of |
| | playing various genres of video games. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Results** | Our review highlights that different genres of video games have different effects on cognition. |
| | Action video games, as defined by first- and third-person shooter games, have been associated |
| | with greater cognitive enhancement, especially when it comes to cognitive control and top-down |
| | attention, than puzzle or life-simulation games. Playing action video games also seems to affect |
| | processing, spatial navigation, and reconfiguration of attentional control networks in the brain. |
| | The effects of playing action video games on behavior and the brain have been attributed to |
| | various psychological constructs, in particular attentional control, quick processing of sensory |
| | information, and rapid responses. |
| | |
| | These results suggest that cognitive training interventions need to be endowed with specific game |
| | mechanics for them to generate cognitive benefits, presumably by enhancing cognitive control |
| | abilities. We discuss what those game mechanics might be and call for a more systematic assessment of |
| | the relationship between video game mechanics and cognition. We also note that as video games become |
| | more and more advanced (i.e., mixing genres and game-play styles within the same video game), it will |
| | become increasingly difficult to study and understand their effects on cognition. |
| | This article lays a foundation for the study of cognitive and brain functioning using video games and |
| | illustrates the value of this approach to investigate general cognitive control. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Output** | The article has been published as a peer-reviewed book chapter [@cardoso-leite2021]. It is further |
| | provided in Chapter 4. |
+--------------+-------------------------------------------------------------------------------------------------------+
: TL;DR -- Chapter 4 (Review) {#tbl-tldr-review}
\newpage
+--------------+-------------------------------------------------------------------------------------------------------+
|**Title** | **Neural Correlates of Habitual Action Video Games Playing in Control-Related Brain Networks** |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Challenge** | Test the idea that action video game play affects neural functioning in ways that are compatible with |
| | cognitive control hypotheses, according to which action video gaming improves cognitive control, |
| | which in turn explains improved performance across a wide range of cognitive tests (i.e., transfer).|
+--------------+-------------------------------------------------------------------------------------------------------+
|**Context** | On the one hand, research shows that playing action video games improves cognitive performance across |
| | a wide range of cognitive tasks, presumably by enhancing people's cognitive control abilities |
| | [@bediou2018]. On the other, the cognitive neuroscience literature has highlighted integration of |
| | several functional brain networks as being important for cognitive control [@menon2022]. These two |
| | sets of theories have not yet been empirically confronted despite there being great value to do so. |
| | Indeed, there are competing hypotheses regarding the effects of action video gaming—some highlighting |
| | domain-general abilities (e.g., attention, cognitive control), others focusing on domain-specific ones|
| | (e.g., response speed). These alternative views make rather different predictions regarding changes |
| | in brain function (e.g., changes in specific functional networks vs changes in specific areas). |
| | |
| | Similarly, research on functional brain networks has highlighted numerous cognitive control networks.|
| | There are however some inconsistencies across such theories. Studying the impact of playing action |
| | video games provides a means to empirically test those theories and improve our understanding of how |
| | those networks work. |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Why it** | The study of the differences in functional brain networks between habitual action video game players |
|**matters** | and non-video game players can advance our understanding of both the mechanisms underlying the action |
| | video game training effects and the neural mechanisms supporting cognitive control in general. |
| | |
| | Confirming that action video game play affects cognitive control (via its functional neural |
| | underpinnings) has important implications for the study of cognitive training. It also has practical |
| | value as it would offer cognitive neuroscientists a new tool to causally study cognitive control. |
| | Finally, this type of work could lay a foundation towards bridging a gap between experimental |
| | psychology, cognitive neuroscience and computational cognitive sciences (brain function may for |
| | instance inspire new computational theories and behavioral experiments). |
+--------------+-------------------------------------------------------------------------------------------------------+
|**Method** | We curated a dataset collected by @föcker2018. The dataset comprises resting-state fMRI data |
| | (7 minutes and 30 seconds, or 125 time points) and task-fMRI data from a total of 32 human subjects |
| | (16 habitual action video gamers and 16 non-gamers). The original study focused on task-fMRI; here we |
| | analyze the resting-state data. |
| | |
| | We developed a machine learning pipeline to investigate the differences between habitual action video |
| | gamers and non-video gamers in terms of their functional resting-state brain connectivities, focusing |
| | in particular on networks associated with cognitive control. We used a robust approach to preprocess, |
| | remove confounds, parcellate, aggregate networks, and extract resting-state functional connectivity |
| | measures from the BOLD signals. The whole pipeline was cross-validated, and several arbitrary choices |
| | in the preprocessing were considered as hyperparameters of the model (for example parcellation atlas |
| | and connectivity measure). We trained a classifier to discriminate unseen participants as action |
| | video gamers versus non-gamers based on their resting-state functional connectivities. We then |
| | investigated what features were responsible for the model prediction accuracy by applying a |
| | permutation feature importance test. Additionally, SHAP analyses were conducted to investigate the |
| | contribution of each feature to the output (not the accuracy) of the model. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Results** | Our model classifies unseen participants as action video gamers or non-gamers based only on their |
| | resting-state functional connectivities, with an accuracy of 72.6%. This high level of accuracy |
| | demonstrates the value of resting state functional data to study action video gaming. Interestingly, |
| | the performance of the classifier depended on the specifics of the method used (i.e., parcellation |
| | technique, type of connectivity metric), supporting the utility of the robust/exhaustive methodology |
| | employed in this study. Investigating why the classification was successful shows that there is in |
| | fact no specialized network that differs among the two groups of participants. Instead, it is the |
| | interplay between networks that matters most, and in particular the interplay between the |
| | cingulo-opercular and the sensorimotor networks and between the frontoparietal and the sensorimotor |
| | networks—a result that is robust to variations in parcellation and connectivity metric. These results |
| | do not support the view that individual networks are enhanced by action video game play and suggest |
| | instead a mechanism that involves a reconfiguration of a collection of networks. These results |
| | provide new insights and have clear implications for both theories of action video game training and |
| | for cognitive neuroscientific theories of cognitive control in the human brain. |
+--------------+-------------------------------------------------------------------------------------------------------+
| **Output** | The article is being prepared for journal submission. The code is available at |
| | [github.com/morteza/ACNets](https://github.com/morteza/ACNets). The method and results are described |
| | in Chapter 5. |
+--------------+-------------------------------------------------------------------------------------------------------+
: TL;DR -- Chapter 5 (ACNets) {#tbl-tldr-acnets}
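The permutation feature importance test used in this pipeline can be sketched in plain Python. The data, the `predict` function, and all names below are illustrative stand-ins, not taken from the ACNets code: the idea is simply that shuffling one feature column and measuring the resulting drop in accuracy reveals how much the classifier relies on that feature.

```python
# Sketch of permutation feature importance on synthetic, connectivity-like
# features. Illustrative only -- not the ACNets pipeline.
import random

def accuracy(predict, X, y):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature, n_repeats=20, seed=0):
    """Mean drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(predict, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
        drops.append(base - accuracy(predict, X_perm, y))
    return sum(drops) / n_repeats

# Synthetic data: feature 0 carries the label, feature 1 is pure noise.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [int(x[0] > 0.5) for x in X]
predict = lambda x: int(x[0] > 0.5)  # a perfect classifier on feature 0

print(permutation_importance(predict, X, y, feature=0))  # large drop
print(permutation_importance(predict, X, y, feature=1))  # no drop
```

In the actual study the features are network-level connectivity measures rather than single columns, but the logic is the same: features whose permutation hurts accuracy most (here, the between-network connectivities) are the ones driving the classifier.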
\normalsize
\doublespacing