The Microstructure of Attentional Control in the
Dorsal Attention Network

Abhijit Rajan1, Sreenivasan Meyyappan1, Yuelu Liu2, Immanuel Babu Henry Samuel1,
Bijurika Nandi1, George R. Mangun2, and Mingzhou Ding1

Abstract

■ The top–down control of attention involves command signals
arising chiefly in the dorsal attention network (DAN) in frontal
and parietal cortex and propagating to sensory cortex to enable
the selective processing of incoming stimuli based on their be-
havioral relevance. Consistent with this view, the DAN is active
during preparatory (anticipatory) attention for relevant events
and objects, which, in vision, may be defined by different stimu-
lus attributes including their spatial location, color, motion, or
form. How this network is organized to support different forms
of preparatory attention to different stimulus attributes remains
unclear. We propose that, within the DAN, there exist functional
microstructures (patterns of activity) specific for controlling

attention based on the specific information to be attended. To
test this, we contrasted preparatory attention to stimulus loca-
tion (spatial attention) and to stimulus color (feature attention),
and used multivoxel pattern analysis to characterize the corre-
sponding patterns of activity within the DAN. We observed differ-
ent multivoxel patterns of BOLD activation within the DAN for
the control of spatial attention (attending left vs. right) and fea-
ture attention (attending red vs. green). These patterns of activity
for spatial and feature attentional control showed limited overlap
with each other within the DAN. Our findings thus support a
model in which the DAN has different functional microstructures
for distinctive forms of top–down control of visual attention. ■

INTRODUCTION

Visual attention can be voluntarily directed to spatial loca-
tions (spatial attention) or to object features such as color
or motion (feature attention; Duncan & Humphreys, 1989;
Posner, Snyder, & Davidson, 1980). Deployment of volun-
tary attention in advance of stimulus processing (prepara-
tory attention) enables facilitation of attended information
and suppression of ignored or irrelevant information
(Heinze et al., 1994; Mangun & Hillyard, 1991; Corbetta,
Miezin, Dobmeyer, Shulman, & Petersen, 1990; Moran
& Desimone, 1985; Van Voorhis & Hillyard, 1977).
Neurophysiologically, this feat is thought to be achieved
by control signals issued by a predominantly frontal and pa-
rietal network that bias visual cortex to enable the selective
processing of incoming sensory stimuli (Corbetta, Kincade,
Ollinger, McAvoy, & Shulman, 2000; Hopfinger, Buonocore,
& Mangun, 2000; Gitelman et al., 1999; Kastner, Pinsk, De
Weerd, Desimone, & Ungerleider, 1999). This attentional
control network includes bilateral FEFs and bilateral su-
perior parietal lobule/intraparietal sulcus (SPL/IPS), and
related areas, and has been referred to as the dorsal atten-
tion network (DAN; He et al., 2007).

The DAN has been implicated in the attentional control
of different forms of visual attention, including spatial atten-
tion, feature attention, and object attention (Morishima
et al., 2009; Slagter et al., 2007; Corbetta et al., 2005;

1University of Florida, Gainesville, 2University of California,
Davis

© 2021 Massachusetts Institute of Technology
Journal of Cognitive Neuroscience 33:6, pp. 965–983
https://doi.org/10.1162/jocn_a_01710

Giesbrecht, Woldorff, Song, & Mangun, 2003). What
remains unclear is precisely how the DAN supports these
different forms of attentional control, or, put another way:
How does the activity in the DAN represent the different
to-be-attended stimulus attributes in order to provide specific
top–down control signals to sensory systems? For example,
functional imaging studies of top–down preparatory atten-
tional control mechanisms have found scant evidence for
differential specializations in top–down control of spatial
compared to nonspatial attention (Slagter et al., 2007;
Corbetta et al., 2005; Giesbrecht et al., 2003), although
important clues come from work on the mechanisms of
feature attention (Niklaus, Nobre, & van Ede, 2017; Ibos &
Freedman, 2016; Summerfield & Egner, 2016; Astrand, Ibos,
Duhamel, & Ben Hamed, 2015; Bichot, Heard, DeGennaro,
& Desimone, 2015; Baldauf & Desimone, 2014; Liu & Hou,
2013; Greenberg, Esterman, Wilson, Serences, & Yantis,
2010), and to a lesser but important extent on attention to
objects (Liu, 2016; Baldauf & Desimone, 2014; Jiang,
Summerfield, & Egner, 2013; Morishima et al., 2009).

Two general alternative models have been offered about
the nature of the DAN in the control of attention in vision.
One is a domain-general model (Spagna, Mackie, & Fan,
2015; Fedorenko, Duncan, & Kanwisher, 2013; Wojciulik &
Kanwisher, 1999) and/or supramodal model (Betti, Corbetta,
de Pasquale, Wens, & Della Penna, 2018; Salmela, Salo, Salmi,
& Alho, 2018; Wang, Viswanathan, Lee, & Grafton, 2016;
Green, Doesburg, Ward, & McDonald, 2011; Shomstein &
Yantis, 2004) of the DAN, where it serves as an executive
system for all forms of attentional control. In such a view,
both specializations in the functional organization within
the DAN (Liu & Hou, 2013), and of intra-DAN connectivity
(Szczepanski & Kastner, 2013), likely play roles in different
forms of attentional control. A different view is the idea that
the DAN is primarily a spatially based system (Szczepanski &
Kastner, 2013; Mangun & Fannon, 2007; Molenberghs,
Mesulam, Peeters, & Vandenberghe, 2007; Bichot, Schall,
& Thompson, 1996) and that nonspatial feature and object
representations and/or control mechanisms are supported
by specialized regions outside the classical DAN (e.g., inferior
frontal junction; Bichot et al., 2015; Baldauf & Desimone,
2014). Here, we focus on the functional architecture of the
DAN, asking whether specializations within it might be
related to different forms of top–down attentional control
during preparatory attention.

Neuroimaging studies of attentional control have primarily
relied on univariate analysis (e.g., Bengson, Kelley, & Mangun,
2015; Szczepanski, Pinsk, Douglas, Kastner, & Saalmann,
2013; Sestieri et al., 2008; Corbetta et al., 2000; Hopfinger
et al., 2000). In univariate fMRI analysis, for a voxel to be
reported as activated by an experimental condition, it needs
to be consistently activated across individuals. Individual
differences in voxel activation patterns could lead to failure
to detect the presence of neural activity in a given region of
the brain (e.g., Haxby et al., 2011). Multivoxel pattern anal-
ysis (MVPA) provides a way to take into account the multi-
variate spatial pattern of the BOLD activity across voxels in
order to discriminate between experimental conditions
(Haynes, 2015; Tong & Pratte, 2012; Norman, Polyn,
Detre, & Haxby, 2006). Studies using MVPA to study object
recognition (Sterzer, Haynes, & Rees, 2008), attention (Liu
& Hou, 2013; Greenberg et al., 2010), and emotion (Kim
et al., 2015) have shown that multivoxel patterns can be
different between experimental conditions from individual
to individual, even if the average BOLD activity is compara-
ble across conditions and/or individuals. Furthermore,
MVPA is conducted at the individual participant level,
which takes into consideration the idiosyncratic nature of
each participant's spatial pattern of BOLD responses
(Haxby, Connolly, & Guntupalli, 2014; Cox & Savoy, 2003).
To investigate the organization of top–down prepara-
tory attentional control in the DAN, we utilized a well-
established cued spatial/feature attentional control task,
which permitted us to distinguish preparatory attentional
control from selective sensory processing and motor re-
sponses (Slagter et al., 2007; Giesbrecht et al., 2003). On
each trial, an auditory cue (spoken word) was presented
that gave advance information about the to-be-attended
target attribute (spatial location or color). Univariate and
multivariate analyses were performed on the cue-evoked
BOLD activity to investigate the distinct functional neuro-
anatomical substrates of spatial versus feature attentional
control in DAN. Here, we report successful decoding
of different forms of attentional control in the cue–target
interval and provide evidence for distinct neural activity
patterns—referred to as microstructures—for spatial

versus feature attentional control in the DAN. These find-
ings have important implications for our understanding
of the neural mechanisms of voluntary attentional control.

METHODS

Participants

The experimental protocol was approved by the institutional
review board of the University of Florida. Twenty healthy,
right-handed college students (mean age 24.65 ± 2.87 years,
15 men and 5 women) with normal or corrected-to-normal
vision, and no history of neurological or psychological
disorders, provided written informed consent and partic-
ipated in the study.

Paradigm

The experimental paradigm used was a variant of those
used in many previous preparatory attention studies in
which attention-directing cues instructed participants how
to selectively focus attention on each trial (e.g., Corbetta
et al., 2000; Hopfinger et al., 2000). As illustrated in
Figure 1, two peripheral locations, 3.6° lateral to the upper
left and upper right of the fixation point, were marked on the screen.

Figure 1. Experimental paradigm. Each trial started with an auditory
cue (spoken words, 500 msec in duration) that instructed participants
to covertly attend, while maintaining fixation on the plus sign, to either
a spatial location (“left” or “right,” independent of color), a color (“red”
or “green,” independent of location), or neither (“none”; see text for
description of these neutral cues). For spatial and color cues, after a
variable cue–target ISI (3000–6600 msec), on the majority of trials, two
colored rectangles were displayed (200 msec in duration), one in each
visual hemifield. Participants were asked to report the orientation of the
rectangle (horizontal or vertical) that was displayed in the cued location
(regardless of its color) or that had the cued color (regardless of its
location); the uncued rectangle was to be completely ignored (except
on 8% of trials that were invalidly cued, in which it was the only stimulus
on the screen, i.e., there was only a single rectangle that was presented in
the uncued location or having the uncued color). An intertrial interval
(ITI) that varied randomly from 8000 to 12800 msec followed the onset
of the target.


Each trial started with a spoken auditory cue of
500 msec in duration instructing the participant how to
covertly direct attention on that trial. Three types of trials
were included: (i) spatial cue trials, which directed attention
to a spatial location (“left” or “right,” independent of color;
40% of all trials); (ii) color cue trials, which directed atten-
tion to a color (“red” or “green,” independent of location;
40% of all trials); and (iii) neutral trials (the word “none”;
20% of all trials), which directed the participant to attend to
neither a spatial location nor a color, but instead to prepare
to report the orientation of the rectangle presented on a
gray patch. On 80% of the spatial and color cue trials, target
stimuli followed the cues (varied delay of 3000–6660 msec).
The target stimuli were either two colored rectangles (red
and/or green) simultaneously presented in the left and
right hemifields for a duration of 200 msec (valid trials)
or a single rectangle of 200 msec in duration appearing
in the uncued location or having the uncued color (invalid
trials). On the remaining 20% of spatial and color cue trials,
the cue appeared but no target followed (cue-only trials).
The participants’ task was to report (button press) the
orientation of the rectangle (target) appearing in the cued
location (spatial attention) or having the cued color (fea-
ture attention), and to ignore the other rectangle (distrac-
tor). For color cue trials, the two rectangles displayed were
always of different colors; for spatial cue trials, the two
rectangles were either of the same color or of different
colors. For neutral cue trials, two rectangles were also
displayed, and participants were required to discriminate
the orientation of the rectangle with the gray patch in the
background. On 8% of the spatial cue or color cue trials,
the cues were invalid: only one rectangle was
subsequently displayed (50/50 in left or right overall);
the rectangle appeared either in the uncued location or
having the uncued color, and the participants were re-
quired to report the orientation of that rectangle. Both
the neutral and invalidly cued trials were included to
permit the measurement of the behavioral effects of
attentional cuing (Posner, 1980), but were not included
in the BOLD analyses because there were too few such
trials. An intertrial interval, from target onset to the start
of the next trial, varied randomly from 8000 msec to
12800 msec. Trials were organized into blocks, with each
block consisting of 25 trials and lasting approximately
7 min, with short rest periods in between. Each participant
completed 10–14 blocks over 2 days. The ISIs and trial
structure were designed to enable successful deconvolu-
tion of overlapping BOLD responses from cues and tar-
gets, given the long duration of hemodynamic responses
(Woldorff et al., 2004; Ollinger, Corbetta, & Shulman,
2001; Ollinger, Shulman, & Corbetta, 2001; Burock,
Buckner, Woldorff, Rosen, & Dale, 1998).

The goal of our experimental design was to contrast
two types of preparatory (postcue/pretarget) atten-
tion: attention to spatial location and attention to a
nonspatial feature (color). During the preparatory period
after spatial cues, the participants could covertly orient

spatial attention in order to prepare to discriminate the
target orientation at the cued location, with the target
colors being irrelevant. During the preparatory period
after color cues, the participants could not develop an ex-
pectancy for where the relevant target would be, but only
what its color would be, and thus, only after the targets ac-
tually appeared could spatial attention be oriented and the
target discriminated. As a result, during the preparatory
period (postcue/pretarget or cue–target interval), the par-
ticipants engaged different forms of attentional control
(spatial or color). The logic of the design follows our prior
work (Slagter et al., 2007; Giesbrecht et al., 2003), but it
does not explicitly preclude that participants could have
adopted a strategy of dividing their spatial attention during
the preparatory period after color cues, given that they
knew that the targets would appear only in either
the left or right location. This is important to bear in mind
because it means that some activation of spatial control
structures within the DAN by the color cues may be un-
avoidable in this design (and most others), which would
have the effect of reducing our ability to differentiate pat-
terns of attention control for spatial versus color attention
(however, the pattern of results we present in the follow-
ing suggest that this was not the case).

As we noted earlier, and have done in prior studies, it is
possible to add additional control conditions to help with
the isolation of feature from spatial attention, but no sin-
gle design can do that perfectly (Slagter et al., 2007;
Giesbrecht et al., 2003). This aspect of the design is one
reason that (as described below) we performed the decod-
ing separately for preparatory spatial attention (decoding
left vs. right attention) and preparatory feature attention
(decoding red vs. green attention). By performing the
decoding in this way and then comparing the decoding
results, we ensure that our decoding results are focused
on the forms of attentional control that we aimed to inves-
tigate and not merely differences in preparatory spatial
attention (e.g., focused attention in spatial trials, but
divided spatial attention during feature trials), which could
lead to different task sets between spatial and feature trials
(Hubbard, Kikumoto, & Mayr, 2019), or potentially trivial
differences between conditions, such as systematic devia-
tions of eye position during task performance (Mostert
et al., 2018). With respect to the latter issue of eye posi-
tions, we ruled out a confounding influence of systematic
differences in eye positions by decoding eye position data
recorded using an eye tracker during the scanner sessions
(see Figure 8).

All participants went through a task training session
during which their eye movements during the task were
monitored using the EyeLink 1000 eye tracker system
(SR Research). The participants who showed an accuracy
above a minimum criterion (> 70%) and who were able to
maintain proper eye fixation throughout the experiment
(assessed by visual inspection of their fixation maps
derived from the eye-tracking data) took part in the actual
fMRI experiment, where eye tracking was also employed.

Rajan et al.

967

l

D
o
w
n
o
a
d
e
d

f
r
o
m
h

t
t

p

:
/
/

d
i
r
e
c
t
.

m

i
t
.

e
d
u

/
j

/

o
c
n
a
r
t
i
c
e

p
d

l

f
/

/

/

/

3
3
6
9
6
5
1
9
1
3
6
0
7

/
j

o
c
n
_
a
_
0
1
7
1
0
p
d

.

f

b
y
g
u
e
s
t

t

o
n
0
8
S
e
p
e
m
b
e
r
2
0
2
3

fMRI Acquisition and Preprocessing

Functional images were collected on a 3T Philips Achieva
scanner (Philips Medical Systems) equipped with a
32-channel head coil. The EPI sequence parameters were
as follows: repetition time = 1.98 sec; echo time = 30 msec;
flip angle = 80°; field of view = 224 mm; slice number = 36;
voxel size = 3.5 × 3.5 × 3.5 mm; matrix size = 64 × 64.
Slice orientation was parallel to the plane connecting the
anterior and posterior commissures. Simultaneous EEG
was also recorded but is not analyzed here. To permit
assessment of the quality of the EEG recordings, the image
acquisition protocol was modified: images were acquired
only during the initial 1.85 sec of each EPI volume, with
no image acquisition taking place during the final
130 msec of each repetition time.

fMRI data were preprocessed in SPM (Friston et al.,
1994). Preprocessing steps included slice timing correc-
tion, realignment, spatial normalization, and smoothing.
Slice timing correction was carried out using sinc interpo-
lation to correct for differences in slice acquisition time
within an EPI volume. In order to account for changes in
head position, spatial realignment of the images to the first
image of each session was performed using a 6-parameter
rigid body spatial transformation. Images from each
participant were normalized and coregistered to the
Montreal Neurological Institute template. Images were
resampled to a voxel size of 3 × 3 × 3 mm, spatially
smoothed using a Gaussian kernel with 7-mm full width at
half maximum and high-pass filtered with cutoff frequency
set at 1/128 Hz.
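
For illustration, the resampling, smoothing, and high-pass filtering steps could be expressed as follows. This is a minimal sketch in Python using nilearn, under the assumption that such a package is acceptable (the study itself used SPM); the file name is hypothetical.

```python
import numpy as np
from nilearn import image

# Load a realigned, MNI-normalized EPI run (hypothetical file name)
img = image.load_img("sub01_run01_epi_mni.nii.gz")

# Resample to 3 x 3 x 3 mm voxels
img = image.resample_img(img, target_affine=3 * np.eye(3))

# Smooth with a 7-mm FWHM Gaussian kernel
img = image.smooth_img(img, fwhm=7)

# High-pass filter with cutoff 1/128 Hz (TR = 1.98 sec)
img = image.clean_img(img, detrend=False, standardize=False,
                      high_pass=1.0 / 128, t_r=1.98)
```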

fMRI Analyses

Cue-evoked BOLD responses were first examined using
the univariate general linear model (GLM) approach
(Friston et al., 1994). Eight task-related regressors were in-
cluded in the GLM, modeling the following events: Five re-
gressors modeled BOLD activity related to the five types of
cues with correct responses: attend left, attend right, at-
tend red, attend green (each of which included cue–target
and cue-only trials), and neutral attention; two regressors
modeled BOLD activity evoked by target stimuli, that is,
validly cued and invalidly cued target stimuli; one addi-
tional regressor was added to model the cues with incor-
rect responses. A t test was performed by contrasting betas
from different conditions at a voxel level to yield the t map
for each participant. The brain activation map for spatial
attentional control was obtained by combining attend left
and attend right cues, and the brain activation map for
color attentional control was obtained by combining
attend red and attend green cues. Statistical analyses were
performed at the group level using a one-sample t test
on the t maps from all the participants, thresholded at
p < .05, corrected for multiple comparisons with the false
discovery rate (FDR) method (Genovese, Lazar, & Nichols, 2002)
as implemented in SPM. If the group-level maps were computed
using the individual contrast maps instead of the individual
t maps, the results were virtually identical, with the maps
computed the two different ways being highly correlated (R = .95).

Definition of ROI

The DAN was the focus of this study because of the extensive
literature that has focused on the role of the regions within
this network in attentional control, and this focus permitted
us to ask a specific question about this identified attentional
control network. The ROIs corresponding to the DAN were selected
using the statistically significant (p < .05, FDR corrected)
group-level cue-evoked BOLD activation map (space + color cues).
FEF included voxels activated in the precentral gyrus, superior
frontal gyrus, and middle frontal gyrus, and SPL/IPS included
voxels activated in the inferior parietal region and SPL,
consistent with previous studies (Szczepanski et al., 2013;
He et al., 2007; Slagter et al., 2007; Giesbrecht et al., 2003).
Activated voxels in the dorsal precuneus that were contiguous
with activated DAN voxels were also included in the ROI (Liu,
Bengson, Huang, Mangun, & Ding, 2016; Giesbrecht et al., 2003).
In addition to the DAN as a whole, to investigate whether there
were differences in MVPA decoding in major subdivisions of the
DAN, we also subdivided it into posterior DAN (pDAN; bilateral
SPL/IPS), anterior DAN (aDAN; bilateral FEF), left DAN (lDAN;
FEF and SPL/IPS in the left hemisphere), and right DAN (rDAN;
FEF and SPL/IPS in the right hemisphere).

As described above, the DAN ROI used in this study was defined
using the univariate analyses of the attention conditions at the
population level in coregistered standard space (Montreal
Neurological Institute space). This approach allowed us to
capture the core set of DAN voxels involved in the control of
attention to space and feature in the present task that are
common across the participants. Using this group approach means,
of course, that individual differences in the DAN may not be
accounted for in our analyses. Although the functional anatomy
of the DAN is well conserved across individuals (Dworetsky
et al., 2020; Gratton et al., 2018), we nonetheless could have
considered alternatives, such as using a localizer scan or
templates to identify the DAN in order to define individual
participant ROIs, as we have done in some prior work (Fannon,
Saron, & Mangun, 2008). Indeed, some prior decoding studies have
defined the DAN at the individual participant level in native
space (Liu & Hou, 2013), whereas others have taken approaches
similar to our group-level/standard-space method (Zhang &
Golomb, 2021). Although there are pros and cons to each
approach, one mitigating factor in our present work is that the
DAN ROI used in this study is rather large (1390 voxels) and is
therefore expected to have significant overlap with individual
DANs.
Estimation of Single-Trial BOLD Responses and MVPA

The MVPA technique explores the difference in spatial patterns
of BOLD activation to classify experimental conditions (Haynes,
2015; Haxby et al., 2014; Norman et al., 2006), and it is
performed at the single-trial level. We applied the beta series
regression method (Rissman, Gazzaley, & D'Esposito, 2004) to
estimate BOLD activation on each trial. The beta series
regression method has been used effectively to estimate
single-trial BOLD responses for MVPA (Kriegeskorte, Goebel, &
Bandettini, 2006; Norman et al., 2006).

There are different MVPA techniques. In this study, a linear
support vector machine (SVM) with C = 1 was used to identify the
patterns of activity within the DAN that were related to
preparatory attention to spatial location (by decoding left vs.
right attention) and, separately, to preparatory attention to
stimulus color (by decoding red vs. green attention). The
resultant SVM weight maps (described below) were then compared
for preparatory spatial versus color attention to assess whether
the patterns for the two forms of preparatory attention were
overlapping or distinct, and if overlapping, to what degree. All
the voxels activated in response to spatial or color cues in the
DAN ROIs were chosen as features for the MVPA analyses.

The classification accuracy for each participant was calculated
using a 10-fold cross-validation technique. In this technique,
90% of the labeled data (e.g., for spatial attention, attend
left vs. right trials) was used to train the classifier to
generate a predictive model, and the remaining 10% of the data
was used to test the model by comparing the actual labels
against the predicted labels. This process was repeated 10 times
using 10 different subsets of trials as testing data, and the 10
prediction accuracies were averaged. This averaged accuracy,
referred to as decoding accuracy, measures the distinctiveness
of preparatory spatial attention patterns or preparatory feature
attention patterns of BOLD activation in the DAN (Haxby et al.,
2014; Haynes & Rees, 2006).

A nonparametric permutation technique was used to test the
statistical significance of the decoding accuracy against
chance-level decoding (Stelzer, Chen, & Turner, 2013).
Specifically, at the individual participant level, the class
labels were shuffled 100 times, and for each shuffled label set,
the 10-fold cross-validation procedure was carried out. At the
group level, one classifier accuracy from the 100 shuffled
accuracies was chosen randomly for each participant and averaged
across participants. This procedure was repeated 10^5 times,
which resulted in 10^5 chance-level decoding accuracies at the
group level. The group-level decoding accuracy obtained from the
actual data was compared with this empirical distribution of
group-level chance accuracies to determine its statistical
significance.
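As an illustration of the decoding and permutation procedure, here is a minimal sketch in Python with scikit-learn, assuming single-trial beta estimates are already in hand; the array names are hypothetical and this is not the authors' actual code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

def decoding_accuracy(betas, labels):
    """10-fold cross-validated accuracy of a linear SVM with C = 1.

    betas: (n_trials x n_voxels) single-trial beta values in the ROI.
    labels: condition per trial (e.g., 0 = attend left, 1 = attend right).
    """
    clf = SVC(kernel="linear", C=1)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, betas, labels, cv=cv).mean()

def subject_null(betas, labels, n_shuffles=100):
    """Per-participant null accuracies from decoding shuffled labels."""
    return np.array([decoding_accuracy(betas, rng.permutation(labels))
                     for _ in range(n_shuffles)])

def group_p_value(observed, nulls, n_resamples=100_000):
    """Group-level permutation test (after Stelzer et al., 2013).

    observed: group-mean decoding accuracy from the real labels.
    nulls: (n_subjects x n_shuffles) per-subject null accuracies.
    """
    n_sub, n_shuf = nulls.shape
    # Draw one null accuracy per participant, average, and repeat
    picks = rng.integers(0, n_shuf, size=(n_resamples, n_sub))
    group_null = nulls[np.arange(n_sub), picks].mean(axis=1)
    return (group_null >= observed).mean()
```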
One point bears consideration in these methods. We used a
10-fold validation method, in which 90% of the trials were used
to train a model and the remaining 10% to test the model. Past
work has suggested that a leave-one-run-out approach is better
at maintaining independence between training and testing data
sets during cross-validation if the trials are close together in
time (Varoquaux et al., 2017). When the trials are sufficiently
separated in time, however, the leave-one-run-out method and the
leave-10%-out (10-fold validation) method are expected to
generate similar results. Our experiment utilized a slow
event-related design, and the average time interval between two
adjacent cues was 15 sec (see Figure 1). In addition, 20% of the
trials were cue-only trials in which the cue was not followed by
a target, leading to further separation of events in the
experiment. These design choices help to ensure that there is
less overlap in hemodynamic response between any two events,
making them fairly independent of each other. As expected, when
we directly compared decoding using leave-one-run-out versus
leave-10%-out cross-validation in a randomly selected subset of
participants (n = 7), we found no significant differences in
decoding accuracy between the two methods of cross-validation
(p = .95).

Classifier Weight Maps

In addition to decoding accuracy, another key aspect of the SVM
technique is the weight map, which can be used to attribute
functional significance to each voxel. Specifically, a linear
SVM tries to find a hyperplane that maximizes the margin
separating two classes of data (Cortes & Vapnik, 1995), which in
this study are (i) attending left versus right for spatial
attention and (ii) attending red versus green for feature
attention. The weight vector normal to each separating
hyperplane represents the direction along which there exists
maximal separation between the two classes of data.

It is worth noting that the weight maps corresponding to the raw
SVM weight vectors are difficult to interpret functionally. An
fMRI voxel that does not contain stimulus information may
acquire a large weight because it helps to improve the
signal-to-noise ratio in other voxels that do contain stimulus
information (Kriegeskorte & Douglas, 2019). The transformation
method proposed by Haufe et al. (2014), however, remedies this
situation. In this method, the weight vector from the SVM is
transformed into an activation pattern by multiplying it with
the covariance matrix of the input data, Z = Cov(X) * W, where X
is the input data (an N × V matrix containing N trials and V
voxels/features), W is the weight vector of length V, and Z is
the corrected weight vector, which, according to prior studies,
is more functionally relevant (Grootswagers, Wardle, & Carlson,
2017; Haufe et al., 2014). The corrected weight vector was
normalized by dividing by its maximum absolute value, projected
onto the voxels within an ROI, and visualized in the form of a
brain map referred to as the weight map (Lee, Halder, Kübler,
Birbaumer, & Sitaram, 2010; Mourão-Miranda, Bokde, Born, Hampel,
& Stetter, 2005).

In addition to the magnitude, the sign of the weight in each
voxel is also meaningful, providing information about the
contribution of that voxel to a particular condition (e.g.,
attend left vs. attend right). To illustrate, suppose that
Condition A is assigned the positive label (+1) and Condition B
the negative label (−1). A positive weight means that the voxel
has higher activity during Condition A as compared to Condition
B, whereas a negative weight means higher activity during
Condition B as compared to Condition A. The functional
activation difference becomes more pronounced for voxels with
larger weights (see Results section for a demonstration of these
properties).
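A minimal numerical sketch of this transformation (numpy plus a fitted scikit-learn linear SVM; the data here are synthetic stand-ins, not the study's data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1390))  # synthetic: 200 trials x 1390 voxels
y = np.repeat([1, -1], 100)       # +1 = Condition A, -1 = Condition B

clf = SVC(kernel="linear", C=1).fit(X, y)
w = clf.coef_.ravel()             # raw SVM weight vector of length V

# Haufe et al. (2014) transformation: Z = Cov(X) * W
Z = np.cov(X, rowvar=False) @ w

# Normalize by the maximum absolute value before projecting onto the ROI
Z = Z / np.abs(Z).max()

# Sign convention: Z[v] > 0 implies higher activity in Condition A,
# Z[v] < 0 implies higher activity in Condition B.
```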
As a consequence of the foregoing, for the purposes of this
study, the weight maps, obtained by combining SVM weight vectors
with the Haufe et al. (2014) transformation, can then (with
appropriate caveats) be interpreted in terms of the functional
anatomical microstructures underlying different types of
attentional control.

Monitoring of Eye Movements

During fMRI scanning, eye position was monitored and recorded
using an EyeLink 1000 MRI-compatible eye tracker. The x and y
coordinates of eye position were averaged in 100-msec windows
with 50% overlap and subjected to SVM analysis. At each time
point, decoding accuracy between different attention conditions
was obtained for each participant by implementing a 10-fold
cross-validation technique and averaged across participants to
yield the decoding accuracy time course. Serial t tests were
performed to identify time periods where decoding accuracy was
above chance level. FDR correction was applied to account for
multiple comparisons across time points.

RESULTS

Behavior

The overall mean RT and accuracy across all trial types were
1011 ± 183 msec and 93.66% ± 3.82%, respectively. For spatial
trials, mean RT was 1016 ± 178 msec and accuracy was 93.94% ±
3.83%, whereas for color trials, RT was 1006 ± 188 msec and
accuracy was 93.39% ± 4.50%. There were no significant
differences (p > .5) in these overall behavioral measures
between the
spatial and color attention conditions (Figure 2A). Since
p values depend on the sample size, whereas the effect
size does not, we also considered effect size. Cohen's
d was 0.05 for the RT difference and 0.127 for the
accuracy difference, both negligible given that d = 0.2
is conventionally considered a small effect. Furthermore, a Bayesian
analysis was also applied to further compare RT and accu-
racy between spatial and feature attention conditions. For
RT, the Bayes factor in favor of the alternate hypothesis
(RT for spatial ≠ RT for feature) was very low at 0.26,
whereas the Bayes factor in favor of the null hypothesis
(RT for spatial = RT for feature) was 3.8, with 3.2 or higher
considered as offering substantial supporting evidence
(Kass & Raftery, 1995). For accuracy, the Bayes factor in
favor of the alternate hypothesis (accuracy for spatial ≠ ac-
curacy for feature) was also very low at 0.28, whereas the
Bayes factor in favor of the null hypothesis (accuracy for
spatial = accuracy for feature) was 3.6. These behavioral
results provide converging evidence to indicate that the
general level of task difficulty was equivalent between spa-
tial and color cue conditions, which was by design (during
pretesting of the paradigm, the aspect ratios and choice
of color of targets, in spatial and color conditions, were
independently adjusted to match performance across
conditions).
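
For reference, the paired comparison and effect size can be computed as follows; this is a sketch with synthetic stand-in RTs (the exact Cohen's d estimator and Bayesian test the authors used are not specified, so a common paired-samples formulation is shown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_spatial = rng.normal(1016, 178, size=20)  # synthetic per-participant mean RTs
rt_color = rng.normal(1006, 188, size=20)

t, p = stats.ttest_rel(rt_spatial, rt_color)  # paired t test

diff = rt_spatial - rt_color
cohens_d = diff.mean() / diff.std(ddof=1)     # paired-samples Cohen's d

# A JZS Bayes factor could be obtained with, e.g., the pingouin package:
# pingouin.ttest(rt_spatial, rt_color, paired=True) reports a BF10 column.
```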

Figure 2. Behavioral results.
(A) No significant differences in
either overall RT or accuracy
were observed between spatial
and color trials. This suggests
that our design ensured that
spatial attention and color
attention conditions did not
differ in task difficulty. (B)
Attention effects (differences
between validly and invalidly
cued trials) were significant
for both spatial and color
conditions. RT was faster and
accuracy was higher for validly
cued (attended) targets (*p < .001). This suggests that the
participants attended the cued location or the cued feature
according to instructions.

Focused attention improved behavioral performance for both
spatial trials (valid RT < invalid RT, p < 10^−3) and color
trials (valid RT < invalid RT, p < 10^−5), as shown in
Figure 2B, providing behavioral evidence that the participants
deployed covert attention selectively to the cued location or
the cued feature according to instructions. Furthermore,
responses on validly cued spatial and color trials were
significantly faster than on neutral (none cue) trials
(p < 10^−5), providing additional behavioral evidence that the
participants were deploying the appropriate attention after the
attention-directing cues.

Univariate Analysis of Cue-evoked BOLD Activation

The GLM was applied to examine univariate BOLD activations in
response to the various attention-directing cues. Consistent
with prior reports, bilateral FEF, bilateral SPL/IPS, and
precuneus in the DAN were activated in response to both spatial
and color cues (Slagter et al., 2007; Giesbrecht et al., 2003;
Corbetta et al., 2000; Hopfinger et al., 2000), providing neural
evidence that participants deployed preparatory attention in the
cue–target interval (Figure 3A, B, and C). Statistically
contrasting spatial cues versus color cues (Figure 3D) revealed
that small clusters in bilateral SPL and left FEF within the DAN
were significantly more activated for spatial cues than for
color cues. However, when color cues were contrasted against
spatial cues, no activated regions were found (Figure 3E). This
pattern of findings replicates our original work contrasting
attentional control for spatial location and nonspatial feature
(color) attention (Mangun & Fannon, 2007; Slagter et al., 2007;
Giesbrecht et al., 2003).

MVPA Analysis: Decoding Different Forms of Attentional Control

In order to test for the existence of distinct multivoxel neural
activity patterns supporting spatial and color preparatory
attentional control in the DAN that might be obscured by
univariate methods, the SVM classifier was applied to voxelwise
single-trial beta values in the cue–target interval, separately
for spatial attention (decoding attend left vs. attend right)
and color attention (decoding attend red vs. attend green)
trials. As shown in Figure 4A and 4C, within the DAN as a whole,
the mean accuracy of decoding attend left versus attend right
was 55% (p < .0001), and the mean accuracy for decoding red
versus green was 57% (p < .0001), both significantly above the
chance level of 50%. Further dividing the DAN into aDAN, pDAN,
lDAN, and rDAN, we found that the decoding accuracies between
spatial conditions (Figure 4B) and between color conditions
(Figure 4D) were all above chance level, indicating that
distinct neural activity patterns supporting different forms of
attentional control were also present in DAN subdivisions.
Furthermore, across participants, the decoding accuracies in the
different DAN subdivisions and in the whole DAN were found to be
correlated with one another (Tables 1 and 2), suggesting that
individual differences in pattern distinctness were similar
across these tested ROIs.
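The between-ROI correlations reported in Tables 1 and 2 amount to correlating per-participant decoding accuracies across ROIs; a sketch with a synthetic stand-in array:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows = participants; columns = accuracies for [DAN, pDAN, aDAN, lDAN, rDAN]
acc = 0.5 + 0.1 * rng.random((20, 5))

corr = np.corrcoef(acc, rowvar=False)  # 5 x 5 matrix as in Tables 1 and 2
```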
In Figure 3D, small clusters of voxels in the DAN were more
activated by spatial cues than by color cues; these voxels
accounted for 8% of the total DAN ROI. We tested whether the
decoding results differed when we included (DAN) versus omitted
(DAN-8%) these voxels. For attend left versus attend right, the
respective decoding accuracies were virtually identical:
DAN = 55.37% and DAN-8% = 55.27%. The correlation between the
two spatial attention weight maps was 0.991. For attend red
versus attend green, the respective decoding accuracies were
again virtually identical: DAN = 56.58% and DAN-8% = 56.66%. The
correlation between these two color attention weight maps was
0.996. Thus, the results reported here are not expected to
differ if we omitted the 8% of voxels, and therefore we retained
those voxels in defining our DAN ROI.

Figure 3. Univariate analyses of cue-evoked BOLD activation.
BOLD signal was significantly increased (p < .05, FDR) in DAN
structures in response to (A) spatial + color cues, (B) spatial
cues only, and (C) color cues only. (D) Parts of bilateral SPL
and left FEF were more activated when spatial cues were
contrasted against color cues. (E) No regions in the DAN were
more activated when color cues were contrasted against spatial
cues. These findings replicate prior work demonstrating
significant overlap between DAN activation for spatial and
feature attention control using univariate analysis methods.

Figure 4. MVPA decoding accuracy for preparatory spatial and
feature attention. Cue-evoked BOLD activation was estimated on a
trial-by-trial basis. SVM was applied to decode the different
attention control conditions. Decoding accuracy for spatial
attention (attend left vs. attend right): (A) within the DAN as
a whole and (B) within subdivisions of the DAN. Decoding
accuracy for color attention (attend red vs. attend green): (C)
within the DAN as a whole and (D) within subdivisions of the DAN
(**p < .0001, *p < .05).

Table 1. Spatial Attention: Correlations between Decoding
Accuracies for Decoding Performed on the Whole DAN and for
Decoding Performed Separately on Each Specified Subdivision of
the DAN

       DAN   pDAN  aDAN  lDAN  rDAN
DAN    1.00  0.92  0.87  0.94  0.83
pDAN   0.92  1.00  0.75  0.86  0.78
aDAN   0.87  0.75  1.00  0.81  0.80
lDAN   0.94  0.86  0.81  1.00  0.76
rDAN   0.83  0.78  0.80  0.76  1.00

Table 2. Feature Attention: Correlations between Decoding
Accuracies for Decoding Performed on the Whole DAN and for
Decoding Performed Separately on Each Specified Subdivision of
the DAN

       DAN   pDAN  aDAN  lDAN  rDAN
DAN    1.00  0.85  0.85  0.94  0.64
pDAN   0.85  1.00  0.57  0.75  0.72
aDAN   0.85  0.57  1.00  0.87  0.63
lDAN   0.94  0.75  0.87  1.00  0.56
rDAN   0.64  0.72  0.63  0.56  1.00

MVPA Analysis: Weight Maps and Microstructures of Attentional
Control

In order to investigate whether there were differences in the
topographic patterns (microstructures) of neural activity for
preparatory spatial versus feature attention in the DAN, the SVM
weight maps derived for each participant from the SVM
classifiers were utilized. Importantly, these SVM weight maps
were subjected to the transformation introduced by Haufe et al.
(2014; see Methods section for details), which rendered the
transformed SVM weight maps more functionally interpretable (we
refer to these transformed SVM weight maps simply as "weight
maps" in the following). Specifically, each voxel was given a
signed (+ vs. −) weight value according to the weight map,
identifying its contribution toward a given form of attentional
control. For example, when decoding attend left versus attend
right, voxels with positively signed weight values constitute
the microstructure supporting covert attention to the left
visual field, whereas voxels with negatively signed weight
values constitute the microstructure supporting covert attention
to the right visual field; the union of the positively and
negatively signed voxels collectively constitutes the
microstructure of spatial attention control (provided that
proper thresholding on the magnitude of the weights is applied
to eliminate voxels that contain mainly noise; see below). The
microstructure of feature attention control can be similarly
derived from the decoding of attending red versus green. We
propose that these weight maps can reveal the microstructure of
attentional control activity within the DAN, enabling us to
investigate whether there are differences in the patterns of
brain activity that characterize spatial versus feature
attention control.

To verify the proposed functional meaning of the weight maps so
derived, we tested whether the signed voxels (as described
above) showed the predicted increases in hemodynamic activity
implied by our logic. That is, voxels signed as attend left, for
example, should exhibit increased preparatory hemodynamic
activity when the participants were attending left versus
attending right, whereas voxels signed as attend right should
have larger hemodynamic responses when the participants attended
right versus attended left. A similar logic applies to the
feature attention condition. To accomplish this, we extracted
the hemodynamic responses (beta values) for voxels coding each
of the four attention trial types (i.e., attend left, attend
right, attend red, and attend green). For attend left versus
attend right, the attend left voxels identified by decoding had
significantly higher BOLD activation for the attend left trials
as compared to the attend right trials (p < .05; Figure 5A);
similarly, the attend right voxels had significantly higher BOLD
activation for the attend right trials as compared to the attend
left trials (p < .05; Figure 5B). In contrast, for sets of
voxels randomly selected to represent attend right or attend
left, there was no difference in BOLD activation between the two
attention conditions (Figure 5C).
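As an illustration, this voxel-level verification amounts to a paired comparison of mean beta values within each signed voxel set; a minimal sketch with synthetic stand-in data (whether the test is run across trials, as here, or across participants is an assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
V = 1390
Z = rng.normal(size=V)                  # corrected weights: + = attend left
betas_left = rng.normal(size=(100, V))  # synthetic attend-left trial betas
betas_right = rng.normal(size=(100, V)) # synthetic attend-right trial betas

left_voxels = Z > 0                     # voxels signed as "attend left"

# Mean activation of the attend-left voxel set on each trial type
act_left = betas_left[:, left_voxels].mean(axis=1)
act_right = betas_right[:, left_voxels].mean(axis=1)

# Prediction: act_left > act_right in attend-left voxels
t, p = stats.ttest_ind(act_left, act_right)
```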
We pursued the same approach to evaluate the functional meaning
of the weight maps derived from decoding red versus green. The
result was the same: The attend red voxels had significantly
higher BOLD activation for the attend red trials as compared to
the attend green trials (p < .05; Figure 5D), and the attend
green voxels had significantly higher BOLD activation for the
attend green trials as compared to the attend red trials
(p < .05; Figure 5E); again, randomly selected voxels showed no
difference between the two attention conditions (Figure 5F).
These results demonstrate the functional interpretability of the
attentional control microstructures based on the weight maps
derived from the combination of SVM classifiers and the Haufe
et al. (2014) transformation.

Figure 5. Comparison of activation evoked by different types of
attention cues in different types of decoding-identified voxels.
(A) BOLD activation evoked by spatial attention cues in attend
left voxels. (B) BOLD activation evoked by spatial attention
cues in attend right voxels. (C) BOLD activation evoked by
spatial attention cues in randomly chosen voxels. (D) BOLD
activation evoked by color cues in attend red voxels. (E) BOLD
activation evoked by color cues in attend green voxels. (F) BOLD
activation evoked by color cues in randomly chosen voxels.
*p < .05.

As the foregoing demonstrated, the weight maps should reflect
the distributed patterns of neural activity in the DAN that
supported spatial or feature attentional control. In particular,
we posited that these weight maps would differ according to the
information attended. To test these ideas, we examined the
extent of overlap in the weight maps for spatial attention
(attend left vs. attend right) and feature attention (attend red
vs. attend green). To visualize the overlap, we created maps of
the absolute values of the normalized weights for each
participant (Figure 6). In these maps, the hotter (yellow)
colors indicate voxels with higher weights for the respective
attention condition (i.e., spatial vs. feature attention). By
comparing the yellow regions of the maps in Figure 6 for the
control of spatial attention and feature attention in the
frontal or parietal nodes of the DAN, one can examine the
anatomical similarity or dissimilarity of the patterns of
activity under different attention conditions within the DAN.
Visually, it is apparent that the voxels contributing most
strongly to decoding spatial attentional control differ from
those contributing to decoding feature attentional control.
Furthermore, as can be observed in Figure 6, the patterns do not
cluster into discrete neuroanatomical subregions within the DAN
(i.e., dorsal vs. ventral clusters), but rather are distributed
across the DAN, and they differ from participant to participant,
further highlighting the importance of using multivariate
methods to study this question.

Figure 6. Weight maps as attentional control microstructures.
Weight maps from three individual participants (A, B, and C) for
spatial attention and feature attention conditions in FEF (left)
and IPS/SPL (right). For Participant C, the weight maps are
shown both on the unfolded left and right hemispheres and as
blown-up insets; the flattened hemisphere views also show the
ROIs in FEF and IPS/SPL used in each analysis (defined by the
univariate analyses; see Methods section). The normalized
absolute values of the weight maps are plotted to compare the
strength and distribution of activity for spatial and
color-based attentional control in a given region. The hotter
color (yellow) indicates voxels that most clearly discriminate
the respective form of attentional control. Thus, comparing the
patterns of the yellow voxels in FEF in the first column
(spatial attention) to the patterns of the yellow voxels in the
second column (color attention) reveals the extent to which
spatial attentional control and color attentional control
involved similar or different patterns of voxels. The same
comparisons can be made for the IPS/SPL region in the third and
fourth columns. As can be seen here, in each participant, the
patterns of activity related to spatial and color attention are
distributed and differ, which is quantified and substantiated by
the JI (see text and Figure 7). Moreover, there are considerable
individual differences in functional anatomy between
participants for the same form of attention control (compare
rows within each column).

To quantify the extent of overlap between the spatial and
feature attention control weight maps, we computed the Jaccard
index (JI) between the weight maps of the two classes of
attention conditions (spatial vs. feature); the index is a
measure of the extent of similarity between two sets of data
(Levandowsky & Winter, 1971). Specifically, for two sets of
voxels, the JI is the size of the intersection of the two sets
divided by the size of their union. A JI of 0 means no overlap
and a JI of 1 means total overlap; as the index approaches 0,
the two sets overlap to a lesser and lesser extent. Because the
spatial and feature attention control weight maps necessarily
include all the voxels in the ROI, we computed the JI using the
top 50% of voxels, ranked by weight magnitude, from each of the
two weight maps. We found that the mean JI was 0.399 ± 0.013.
Although this suggests that the two maps did not overlap to a
great extent, to interpret this value further, we conducted the
following simulation. Our DAN ROI has 1390 voxels, 50% of which
(our threshold; see above) is 695. Even two random sets of 695
voxels in our ROI will overlap to some extent, so the JI of two
random sets of 695 voxels provides a useful reference number.
The following procedure was carried out to obtain such a
reference: (1) 695 voxels were randomly selected in the DAN ROI
and assigned to be the weight map of attend space; (2) 695
voxels were randomly selected in the DAN ROI and assigned to be
the weight map of attend color; (3) the JI was computed between
the two sets; (4) Steps 1–3 were repeated 1000 times; and (5)
the mean JI was found to be 0.335, which is only slightly less
than the JI of 0.399 obtained from the actual data. Thus, the JI
of 0.399 in our comparison of spatial to feature weight maps,
being only slightly higher than the expected overlap between
random sets of voxels, can be taken to indicate that the overlap
between the two attention control microstructures (attend space
vs. attend feature) is limited rather than substantial.
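A sketch of the JI computation, its random-overlap baseline, and the threshold sweep used below, assuming two corrected weight vectors are in hand (synthetic stand-ins shown; names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 1390
w_spatial = rng.normal(size=n_vox)  # synthetic stand-ins for the two
w_feature = rng.normal(size=n_vox)  # corrected weight vectors

def top_voxels(w, frac):
    """Indices of the top `frac` of voxels by absolute weight."""
    k = int(round(frac * len(w)))
    return set(np.argsort(-np.abs(w))[:k])

def jaccard(a, b):
    return len(a & b) / len(a | b)

# JI of the top 50% of voxels from each map
ji = jaccard(top_voxels(w_spatial, 0.5), top_voxels(w_feature, 0.5))

# Random-overlap baseline: mean JI of two random 695-voxel sets
null_ji = np.mean([
    jaccard(set(rng.choice(n_vox, 695, replace=False)),
            set(rng.choice(n_vox, 695, replace=False)))
    for _ in range(1000)
])

# Threshold sweep (top 50% down to top 10%), as in Figure 7A
sweep = {f: jaccard(top_voxels(w_spatial, f), top_voxels(w_feature, f))
         for f in (0.5, 0.4, 0.3, 0.2, 0.1)}
```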
In addition to differing in their spatial distributions, the
spatial and feature weight maps differ in the weight values of
individual voxels. The magnitude of a voxel's weight indexes its
relative significance for a given form of attention control. To
further examine the extent of overlap between the spatial and
feature control weight maps, we selected voxels within each
weight map according to whether they fell within the top 50%,
40%, 30%, 20%, or 10% of voxel weights, and calculated the
corresponding JI for each selection. Figure 7A shows the
results: The JI declined monotonically as the weight maps became
more selective (going from the top 50% to the top 10% of voxel
weights), indicating reduced overlap in the functional
anatomical structures for spatial and feature attention control.

If the reduced overlap among the higher weight voxels of the
spatial and color attentional control weight maps reflected
these voxels becoming more selective for one form of attention
over the other, then spatial attention voxels might be expected
to do a poor job of decoding color attention conditions, and
vice versa for color attention voxels. To test this, for each
participant, we took the voxels whose weight values were in the
top 10% for preparatory spatial attention and feature attention,
rejected any overlapping voxels, and decoded attend left versus
attend right as well as attend red versus attend green in each
of the two sets of remaining voxels (the average number of space
and color voxels chosen in this analysis across participants was
87 ± 25). We found evidence for such selectivity in feature
attention voxels, which showed above chance level decoding for
feature attention (56% for attend red vs. attend green,
p < 10^−4) but not for spatial attention (51% for attend left
vs. attend right, p > .05; Figure 7B right).
However, the same effect was not seen in spatial attention
voxels, which showed above chance level decoding for both
spatial attention (53% for attend left vs. attend right,
p < .005) and feature attention (53% for attend red vs. attend
green, p < .01; Figure 7B left). These results suggest that
color attention information is more widely represented across
DAN voxels than spatial attention information.

Figure 7. Relationship between weight maps underlying spatial
and feature attention control. (A) JI quantifying the overlap in
weight maps in the DAN between spatial and color-based attention
for varying weight thresholds. For voxels with higher weight
values, the overlap (JI) between the weight maps (i.e.,
microstructures) for spatial and feature attention control
becomes lower. (B) Choosing the top 10% of voxels from the
spatial weight map and the color weight map to decode attend
left versus attend right (blue bars) and attend red versus
attend green (red bars). In color-selective voxels, decoding
accuracy for attend red versus attend green is significantly
above chance level, but not for attend left versus attend right
(right). However, spatial-selective voxels showed significantly
above chance level decoding accuracy for both attend left versus
attend right and attend red versus attend green (left; *p < .01,
**p < 10^−4).

MVPA Analysis: Weight Map Overlap in Subdivisions of DAN

Our weight map analyses above considered the DAN as a whole, but
prior work by our group and others (e.g., Popov, Kastner, &
Jensen, 2017; Liu et al., 2016) has suggested that there may be
differences in the functions of the subregions of the DAN (e.g.,
left vs. right DAN, or anterior vs. posterior DAN). In Figure 4,
we showed that the attentional control conditions can be decoded
in the whole DAN as well as in subdivisions of the DAN. Here, we
tested the overlap between the spatial attention control weight
map (top 50% of voxels) and the feature attention control weight
map (top 50% of voxels) in each of the four subdivisions of the
DAN using two methods. In Method 1, only the voxels in a given
subdivision were used for the classifier, and the resulting
weight maps were compared. In Method 2, all voxels in the DAN
ROI were used for the classifier, and the portions of the
resulting weight maps falling in the given subdivision were used
for the overlap analysis. The JI for each method was as follows:
pDAN, JI = 0.436 (Method 1) and 0.408 (Method 2); aDAN,
JI = 0.401 (Method 1) and 0.386 (Method 2); lDAN, JI = 0.401
(Method 1) and 0.410 (Method 2); rDAN, JI = 0.391 (Method 1) and
0.382 (Method 2). The JI values from the two methods in each of
the four subdivisions were not significantly different from each
other and were in line with that obtained using the whole DAN
ROI (JI = 0.399).

Decoding Analysis of Eye Movements

To what extent could subtle systematic patterns of eye movements
(e.g., microsaccades) under different cueing instructions carry
information about the attended visual attributes and, in turn,
influence the decoding results in the DAN? That is, even though
the participants were required to maintain fixation, if
participants subtly but systematically moved their eyes
differently for attend left versus attend right trials or attend
red versus attend green trials, and the neural correlates of
such systematic eye movements contributed to the training and
decoding of DAN activity, then our findings would be confounded
and potentially invalid (Mostert et al., 2018).

Figure 8. Decoding eye movements. (A) Decoding accuracy as a
function of time for attend left versus attend right. (B)
Decoding accuracy as a function of time for attend red versus
attend green. (C) Decoding accuracy as a function of time for
attend space versus attend feature (collapsed across attend left
and right, and attend red and green, respectively).
To evaluate this possibility, we applied decoding analyses to the eye-tracking data recorded during fMRI scanning, considering three contrasts to evaluate whether the participants’ eye movements varied with condition: (1) spatial attention: attend left versus attend right; (2) feature attention: attend red versus attend green; and (3) spatial attention (collapsed over attend left and right) versus feature attention (collapsed over attend red and green). The decoding accuracy time courses are shown in Figure 8. As can be seen, for all three contrasts, at no time did the decoding accuracy differ significantly from chance level (p > .05). Thus, we conclude
that no systematic eye movements between attention
conditions could have contributed to our decoding results
in the DAN.
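
As a concrete illustration of this type of control analysis, the sketch below shows time-resolved decoding of gaze data in Python with scikit-learn. The array shapes, the use of horizontal and vertical gaze position as features, and the fivefold cross-validation are illustrative assumptions, not a description of our recorded data or exact pipeline.

```python
# Sketch of time-resolved decoding of eye-tracker data: at each sample
# in the cue-target interval, a linear SVM is trained on gaze position
# (x, y) and evaluated with cross-validation; the resulting accuracy time
# course is then compared against the 50% chance level. All shapes and
# labels below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_samples = 120, 200                        # hypothetical sizes
gaze = rng.standard_normal((n_trials, n_samples, 2))  # (x, y) per sample
labels = rng.integers(0, 2, n_trials)                 # e.g., attend left vs. right

acc_over_time = np.array([
    cross_val_score(SVC(kernel="linear"), gaze[:, t, :], labels, cv=5).mean()
    for t in range(n_samples)
])

# If no time point reliably exceeds 0.5 (e.g., under a permutation test),
# systematic eye movements are unlikely to explain the fMRI decoding.
print(f"peak accuracy across time: {acc_over_time.max():.3f}")
```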

DISCUSSION

We examined whether there exists a functional micro-
structure of preparatory attentional control within the
DAN. Applying MVPA to BOLD signals in the
DAN, we found that (1) the accuracy of decoding atten-
tional control activity for spatial attention (attend left vs.
right) and feature attention (attend red versus green)
was above chance in the DAN as a whole, as well as in
the major subdivisions of the DAN, namely, lDAN, rDAN,
anterior (frontal) DAN, and posterior (parietal) DAN; (2)
weight maps obtained from combining SVM classifiers
with the Haufe et al. (2014) transformation differed both qualitatively (visual inspection) and quantitatively (JI) for
attentional control of spatial versus feature attention; (3)
the overlap between the two weight maps corresponding
to the two types of attentional control is limited and not
much different from the expected overlap between two
random sets of voxels; (4) the overlap between voxels of
the weight maps selected according to their weight values
decreased monotonically as the weight threshold of voxel
inclusion increased; and (5) the top 10% of voxels in the
color attention weight map decoded above chance for
color conditions (attend red vs. attend green) but not
for spatial conditions (attend left vs. attend right), whereas
the top 10% of voxels in the spatial attention weight
map decoded above chance for both forms of attention,
suggesting perhaps that spatial attention information is
more concentrated in fewer voxels than is color attention
information.
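
For readers wishing to follow the general logic of these analyses, a minimal sketch of the ROI-based decoding step is given below in Python with scikit-learn. The data shapes, labels, and leave-one-run-out cross-validation scheme are illustrative assumptions, not our exact implementation.

```python
# Minimal sketch of ROI-based decoding with a linear SVM, in the spirit
# of the analyses summarized above. Shapes, labels, and the
# cross-validation scheme are illustrative assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 6, 20, 500        # hypothetical sizes
n_trials = n_runs * trials_per_run

# dan_patterns: trial-wise cue-evoked BOLD estimates from DAN voxels.
dan_patterns = rng.standard_normal((n_trials, n_voxels))
labels = rng.integers(0, 2, n_trials)                # e.g., 0 = attend left, 1 = attend right
runs = np.repeat(np.arange(n_runs), trials_per_run)  # run index per trial

# Standardize voxels, fit a linear SVM, and cross-validate across runs.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, dan_patterns, labels, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {acc.mean():.3f} (chance = 0.5)")
```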

Our results provide information about how top–down
attentional control signals are organized within the DAN
and argue against strong domain-general (Spagna et al.,
2015; Fedorenko et al., 2013; Wojciulik & Kanwisher,
1999) or supramodal models of the DAN (Betti et al.,
2018; Salmela et al., 2018; Wang et al., 2016; Green et al.,
2011; Shomstein & Yantis, 2004). Moreover, our findings
provide a basis for understanding the specificity of atten-
tional control mechanisms in the DAN by suggesting that
functional microstructures for attentional control could
serve as the sources of precise top–down projections to
sensory structures, tailored to the stimulus processing requirements set by behavioral goals.

The MVPA approach used here helps to clarify past
findings of little or minimal specialization in the DAN for
different forms of attentional control based on univariate
fMRI analysis methods, including in our own work (Slagter
et al., 2007; Giesbrecht et al., 2003), and that of others
(Egner et al., 2008; Vandenberghe, Gitelman, Parrish, &
Mesulam, 2001; Wojciulik & Kanwisher, 1999). Indeed,
there are good reasons—both theoretical and empirical—
to propose that specializations within the DAN should exist
for different forms of top–down attentional control. From
studies in animals, we know that in the FEFs, there are
different classes of colocalized neurons with different
functional roles and that these neurons project to different
cortical and subcortical targets for the control of eye move-
ments (Pouget et al., 2009; Armstrong, Fitzgerald, & Moore,
2006). In addition to the evidence from animal studies, in
humans, TMS applied to parietal cortex using different stim-
ulation parameters produces distinct effects on spatial
versus feature-based attention (Schenkluhn, Ruff, Heinen,
& Chambers, 2008). As well, combined TMS and EEG re-
search has shown that the functional connectivity between
the DAN (FEF) and higher-order visual areas, such as the
fusiform face area and human motion-specific cortex, shifts
with behavioral task requirements (Morishima et al., 2009).
The DAN itself can be divided into multiple func-
tional zones (Szczepanski et al., 2013; Silver & Kastner, 2009;
Sereno, Pitzalis, & Martinez, 2001). The DAN is also known
to have different intra-DAN connectivity for different forms

of spatial attention, such as for viewer- and object-centered
spatial attention (Szczepanski et al., 2013). Furthermore,
prior work using MVPA has also suggested differences in
the organization of attentional control for different stimulus
attributes (Liu & Hou, 2013; Greenberg et al., 2010). We
observed distinct neural patterns characterizing different
forms of attention control in the DAN, as well as in major
subdivisions of the DAN. Our findings significantly advance
models of top–down attentional control by isolating and
focusing on preparatory brain activity in the cue–target
interval. In contrast to our focus on preparatory attention,
Greenberg et al. (2010) investigated shifts of attention that
occurred while covert spatial and feature attention was
being sustained over time to ongoing stimulation (e.g.,
moving dots). In their paradigm, specific changes in the
direction of motion (e.g., up vs. down) of attended color
dots (e.g., green) at the attended location (left or right
hemifield) signaled (cued) the participants to either shift
attention from the attended location to the moving colored
dots in the opposite visual hemifield, or to maintain atten-
tion at the attended location but shift feature attention
(e.g., from attending green to attending red moving dots).
They found that whereas univariate analyses showed
similarity in the patterns of activity within the DAN,
MVPA suggested differences between the frontal portions
of the DAN and the parietal regions during attentional
shifts; the parietal cortex differed for spatial and feature
attention shifts, but frontal regions largely did not (but
see Supplementary Materials in Greenberg et al., 2010).
They also argued for specialization in attentional control
by suggesting that interleaved populations of control neu-
rons were present within the posterior parietal cortex for
different forms of attentional control (spatial vs. features).
The important findings of Greenberg et al. (2010) differ in
essential ways from our results in that they investigated
shifts of attention (cued by visual signals in an ongoing
stimulus display of moving dots), whereas we focus here
on preparatory attention to impending visual targets
(cued by an auditory signal prior to the appearance of
relevant targets). As a result, in part, different cognitive
operations are revealed in their task and analyses com-
pared to ours (i.e., switching attention vs. preparatory
attention). Nonetheless, at a less granular level of analysis,
both we and Greenberg et al. (2010) argue for specializa-
tions in attentional control, as opposed to more domain-
general models of attentional control, in the DAN.

Liu and Hou (2013) also investigated the DAN for spe-
cializations in attentional control, describing a hierarchy of
attentional control in the DAN. They cued (auditorily) par-
ticipants to attend to the location, color, or motion of
moving dot patterns located in the lateral visual fields.
Using MVPA methods, they found a hierarchical structure
in the effects of attention such that the patterns for attend-
ing to spatial locations differed from those for attending to
features. Furthermore, the patterns for feature attention
(color vs. motion) could be segregated. Their study did
not, however, distinguish between preparatory attentional
control and selective stimulus processing, as we have done
here, because, in their design, the analyses necessarily
focused on the period during which the dynamic target
stimuli were already in view. As a result, the very important
work of Liu and Hou (2013) may include effects driven by
the interactions between attentional control and attentional
selection of the incoming sensory signals. Once again, the
cognitive operations revealed in our study are related purely
to the preparatory control component of visual selective
attention. Nonetheless, our work and that of Liu and Hou
(2013) converge on the idea that the DAN contains func-
tional specializations for different aspects of top–down at-
tentional control.

In this study, we used weight maps derived from com-
bining SVM with the Haufe et al. (2014) transformation to
help understand the microstructure of attentional control
for spatial and feature (color) attention. Regarding this
approach, there are two points requiring consideration.
First, the SVM weight vector, which represents the vector
normal to the hyperplane optimally dividing two experi-
mental conditions, can be projected onto individual voxels
and visualized as a brain map. Such maps, however, may be difficult to interpret functionally because
fMRI voxels that do not contain task-related information
can play a significant role in decoding performance as
the result of noise cancellation (Kriegeskorte & Douglas,
2019). The transformation suggested by Haufe et al.
(2014) helps to mitigate this problem. The weight maps
in our study, which are the basis for defining attentional
control microstructures, are obtained after applying the
Haufe et al. (2014) transformation to the SVM weight
vectors. The weight maps so obtained have clearly defined
functional meanings, as we demonstrated by comparing
the corresponding hemodynamic responses between
different conditions (Figure 5), and they also show distrib-
uted voxel patterns that are variable across participants
within the same ROI (Figure 6; Guntupalli et al., 2016).
The latter likely contributes to why univariate analyses
have failed to consistently uncover differences in prepara-
tory spatial compared to feature attentional control within
the DAN, both in the present study (Figure 3) and in previous work (Slagter et al., 2007; Giesbrecht et al., 2003); in a univariate analysis, to be
deemed activated by an experimental condition, a voxel
needs to be activated consistently across participants.
Second, to assess the extent of overlap between the
weight maps of spatial versus feature attention control,
we utilized the JI approach. Choosing the voxels corresponding to the top 50% of weights for each weight map, we calculated the JI value between the spatial and feature maps and found that the overlap between the two weight maps was similar to (only slightly higher than) what would be expected from two random sets of voxels in our data. Consequently,
we suggest that the microstructures underlying the two
types of attentional control in the DAN have limited func-
tional anatomical overlap, as opposed to substantial func-
tional anatomical overlap.
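
A compact sketch of these two steps is given below in Python with scikit-learn. The simplified single-output form of the Haufe et al. (2014) transformation (activation pattern equal to the data covariance times the SVM weight vector, up to scaling), the synthetic data, and the top-50% threshold are illustrative assumptions.

```python
# Sketch of the two steps discussed above: (1) the Haufe et al. (2014)
# transformation, here in its simplified single-output form a = cov(X) @ w,
# which turns an SVM weight vector into an interpretable activation
# pattern; and (2) the Jaccard index (JI) between the top-50% voxel sets
# of two such maps. Data and threshold are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def haufe_pattern(X, clf):
    """Activation pattern from a fitted linear classifier."""
    w = clf.coef_.ravel()                # normal to the separating hyperplane
    Xc = X - X.mean(axis=0)
    return np.cov(Xc, rowvar=False) @ w  # data covariance times weights

def jaccard_top_half(a, b):
    """JI between the top-50% |weight| voxel sets of two maps."""
    half = len(a) // 2
    top_a = set(np.argsort(np.abs(a))[half:])
    top_b = set(np.argsort(np.abs(b))[half:])
    return len(top_a & top_b) / len(top_a | top_b)

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 500))      # trials x DAN voxels (hypothetical)
y_space = rng.integers(0, 2, 120)        # attend left vs. right
y_color = rng.integers(0, 2, 120)        # attend red vs. green

pat_space = haufe_pattern(X, SVC(kernel="linear").fit(X, y_space))
pat_color = haufe_pattern(X, SVC(kernel="linear").fit(X, y_color))
print(f"JI over top-50% voxels: {jaccard_top_half(pat_space, pat_color):.3f}")
```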

We interpret the patterns of activity revealed by MVPA
decoding to be reflections of an underlying difference in
the functional representation of attentional control in
the DAN. This notion is supported by the results in
Figure 5, where it is shown, for example, that, in attend left
voxels defined by decoding weights, the BOLD signals are
higher for attend left than for attend right conditions;
moreover, in attend red voxels defined by decoding, BOLD signals are higher for attend red than for attend green conditions. The same was of course true for the
inverse cases (i.e., for attend right voxels, and attend green
voxels, as defined by decoding weights). Despite the
strong functional interpretability of these attentional
control microstructures, we do not, however, suggest that
the voxel-based weight maps are identifying specific
underlying circuitry, or subnetworks, per se. One might
have hoped to reveal a subregional organization that was generally consistent across participants, such as anterior–
posterior, dorsal–ventral, or medial–lateral gradients, as
have been observed in decoding high-level object catego-
ries in ventral–temporal cortex (e.g., Connolly et al., 2012),
but that is not what was observed.

Our general model framework posits that the spatial
and feature attentional control signals in the DAN project
their top–down influences selectively to specific visual
cortical regions to influence perceptual processing for
the to-be-attended stimulus attributes (i.e., location versus
color). It therefore follows that specific top–down connec-
tivity necessarily complements the differing patterns of
activity in the DAN that support selective attentional con-
trol for spatial versus feature attention. That said, from the
current data, we cannot distinguish the representation of spatial location and color information in the DAN
(e.g., working memory representations; see below) from
activity corresponding to the output signals themselves,
which in any case is a challenging proposition in human
imaging studies. Regardless of whether the different spa-
tial and feature attentional control activities we observed
in the DAN are primarily representational or primarily out-
put, or perhaps both in a parsimonious model, our find-
ings shed light on the organization of the DAN for purely
preparatory (anticipatory) attention that was not evident
in prior work using univariate analyses of BOLD signals
(Slagter et al., 2007; Giesbrecht et al., 2003).

Most models of attention include both working memory
and attention as components of attentive behavior, and
there is a long history considering the relationships be-
tween the two (e.g., Oberauer, 2019; Gazzaley & Nobre,
2012; Lewis-Peacock, Drysdale, Oberauer, & Postle,
2012; Awh & Jonides, 2001). There are many different
ways that working memory has been conceptualized
(Cowan, 2017), but two main considerations apply most
directly here. The first is the issue of whether there are dif-
ferences in the working memory load between conditions,
which might contribute to differences in the patterns of
our effects. In order to avoid such confounds, we were
careful to balance task difficulty and working memory load
between spatial attention and feature attention trials, as ev-
idenced by the RTs and accuracies in the two different
conditions, which were not significantly different from one
another (Figure 2A). Moreover, the working memory load in
our study was relatively low, being one of two alternatives
(left vs. right; red vs. green) in each condition. The second
consideration is whether the activation patterns we reveal
using decoding in the DAN are related to working memory
maintenance/representations in the DAN versus attentional
control signals in the DAN, as mentioned earlier. Because
our experiment was not designed to test for differences
between working memory and attentional control, we can-
not distinguish between the two. Indeed, our overarching
model holds that, on a trial-by-trial basis, working memory
is necessarily involved in performing an attention task such
as ours (and indeed, perhaps even more so given that the
cue–target delay can be several seconds). However,
whether working memory is maintained actively during the
period, or simply used to implement the attentional control
(or whether they are indeed the same thing) is a fascinating
question we cannot answer. It has been known for some
time (e.g., LaBar, Gitelman, Parrish, & Mesulam, 1999) that
there is overlap in (univariate) neural activations for non-
spatial working memory and spatial attention in the DAN.
More recent work, however, has shown dissociations
between spatial attention and working memory storage
(Hakim, Adam, Gunseli, Awh, & Vogel, 2019; Lewis-
Peacock et al., 2012; Lewis-Peacock & Postle, 2012). What
we can say from our findings is that the DAN is activated
during covert attention (i.e., the cue–target interval), which
may, but does not theoretically have to, include a distinct
working memory activity (Lewis-Peacock & Postle, 2012);
such effects may be task-specific (i.e., whether or not working
memory maintenance is integral to task performance). It is
nonetheless reasonable, in a design like ours, to assume
that working memory is required and active. We also
know, however, that working memory representation can
be decoded in the DAN (Xu, 2017; Christophel, Hebart, &
Haynes, 2012), and therefore, because we did not test this
directly, it remains possible that our decoding for spatial
attention and feature attention could reflect working
memory as well as attentional control signals, assuming that
these are actually even different cognitive-neural operations
within the DAN. That is, in many models, working memory
and attention are not distinct mechanisms (e.g., Awh &
Jonides, 2001), whereas in more current models, they are
integrally related processes, one (working memory) depen-
dent on the other (attention; Foster, Vogel, & Awh, in press;
Hakim et al., 2019). Given that attended information and
remembered information are both decodable in the DAN
and, therefore, that the DAN is a common neural substrate
for both covert attention and working memory, one may
reasonably ask whether, at the microstructural level, the
voxels supporting attend left versus attend right (or attend
red vs. attend green) also support remember left versus
remember right (or remember red vs. remember green).

In this sense, for future studies, our framework offers a
novel avenue to pursue the relation between attention and
working memory, a long-standing problem in cognitive
neuroscience.

In conclusion, this study offers new evidence from
MVPA and brain mapping approaches that supports the
idea of functional–anatomical specialization (microstruc-
tures) underlying the control of different forms of prepa-
ratory attention in both frontal and parietal portions of
the DAN. In the model, we hypothesize that specialized
microstructures have specific output connections for the
top–down control of those regions of visual cortex coding
the relevant (to-be-attended) stimulus attribute(s), thereby
selectively biasing the processing of incoming sensory infor-
mation in the service of goal-directed behaviors.

Acknowledgments

This work was supported by National Institute of Mental Health
grant MH117991 (G. R. M. and M. D.) and National Science
Foundation grant BCS-1439188 (M. D.). We are grateful to
Steve Luck, Joy Geng, John Henderson, Sean Noah, Edward
Awh, Karl Friston, Tamara Swaab, and the members of our labo-
ratories for their helpful comments and advice. All data will be
publicly available on the National Institute of Mental Health
Data Archive.

Reprint requests should be sent to Mingzhou Ding, J. Crayton
Pruitt Family Department of Biomedical Engineering, University
of Florida, BME 149, PO Box 116131, Gainesville, FL 32611, or
via e-mail: mding@bme.ufl.edu, or George R. Mangun, Center
for Mind and Brain, University of California, Davis, CA 95618, or
via e-mail: mangun@ucdavis.edu.

Funding Information

George R. Mangun, National Institute of Mental Health
(http://dx.doi.org/10.13039/100000025), grant number:
MH117991. Mingzhou Ding, National Institute of Mental
Health (http://dx.doi.org/10.13039/100000025), grant
number: MH117991. Mingzhou Ding, National Science
Foundation (http://dx.doi.org/10.13039/100000001), grant
number: BCS-1439188.

Diversity in Citation Practices

A retrospective analysis of the citations in every article
published in this journal from 2010 to 2020 has revealed
a persistent pattern of gender imbalance: Although the
proportions of authorship teams (categorized by estimated
gender identification of first author/last author) pub-
lishing in the Journal of Cognitive Neuroscience ( JoCN)
during this period were M(an)/M = .408, W(oman)/M =
.335, M/W = .108, and W/W = .149, the comparable pro-
portions for the articles that these authorship teams cited
were M/M = .579, W/M = .243, M/W = .102, and W/W =
.076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently,
JoCN encourages all authors to consider gender balance
explicitly when selecting which articles to cite and gives
them the opportunity to report their article’s gender
citation balance.

REFERENCES

Armstrong, K. M., Fitzgerald, J. K., & Moore, T. (2006). Changes
in visual receptive fields with microstimulation of frontal
cortex. Neuron, 50, 791–798. DOI: https://doi.org/10.1016
/j.neuron.2006.05.010, PMID: 16731516

Astrand, E., Ibos, G., Duhamel, J.-R., & Ben Hamed, S. (2015). Differential dynamics of spatial attention, position, and
color coding within the parietofrontal network. Journal of
Neuroscience, 35, 3174–3189. DOI: https://doi.org/10.1523
/JNEUROSCI.2370-14.2015, PMID: 25698752, PMCID:
PMC6605583

Awh, E., & Jonides, J. (2001). Overlapping mechanisms of attention and spatial working memory. Trends in Cognitive
Sciences, 5, 119–126. DOI: https://doi.org/10.1016/S1364
-6613(00)01593-X, PMID: 11239812

Baldauf, D., & Desimone, R. (2014). Neural mechanisms of object-based attention. Science, 344, 424–427. DOI: https://
doi.org/10.1126/science.1247003, PMID: 24763592

Bengson, J. J., Kelley, T. A., & Mangun, G. R. (2015). The neural
correlates of volitional attention: A combined fMRI and ERP
study. Human Brain Mapping, 36, 2443–2454. DOI: https://
doi.org/10.1002/hbm.22783, PMID: 25731128, PMCID:
PMC6869709

Betti, V., Corbetta, M., de Pasquale, F., Wens, V., & Della Penna, S.
(2018). Topology of functional connectivity and hub dynamics
in the beta band as temporal prior for natural vision in the
human brain. Journal of Neuroscience, 38, 3858–3871. DOI:
https://doi.org/10.1523/JNEUROSCI.1089-17.2018, PMID:
29555851, PMCID: PMC6705907

Bichot, N. P., Heard, M. T., DeGennaro, E. M., & Desimone, R.
(2015). A source for feature-based attention in the prefrontal
cortex. Neuron, 88, 832–844. DOI: https://doi.org/10.1016
/j.neuron.2015.10.001, PMID: 26526392, PMCID: PMC4655197

Bichot, N. P., Schall, J. D., & Thompson, K. G. (1996). Visual feature selectivity in frontal eye fields induced by experience
in mature macaques. Nature, 381, 697–699. DOI: https://doi
.org/10.1038/381697a0, PMID: 8649514

Burock, M. A., Buckner, R. L., Woldorff, M. G., Rosen, B. R., &
Dale, A. M. (1998). Randomized event-related experimental
designs allow for extremely rapid presentation rates using
functional MRI. NeuroReport, 9, 3735–3739. DOI: https://doi
.org/10.1097/00001756-199811160-00030, PMID: 9858388

Christophel, T. B., Hebart, M. N., & Haynes, J.-D. (2012). Decoding the contents of visual short-term memory from
human visual and parietal cortex. Journal of Neuroscience,
32, 12983–12989. DOI: https://doi.org/10.1523/JNEUROSCI
.0184-12.2012, PMID: 22993415, PMCID: PMC6621473

Connolly, A. C., Guntupalli, J. S., Gors, J., Hanke, M., Halchenko,
Y. O., Wu, Y.-C., et al. (2012). The representation of biological
classes in the human brain. Journal of Neuroscience, 32,
2608–2618. DOI: https://doi.org/10.1523/JNEUROSCI.5547
-11.2012, PMID: 22357845, PMCID: PMC3532035

Corbetta, M., Kincade, J. M., Ollinger, J. M., McAvoy, M. P., &
Shulman, G. L. (2000). Voluntary orienting is dissociated
from target detection in human posterior parietal cortex.
Nature Neuroscience, 3, 292–297. DOI: https://doi.org/10
.1038/73009, PMID: 10700263

Corbetta, M., Miezin, F. M., Dobmeyer, S., Shulman, G. L., &
Petersen, S. E. (1990). Attentional modulation of neural
processing of shape, color, and velocity in humans. Science,
248, 1556–1559. DOI: https://doi.org/10.1126/science
.2360050, PMID: 2360050

Corbetta, M., Tansy, A. P., Stanley, C. M., Astafiev, S. V., Snyder, A. Z., & Shulman, G. L. (2005). A functional MRI study
of preparatory signals for spatial location and objects.
Neuropsychologia, 43, 2041–2056. DOI: https://doi.org
/10.1016/j.neuropsychologia.2005.03.020, PMID: 16243051

Cortes, C., & Vapnik, V. (1995). Support-vector networks.
Machine Learning, 20, 273–297. DOI: https://doi.org/10
.1007/BF00994018

Cowan, N. (2017). The many faces of working memory and
short-term storage. Psychonomic Bulletin & Review, 24,
1158–1170. DOI: https://doi.org/10.3758/s13423-016-1191-6,
PMID: 27896630

Cox, D. D., & Savoy, R. L. (2003). Functional magnetic resonance
imaging (fMRI) “brain reading”: Detecting and classifying
distributed patterns of fMRI activity in human visual cortex.
Neuroimage, 19, 261–270. DOI: https://doi.org/10.1016
/S1053-8119(03)00049-1

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458. DOI: https://doi.org/10.1037/0033-295X.96.3.433, PMID: 2756067

Dworetsky, A., Seitzman, B. A., Adeyemo, B., Neta, M., Coalson,
R. S., Petersen, S. E., et al. (2020). Probabilistic mapping of
human functional brain networks identifies regions of high
group consensus. bioRxiv, 313791. DOI: https://doi.org
/10.1101/2020.09.28.313791

Egner, T., Monti, J. M. P., Trittschuh, E. H., Wieneke, C. A.,
Hirsch, J., & Mesulam, M.-M. (2008). Neural integration of
top–down spatial and feature-based information in visual
search. Journal of Neuroscience, 28, 6141–6151. DOI:
https://doi.org/10.1523/JNEUROSCI.1262-08.2008, PMID:
18550756, PMCID: PMC6670545

Fannon, S. P., Saron, C. D., & Mangun, G. R. (2008). Baseline shifts do not predict attentional modulation of target
processing during feature-based visual attention. Frontiers in
Human Neuroscience, 1, 7. DOI: https://doi.org/10.3389
/neuro.09.007.2007, PMID: 18958221, PMCID: PMC2525984

Fedorenko, E., Duncan, J., & Kanwisher, N. (2013). Broad
domain generality in focal regions of frontal and parietal
cortex. Proceedings of the National Academy of Sciences,
U.S.A., 110, 16616–16621. DOI: https://doi.org/10.1073/pnas
.1315235110, PMID: 24062451, PMCID: PMC3799302

Foster, J. J., Vogel, E. K., & Awh, E. (in press). Working memory
as persistent neural activity. In M. Kahana & A. Wagner (Eds.),
Oxford handbook of human memory. New York: Oxford
University Press.

Friston, K. J., Holmes, A. P., Worsley, K. J., Poline, J.-P., Frith, C. D.,
& Frackowiak, R. S. J. (1994). Statistical parametric maps in
functional imaging: A general linear approach. Human Brain
Mapping, 2, 189–210. DOI: https://doi.org/10.1002/hbm
.460020402

Gazzaley, A., & Nobre, A. C. (2012). Top–down modulation:
Bridging selective attention and working memory. Trends
in Cognitive Sciences, 16, 129–135. DOI: https://doi.org
/10.1016/j.tics.2011.11.014, PMID: 22209601, PMCID:
PMC3510782

Genovese, C. R., Lazar, N. A., & Nichols, T. (2002). Thresholding
of statistical maps in functional neuroimaging using the false
discovery rate. Neuroimage, 15, 870–878. DOI: https://doi
.org/10.1006/nimg.2001.1037, PMID: 11906227

Giesbrecht, B., Woldorff, M. G., Song, A. W., & Mangun, G. R.
(2003). Neural mechanisms of top–down control during
spatial and feature attention. Neuroimage, 19, 496–512.
DOI: https://doi.org/10.1016/S1053-8119(03)00162-9

Gitelman, D. R., Nobre, A. C., Parrish, T. B., LaBar, K. S., Kim, Y. H.,
Meyer, J. R., et al. (1999). A large-scale distributed network for
covert spatial attention: Further anatomical delineation based
on stringent behavioural and cognitive controls. Brain, 122,
1093–1106. DOI: https://doi.org/10.1093/brain/122.6.1093,
PMID: 10356062

Gratton, C., Laumann, T. O., Nielsen, A. N., Greene, D. J., Gordon, E. M., Gilmore, A. W., et al. (2018). Functional brain
networks are dominated by stable group and individual
factors, not cognitive or daily variation. Neuron, 98, 439–452.
DOI: https://doi.org/10.1016/j.neuron.2018.03.035, PMID:
29673485, PMCID: PMC5912345

Green, J. J., Doesburg, S. M., Ward, L. M., & McDonald, J. J.
(2011). Electrical neuroimaging of voluntary audiospatial
attention: Evidence for a supramodal attention control
network. Journal of Neuroscience, 31, 3560–3564. DOI:
https://doi.org/10.1523/JNEUROSCI.5758-10.2011, PMID:
21389212, PMCID: PMC6622799

Greenberg, A. S., Esterman, M., Wilson, D., Serences, J. T.,
& Yantis, S. (2010). Control of spatial and feature-based
attention in frontoparietal cortex. Journal of Neuroscience,
30, 14330–14339. DOI: https://doi.org/10.1523/JNEUROSCI
.4248-09.2010, PMID: 20980588, PMCID: PMC3307052

Grootswagers, T., Wardle, S. G., & Carlson, T. A. (2017). Decoding dynamic brain patterns from evoked responses: A
tutorial on multivariate pattern analysis applied to time series
neuroimaging data. Journal of Cognitive Neuroscience, 29,
677–697. DOI: https://doi.org/10.1162/jocn_a_01068, PMID:
27779910

Guntupalli, J. S., Hanke, M., Halchenko, Y. O., Connolly, A. C., Ramadge, P. J., & Haxby, J. V. (2016). A model of
representational spaces in human cortex. Cerebral Cortex,
26, 2919–2934. DOI: https://doi.org/10.1093/cercor/bhw068,
PMID: 26980615, PMCID: PMC4869822

Hakim, N., Adam, K. C. S., Gunseli, E., Awh, E., & Vogel, E. K. (2019). Dissecting the neural focus of attention reveals
distinct processes for spatial attention and object-based
storage in visual working memory. Psychological Science, 30,
526–540. DOI: https://doi.org/10.1177/0956797619830384,
PMID: 30817220, PMCID: PMC6472178

Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., et al. (2014). On the interpretation of weight
vectors of linear models in multivariate neuroimaging.
Neuroimage, 87, 96–110. DOI: https://doi.org/10.1016
/j.neuroimage.2013.10.067, PMID: 24239590

Haxby, J. V., Connolly, A. C., & Guntupalli, J. S. (2014). Decoding neural representational spaces using multivariate
pattern analysis. Annual Review of Neuroscience, 37,
435–456. DOI: https://doi.org/10.1146/annurev-neuro
-062012-170325, PMID: 25002277

Haxby, J. V., Guntupalli, J. S., Connolly, A. C., Halchenko, Y. O.,
Conroy, B. R., Gobbini, M. I., et al. (2011). A common, high-
dimensional model of the representational space in human
ventral temporal cortex. Neuron, 72, 404–416. DOI: https://
doi.org/10.1016/j.neuron.2011.08.026, PMID: 22017997,
PMCID: PMC3201764

Haynes, J.-D. (2015). A primer on pattern-based approaches to fMRI: Principles, pitfalls, and perspectives. Neuron, 87,
257–270. DOI: https://doi.org/10.1016/j.neuron.2015.05.025,
PMID: 26182413

Haynes, J.-D., & Rees, G. (2006). Decoding mental states from
brain activity in humans. Nature Reviews Neuroscience, 7,
523–534. DOI: https://doi.org/10.1038/nrn1931, PMID:
16791142

He, B. J., Snyder, A. Z., Vincent, J. L., Epstein, A., Shulman, G. L.,
& Corbetta, M. (2007). Breakdown of functional connectivity
in frontoparietal networks underlies behavioral deficits in
spatial neglect. Neuron, 53, 905–918. DOI: https://doi.org
/10.1016/j.neuron.2007.02.013, PMID: 17359924

Heinze, H. J., Mangun, G. R., Burchert, W., Hinrichs, H., Scholz,
M., Münte, T. F., et al. (1994). Combined spatial and temporal
imaging of brain activity during visual selective attention in
humans. Nature, 372, 543–546. DOI: https://doi.org/10.1038
/372543a0, PMID: 7990926

Hopfinger, J. B., Buonocore, M. H., & Mangun, G. R. (2000).
The neural mechanisms of top–down attentional control.
Nature Neuroscience, 3, 284–291. DOI: https://doi.org/10
.1038/72999, PMID: 10700262

Hubbard, J., Kikumoto, A., & Mayr, U. (2019). EEG decoding reveals the strength and temporal dynamics of goal-relevant
representations. Scientific Reports, 9, 9051. DOI: https://
doi.org/10.1038/s41598-019-45333-6, PMID: 31227796,
PMCID: PMC6588723

Ibos, G., & Freedman, D. J. (2016). Interaction between spatial
and feature attention in posterior parietal cortex. Neuron, 91,
931–943. DOI: https://doi.org/10.1016/j.neuron.2016.07.025,
PMID: 27499082, PMCID: PMC5015486

Jiang, J., Summerfield, C., & Egner, T. (2013). Attention sharpens the distinction between expected and unexpected
percepts in the visual brain. Journal of Neuroscience, 33,
18438–18447. DOI: https://doi.org/10.1523/JNEUROSCI
.3308-13.2013, PMID: 24259568, PMCID: PMC3834051

Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the
American Statistical Association, 90, 773–795. DOI: https://
doi.org/10.1080/01621459.1995.10476572

Kastner, S., Pinsk, M. A., De Weerd, P., Desimone, R., & Ungerleider, L. G. (1999). Increased activity in human visual
cortex during directed attention in the absence of visual
stimulation. Neuron, 22, 751–761. DOI: https://doi.org/10
.1016/S0896-6273(00)80734-5, PMID: 10230795

Kim, J., Schultz, J., Rohe, T., Wallraven, C., Lee, S.-W., & Bülthoff, H. H. (2015). Abstract representations of associated
emotions in the human brain. Journal of Neuroscience, 35,
5655–5663. DOI: https://doi.org/10.1523/JNEUROSCI.4059
-14.2015, PMID: 25855179, PMCID: PMC6605320

Kriegeskorte, N., & Douglas, P. K. (2019). Interpreting encoding
and decoding models. Current Opinion in Neurobiology, 55,
167–179. DOI: https://doi.org/10.1016/j.conb.2019.04.002,
PMID: 31039527, PMCID: PMC6705607

Kriegeskorte, N., Goebel, R., & Bandettini, P. (2006). Information-based functional brain mapping. Proceedings of
the National Academy of Sciences, U.S.A., 103, 3863–3868.
DOI: https://doi.org/10.1073/pnas.0600244103, PMID:
16537458, PMCID: PMC1383651

LaBar, K. S., Gitelman, D. R., Parrish, T. B., & Mesulam, M.
(1999). Neuroanatomic overlap of working memory and
spatial attention networks: A functional MRI comparison
within subjects. Neuroimage, 10, 695–704. DOI: https://
doi.org/10.1006/nimg.1999.0503, PMID: 10600415

Lee, S., Halder, S., Kübler, A., Birbaumer, N., & Sitaram, R.
(2010). Effective functional mapping of fMRI data with
support-vector machines. Human Brain Mapping, 31,
1502–1511. DOI: https://doi.org/10.1002/hbm.20955, PMID:
20112242, PMCID: PMC6871106

Levandowsky, M., & Winter, D. (1971). Distance between sets. Nature, 234, 34–35. DOI: https://doi.org/10.1038/234034a0

Lewis-Peacock, J. A., Drysdale, A. T., Oberauer, K., & Postle, B. R. (2012). Neural evidence for a distinction between
short-term memory and the focus of attention. Journal
of Cognitive Neuroscience, 24, 61–79. DOI: https://doi
.org/10.1162/jocn_a_00140, PMID: 21955164, PMCID:
PMC3222712

Lewis-Peacock, J. A., & Postle, B. R. (2012). Decoding the internal focus of attention. Neuropsychologia, 50, 470–478.
DOI: https://doi.org/10.1016/j.neuropsychologia.2011
.11.006, PMID: 22108440, PMCID: PMC3288445

Liu, T. (2016). Neural representation of object-specific attentional
priority. Neuroimage, 129, 15–24. DOI: https://doi.org/10
.1016/j.neuroimage.2016.01.034, PMID: 26825437, PMCID:
PMC4803527

Liu, T., & Hou, Y. (2013). A hierarchy of attentional priority signals
in human frontoparietal cortex. Journal of Neuroscience, 33,
16606–16616. DOI: https://doi.org/10.1523/JNEUROSCI.1780
-13.2013, PMID: 24133264, PMCID: PMC3797377

Liu, Y., Bengson, J., Huang, H., Mangun, G. R., & Ding, M. (2016). Top–down modulation of neural activity in
anticipatory visual attention: Control mechanisms revealed
by simultaneous EEG-fMRI. Cerebral Cortex, 26, 517–529.
DOI: https://doi.org/10.1093/cercor/bhu204, PMID: 25205663,
PMCID: PMC4712792

Mangun, G. R., & Fannon, S. P. (2007). Networks for attentional
control and selection in spatial vision. In F. Mast & L. Jäncke
(Eds.), Spatial processing in navigation, imagery, and
perception (pp. 411–432). Amsterdam: Springer. DOI:
https://doi.org/10.1007/978-0-387-71978-8_21

Mangun, G. R., & Hillyard, S. A. (1991). Modulations of sensory-evoked brain potentials indicate changes in perceptual
processing during visual-spatial priming. Journal of
Experimental Psychology: Human Perception and
Performance, 17, 1057–1074. DOI: https://doi.org/10
.1037/0096-1523.17.4.1057, PMID: 1837297

Molenberghs, P., Mesulam, M. M., Peeters, R., & Vandenberghe,
R. R. C. (2007). Remapping attentional priorities: Differential
contribution of superior parietal lobule and intraparietal
sulcus. Cerebral Cortex, 17, 2703–2712. DOI: https://doi.org
/10.1093/cercor/bhl179, PMID: 17264251

Moran, J., & Desimone, R. (1985). Selective attention gates visual
processing in the extrastriate cortex. Science, 229, 782–784.
DOI: https://doi.org/10.1126/science.4023713, PMID: 4023713

Morishima, Y., Akaishi, R., Yamada, Y., Okuda, J., Toma, K.,
& Sakai, K. (2009). Task-specific signal transmission from
prefrontal cortex in visual selective attention. Nature
Neuroscience, 12, 85–91. DOI: https://doi.org/10.1038
/nn.2237, PMID: 19098905

Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., &
de Lange, F. P. (2018). Eye movement-related confounds in
neural decoding of visual working memory representations.
eNeuro, 5, ENEURO.0401-17.2018. DOI: https://doi.org
/10.1523/ENEURO.0401-17.2018, PMID: 30310862, PMCID:
PMC6179574

Mourão-Miranda, J., Bokde, A. L. W., Born, C., Hampel, H., &
Stetter, M. (2005). Classifying brain states and determining
the discriminating activation patterns: Support vector
machine on functional MRI data. Neuroimage, 28, 980–995.
DOI: https://doi.org/10.1016/j.neuroimage.2005.06.070,
PMID: 16275139

Niklaus, M., Nobre, A. C., & van Ede, F. (2017). Feature-based attentional weighting and spreading in visual working memory. Scientific Reports, 7, 42384. DOI: https://doi.org/10.1038/srep42384, PMID: 28233830, PMCID: PMC5324041

Norman, K. A., Polyn, S. M., Detre, G. J., & Haxby, J. V. (2006).
Beyond mind-reading: Multi-voxel pattern analysis of fMRI
data. Trends in Cognitive Sciences, 10, 424–430. DOI:
https://doi.org/10.1016/j.tics.2006.07.005, PMID: 16899397

Oberauer, K. (2019). Working memory and attention—A conceptual analysis and review. Journal of Cognition, 2,
36. DOI: https://doi.org/10.5334/joc.58, PMID: 31517246,
PMCID: PMC6688548

Ollinger, J. M., Corbetta, M., & Shulman, G. L. (2001). Separating
processes within a trial in event-related functional MRI II.
Analysis. Neuroimage, 13, 218–229. DOI: https://doi.org/10
.1006/nimg.2000.0711, PMID: 11133324

Ollinger, J. M., Shulman, G. L., & Corbetta, M. (2001). Separating
processes within a trial in event-related functional MRI I. The
method. Neuroimage, 13, 210–217. DOI: https://doi.org/10
.1006/nimg.2000.0710, PMID: 11133323

Popov, T., Kastner, S., & Jensen, O. (2017). FEF-controlled alpha delay activity precedes stimulus-induced gamma-band
activity in visual cortex. Journal of Neuroscience, 37,
4117–4127. DOI: https://doi.org/10.1523/JNEUROSCI.3015
-16.2017, PMID: 28314817, PMCID: PMC5391684

Posner, M. I. (1980). Orienting of attention. Quarterly Journal
of Experimental Psychology, 32, 3–25. DOI: https://doi.org
/10.1080/00335558008248231, PMID: 7367577

Posner, M. I., Snyder, C. R., & Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental
Psychology: General, 109, 160–174. DOI: https://doi.org
/10.1037/0096-3445.109.2.160, PMID: 7381367

Pouget, P., Stepniewska, I., Crowder, E. A., Leslie, M. W., Emeric, E. E., Nelson, M. J., et al. (2009). Visual and motor
connectivity and the distribution of calcium-binding proteins
in macaque frontal eye field: Implications for saccade target
selection. Frontiers in Neuroanatomy, 3, 2. DOI: https://
doi.org/10.3389/neuro.05.002.2009, PMID: 19506705,
PMCID: PMC2691655

Rissman, J., Gazzaley, A., & D’Esposito, M. (2004). Measuring
functional connectivity during distinct stages of a cognitive
task. Neuroimage, 23, 752–763. DOI: https://doi.org/10
.1016/j.neuroimage.2004.06.035, PMID: 15488425

Salmela, V., Salo, E., Salmi, J., & Alho, K. (2018). Spatiotemporal
dynamics of attention networks revealed by representational
similarity analysis of EEG and fMRI. Cerebral Cortex, 28,
549–560. DOI: https://doi.org/10.1093/cercor/bhw389,
PMID: 27999122

Schenkluhn, B., Ruff, C. C., Heinen, K., & Chambers, C. D.
(2008). Parietal stimulation decouples spatial and feature-
based attention. Journal of Neuroscience, 28, 11106–11110.
DOI: https://doi.org/10.1523/JNEUROSCI.3591-08.2008,
PMID: 18971453, PMCID: PMC6671486

Sereno, M. I., Pitzalis, S., & Martinez, A. (2001). Mapping of
contralateral space in retinotopic coordinates by a parietal
cortical area in humans. Science, 294, 1350–1354. DOI:
https://doi.org/10.1126/science.1063695, PMID: 11701930

Sestieri, C., Sylvester, C. M., Jack, A. I., d’Avossa, G., Shulman,
G. L., & Corbetta, M. (2008). Independence of anticipatory
signals for spatial attention from number of nontarget stimuli
in the visual field. Journal of Neurophysiology, 100, 829–838.
DOI: https://doi.org/10.1152/jn.00030.2008, PMID: 18550727,
PMCID: PMC2525703

Shomstein, S., & Yantis, S. (2004). Control of attention shifts
between vision and audition in human cortex. Journal of
Neuroscience, 24, 10702–10706. DOI: https://doi.org/10
.1523/JNEUROSCI.2939-04.2004, PMID: 15564587, PMCID:
PMC6730120

Silver, M. A., & Kastner, S. (2009). Topographic maps in human
frontal and parietal cortex. Trends in Cognitive Sciences,
13, 488–495. DOI: https://doi.org/10.1016/j.tics.2009.08.005,
PMID: 19758835, PMCID: PMC2767426

Slagter, H. A., Giesbrecht, B., Kok, A., Weissman, D. H., Kenemans, J. L., Woldorff, M. G., et al. (2007). fMRI evidence
for both generalized and specialized components of
attentional control. Brain Research, 1177, 90–102. DOI:
https://doi.org/10.1016/j.brainres.2007.07.097, PMID:
17916338, PMCID: PMC2710450

Spagna, A., Mackie, M.-A., & Fan, J. (2015). Supramodal executive
control of attention. Frontiers in Psychology, 6, 65. DOI:
https://doi.org/10.3389/fpsyg.2015.00065, PMID: 25759674,
PMCID: PMC4338659

Stelzer, J., Chen, Y., & Turner, R. (2013). Statistical inference and multiple testing correction in classification-based
multi-voxel pattern analysis (MVPA): Random permutations
and cluster size control. Neuroimage, 65, 69–82. DOI:
https://doi.org/10.1016/j.neuroimage.2012.09.063, PMID:
23041526

Sterzer, P., Haynes, J.-D., & Rees, G. (2008). Fine-scale activity
patterns in high-level visual areas encode the category of
invisible objects. Journal of Vision, 8, 10. DOI: https://doi
.org/10.1167/8.15.10, PMID: 19146294

Summerfield, C., & Egner, T. (2016). Feature-based attention and feature-based expectation. Trends in Cognitive Sciences,
20, 401–404. DOI: https://doi.org/10.1016/j.tics.2016.03.008,
PMID: 27079632, PMCID: PMC4875850

Szczepanski, S. M., & Kastner, S. (2013). Shifting attentional priorities: Control of spatial attention through hemispheric
competition. Journal of Neuroscience, 33, 5411–5421. DOI:
https://doi.org/10.1523/JNEUROSCI.4089-12.2013, PMID:
23516306, PMCID: PMC3651512

Szczepanski, S. M., Pinsk, M. A., Douglas, M. M., Kastner, S., &
Saalmann, Y. B. (2013). Functional and structural architecture
of the human dorsal frontoparietal attention network.
Proceedings of the National Academy of Sciences, U.S.A.,
110, 15806–15811. DOI: https://doi.org/10.1073/pnas
.1313903110, PMID: 24019489, PMCID: PMC3785784

Tong, F., & Pratte, M. S. (2012). Decoding patterns of human
brain activity. Annual Review of Psychology, 63, 483–509.
DOI: https://doi.org/10.1146/annurev-psych-120710-100412,
PMID: 21943172, PMCID: PMC7869795

Vandenberghe, R., Gitelman, D. R., Parrish, T. B., & Mesulam, M.-M. (2001). Location- or feature-based targeting of
peripheral attention. Neuroimage, 14, 37–47. DOI: https://
doi.org/10.1006/nimg.2001.0790, PMID: 11525335

Van Voorhis, S., & Hillyard, S. A. (1977). Visual evoked potentials and selective attention to points in space. Perception &
Psychophysics, 22, 54–62. DOI: https://doi.org/10.3758
/BF03206080

Varoquaux, G., Raamana, P. R., Engemann, D. A., Hoyos-Idrobo,
A., Schwartz, Y., & Thirion, B. (2017). Assessing and tuning
brain decoders: Cross-validation, caveats, and guidelines. Neuroimage, 145, 166–179. DOI: https://doi.org/10.1016
/j.neuroimage.2016.10.038, PMID: 27989847

Wang, W., Viswanathan, S., Lee, T., & Grafton, S. T. (2016).
Coupling between theta oscillations and cognitive control
network during cross-modal visual and auditory attention:
Supramodal vs. modality-specific mechanisms. PLoS One,
11, e0158465. DOI: https://doi.org/10.1371/journal.pone
.0158465, PMID: 27391013, PMCID: PMC4938209

Wojciulik, E., & Kanwisher, N. (1999). The generality of parietal
involvement in visual attention. Neuron, 23, 747–764. DOI:
https://doi.org/10.1016/S0896-6273(01)80033-7, PMID:
10482241

Woldorff, M. G., Hazlett, C. J., Fichtenholtz, H. M., Weissman, D. H.,
Dale, A. M., & Song, A. W. (2004). Functional parcellation of
attentional control regions of the brain. Journal of Cognitive
Neuroscience, 16, 149–165. DOI: https://doi.org/10.1162
/089892904322755638, PMID: 15006044

Xu, Y. (2017). Reevaluating the sensory account of visual working
memory storage. Trends in Cognitive Sciences, 21, 794–815.
DOI: https://doi.org/10.1016/j.tics.2017.06.013, PMID:
28774684

Zhang, X., & Golomb, J. D. (2021). Neural representations
of covert attention across saccades: Comparing pattern
similarity to shifting and holding attention during fixation.
eNeuro, ENEURO.0186-20.2021. DOI: https://doi.org/10
.1523/ENEURO.0186-20.2021
