Directional Visual Motion Is Represented in the Auditory
and Association Cortices of Early Deaf Individuals
Talia L. Retter1,2, Michael A. Webster1, and Fang Jiang1

1University of Nevada, Reno, 2University of Louvain

This paper is part of a Special Focus deriving from a symposium at the 2017 International Multisensory Research Forum (IMRF).

Abstract
■ Individuals who are deaf since early life may show enhanced
performance at some visual tasks, including discrimination of
directional motion. The neural substrates of such behavioral en-
hancements remain difficult to identify in humans, although neural plasticity has been shown for early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and STS region, respectively. Here, we inves-
tigated whether neural responses in auditory and association
cortices of early deaf individuals are reorganized to be sensitive
to directional visual motion. To capture direction-selective re-
sponses, we recorded fMRI responses frequency-tagged to the
0.1-Hz presentation of central directional (100% coherent
random dot) motion persisting for 2 sec contrasted with non-
directional (0% coherent) motion for 8 sec. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but signifi-
cant direction-selective responses were also found in the PAC
of deaf participants, both at the group level and in five of six
individuals. In response to stimuli presented separately in the
right and left visual fields, the relative activation across the right
and left hemispheres was similar in both the PAC and STS
region of deaf participants. Notably, the enhanced right-
hemisphere activation could support the right visual field
advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory corti-
ces of early deaf individuals are sensitive to directional motion.
Speculatively, these results suggest that auditory and associa-
tion regions can be remapped to support enhanced visual
performance. ■
INTRODUCTION
The absence of sensory inputs from one modality early in
life has been linked to enhancement of the other senses.
Accordingly, congenitally deaf people have been shown
to display better performance at some visual tasks than
hearing individuals (e.g., Shiell, Champoux, & Zatorre,
2014; Bottari, Nava, Ley, & Pavani, 2010; Dye, Hauser,
& Bavelier, 2009; Loke & Song, 1991; Neville & Lawson,
1987; Parasnis & Samar, 1985). For example, an enhance-
ment at detecting and discriminating directional visual
motion has been reported in early deaf people (Shiell
et al., 2014; Hauthal, Sandmann, Debener, & Thorne,
2013; in the right visual field [RVF] only: Bosworth,
Petrich, & Dobkins, 2013; Bosworth & Dobkins, 1999;
Neville & Lawson, 1987). From an ecological perspective,
the daily importance of visual motion may be increased
for deaf individuals, especially for monitoring the periph-
eral visual field, for example, when using sign language
(Codina, Pascalis, Baseler, Levin, & Buckley, 2017).
However, for other potentially useful visual tasks, no differences or a decrease in performance has been
reported across deaf and hearing people (for reviews
on this controversy, see Pavani & Bottari, 2012; Mitchell
& Maslin, 2007; Bavelier, Dye, & Hauser, 2006; Parasnis,
1983). The prevalent hypothesis explaining these differ-
ences regards neural plasticity, that is, the recruitment
of brain areas processing the deprived sense or the reor-
ganization of brain areas processing the existent senses
or engaging in multisensory integration. It is thought that
neural plasticity could support compensatory behavioral
abilities, but only when the underlying functional organi-
zation of the incoming sense is compatible with those
areas (e.g., Bola et al., 2017; Pascual-Leone & Hamilton,
2001). However, the capacity for neural plasticity of early
deaf individuals to support behavioral advantages in
visual tasks, including those involving motion, has not
been clearly demonstrated.
Extensive neural plasticity has been reported for deaf
individuals’ responses to visual motion. Most strikingly,
several human neuroimaging studies have reported ac-
tivation in the primary auditory cortex (PAC) of deaf
participants in response to moving or flickering visual
stimuli, most often presented in or toward the visual pe-
riphery (peripheral moving dot pattern: Finney, Fine, &
Dobkins, 2001; flickering patch of a full-field luminance
grating: Finney, Clementz, Hickok, & Dobkins, 2003; pe-
ripheral moving dot pattern: Fine, Finney, Boynton, &
Dobkins, 2005; flickering point lights in the RVF: Scott,
Karns, Dow, Stevens, & Neville, 2014). In addition to au-
ditory cortex, in the multisensory STS region (a term
used to include the STS and adjacent cortex of the supe-
rior and middle temporal gyrus and angular gyrus;
Allison, Puce, & McCarthy, 2000), a trend has been shown
for higher activation and significantly more pronounced
attentional enhancement in deaf people in response to
visual dot motion (Bavelier et al., 2001). In the study by
Scott et al. (2014), a larger area of activation around the
STS was reported in deaf participants, including the pos-
terior superior and middle temporal gyrus. Changes in
responsiveness to peripherally presented visual motion
or flickering stimuli have also been reported in human
visual area hMT+: Increased (left-hemisphere) activation
and/or extent of activation has been reported in deaf
people (Scott et al., 2014; Bavelier et al., 2000, 2001;
but see also Fine et al., 2005). To a lesser degree, other
areas implying cross-modal neuroplasticity for motion or
flicker in the early deaf people include the posterior pa-
rietal cortex, anterior cingulate, and FEF/supplementary
eye field (Scott et al., 2014; Bavelier et al., 2001).
Again, however, the relationship between such neural
plasticity in early deaf people and behavioral advantages
in visual motion detection or discrimination has not been
well documented. Recent evidence from animal studies
suggests a causal link between reversible lesions in the
auditory cortex and behavioral advantages at visual lo-
calization and movement detection in cats (Lomber,
Meredith, & Kral, 2010; see also Meredith et al., 2011).
Yet for humans, only noninvasive, correlative evidence
has been provided. Structurally, for example, correlations
have been found for deaf individuals between the relative
amount of auditory cortex (planum temporale) or visual
corteza (V1) devoted to processing peripheral motion and
behavioral performance in motion detection tasks (auditory: Shiell, Champoux, & Zatorre, 2016; visual: Levin,
Codina, Buckley, de Sousa, & Baseler, 2015). Suggestive
evidence has also been provided by showing that the re-
cruitment of reorganized brain regions in early deaf indi-
viduals shows selective responses to a visual task for
which there is behavioral enhancement. For example,
four cardinal locations of visual stimuli could be decoded
from the auditory cortex in deaf individuals with neuro-
imaging, suggesting that representations in the auditory
cortex align with those in the visual cortex (Almeida et al.,
2015). Here, we aim to add to these findings by asking
whether deaf individuals’ enhanced ability in speed
and/or accuracy at discriminating the direction of visual
motion could be supported by direction-selective re-
sponses in brain areas evidencing neural plasticity.
Directional visual motion is a particularly salient visual
stimulus and is known to selectively activate a subset of
areas in the neurotypical human brain responding to vi-
sual motion more generally. Strong direction-selective
responses have been found in human visual area hMT+/V5 (e.g., Huk, Ress, & Heeger, 2001; Morrone et al.,
2000; Tootell et al., 1995). Other implicated areas include
V3/V3A and, to a lesser degree, the rest of V1–V4 (Ales &
Norcia, 2009; Huk et al., 2001; Tootell et al., 1995; finding
large effects also in V1 with EEG source imaging). The
representation of directional motion within these cortical
areas was first revealed by single-cell recordings in mon-
keys, reporting columnar direction tuning (Felleman &
Van Essen, 1987; Albright, 1984; Dubner & Zeki, 1971;
Hubel & Wiesel, 1961). Unfortunately, because of the
spatial scale, such direction tuning cannot be studied
noninvasively in humans, and direction-specific represen-
tation in humans has thus remained elusive (see Kamitani
& Tong, 2006, for a potential exception, but also Beckett,
Peirce, Sanchez-Panchuelo, Francis, & Schluppeck, 2012;
for axis of motion mapping at 7 T, see Zimmermann
et al., 2011). Despite this, direction-selective areas may
be identified in the human brain with fMRI with stim-
ulation presentation techniques, such as contrasting
directional (i.e., coherent) motion with directionless (i.e., noncoherent) motion or dynamic noise (Morrone et al.,
2000; Beauchamp, Cox, & DeYoe, 1997; Braddick,
Hartley, Atkinson, Wattam-Bell, & Turner, 1997; in EEG/magnetoencephalography: Palomares, Ales, Wade, Cottereau, & Norcia, 2012; Ales & Norcia, 2009; Nakamura
et al., 2003; Lam et al., 2000; Tyler & Kaitz, 1977).
Aquí, we used a sensitive approach to investigate the
spatial extent and activation of direction-selective brain
regions in early deaf and hearing people, focusing on
the auditory and association cortices, the PAC and STS
region, respectively, in comparison with visual area hMT+.
Specifically, we used fMRI together with a frequency-
tagging approach (e.g., Gao, Gentile, & Rossion, 2017; Koening-Robert, VanRullen, & Tsuchiya, 2015; Ernst, Boynton, & Jazayeri, 2013; Morrone et al., 2000; Engel, Zhang, & Wandell, 1997; Puce, Allison, Gore, & McCarthy, 1995; Bandettini, Jesmanowicz, Wong, & Hyde, 1993) to
identify periodic changes from noncoherent to direc-
tional random-dot motion (Morrone et al., 2000; see also
Palomares et al., 2012; Ales & Norcia, 2009; Atkinson
et al., 2008). We were thus able to acquire signals with
a high signal-to-noise ratio that were independent of a
hemodynamic response function model. By using a
contrast of directionless-to-directional motion, we were
also able to capture direction-selective responses (note
that these responses are not direction specific) locked
precisely to the frequency of coherence onset. To follow
up on a behavioral advantage for direction discrimina-
tion typically reported in the RVF for deaf individuals
(Bosworth et al., 2013; Bosworth & Dobkins, 1999;
Neville & Lawson, 1987), we presented visual stimuli
in the left visual field (LVF) and RVF as well as centrally.
When activation was found, we further explored potential
qualitative differences across hearing and deaf individuals
in terms of spatial extent, RVF versus LVF response, and
hemispheric lateralization. Together, these comparisons
allowed us to assess the potential neural bases of en-
hanced visual motion processing reported in previous
studies for early deaf people.
METHODS
Participantes
Two groups of participants, early deaf and hearing controls,
were tested in the experiment, which was approved by the
institutional review board of the University of Nevada, Reno,
and conducted in accordance with the Code of Ethics of
the World Medical Association (Declaration of Helsinki).
Each group consisted of six adults, recruited from northern
Nevada and California. Our deaf participants included those
who experienced severe-to-profound sensorineural hearing
loss at an early age. They had no ability to understand
auditory speech but were proficient in sign language (see Table 1 for deaf participants’ details). The mean age of deaf participants was 36 years (SD = 8.2, range = 26–49 years); the mean age of hearing participants was 33 years (SD = 8.5, range = 26–48 years). Four of the hearing and one
of the deaf participants were male; one hearing participant
was left-handed. All participants were unaware of the ex-
perimental design, except for one hearing participant,
who was author T. l. R. All participants reported visual
acuity in the normal or corrected-to-normal range.
fMRI Acquisition
fMRI scanning was performed with a 3-T Philips Ingenia
scanner using a 32-channel digital SENSE head coil (Philips
Medical Systems) at the Renown Regional Medical Center,
Reno, NV. Volumetric anatomical images were acquired at a
resolution of 1 mm3 using a T1-weighted magnetization
prepared rapid gradient echo sequence. Functional BOLD
signals were acquired through a continuous design at a
resolution of 2.75 × 2.75 × 3 mm voxels, with no gap. A
repetition time of 2 sec was used to acquire 30 transverse
slices in an ascending order, with an echo time of 17 msec,
a flip angle of 76°, y un 220 × 220 mm2 field of view.
Visual Motion Stimuli
Visual motion was displayed with random-dot kinemato-
gramos, based on the incremental displacement across
monitor refresh frames of individual dots within a circular
field (Braddick, 1974; Julesz, 1971; Anstis, 1970). Frames of
white dots against a black background were generated with
a custom script running over MATLAB (The MathWorks),
refreshing at a rate of about 60 Hz, with a 500-msec
lifespan to discourage participants tracking the movement
of individual dots. Given some inconsistency in presenta-
tion timing because of online drawing of dot positions,
the motion display was adapted for precise periodic stimu-
lus presentation by exporting generated dot motion frames
and then displaying them at a precisely controlled periodic
rate of 60 Hz using custom software running over Java.
Viewed on the testing monitor, the stimulus field diame-
ter subtended 8.5° of visual angle, with individual dots
subtending 1.35 in. in diameter, moving at a speed of
3.4°/sec, at a density of 12.5 dots/deg.
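As an illustration of the dot-motion construction described above, the following is a minimal Python sketch of one random-dot kinematogram update step with the stated parameters (8.5° field, 60-Hz updates, 3.4°/sec speed, 500-msec dot lifetime). It is not the authors' MATLAB/Java implementation; the dot count, the uniform redraw rule for expired or out-of-field dots, and the random-direction treatment of noncoherent dots are assumptions for illustration only.

```python
# Illustrative random-dot kinematogram step (not the authors' stimulus code).
import numpy as np

RADIUS = 8.5 / 2               # field radius in deg (8.5-deg diameter)
SPEED = 3.4                    # deg/sec
REFRESH = 60.0                 # frames/sec
LIFETIME = int(0.5 * REFRESH)  # 500-msec dot lifespan, in frames
N_DOTS = 700                   # hypothetical count; the paper reports a density of 12.5 dots/deg

rng = np.random.default_rng(0)

def random_positions(n):
    """Uniform random positions inside the circular dot field."""
    r = RADIUS * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def next_frame(xy, age, coherence, direction_deg):
    """Advance all dots by one frame: coherent dots step in the signal
    direction, noncoherent dots step in random directions; dots that
    expire or leave the field are redrawn at random positions."""
    step = SPEED / REFRESH
    signal = rng.random(len(xy)) < coherence
    angles = np.where(signal, np.deg2rad(direction_deg),
                      2 * np.pi * rng.random(len(xy)))
    xy = xy + step * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    age = age + 1
    redraw = (age >= LIFETIME) | (np.hypot(xy[:, 0], xy[:, 1]) > RADIUS)
    xy[redraw] = random_positions(int(redraw.sum()))
    age[redraw] = 0
    return xy, age

# One frame of 100% coherent rightward motion (0 deg); rotating the exported
# frames by 90-deg increments, as described below, yields the other directions.
xy, age = random_positions(N_DOTS), rng.integers(0, LIFETIME, N_DOTS)
xy, age = next_frame(xy, age, coherence=1.0, direction_deg=0.0)
```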
We created directional stimulus sequences in four directions (up, right, down, and left) as well as nondirectional,
noncoherent dot motion. To create the directional se-
quences, a 30-sec sequence of 1,800 sequential stimulus
frames creating the appearance of 100% coherent right-
ward visual motion was extracted. The rightward stimuli
frames were rotated by increments of 90° to create se-
quences of downward, leftward, and upward motions, re-
spectively, with minimal variation across directions. In the functional scans, these sequences each repeated four times in immediate succession, leading to a block of 2 sec
of directional motion. To create sequences of non-
directional motion, 100% noncoherent motion stimulus
frames were similarly extracted from 30-sec sequences; to fill the longer proportion of nondirectional-to-directional
motion duration in the testing sequences with consistent
stimulus update intervals, this procedure was repeated
three additional times. Note that these 30-sec sequence
pieces also served as “incoherent jumps” to prevent a
specific confound of full dot replacement at the onset
and offset times of directional and nondirectional motion
(see the following section; Braddick, Birtles, Wattam-Bell,
& Atkinson, 2005; Wattam-Bell, 1991). In functional scans,
these four nondirectional sequence sets were each re-
peated four times in immediate succession, defining a
Mesa 1. Demographic Information for the Early Deaf Participants
Age ( Años)
Sex
Handedness
Deafness
Acquisition
Cause of Deafness
Auditory Deprivation,
Left/Right (dB)
Signing
Acquisition
D1
D2
D3
D4
D5
D6
41
31
49
26
34
32
F
METRO
F
F
F
F
F = female; M = male; R = right.
R
R
R
R
R
R
12 meses
Unknown
15 meses
Fever
Birth
Birth
Birth
Birth
Maternal gestational measles
Genetic (coex26)
Hereditary
Unknown
95/95
Total/85
100/90
85/85
80/70
98/96
12 años
15 meses
11 años
< 1 year
< 1 year
1 year
1128
Journal of Cognitive Neuroscience
Volume 31, Number 8
l
D
o
w
n
o
a
d
e
d
f
r
o
m
h
t
t
p
:
/
/
d
i
r
e
c
t
.
m
i
t
.
e
d
u
/
j
/
o
c
n
a
r
t
i
c
e
-
p
d
l
f
/
/
/
3
1
8
1
1
2
6
1
7
8
8
6
7
7
/
/
j
o
c
n
_
a
_
0
1
3
7
8
p
d
.
f
b
y
g
u
e
s
t
t
o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3
block of 8 sec of nondirectional motion. Participants viewed
the stimulation monitor with a mirror attached to the MR
head coil.
Periodic Visual Stimulation Procedure
Functional scans consisted of periodic alternation between
directional and nondirectional motion over a duration of
5.1 min. Scans began with 2 sec of a white fixation cross
on a black background, followed by a 2-sec fade-in period,
in which stimulus luminance contrast gradually increased to
100%. Stimuli were then shown over a duration of 300 sec
in a fixed pattern of 2 sec of directional motion followed by
8 sec of nondirectional motion. Periods of directional mo-
tion thus onset every 10 sec, leading to a direction-selective
frequency-tagged rate of 1/10 sec, that is, 0.1 Hz
(Figure 1A). Within each scan, the direction of motion also
consistently alternated at each presentation cycle, for
example, from upward to downward motion, leading to a
direction-specific frequency-tagged rate of 0.05 Hz. Finally,
the scans ended with 2 sec of stimulus fade-out and 2 sec of
the white fixation cross. Four participants from each of the
deaf and hearing groups saw contrasts of up/down and
left/right motions (Trial Lists 1 and 2), and the remaining
two participants of each group saw contrasts of up/left
and right/down motions (Trial Lists 3 and 4). Because no
clusters of significant responses to direction-specific motion
at 0.05 Hz were found for any participants in any trial lists,
data were combined across trial lists within each group
to examine the direction-selective response at 0.1 Hz.
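For concreteness, a short Python sketch of one scan's 300-sec stimulation schedule and the two tagged frequencies it implies is given below, assuming an up/down trial list; the initial fixation, fade-in, and fade-out periods described above are omitted, and the event representation is an assumption rather than the authors' presentation code.

```python
# Schematic timeline of the 300-sec stimulation period of one 5.1-min scan.
CYCLE_SEC = 10                     # 2 sec directional + 8 sec nondirectional
N_CYCLES = 30                      # 300 sec of stimulation per scan
DIRECTIONS = ("up", "down")        # example pairing from Trial Lists 1 and 2

events = []                        # (onset_sec, duration_sec, label)
for cycle in range(N_CYCLES):
    onset = cycle * CYCLE_SEC
    events.append((onset, 2, DIRECTIONS[cycle % 2]))   # directional block
    events.append((onset + 2, 8, "noncoherent"))       # nondirectional block

direction_selective_hz = 1 / CYCLE_SEC       # 0.1 Hz: onset of any directional motion
direction_specific_hz = 1 / (2 * CYCLE_SEC)  # 0.05 Hz: onset of one particular direction
print(direction_selective_hz, direction_specific_hz)
```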
Visual Field Conditions
Scans designed to localize brain regions responding to
visual motion contained stimuli presented in a central
visual field (CVF) condition. In two additional scan con-
ditions designed to measure the amplitude of brain acti-
vation, stimuli were presented in either the right or left
peripheral visual field. In the CVF condition, stimuli were
presented in the center of the stimulation monitor to-
gether with a superimposed central fixation cross. From
a viewing distance of 134 cm, the monitor supported a
field of view of 29° × 17°. Thus, when presented in the
right or left peripheral visual field conditions, the stimulus
was translated laterally to the edge of the monitor and the
fixation cross shifted laterally 4° from center in the opposite
direction, so that the distance between the proximal edge
of the stimulus and the fixation cross subtended 10° (e.g.,
Jiang, Beauchamp, & Fine, 2015). Four scan repetitions of
each condition were presented sequentially to discourage
participants from moving their heads as stimulus location
changed. Each participant was presented with every con-
dition, leading to 12 scans for a total testing time of about
1 hr. In odd trial lists, scans began with stimuli presented in
the CVF, whereas in even trial lists, stimuli were first
presented in one of the peripheral visual fields. The order
of conditions was identical across participant groups.
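The viewing geometry above can be checked with the standard visual-angle formula; the centimeter extents in this Python sketch are back-computed for illustration and are not reported in the text, which specifies only the 134-cm viewing distance and the angular sizes.

```python
# Visual angle subtended by an on-screen extent at the 134-cm viewing distance.
import math

def visual_angle_deg(extent_cm, distance_cm=134.0):
    """Full visual angle (deg) of an extent centered on the line of sight."""
    return 2 * math.degrees(math.atan(extent_cm / (2 * distance_cm)))

print(round(visual_angle_deg(19.9), 1))   # ~8.5 deg, the stimulus field diameter
print(round(visual_angle_deg(69.3), 1))   # ~29 deg, the monitor's horizontal extent
```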
Behavioral Task
Participants were instructed to fixate on the centrally pre-
sented white fixation cross. The cross changed shape to
a circle for a duration of 200 msec at random intervals
(a minimum of 800 msec in between changes) 30 times
within each scan, that is, once about every 10 sec. Par-
ticipants were asked to use a response box to report
the direction of motion at the time of the fixation shape
change. This task was designed to facilitate participants’
fixation as well as to encourage attention to the direction
of motion of the stimulus.
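One possible way to generate the fixation-change schedule described above is sketched below in Python; the rejection-sampling scheme and the interpretation of the 800-msec minimum as the gap between successive changes are assumptions, since the text states only the constraints (30 changes per scan, 200-msec duration, at least 800 msec apart).

```python
# Draw 30 random fixation-change onsets per 300-sec scan, at least 800 msec apart.
import numpy as np

rng = np.random.default_rng(1)

def change_onsets(n_changes=30, scan_sec=300.0, min_gap=0.8, duration=0.2):
    """Rejection-sample onset times until all inter-change gaps are respected."""
    while True:
        onsets = np.sort(rng.uniform(0, scan_sec - duration, n_changes))
        if np.all(np.diff(onsets) >= min_gap + duration):
            return onsets

print(change_onsets()[:5])   # first few onset times, in seconds
```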
Figure 1. (A) Stimulation sequences consisted of 2 sec of directional (100% coherent dot) visual motion followed immediately by 8 sec of
nondirectional (0% coherent dot) motion. The onset of directional motion thus occurred periodically every 10 sec, predicting a direction-selective
response in the frequency domain at 0.1 Hz (i.e., 1/10 sec). The arrows drawn on the figure are purely for illustrating the direction of dot motion.
(B) Top: An example of the BOLD response recorded by fMRI from a single voxel in visual area hMT+ from a hearing participant, averaged over
four runs of visual motion presented in the CVF and DC corrected. Its location is illustrated on the sagittal slice of this participant’s anatomy in Talairach
space. Bottom: A fast Fourier transform (FFT) is applied to each voxel to transform the data into the temporal frequency domain. This example
voxel is sensitive to directional motion, as evidenced by the high-amplitude BOLD signal of the 0.1-Hz response peak.
fMRI Data Analysis
Preprocessing. Anatomical and functional data were
analyzed with BrainVoyager v20.0 and the BVQXTools
toolbox (Brain Innovation B.V.) together with MATLAB
R2013b. Functional scan data were imported into
BrainVoyager and preprocessed with corrections for slice
scan time and 3-D motion (aligned to the first functional
scan for intersession alignment). They were temporally
filtered with a linear de-trending; no spatial smoothing
was applied. Anatomical scans, similarly imported into
BrainVoyager, were subjected to an isotropic voxel trans-
formation and aligned according to standard anterior and
posterior commissure points. For display across par-
ticipants, data were transformed into a conventional
Talairach space (Talairach & Tournoux, 1988). Func-
tional scans were coregistered to each participant’s cor-
responding anatomical images. Initial alignment was
fine-tuned through an affine transformation and mini-
mally corrected with visual inspection. Spatial normaliza-
tion of the functional data was applied through a volume
time course transformation.
Frequency domain processing. The volume time
course files of each functional scan were imported into
MATLAB for frequency domain analyses. They were
cropped to 150 volumes of 2 sec, containing exactly 30
presentation cycles of 0.1-Hz directional motion and ex-
cluding the first and last two volumes corresponding to
fixation cross and fade-in/out presentation. BOLD data
from each participant from the four scans per condition
were averaged in time to reduce noise from non-phase-
locked activation, that is, from activation not driven by
periodic stimulus presentation. A DC correction was ap-
plied to remove the mean signal offset, and the data were
transformed into a normalized amplitude spectrum
through a fast Fourier transform (Figure 1B). The resulting
BOLD amplitude spectrum contained a range of 0–0.25 Hz
with a frequency resolution of 0.0033 Hz. For each fre-
quency bin, x, a baseline range was defined as 20 sur-
rounding frequency bins, encompassing a range of about
0.07 Hz centered around x. To assess significance during
CVF scans of the 0.1-Hz response at each voxel, z scores
were generated by subtracting from x the mean baseline
value and dividing the result by the standard deviation of
the baseline. To display BOLD response amplitudes in pre-
determined regions (see section below) during RVF and
LVF scans, baseline-subtracted amplitude values were
similarly generated by subtracting the mean baseline
value from x (e.g., Retter & Rossion, 2016). The resulting
files were reimported into BrainVoyager for display.
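The frequency-domain steps described above can be summarized in a short Python sketch; the actual analysis was run in MATLAB on BrainVoyager volume time courses, so the array handling here, and the use of NumPy's FFT, are assumptions made only to make the computation explicit.

```python
# Schematic frequency-domain analysis for a single voxel's averaged time course.
import numpy as np

TR = 2.0           # sec per volume
N_VOLUMES = 150    # cropped volumes per scan (300 sec, 30 cycles at 0.1 Hz)

def amplitude_spectrum(timecourse):
    """DC-correct the averaged BOLD time course and return frequencies
    (0-0.25 Hz in 0.0033-Hz bins) and normalized FFT amplitudes."""
    x = np.asarray(timecourse) - np.mean(timecourse)     # DC correction
    amp = np.abs(np.fft.rfft(x)) / len(x)                # normalized amplitude
    freqs = np.fft.rfftfreq(len(x), d=TR)
    return freqs, amp

def response_stats(freqs, amp, target_hz=0.1, n_baseline_bins=20):
    """z score and baseline-subtracted amplitude at the target frequency,
    using the 20 surrounding bins (about 0.07 Hz) as the noise baseline."""
    i = int(np.argmin(np.abs(freqs - target_hz)))
    half = n_baseline_bins // 2
    baseline = np.r_[amp[i - half:i], amp[i + 1:i + 1 + half]]
    z = (amp[i] - baseline.mean()) / baseline.std()
    return z, amp[i] - baseline.mean()
```

With 150 volumes at a 2-sec repetition time, the 0.1-Hz response falls in the 30th frequency bin, so the 30 stimulation cycles per scan contribute directly to that bin.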
ROIs. Given previously reported findings of neural plas-
ticity in the PAC and association auditory cortex in deaf
individuals (e.g., Scott et al., 2014; Karns, Dow, & Neville,
2012; Fine et al., 2005; Finney et al., 2001, 2003) and
direction-selective responses in visual area hMT+ (e.g.,
Huk et al., 2001; Morrone et al., 2000; Tootell et al., 1995),
we a priori focused our analyses on the PAC and STS region,
potentially including part of the STS/PT, middle temporal
gyrus, and angular gyrus (Allison et al., 2000; see also Scott
et al., 2014, for activation in deaf participants), and hMT+.
To define the STS region and hMT+, we used a func-
tional cluster-based criterion from direction-selective
responses at 0.1 Hz to motion presented in the CVF
(clusters > 150 vóxeles). Significance thresholding was ap-
plied at the individual participant level (range: z > 2.6 to
z > 5.7), to approximately equalize the number of signif-
icant voxels across commonly active regions, including
hMT+ (six deaf and six hearing, in at least one hemi-
sphere), the STS region (six deaf and six hearing), early
visual areas (six deaf and six hearing), and the lateral oc-
cipital complex (five deaf and six hearing). In relevant
cases, the threshold was increased for two regions, ap-
plied bilaterally, to spatially separate them. The mean
total voxel number across participants after thresholding
was 15,138 voxels and did not differ significantly across groups (deaf: m = 13,636, SE = 1,422; hearing: m =
16,641, SE = 1,583), t = 1.41, p = .19, d = 0.73 (two-tailed).
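The cluster-based criterion above amounts to thresholding each participant's z map and keeping contiguous suprathreshold clusters of more than 150 voxels; a sketch using scipy.ndimage is given below as an assumed, generic implementation (the ROIs themselves were defined in BrainVoyager).

```python
# Keep suprathreshold clusters larger than 150 voxels in a 3-D z map.
import numpy as np
from scipy import ndimage

def cluster_mask(zmap, z_thresh, min_voxels=150):
    """Boolean mask of voxels in clusters of more than min_voxels above z_thresh."""
    labels, n_clusters = ndimage.label(zmap > z_thresh)
    keep = [i for i in range(1, n_clusters + 1) if np.sum(labels == i) > min_voxels]
    return np.isin(labels, keep)
```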
In a separate analysis, we defined the PAC, a region
that cannot be functionally defined in deaf participants,
using the Julich probabilistic atlas in the SPM Anatomy
Toolbox (Eickhoff, Heim, Zilles, & Amunts, 2006;
Eickhoff et al., 2005). Following a procedure described
in Eickhoff et al. (2006), we included the volume assign-
ment to all subregions for PAC (Morosan et al., 2001) in
the summary map of all areas (maximum probability
map). This procedure ensured no overlap between any
two cytoarchitectonically defined areas. The PAC ROI was
then transformed to Talairach space and applied to each
participant’s brain volume. It was further separated into
left and right hemisphere PAC for each participant.
Statistical tests. For the functionally defined ROIs,
namely, the STS region and hMT+, we investigated
whether there were significant differences in the spatial
extent and amplitude of activation between the deaf and
hearing participant groups. The spatial extent and ampli-
tude of the STS region and hMT+ were thus compared
across the deaf and hearing participant groups with non-
parametric Mann–Whitney U tests, given the relatively
small sample size. To compare differences in the spatial
extent of the STS region and hMT+, the number of sig-
nificant voxels was used. To compare the amplitude dif-
ferences in these ROIs to stimuli presented in the LVF
and RVF, baseline-subtracted amplitude values at 0.1 Hz
for each participant were averaged across voxels within
their individually defined ROIs for the LVF and RVF re-
sponses separately. When a cluster-based ROI could not be defined in one hemisphere (STS: two deaf and one hearing in the left hemisphere; hMT+: one deaf partic-
ipant in the left hemisphere), no corresponding ampli-
tude values were included in the analysis.
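As an illustration of the group comparisons described above, the snippet below runs a two-sided Mann–Whitney U test on made-up extent values for six participants per group; scipy.stats is an assumed stand-in for whatever software computed the reported U statistics, and the numbers are hypothetical.

```python
# Nonparametric comparison of ROI extent between groups (hypothetical values).
from scipy.stats import mannwhitneyu

deaf_extent_mm3 = [3100, 4200, 2800, 3900, 3500, 4050]     # hypothetical
hearing_extent_mm3 = [650, 700, 600, 720, 580, 690]        # hypothetical
u_stat, p_value = mannwhitneyu(deaf_extent_mm3, hearing_extent_mm3,
                               alternative="two-sided")
print(u_stat, p_value)
```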
For the probabilistically defined ROI, that is, the PAC,
we investigated whether there were significant responses
in the deaf and/or hearing participants. To determine
response significance in the PAC ROI, an amplitude spec-
trum was computed from the averaged BOLD responses
to motion presented in the CVF of all bilateral PAC voxels
across participants in each group. z Scores were then
calculated on this averaged spectrum with a threshold
of p < .001 (z > 3.10) for significance for this sensitive
group-level analysis. Given some debate about whether
PAC responses occur only as a result of group level aver-
aging (e.g., as shown in Finney et al., 2001; but see Scott
et al., 2014), significance was also assessed similarly at
the individual participant level, with the typical thresh-
old of p < .05 (z > 1.64). To compare the number of
significantly direction-selective voxels across the PAC
and the STS region, the PAC ROI was thresholded at
the individual level defined previously for demarcating
the STS region (i.e., encompassing a range of z > 2.6
to z > 5.7).
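The z thresholds quoted above map onto one-tailed p values under a standard normal assumption; the following check uses scipy.stats.norm, which is an assumption about how the conversion is done rather than a statement of the authors' software.

```python
# Correspondence between z thresholds and one-tailed p values.
from scipy.stats import norm

print(norm.sf(3.10))    # ~0.00097, i.e., p < .001 (group-level threshold)
print(norm.sf(1.64))    # ~0.0505, i.e., p < .05 (individual-level threshold)
print(norm.isf(0.001))  # ~3.09, the z cutoff for p < .001
```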
RESULTS
The STS Region and Visual Middle
Temporal Complex
The centrally presented visual motion trials were used to
localize direction-selective responses in deaf and hearing
individuals. These responses were frequency-tagged at
0.1 Hz, that is, the rate at which directional (100% dot coherence) motion onset (and continued for 2 sec) immediately after 8 sec of directionless motion (0% dot coherence).
Direction-selective Responses Are More Extensive in the Right STS Region for Deaf Participants

The area of the STS region was 5.5 times larger in deaf than hearing individuals in the right hemisphere (deaf: m = 3,591 mm3, SE = 596.2; hearing: m = 653 mm3, SE = 22.6), with no pronounced differences in the left hemisphere (deaf: m = 294 mm3, SE = 164.8; hearing: m = 315 mm3, SE = 124.8; Figure 2A). Statistically, this led to a significant difference in the extent of STS region activation across participant groups in the right STS region only: U = 1, p = .004 (left STS: U = 36, p = .70).

The STS region in the right hemisphere was centered at Talairach x = 54, y = −42, and z = 9 for deaf participants and x = 55, y = −39, and z = 16 for hearing participants (for individual regions, see Figure 3). The location of the STS region was particularly reliable in the right hemisphere for deaf participants; the range of its center Talairach x coordinates (see Figure 3) was x = 52–58 (SE = 0.84) for deaf participants, compared with x = 48–66 (SE = 3.07) for hearing participants (in the left hemisphere, the range was x = 45–61 [SE = 3.45] for deaf participants and x = 50–65 [SE = 2.96] for hearing participants).

The area of hMT+ did not appear to differ greatly across participant groups, although the right hemisphere (deaf: m = 1,847 mm3, SE = 183.3; hearing: m = 1,304 mm3, SE = 321.2) appeared larger than the left (deaf: m = 1,033 mm3, SE = 365.9; hearing: m = 1,335 mm3, SE = 291.1) for deaf participants only (Figure 2B). However, statistically, there was not a significant difference in the extent of hMT+ activation across deaf and hearing participants, in either the right, U = 11, p = .31, or left, U = 14, p = .59, hemisphere. In summary, the only significant difference found between deaf and hearing participants in terms of area of activation was a greater extent of the right STS region for deaf participants.

Figure 2. The size of STS region and hMT+ ROIs in deaf and hearing individuals. In the center, a sagittal slice (right hemisphere; Talairach x = 46) provides an example of the location of these regions defined in a single hearing participant; the STS region is drawn in green, being dorsal and slightly more anterior relative to hMT+, drawn in blue/purple. (A) The extent of activation in the STS region in deaf and hearing individuals (average [Avg.] across groups plotted on the far right, with error bars representing ± 1 SE), in each of the left and right hemispheres. (B) The extent of activation in hMT+, plotted as in A.
Cifra 3. The STS region ROIs in the anatomy of deaf and hearing individuals, in Talairach space. Data are thresholded with individually
defined z score values (see the scale in the top right corner). These sagittal slices are centered around the functionally defined ROI for each
hemisferio; in three cases where a functional ROI could not be defined in one hemisphere, the x coordinate mirrors that of the other
hemisferio (and is presented in italics).
Responses in the LVF vs. RVF: An RVF STS Region Bias for Deaf Participants

The amplitude of responses within the STS region and hMT+ ROIs identified previously was used to quantify the response to visual motion presented in separate trials in the LVF and RVF. We expected that the larger size of the right STS region only in deaf individuals might be accompanied by enhanced activity in response to visual stimuli presented in the LVF. Instead, the results showed the opposite, that is, that STS region responses of deaf individuals were of larger amplitude for stimuli presented in the RVF than LVF (Figure 4A). Indeed, for stimuli presented in the RVF, there was a significantly higher response for deaf than hearing participants in the right STS, U = 0, p = .002, which only neared significance in the left STS, U = 2, p = .063. In contrast, for stimuli presented in the LVF, there were no significant differences across participant groups (right STS: U = 9, p = .18; left STS: U = 9, p = .91).
In hMT+, the pattern of amplitude responses to stim-
uli in the LVF and RVF appeared highly similar across
the left and right hemispheres for deaf and hearing
participants (Figure 4B). This pattern was described by
a contralateral visual field to hemisphere advantage, par-
ticularly for the RVF/left hemisphere (see Figure 4B).
Across participant groups, however, there were no sig-
nificant differences for stimuli presented in either the
RVF (right hMT+: U = 12, pag = .39; left hMT+: U =
14, pag = .93) or LVF (right hMT+: U = 15, pag = .70; izquierda
hMT+: U = 15, pag = 1.0). En general, there were thus no
differences between deaf and hearing participants in
hMT+ responses to directional motion in the LVF or
RVF, but deaf participants had more activation than
hearing participants in the right STS region to stimuli
presented in the RVF.
The PAC
The PAC was defined with a probabilistic atlas for both
deaf and hearing participants. To determine response
Cifra 4. Amplitude values (baseline-subtracted) of the frequency domain analysis of periodic BOLD signal changes to directional motion at
0.1 Hz. Data are reported for deaf and hearing participants in response to visual stimuli presented in the LVF and RVF in the left (LH) y correcto (RH)
hemispheres. Results are shown in the ROIs defined previously for the STS region (A) and hMT+ (B). Group data are plotted in bar graphs
(error bars plotting ± 1 SE ), and individual data are plotted as superimposed dots (deaf, filled-in; hearing, unfilled-in); each participant is plotted in a
consistent color across plots.
1132
Revista de neurociencia cognitiva
Volumen 31, Número 8
significance in this region, an amplitude spectrum was
derived from the averaged BOLD responses to motion
presented in the CVF of all PAC voxels across participants
for each group.
Significant But Minimal Direction-selective Responses in
the PAC for Deaf Participants
At the individual participant level, direction selectivity at
0.1 Hz was evidenced in five of six deaf participants in the
bilateral PAC (zs ranging from 1.79 to 3.19, ps < .05). A
significant direction-selective response emerged at
0.1 Hz for the deaf participants across the bilateral audi-
tory cortex (z = 5.80, p < .0001; right PAC: z = 7.78; left
PAC: z = 3.40; Figure 5A).
A direction-selective response was not found in five of
six hearing participants (all zs < 1.34, ps > .05); however,
a significant response was found in one hearing par-
ticipant who was not naive to the experimental design
(z = 4.36, pag < .0001). When including all six participants
of the hearing group, the direction-selective response in
the bilateral PAC reached significance at a threshold of
p < .05 (bilaterally: z = 2.32, p = .010; right PAC: z =
1.88; left PAC: z = 2.27); when removing the nonnaive
participant, the PAC was not significant in the hearing
group (bilaterally: z = 1.39, p = .082).
Despite significant responses in the PAC in deaf partic-
ipants, the extent of direction-selective responses was
minimal, with significant voxels subtending only 14.1%
of the bilateral PAC area (at z > 3.10, pag < .001) for deaf
participants at the group level (i.e., grand-averaged am-
plitude spectra; see Methods). When the lowest and
highest z score thresholds applied for individual partici-
pants (z > 2.6 to z > 5.7) were applied to the group level
data, the percentage of significant bilateral PAC area at the group level ranged from, respectively, 21.8% to
0.67%. To put this in perspective, this area was more than
28 times smaller than the extent of activation in the bilat-
eral STS region for deaf participants when defined at the
same significance threshold for individual participants
(see Figure 5B; mean PAC area: 1.11%, SE = 0.43%). In summary, significant PAC responses were present but
minimal in the deaf participant group.
Responses in the LVF vs. RVF: The PAC Hemispheric
Activation Mirrors the STS
The responsivity of the BOLD amplitude of direction-
selective responses to visual motion presented in the
RVF or LVF was investigated, as in the Responses in the
LVF vs. RVF: An RVF STS Region Bias for Deaf Participants
section (Figure 5C). The resultant pattern of activation
across hemispheres in the PAC was reminiscent of that
of the STS region (compare Figure 5C with Figure 4A).
Note that the large amplitude differences across the
STS region and PAC are not comparable directly, because
of the different methods of definition of these regions
(i.e., the STS region was defined functionally to include
only significant voxels, whereas the PAC was defined as
all voxels within a predefined region).
Cifra 5. Responses to directional visual motion activated the PAC of deaf individuals. (A) A nivel de grupo, areas of activation in deaf participants’
temporal lobes encompassed the probabilistic area of the auditory cortex (shown in light blue on the standard Colin 27 cerebro; datos
at p < .001). Moreover, (B) significant responses to direction-selective motion at 0.1 Hz were found in the PAC (shaded in light blue) at the
individual level, although the area of activation was small relative to the STS region: z Scores of three deaf individuals are shown here at the same
thresholded level used to define their individual STS ROI (D2: z > 4.57; D3: z > 5.7; D4: z > 2.6). (C) The pattern of activation in the left and
right PAC for deaf participants to visual motion in the LVF and RVF was similar to that of their STS region (compare with Figure 4A). Again, group data
are plotted in bar graphs (error bars plotting ± 1 SE ), and individual data are plotted as superimposed dots; colors are consistent across plots
and labeling in B. L = left; R = right.
DISCUSSION
We used an fMRI frequency-tagging approach to identify
direction-selective brain regions in early deaf and hearing
people, investigating the spatial extent of their activation
(in response to stimuli presented in the CVF) and the
amplitude of their activation (in response to stimuli
presented separately in the LVF and RVF). We focused
our analysis on the PAC and associative STS region, in comparison with visual area hMT+. We predicted that direction-selective responses would be found in the PAC
and STS region, in line with enhanced behavioral abili-
ties reported for early deaf individuals in discriminating
and/or detecting directional visual motion (Shiell et al.,
2014; Bosworth et al., 2013; Hauthal et al., 2013; Bosworth
& Dobkins, 1999; Neville & Lawson, 1987). Note that
we are able to identify direction-selective responses
emerging from a contrast of directional versus nondirec-
tional visual motion in our frequency-tagging paradigm.
Direction-selective motion responses are more selective
than motion-selective responses but less selective than
direction-specific (e.g., leftward-selective) responses. On
the other hand, previous studies investigating motion-
related responses in the early deaf people have reported
motion-selective, rather than direction-selective, re-
sponses (e.g., Fine et al., 2005; Finney et al., 2001).
Direction-selective Responses Are Found in the STS
Region for Both Hearing and Deaf Individuals
To our knowledge, this is the first study showing di-
rection selectivity for translational visual motion in the
human STS region (see Figures 2A and 3), here encom-
passing the posterior to middle STS, superior temporal
gyrus, and middle temporal gyrus (for direction selectiv-
ity with rotational head and ellipsoid motion, see Carlin,
Rowe, Kriegeskorte, Thompson, & Calder, 2012). The
STS region is known to respond to visual (biological)
motion in neurotypical humans and nonhuman animal
models (for a review, see Allison et al., 2000; see also, e.g., Noguchi, Kaneoke, Kakigi, Tanabe, & Sadato, 2005; Grossman & Blake, 2001). In addition, direction-selective
tuning of single neurons to visual motion has been re-
ported in the STS region of monkeys (e.g., Nelissen,
Vanduffel, & Orban, 2006; Oram, Perrett, & Hietanen,
Bruce, Desimone, & Gross, 1981; Zeki, 1978).
The absence of direction-selective STS responses in past
human neuroimaging or source localization studies may
be for several reasons: For example, these studies focused on more traditional, retinotopically defined areas, and there may be differences in activation resulting
from the directional/nondirectional motion contrast used
here and the motion adaptation paradigms favored previ-
ously. Note that, in previous studies, direction-selective
responses were reported only in visual areas V1 through
hMT+/V5 and the lateral occipital complex (Hong, Tong, & Seiffert, 2013; Ales & Norcia, 2009; Huk et al., 2001; Tootell et al., 1995). Moreover, the frequency-tagging
paradigm applied here may have provided methodological
advantages, enabling a powerful contrast of directional and
nondirectional motion, an analysis with a high signal-to-
noise ratio, and not relying on a hemodynamic response
function model (e.g., Gao et al., 2017; Koening-Robert
et al., 2015; Ernst et al., 2013; Morrone et al., 2000;
Engel et al., 1997; Puce et al., 1995; Bandettini et al., 1993).
The direction-selective STS region could be function-
ally defined in all individual deaf and hearing participants
in the right hemisphere and in five deaf and four hearing
participants in the left hemisphere (see Figure 3). It was 2–12 times larger in the right than left hemisphere, for the hearing and deaf participants, respectively (see Figure 2A). At the group level, in the right hemisphere,
this region was centered at Talairach coordinates of x =
55, y = −39, and z = 16 for hearing participants and x =
54, y = −42, and z = 9 for deaf participants. The local-
ization of the STS region here is similar to that reported
in previous studies (e.g., for deaf participants, response to visual motion: x = 56, y = −40, z = 8, in Table 5 of
Bavelier et al., 2001; for neurotypical participants in
response to visual, tactile, and auditory stimuli: left ante-
rior inferior coordinates of x = 52, y = 44, z = 15, in
Beauchamp, Yasar, Frye, & Ro, 2008).
This STS region also showed a right-hemisphere ad-
vantage in terms of response amplitude to stimuli shown
in the LVF and RVF, particularly for deaf participants. In
contrast, there was no left-hemisphere advantage appar-
ent for stimuli shown in the RVF for either participant
group (see Figure 4A). These results are in line with
larger responses to visual motion in the right hemisphere
generally (e.g., Corballis, 2003; Finney et al., 2001;
Kubova, Kuba, Hubacek, & Vit, 1990; see also Weeks
et al., 2000, for an example of right-hemisphere domi-
nance to auditory motion in congenitally blind partici-
pants) as well as interhemispheric transfer of visual
motion information (Brandt, Stephan, Bense, Yousry, &
Dieterich, 2000; see also Motter, Steinmetz, Duffy, &
Mountcastle, 1987) and previous reports of no contralat-
eral organization in the STS region (e.g., Grossman &
Blake, 2001; see also Saygin & Sereno, 2008).
The PAC Shows Direction-selective Visual Motion
Responses in Early Deaf Individuals
We discovered significant direction-selective responses
to visual motion in a probabilistically defined PAC region
in early deaf people (see Figure 5). The extent of this ac-
tivation was highly dependent on the significance thresh-
old used; at p < .001, it appeared to cover 14.1% of the
bilateral PAC for the early deaf group. In comparison with
the extent of activation in the STS at the same signifi-
cance threshold, this area is more than 28 times smaller.
Nevertheless, when averaging across all voxels in the bi-
lateral PAC, a significant response emerged for five of six
deaf participants ( p < .05).
Responses to visual stimuli were first reported in the
PAC for early deaf people in response to peripheral mov-
ing dots at the group level (Finney et al., 2001). PAC ac-
tivation was replicated in the early deaf people in
response to moving or flickering stimuli, most often in
or near the visual periphery (Finney et al., 2001, 2003).
Importantly, these results were likely not an effect of
group averaging or imprecise PAC definition: A recent
study identified PAC activation defined anatomically at
the individual participant level, using the transverse tem-
poral gyrus, also known as Heschl’s gyrus, with flickering
point lights in the RVF (Scott et al., 2014). In this study,
the amount of activation in the PAC was reported only in
comparison for peripherally versus perifoveally pre-
sented flicker dots, preventing a direct comparison with
the extent or amount of activation reported here. Still,
our finding that PAC activation is—at least to some
extent—direction selective adds to our knowledge of
neural plasticity in this region for early deaf people.
The pattern of PAC activation in response to stimuli
presented in the RVF and LVF is highly reminiscent of
that of the STS region (see Figures 5C and 4A). One pos-
sibility is that the PAC projects information into the STS
region, a sensory association area (e.g., Hackett et al.,
2007; Smiley et al., 2007; Seltzer et al., 1996; Seltzer &
Pandya, 1978; Benevento, Fallon, Davis, & Rezak, 1977;
see also Beauchamp et al., 2008). In addition, the STS
also projects information back to the superior temporal
gyrus (e.g., Barnes & Pandya, 1992, using retrograde trac-
ing in the rhesus monkey), suggesting reciprocal con-
nections and more complex interactions between these
regions. Note that the correspondence between the
PAC and STS region found here cannot be explained by
overlap between these areas: There was no overlap in
five deaf participants (0.8% for the remaining one par-
ticipant) and no overlap at the group level in the right
hemisphere.
The Right STS Region Is Recruited Extensively for
Processing Direction-selective Visual Motion in
Early Deaf Individuals
The most striking difference between deaf and hearing
individuals in response to directional motion was found
in the right STS region, which was 5.5 times larger for
deaf than hearing participants (for a 12 times greater
extent in the right posterior STS in deaf than hearing
participants in response to attended visual motion, see
Table 2 of Bavelier et al., 2001). In contrast, no dif-
ferences in direction-selective responses were found
across groups in the left STS region or visual area
hMT+ here.
The STS is a likely region for cross-modal organization,
as it covers an expansive region of the temporal lobe and
expresses great functional diversity, containing sub-
regions sensitive to auditory, visual, tactile, and multi-
sensory stimuli (e.g., Dahl, Logothetis, & Kayser, 2009;
Beauchamp et al., 2008; Beauchamp, Argall, Bodurka,
Duyn, & Martin, 2004; Calvert, Campbell, & Brammer,
2000; Seltzer & Pandya, 1978; Benevento et al., 1977).
The posterior STS receives inputs from both the visual
and auditory cortex, whereas the middle STS normally
receives auditory inputs only, at least in the rhesus mon-
key (Seltzer & Pandya, 1994); in humans, auditory–visual
responses have been reported to be largest in the middle
STS ( Venezia et al., 2017). Congruently, the auditory
association cortices have also been invoked in studies
in neurotypical individuals on cross-modal plasticity
through learned associations (e.g., Meyer, Baumann,
Marchina, & Jancke, 2007; see also Bulkin & Groh,
2006; Ghazanfar & Schroeder, 2006).
In congenitally deaf people, greater connectivity be-
tween the middle STS across hemispheres, as well as with
the ipsilateral posterior STS, hints at a reorganization of
this region in line with cross-modal plasticity (Li, 2013).
Specific examples of cross-modal plasticity in the STS re-
gion have been reported with regard to how early deaf
people process sign language. Early deaf participants
have been shown to have increased activation in the
middle STS in response to sign language (e.g., Sadato
et al., 2004; Neville et al., 1998). In addition, increased
posterior STS activation was shown in deaf signers, and
not hearing signers, when performing a velocity task
(see Figure 6 of Bavelier et al., 2001). Our report of ex-
pansive recruitment of the STS region in early deaf
people in response to visual motion thus further con-
firms a general pattern of neural plasticity in this region.
In hearing individuals, some authors claim that re-
sponses to auditory motion are separate from those to
auditory localization and rely on the superior temporal
gyrus (e.g., Ducommun et al., 2002, 2004; Baumgart,
Gaschler-Markefski, Woldorff, Heinze, & Scheich, 1999).
However, others claim that the auditory cortex may be
selective to spatial locations rather than motion (e.g.,
Smith, Okada, Saberi, & Hickok, 2004). Our findings sug-
gest that, at least in response to visual motion, auditory
areas in deaf participants and association areas in deaf
and hearing participants are selectively responsive to
directional visual motion.
Direction-selective Responses to Stimuli in the RVF
and LVF Do not Show a Contralateral Bias in the
Deaf PAC and Association Cortex
Interestingly, despite a right-hemisphere advantage for
early deaf participants in the STS region, and hinted at
in the PAC, there was more activation overall to stimuli
presented in the RVF. Behaviorally, an RVF advantage for
visual motion perception has often been reported for early
deaf participants (e.g., direction of motion: Neville &
Lawson, 1987; direction of motion: Bosworth & Dobkins,
1999; motion velocity: Brozinsky & Bavelier, 2004; direc-
tion of motion: Bosworth et al. 2013; see also Samar &
Parasnis, 2007; but see Hauthal et al., 2013, for an LVF ad-
vantage for movement localization in late signers).
A right-hemisphere advantage has been reported in
previous neuroimaging studies investigating visual mo-
tion or flickering stimulus responses in early deaf people:
In the auditory cortices, the right hemisphere was dom-
inantly (Finney et al., 2003) or exclusively (Fine et al.,
2005; Finney et al., 2001) activated. The right-hemisphere
advantage reported here is also in line with the finding
that only the right auditory cortex (planum temporale)
showed a correlation between increased cortical thick-
ness and enhanced visual motion detection thresholds
in the study on early deaf people of Shiell et al. (2016).
The right auditory cortex was also shown with ERP source
localization to be dominant in hearing-restored deaf indi-
viduals when viewing visual stimuli (Sandmann et al.,
2012). Although some studies have reported left-
hemisphere advantages in early deaf people, such effects
either could not be localized (e.g., Neville & Lawson,
1987) or were relatively small, sometimes nonsignificant
effects, reported in hMT+ (e.g., Fine et al., 2005;
Bavelier et al., 2001), which in our study, on direction-
selective responses, did not show significant differences
across participant groups but an appearance of right lat-
eralization for early deaf participants only (see Figures 2B
and 4B).
Here, we can reconcile behavioral RVF and neural
right-hemisphere advantages in response to directional
motion: The remapped auditory cortices do not show a
strong contralateral bias like other direction-selective
areas, for example, hMT+; instead, the right hemisphere
dominates regardless. As addressed in the first section of
the Discussion, the STS region possesses large spatial
fields and, particularly in the right hemisphere, low sen-
sitivity to retinotopic organization (e.g., Saygin & Sereno,
2008; see also Almeida et al., 2015; Grossman & Blake,
2001). The recruited association cortex seems to be the
best candidate for behavioral RVF advantages, because
this region showed the most extensive changes between
early deaf and hearing participants here. In addition, in a
previous study reporting effects in hMT+, the posterior
STS was shown to be 9.3 times larger in size (in compar-
ison, hMT+ was only 1.08 times larger) and seven times
greater in percent signal change (hMT+: 1.05 times
greater) in deaf than hearing participants in response
to attended velocity of visual motion (see Table 4 of
Bavelier et al., 2001).
A right-hemisphere advantage paired with an RVF ad-
vantage goes against the assumption that neural activa-
tion to visual stimuli is necessarily contralateral, which
has frequently been made in the literature on neural
plasticity in early deaf people (e.g., Bavelier et al., 2001;
tentatively in Bosworth et al., 2013; Hauthal et al., 2013;
Brozinsky & Bavelier, 2004; Bosworth & Dobkins, 1999,
2002). In a study reporting a left-hemisphere advantage
with attention-related modulation of ERPs to peripheral
visual targets in early deaf people, Neville and Lawson
(1987) hypothesized that the left hemisphere was re-
mapped for sign language processing and therefore
could have different sensitivities to stimuli such as visual
motion or stimulus localization. However, it is not clear
that the left hemisphere is specialized for sign language
processing in early deaf people: Deaf and early-signing
hearing participants have been shown to have bilateral
(STS) activation to sign language; and early deaf partici-
pants, to have more right STS activation to written lan-
guage (e.g., Sadato et al., 2004; Neville et al., 1998).
Our finding of a right-hemisphere advantage compatible
with an RVF advantage offers an alternative explanation
and unites most neural and behavioral findings regarding
motion perception of early deaf people.
Speculatively, our findings suggest that behavioral
advantages in early deaf people, particularly for motion
discrimination in the RVF, may be supported by increased
STS region activation in the right hemisphere (again, see
Figure 4A). Although limited by a small sample size, five of
our six deaf participants also participated in a behavioral study in our laboratory, in which thresholds for the percent dot coherence required to discriminate the direction of visual motion in the LVF and RVF were acquired. We found a suggestive correlation between this measure of behavioral direction discrimination ability (i.e., lower percent dot motion coherence required) and the extent of activation in the right STS region, R² = .30, although it was not significant, p = .10. In comparison, the extent of bilateral hMT+ activation showed no correlation, R² = .01, p = .78. However, this tentative result would need to be confirmed with larger sample sizes in future studies. At the least, we are
able to introduce the hypothesis that the auditory and
association cortices in early deaf individuals are sensitive
to directional visual motion and that this neural reorga-
nization may support a behavioral advantage reported
previously for visual motion direction discrimination.
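For illustration, the following minimal sketch (Python with NumPy/SciPy) shows one way such a brain–behavior relationship could be quantified across participants; the coherence thresholds and activation extents are hypothetical placeholders, not the values measured in this study.

```python
# Minimal sketch of relating per-participant motion-coherence thresholds
# to right STS activation extent. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

# Coherence thresholds (% dots; lower = better direction discrimination)
# and right STS activation extents (e.g., significant voxel counts) for
# five hypothetical participants.
coherence_thresholds = np.array([12.0, 18.5, 9.0, 22.0, 15.0])
right_sts_extent = np.array([850, 540, 980, 410, 620])

result = stats.linregress(coherence_thresholds, right_sts_extent)
print(f"R^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.2f}")
# With only five data points, such a correlation is at best suggestive and
# would need to be confirmed in a larger sample.
```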
Limitations and Future Directions
One limitation of this study was that the sample con-
sisted of only six participants per group: Although the re-
sults were reliable across individual participants, they
would be strengthened by replication in future studies,
potentially with larger sample sizes. Another limitation
here is that an eye tracker was not used during the
fMRI experiment to ensure fixation. However, it is un-
likely that eye movements could explain the results.
The functionally defined regions were localized with
centrally presented stimuli, and the differences between
deaf and hearing participants for peripherally presented
stimuli were highly specific (e.g., the enhanced STS re-
gion recruitment was restricted to the right hemisphere
and RVF). In addition, cues for participants to report the
direction of motion were given at random intervals
throughout the experiment, such that they were neither
periodic nor associated with directional motion presenta-
tion times (see Methods). A third limitation of this study
was that the STS region was defined broadly in each
participant; future studies could use anatomical land-
marks or more specific functional localizers in hearing
participants to demarcate more precise subregions.
Finally, in this study, directional visual motion coincided
with coherent visual motion. Although it may be ar-
gued that coherent motion inherently possesses direc-
tionality, future studies may address the influence of
coherence on directional motion responses (see Braddick
et al., 2008).
Acknowledgments
This research was supported by grants from the National
Institutes of Health (NIH; grants EY023268 to F. J. and P20
GM103650 to M. A. W.). The content is solely the responsibility
of the authors and does not necessarily represent the official
views of the NIH. Talia L. Retter is supported by the Belgian
National Foundation for Scientific Research (grant FC7159).
The authors are thankful to Andrea Conte and Bruno Rossion
for access to the stimulation program XPMan, Revision 111, as
well as to Xiaoqing Gao for his help with the frequency domain
analysis, and O. Scott Gwinn for use of his behavioral data on
discrimination thresholds for the deaf participants as well as
help with stimulus generation.
Reprint requests should be sent to Talia L. Retter, Psychology and
Integrative Neuroscience, University of Nevada, Reno, 1664 N.
Virginia St., Reno, NV 89503, or via e-mail: tlretter@nevada.
unr.edu.
REFERENCES
Albright, T. D. (1984). Direction and orientation selectivity of
neurons in visual area MT of the macaque. Journal of
Neurophysiology, 52, 1106–1130.
Ales, J. M., & Norcia, A. M. (2009). Assessing direction-specific
adaptation using the steady-state visual evoked potential:
Results from EEG source imaging. Journal of Vision, 9, 8.
Allison, T., Puce, A., & McCarthy, G. (2000). Social perception
from visual cues: Role of the STS region. Trends in Cognitive
Sciences, 4, 267–278.
Almeida, J., He, D., Chen, Q., Mahon, B. Z., Zhang, F.,
Gonçalves, Ó. F., et al. (2015). Decoding visual location
from neural patterns in the auditory cortex of the
congenitally deaf. Psychological Science, 26, 1771–1782.
Anstis, S. M. (1970). Phi movement as a subtraction process.
Vision Research, 10, 1411–1430.
Atkinson, J., Birtles, D., Anker, S., Braddick, O., Rutherford, M.,
Cowan, F., et al. (2008). High-density VEP measures of global
form and motion processing in infants born very preterm.
Journal of Vision, 8, 422.
Bandettini, P. A., Jesmanowicz, A., Wong, E. C., & Hyde, J. S.
(1993). Processing strategies for time-course data sets in
functional MRI of the human brain. Magnetic Resonance in
Medicine, 30, 161–173.
Barnes, C. L., & Pandya, D. N. (1992). Efferent cortical
connections of multimodal cortex of the superior temporal
sulcus in the rhesus monkey. Journal of Comparative
Neurology, 318, 222–244.
Baumgart, F., Gaschler-Markefski, B., Woldorff, M. G., Heinze,
H. J., & Scheich, H. (1999). A movement-sensitive area in
auditory cortex. Nature, 400, 724–726.
Bavelier, D., Brozinsky, C., Tomann, A., Mitchell, T., Neville, H.,
& Liu, G. (2001). Impact of early deafness and early exposure
to sign language on the cerebral organization for motion
processing. Journal of Neuroscience, 21, 8931–8942.
Bavelier, D., Dye, M. W. G., & Hauser, P. C. (2006). Do deaf
individuals see better? Trends in Cognitive Sciences, 10,
512–518.
Bavelier, D., Tomann, A., Hutton, C., Mitchell, T., Corina, D.,
Liu, G., et al. (2000). Visual attention to the periphery is
enhanced in congenitally deaf individuals. Journal of
Neuroscience, 20, RC93.
Beauchamp, M. S., Argall, B. D., Bodurka, J., Duyn, J. H., &
Martin, A. (2004). Unraveling multisensory integration:
Patchy organization within human STS multisensory cortex.
Nature Neuroscience, 7, 1190–1192.
Beauchamp, M. S., Cox, R. W., & DeYoe, E. A. (1997). Graded
effects of spatial and featural attention on human area MT
and associated motion processing areas. Journal of
Neurophysiology, 78, 516–520.
Beauchamp, M. S., Yasar, N. E., Frye, R. E., & Ro, T. (2008).
Touch, sound and vision in human superior temporal sulcus.
Neuroimage, 41, 1011–1020.
Beckett, A., Peirce, J. W., Sanchez-Panchuelo, R. M., Francis, S.,
& Schluppeck, D. (2012). Contribution of large scale biases in
decoding of direction-of-motion from high-resolution fMRI
data in human early visual cortex. Neuroimage, 63,
1623–1632.
Benevento, L. A., Fallon, J., Davis, B. J., & Rezak, M. (1977).
Auditory–visual interaction in single cells in the cortex
of the superior temporal sulcus and the orbital frontal cortex
of the macaque monkey. Experimental Neurology, 57,
849–872.
Bola, L., Zimmermann, M., Mostowski, P., Jednorog, K.,
Marchewka, A., Butkowski, P., et al. (2017). Task-specific
reorganization of the auditory cortex in deaf humans.
Proceedings of the National Academy of Sciences, U.S.A.,
114, E600–E609.
Bosworth, R. G., & Dobkins, K. R. (1999). Left-hemisphere
dominance for motion processing in deaf signers.
Psychological Science, 10, 256–262.
Bosworth, R. G., & Dobkins, K. R. (2002). Visual field
asymmetries for motion processing in deaf and hearing
signers. Brain and Cognition, 49, 170–181.
Bosworth, R. G., Petrich, J. A., & Dobkins, K. R. (2013). Effects
of attention and laterality on motion and orientation
discrimination in deaf signers. Brain and Cognition, 82,
117–126.
Bottari, D., Nava, E., Ley, P., & Pavani, F. (2010). Enhanced
reactivity to visual stimuli in deaf individuals. Restorative
Neurology and Neuroscience, 28, 167–179.
Braddick, O. (1974). A short-range process in apparent motion.
Vision Research, 14, 519–527.
Braddick, O., Birtles, D., Wattam-Bell, J., & Atkinson, J. (2005).
Motion- and orientation-specific cortical responses in
infancy. Vision Research, 45, 3169–3179.
Braddick, O., Hartley, T., Atkinson, J., Wattam-Bell, J., & Turner,
R. (1997). FMRI study of differential brain activation by
coherent motion and dynamic noise. Investigative
Ophthalmology & Visual Science, 38, 4297.
Braddick, O., Wattam-Bell, J., Birtles, D., Loesch, J., Loesch, L.,
Frazier, K., et al. (2008). Brain activity evoked by motion
direction changes and by global motion coherence shows
different spatial distributions. Journal of Vision, 8, 674.
Brandt, T., Stephan, T., Bense, S., Yousry, T. A., & Dieterich,
M. (2000). Hemifield visual motion stimulation: An
example of interhemispheric crosstalk. NeuroReport, 11,
2803–2809.
Brozinsky, C. J., & Bavelier, D. (2004). Motion velocity
thresholds in deaf signers: Changes in lateralization but not
in overall sensitivity. Cognitive Brain Research, 21, 1–10.
Bruce, C., Desimone, R., & Gross, C. G. (1981). Visual
properties of neurons in a polysensory area in superior
temporal sulcus of the macaque. Journal of
Neurophysiology, 46, 369–384.
Bulkin, D. A., & Groh, J. M. (2006). Seeing sounds: Visual and
auditory interactions in the brain. Current Opinion in
Neurobiology, 16, 415–419.
Calvert, G. A., Campbell, R., & Brammer, M. J. (2000).
Evidence from functional magnetic resonance imaging of
crossmodal binding in the human heteromodal cortex.
Current Biology, 10, 649–657.
Carlin, J. D., Rowe, J. B., Kriegeskorte, N., Thompson, R., &
Calder, A. J. (2012). Direction-sensitive codes for observed
head turns in human superior temporal sulcus. Cerebral
Cortex, 22, 735–744.
Codina, C. J., Pascalis, O., Baseler, H. A., Levine, A. T., &
Buckley, D. (2017). Peripheral visual reaction time is faster in
deaf adults and British sign language interpreters than in
hearing adults. Frontiers in Psychology, 8, 50.
Corballis, P. M. (2003). Visuospatial processing and the
right-hemisphere interpreter. Brain and Cognition, 53,
171–176.
Dahl, C. D., Logothetis, N. K., & Kayser, C. (2009). Spatial
organization of multisensory responses in temporal
association cortex. Journal of Neuroscience, 29,
11924–11932.
Dubner, R., & Zeki, S. M. (1971). Response properties and
receptive fields of cells in an anatomically defined region
of the superior temporal sulcus in the monkey. Brain
Research, 2, 528–532.
Ducommun, C. Y., Michel, C. M., Clarke, S., Adriani, M.,
Seeck, M., Landis, T., et al. (2004). Cortical motion
deafness. Neuron, 43, 765–777.
Ducommun, C. Y., Murray, M. M., Thut, G., Bellmann, A.,
Viaud-Delmon, I., & Michel, C. M. (2002). Segregated
processing of auditory motion and auditory location: An
ERP mapping study. Neuroimage, 16, 76–88.
Dye, M. W. G., Hauser, P. C., & Bavelier, D. (2009). Is
visual selective attention in deaf individuals enhanced or
deficient? The case of the useful field of view. PLoS One,
4, e5640.
Eickhoff, S. B., Heim, S., Zilles, K., & Amunts, K. (2006).
Testing anatomically specified hypotheses in functional
imaging using cytoarchitectonic maps. Neuroimage, 32,
570–582.
Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes,
C., Fink, G. R., Amunts, K., et al. (2005). A new SPM
toolbox for combining probabilistic cytoarchitectonic
maps and functional imaging data. Neuroimage, 25,
1325–1335.
Engel, S., Zhang, X., & Wandell, B. (1997). Colour tuning in
human visual cortex measured with functional magnetic
resonance imaging. Nature, 388, 68–71.
Ernst, Z. R., Boynton, G. M., & Jazayeri, M. (2013). The spread of
attention across features of a surface. Journal of
Neurophysiology, 110, 2426–2439.
Finney, E. M., Fine, I., & Dobkins, K. R. (2001). Visual stimuli
activate auditory cortex in the deaf. Nature Neuroscience, 4,
1171–1173.
Gao, X., Gentile, F., & Rossion, B. (2017). Fast periodic
stimulation (FPS): A highly effective approach in fMRI
brain mapping. Brain Structure & Function, 223,
2433–2454.
Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex
essentially multisensory? Trends in Cognitive Sciences, 10,
278–285.
Grossman, E. D., & Blake, R. (2001). Brain activity evoked by
inverted and imagined biological motion. Vision Research,
41, 1475–1482.
Hackett, T. A., De La Mothe, L. A., Ulbert, I., Karmos, G., Smiley,
J., & Schroeder, C. E. (2007). Multisensory convergence in
auditory cortex: II. Thalamocortical connections of the caudal
superior temporal plane. Journal of Comparative
Neurology, 502, 924–952.
Hauthal, N., Sandmann, P., Debener, S., & Thorne, J. D.
(2013). Visual movement perception in deaf and hearing
individuals. Advances in Cognitive Psychology, 9, 53–61.
Hong, S. W., Tong, F., & Seiffert, A. E. (2013). Direction-
selective patterns of activity in human visual cortex suggest
common neural substrates for different types of motion.
Neuropsychologia, 50, 514–521.
Hubel, D. H., & Wiesel, T. N. (1961). Integrative action in the
cat’s lateral geniculate body. Journal of Physiology, 155,
385–398.
Huk, A. C., Ress, D., & Heeger, D. J. (2001). Neuronal
basis of the motion aftereffect reconsidered. Neuron, 32,
161–172.
Jiang, F., Beauchamp, M. S., & Fine, I. (2015). Re-examining
overlap between tactile and visual motion responses within
hMT+ and STS. Neuroimage, 119, 187–196.
Julesz, B. (1971). Foundations of cyclopean perception.
Chicago: University of Chicago Press.
Kamitani, Y., & Tong, F. (2006). Decoding seen and attended
motion directions from activity in the human visual cortex.
Current Biology, 16, 1096–1102.
Karns, C. M., Dow, M. W., & Neville, H. J. (2012). Altered cross-
modal processing in the primary auditory cortex of
congenitally deaf adults: A visual-somatosensory fMRI study
with a double-flash illusion. Journal of Neuroscience, 32,
9626–9638.
Koenig-Robert, R., VanRullen, R., & Tsuchiya, N. (2015).
Semantic wavelet-induced frequency-tagging (SWIFT)
periodically activates category selective areas while steadily
activating early visual areas. PLoS One, 10, e0144858.
Kubova, Z., Kuba, M., Hubacek, J., & Vit, F. (1990). Properties
of visual evoked potentials to onset of movement on a
television screen. Documenta Ophthalmologica, 75,
67–72.
Lam, K., Kaneoke, Y., Gunji, A., Yamasaki, H., Matsumoto, E.,
Naito, T., et al. (2000). Magnetic response of human
extrastriate cortex in the detection of coherent and
incoherent motion. Neuroscience, 97, 1–10.
Felleman, D. J., & Van Essen, D. C. (1987). Receptive field
properties of neurons in area V3 of macaque monkey
extrastriate cortex. Journal of Neurophysiology, 57, 889–920.
Fine, I., Finney, E. M., Boynton, G. M., & Dobkins, K. R. (2005).
Comparing the effects of auditory deprivation and sign
language within the auditory and visual cortex. Journal of
Cognitive Neuroscience, 17, 1621–1637.
Finney, E. M., Clementz, B. A., Hickok, G., & Dobkins, K. R.
(2003). Visual stimuli activate auditory cortex in deaf
subjects: Evidence from MEG. NeuroReport, 14, 1425–1427.
Levine, A., Codina, C., Buckley, D., de Sousa, G., & Baseler,
H. A. (2015). Differences in primary visual cortex predict
performance in local motion detection in deaf and hearing
adults. Journal of Vision, 15, 486.
Li, S. C. (2013). Neuromodulation and developmental
contextual influences on neural and cognitive plasticity across
the lifespan. Neuroscience & Biobehavioral Reviews, 37,
2201–2208.
Lomber, S. G., Meredith, M. A., & Kral, A. (2010). Cross-modal
plasticity in specific auditory cortices underlies visual
compensations in the deaf. Nature Neuroscience, 13,
1421–1427.
Lore, W. H., & Song, S. (1991). Central and peripheral visual
processing in hearing and nonhearing individuals. Bulletin of
the Psychonomic Society, 29, 437–440.
Meredith, M. A., Kryklywy, J., McMillan, A. J., Malhotra,
S., Lum-Tai, R., & Lomber, S. G. (2011). Crossmodal
reorganization in the early deaf switches sensory, but not
behavioral roles of auditory cortex. Proceedings of the
National Academy of Sciences, U.S.A., 108, 8856–8861.
Meyer, M., Baumann, S., Marchina, S., & Jancke, L. (2007).
Hemodynamic responses in human multisensory and
auditory association cortex to purely visual stimulation.
BMC Neuroscience, 8, 14.
Mitchell, T. V., & Maslin, M. T. (2007). How vision matters for
individuals with hearing loss. International Journal of
Audiology, 46, 500–511.
Morosan, P., Rademacher, J., Schleicher, A., Amunts, K.,
Schomann, T., & Zilles, K. (2001). Human primary
auditory cortex: Cytoarchitectonic subdivisions and
mapping into a spatial reference system. Neuroimage,
13, 684–701.
Morrone, M. C., Tosetti, M., Montanaro, D., Fiorentini, A., Cioni,
G., & Burr, D. C. (2000). A cortical area that responds
specifically to optic flow, revealed by fMRI. Nature
Neuroscience, 3, 1322–1328.
Motter, B. C., Steinmetz, M. A., Duffy, C. J., & Mountcastle, V. B.
(1987). Functional properties of parietal visual neurons:
Mechanisms of directionality along a single axis. Journal of
Neuroscience, 7, 154–176.
Nakamura, H., Kashii, S., Nagamine, T., Hashimoto, T., Honda,
Y., & Shibasaki, H. (2003). Human V5 demonstrated by
magnetoencephalography using random dot kinematograms
of different coherence levels. Neuroscience Research, 46,
423–433.
Nelissen, K., Vanduffel, W., & Orban, G. A. (2006). Charting the
lower superior temporal region, a new motion-sensitive
region in monkey superior temporal sulcus. Journal of
Neuroscience, 26, 5929–5947.
Neville, H. J., Bavelier, D., Corina, D., Rauschecker, J., Karni, A.,
Lalwani, A., et al. (1998). Cerebral organization for language
in deaf and hearing subjects: Biological constraints and
effects of experience. Proceedings of the National Academy
of Sciences, U.S.A., 95, 922–929.
Neville, H. J., & Lawson, D. (1987). Attention to central and
peripheral visual space in a movement detection task: An
event-related potential and behavioral study. II. Congenitally
deaf adults. Brain Research, 405, 268–283.
Noguchi, Y., Kaneoke, Y., Kakigi, R., Tanabe, H. C., &
Sadato, N. (2005). Role of the superior temporal region
in human visual motion perception. Cerebral Cortex,
15, 1592–1601.
Oram, M. W., Perrett, D. I., & Hietanen, J. K. (1993). Directional
tuning of motion-sensitive cells in the anterior superior
temporal polysensory area of the macaque. Experimental
Brain Research, 97, 274–294.
Palomares, M., Ales, J. M., Wade, A. R., Cottereau, B. R., &
Norcia, A. M. (2012). Distinct effects of attention on the
neural responses to form and motion processing: A SSVEP
source-imaging study. Journal of Vision, 12, 15.
Parasnis, I. (1983). Effects of parental deafness and early exposure
to manual communication on the cognitive skills, English
language skill, and field independence of young deaf adults.
Journal of Speech and Hearing Research, 26, 588–594.
Parasnis, I., & Samar, V. J. (1985). Parafoveal attention in
congenitally deaf and hearing young adults. Brain and
Cognition, 4, 313–327.
Pascual-Leone, A., & Hamilton, R. (2001). The metamodal
organization of the brain. Progress in Brain Research, 134,
427–445.
Pavani, F., & Bottari, D. (2012). Visual abilities in individuals
with profound deafness: A critical review. In M. M. Murray &
M. T. Wallace (Eds.), The neural bases of multisensory
processes. Boca Raton, FL: CRC Press/Taylor & Francis.
Puce, A., Allison, T., Gore, J. C., & McCarthy, G. (1995).
Face-sensitive regions in human extrastriate cortex studied
by functional MRI. Journal of Neurophysiology, 74,
1192–1199.
Retter, T. L., & Rossion, B. (2016). Uncovering the neural
magnitude and spatio-temporal dynamics of natural image
categorization in a fast visual stream. Neuropsychologia, 91,
9–28.
Sadato, N., Okada, T., Honda, M., Matsuki, K., Yoshida, M.,
Kashikura, K., et al. (2004). Cross-modal integration and
plastic changes revealed by lip movement, random-dot
motion and sign languages in the hearing and deaf. Cerebral
Cortex, 15, 1113–1122.
Samar, V. J., & Parasnis, I. (2007). Non-verbal IQ is correlated
with visual field advantages for short duration coherent
motion detection in deaf signers with varied ASL exposure
and etiologies of deafness. Brain and Cognition, 65,
260–269.
Sandmann, P., Dillier, N., Eichele, T., Meyer, M., Kegel, A.,
Pascual-Marqui, R. D., et al. (2012). Visual activation of
auditory cortex reflects maladaptive plasticity in cochlear
implant users. Brain, 135, 555–568.
Saygin, A. P., & Sereno, M. I. (2008). Retinotopy and attention
in human occipital, temporal, parietal, and frontal cortex.
Cerebral Cortex, 18, 2158–2168.
Scott, G. D., Karns, C. M., Dow, M. W., Stevens, C., & Neville,
H. J. (2014). Enhanced peripheral visual processing in
congenitally deaf humans is supported by multiple brain
regions, including primary auditory cortex. Frontiers in
Human Neuroscience, 8, 177.
Seltzer, B., Cola, M. G., Gutierrez, C., Massee, M.,
Weldon, C., & Cusick, C. G. (1996). Overlapping and
nonoverlapping cortical projections to cortex of the
superior temporal sulcus in the rhesus monkey: Double
anterograde tracer studies. Journal of Comparative
Neurology, 370, 173–190.
Seltzer, B., & Pandya, D. N. (1978). Afferent cortical connections
and architectonics of the superior temporal sulcus and
surrounding cortex in the rhesus monkey. Brain Research,
149, 1–24.
Seltzer, B., & Pandya, D. N. (1994). Parietal, temporal, and
occipital projections to cortex of the superior temporal
sulcus in the rhesus monkey: A retrograde tracer study.
Journal of Comparative Neurology, 343, 445–463.
Shiell, M. M., Champoux, F., & Zatorre, R. J. (2014).
Enhancement of visual motion detection thresholds in early
deaf people. PLoS One, 9, e90498.
Shiell, M. M., Champoux, F., & Zatorre, R. J. (2016).
The right hemisphere planum temporale supports
enhanced visual motion detection ability in deaf people:
Evidence from cortical thickness. Neural Plasticity, 2016,
7217630.
Smiley, J. F., Hackett, T. A., Ulbert, I., Karmos, G., Lakatos, P.,
Javitt, D. C., et al. (2007). Multisensory convergence in
auditory cortex: I. Cortical connections of the caudal superior
temporal plane in macaque monkeys. Journal of
Comparative Neurology, 502, 894–923.
Smith, K. R., Okada, K., Saberi, K., & Hickok, G. (2004). Human
cortical auditory motion areas are not motion selective.
NeuroReport, 15, 1523–1526.
Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic
atlas of the human brain. New York: Thieme.
Tootell, R. B., Reppas, J. B., Dale, A. M., Look, R. B., Sereno,
M. I., Malach, R., et al. (1995). Visual motion aftereffect in
human cortical area MT revealed by functional magnetic
resonance imaging. Nature, 375, 139–141.
Tyler, C. W., & Kaitz, M. (1977). Movement adaptation in the
visual evoked response. Experimental Brain Research, 27,
203–209.
Venezia, J. H., Vaden, K. I., Jr., Rong, F., Maddox, D., Saberi, K.,
& Hickok, G. (2017). Auditory, visual, and audiovisual
speech processing streams in superior temporal sulcus.
Frontiers in Human Neuroscience, 11, 174.
Wattam-Bell, J. (1991). Development of motion-specific cortical
responses in infancy. Vision Research, 31, 287–297.
Weeks, R., Horwitz, B., Aziz-Sultan, A., Tian, B., Wessinger,
C. M., Cohen, L. G., et al. (2000). A positron emission
tomographic study of auditory localization in the
congenitally blind. Journal of Neuroscience, 20,
2664–2672.
Zeki, S. M. (1978). Functional specialisation in the visual cortex
of the rhesus monkey. Nature, 274, 423–428.
Zimmermann, J., Goebel, R., De Martino, F., van de Moortele,
P. F., Feinberg, D., Adriany, G., et al. (2011). Mapping the
organization of axis of motion selective features in human
area MT using high-field fMRI. PLoS One, 6, e28716.