Stimulus Onset Asynchrony Affects Weighting-related
Event-related Spectral Power in Self-motion Perception

Ben Townsend, Joey K. Legere, Martin v. Mohrenschildt, and Judith M. Shedden

Abstract

■ Self-motion perception relies primarily on the integration of
the visual, vestibular, proprioceptive, and somatosensory sys-
tems. There is a gap in understanding how a temporal lag
between visual and vestibular motion cues affects visual–
vestibular weighting during self-motion perception. The beta
band is an index of visual–vestibular weighting, in that robust
beta event-related synchronization (ERS) is associated with
visual weighting bias, and robust beta event-related desynchro-
nization is associated with vestibular weighting bias. The present
study examined modulation of event-related spectral power dur-
ing a heading judgment task in which participants attended to
either visual (optic flow) or physical (inertial cues stimulating
the vestibular, proprioceptive, and somatosensory systems)
motion cues from a motion simulator mounted on a MOOG
Stewart Platform. The temporal lag between the onset of visual
and physical motion cues was manipulated to produce three lag
conditions: simultaneous onset, visual before physical motion
onset, and physical before visual motion onset. There were
two main findings. First, we demonstrated that when the
attended motion cue was presented before an ignored cue,
the power of beta associated with the attended modality was
greater than when visual–vestibular cues were presented simul-
taneously or when the ignored cue was presented first. This was
the case for beta ERS when the visual-motion cue was attended
to, and beta event-related desynchronization when the physical-
motion cue was attended to. Second, we tested whether the
power of feature-binding gamma ERS (demonstrated in audio-
visual and visual–tactile integration studies) increased when
the visual–vestibular cues were presented simultaneously versus
with temporal asynchrony. We did not observe an increase in
gamma ERS when cues were presented simultaneously, suggest-
ing that electrophysiological markers of visual–vestibular binding
differ from markers of audiovisual and visual–tactile integration.
All event-related spectral power reported in this study was
generated from dipoles projecting from the left and right motor
areas, based on the results of Measure Projection Analysis. ■

INTRODUCTION

The visual, vestibular, proprioceptive, and somatosensory
systems collect information about how an organism moves
through its environment, and integrate this information in
associated brain areas, such as medial superior temporal
area and ventral intraparietal area (for a review, see
DeAngelis & Angelaki, 2012), to produce a smooth, unified
perception of self-motion. One complicating factor in this
integration process is that each of these cues to motion is
perceived on different timelines. For example, self-motion
information from the visual system is perceived faster than
self-motion information from the vestibular system (e.g.,
RTs are ∼220 msec for light and ∼440 msec for galvanic
vestibular stimulation; Barnett-Cowan & Harris, 2009);
however, our perception of self-motion is a function of
multisensory integration. Understanding how the tempo-
ral factors of visual and vestibular perception affect multi-
sensory integration has been of interest to researchers in
many fields of science and engineering. For example,
understanding this construct has been a major focus for

McMaster University, Hamilton, Ontario, Canada

transfer of training research and for setting policies by
flight training administration authorities.

Given the different temporal trajectories of information
processing between sensory systems, the temporal inte-
gration of multisensory stimuli has long been of interest
to researchers. For example, in audiovisual integration,
direction-incongruent stimuli give rise to the ventriloquist
effect, in which the two stimuli are perceived as having the
same source despite a spatially separated origin (Alais &
Burr, 2004). This effect disappears when the synchrony
of the audiovisual stimuli exceeds ∼300 msec (Slutsky &
Recanzone, 2001). We still do not fully understand the
potential effect of temporal asynchrony on visual–
vestibular integration and self-motion perception, espe-
cially in the context of driving and flight motion-simulator
research. However, a recent study demonstrated that
changes in the velocity of a visual or physical self-motion
cue are most quickly detected when the stimuli are
aligned, compared with a 100-msec timing difference
(Kenney et al., 2020). Moreover, Rodriguez and Crane
(2021) demonstrated that visual-inertial (e.g., visual–
vestibular) heading perception is also sensitive to tempo-
ral misalignments of less than 250 msec between the
motion cues.

© 2023 Massachusetts Institute of Technology. Published under a
Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Journal of Cognitive Neuroscience 35:7, pp. 1092–1107
https://doi.org/10.1162/jocn_a_01994


Multisensory integration is also affected by attention
allocation (Macaluso et al., 2016). Attention can be volun-
tarily allocated toward a stimulus, a sensory modality, or a
specific region of space to achieve task goals (Li, Piëch, &
Gilbert, 2004). Jedoch, processing can also be involun-
tarily captured by sensory events, even when the attention
capturing signals are unrelated to the current goal-
directed activity (Öhman, Flykt, & Esteves, 2001). EEG is
a useful tool to explore the online processes related to the
interaction between attention and multisensory integra-
tion. The high temporal resolution of EEG has been effec-
tive in testing hypotheses related to synchronization of
neural oscillations as a mechanism for the integration of
information across sensory modalities (Senkowski,
Schneider, Foxe, & Engel, 2008). Synchronization of neu-
ral oscillations (event-related spectral power [ERSP]) is
quantified by measuring power of event-related synchro-
nizations (ERSs) and desynchronizations (ERDs) innerhalb
particular frequency bands (e.g., theta, alpha, beta,
gamma). One hypothesis about interpretation of neural
oscillations is that distinct spectral timelines index differ-
ent local cortical networks involved in sensory processing,
attention allocation, and multisensory integration (Siegel,
Donner, & Engel, 2012). Most studies that support the
spectral timelines hypothesis are based on audiovisual
or visuotactile integration (for a review, see Keil &
Senkowski, 2018). For example, Senkowski, Talsma,
Grigutsch, Herrmann, and Woldorff (2007) showed that
the closer in time the audiovisual stimuli were presented
together, the more feature binding-related gamma ERS
was elicited early after stimulus onset. This finding also
supports Singer and Gray’s (1995) temporal correlation
hypothesis, which suggests that oscillations within the
gamma band facilitate integration across sensory modali-
ties. As far as we know, there are few published studies
exploring how the onset timing of multisensory stimuli
affects EEG correlates of visual–vestibular integration.

Townsend, Legere, O’Malley, von Mohrenschildt, and
Shedden (2019) used a high-fidelity motion simulator
and a high-density EEG array to observe ERSP in response
to simultaneous-onset visual- and physical-motion stimuli.
To examine the effect of attention allocation to visual ver-
sus physical motion, in a blocked design, participants
made heading judgments to visual (or physical) cues only,
while ignoring the other modality. For each trial, headings
of the motion cues were either spatially congruent (e.g.,
heading was the same for visual and physical) or incongru-
ent (e.g., visual and physical headings differed). Impor-
tantly, in all conditions, the visual and physical cues to
self-motion were presented simultaneously. Measure Pro-
jection Analysis (MPA) identified cortical regions in the
premotor and sensory motor areas (Brodmann’s areas
[BAs] 6 Und 4) associated with motor processing. ERSP
analysis within these areas revealed sensitivity of theta-
(4–7 Hz), alpha- (8–12 Hz), and beta- (13–30 Hz) band
oscillations to attended visual versus physical self-motion
stimuli. Specifically, attending to the visual-motion

stimulus (while ignoring the physical-motion stimulus)
evoked earlier theta ERS and alpha ERD, whereas attention
to the physical-motion stimulus (while ignoring the visual-
motion stimulus) evoked longer-lasting and more power-
ful beta ERD. Complementary research suggests that theta
ERS is an index of heading processing (Townsend, Legere,
von Mohrenschildt, & Shedden, 2022; for a review, see
Buzsáki & Moser, 2013), and alpha ERD/ERS is associated
with focal attention and cognitive load (for a review, see
Klimesch, 2012). Most important for the present article,
previous research has indicated that beta ERD/ERS
indexed visual–vestibular weighting (Townsend et al.,
2019, 2022). For example, when attention was focused
on the visual-motion stimulus (while ignoring physical-
motion cues), beta ERS was stronger, whereas when atten-
tion was focused on the physical-motion stimulus (while
ignoring visual-motion cues), beta ERD was stronger
(Townsend et al., 2019). The purpose of the present article
was to further examine visual–vestibular weighting by
manipulating the timing of onset of the self-motion cues.
Previous research has demonstrated that the beta band
is an index of visual–vestibular weighting, and that atten-
tion allocation plays a key role in how weighting is distrib-
uted among multisensory inputs (Townsend et al., 2019,
2022). Those studies, however, did not investigate the
impact stimulus onset timing has on the process of
visual–vestibular weighting within self-motion perception.
Previous research has shown that discrepancies in the
onset timing of audiovisual stimuli can affect multisensory
weighting (Fister, Stevenson, Nidiffer, Barnett, & Wallace,
2016; Sheppard, Raposo, & Churchland, 2013). We need a
better understanding about how the interaction of atten-
tion allocation and temporal misalignment affect the
underlying cortical activity associated with visual–
vestibular integration during self-motion perception. The
goals of the present study were twofold. The first goal was
to examine the effect of attention allocation and temporal
asynchrony on induced ERSP, specifically the power and
time course of beta oscillations associated with visual–
vestibular weighting. The second goal was to examine
induced gamma oscillations. Previous multisensory
research (e.g., Senkowski et al., 2007) demonstrated more
powerful feature-binding gamma ERS when audiovisual
multisensory cue onsets were presented closer in time.
The present study extends this work by asking whether
feature-binding reflected by gamma ERS is similar for
visual–vestibular integration.

Participants attended to either physical (ignoring visual)
or visual (ignoring physical) motion cues (blocked design)
and discriminated between left and right self-motion head-
ings (random presentation within a block). There were
three SOA conditions: (1) visual motion onset 100 msec
before physical motion onset, (2) physical motion onset
100 msec before visual motion onset, and (3) simulta-
neous visual and physical motion onset. Given previous
research (Townsend et al., 2019, 2022), we hypothesized
that beta ERD would be most powerful when participants



attended to the physical-motion cues, and beta ERS would
be most powerful when participants attended to visual-
motion cues. This pattern, Jedoch, would be modulated
by the temporal lag conditions, such that beta ERD in
response to attention to physical motion would be
enhanced if the attended physical-motion cue was pre-
sented before the ignored visual-motion cue, and beta
ERS in response to attention to visual motion would be
enhanced if the attended visual-motion cue was presented
before the ignored physical-motion cue. Moreover, if
gamma ERS is most powerful during conditions of tempo-
ral synchrony (Senkowski et al., 2007), the present study
may provide evidence that gamma ERS is an index of
general processes related to multisensory binding and
integration across multiple sensory systems. If this is not
the case, feature binding-related gamma ERS may only
be specific to processes such as audiovisual and visual–
tactile integration.

METHODS

Participants

Thirty-six participants (20 women) were recruited from
the McMaster University psychology participant pool and
the McMaster community. The sample size was sufficient
based on a power analysis of data from our previous study
(Townsend et al., 2019; sample size of 37, effect size of 0.73,
0.05 error probability, 0.95 power, four measurements)
conducted by G*Power Software (Faul, Erdfelder,
Buchner, & Lang, 2009). Ages ranged from 17 to 23 years
(M = 18 years, SD = 1.30 years). Those recruited from the
participant pool were compensated with course credits.
All participants self-reported normal or corrected-to-
normal visual acuity and reported no major problems
with vertigo, motion sickness, or claustrophobia. The
experiment was approved by the Hamilton Integrated
Research Ethics Board and complied with the Canadian
tri-council policy on ethics.

Stimuli

Visual Motion Stimuli

Visual motion stimuli were presented on a 43-in. LCD
panel, 51 in. in front of the participant, subtending a visual
angle of 41°. The panel had a refresh rate of 60 Hz and a
resolution of 1920 × 1080 (1080P).
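
The reported visual angle can be sanity-checked from the viewing geometry. The sketch below assumes a 16:9 aspect ratio for the 43-in. (diagonal) panel, which the text does not state; the panel width derived from it is therefore an estimate:

```python
import math

def visual_angle_deg(extent, distance):
    """Visual angle (in degrees) subtended by an extent viewed head-on,
    using the standard 2 * atan(extent / (2 * distance)) formula."""
    return 2 * math.degrees(math.atan((extent / 2) / distance))

# Width of a 43-in. diagonal panel, assuming a 16:9 aspect ratio.
panel_width = 43 * 16 / math.hypot(16, 9)  # ~37.5 in.

# At a 51-in. viewing distance this gives ~40 degrees,
# close to the 41 degrees reported above.
angle = visual_angle_deg(panel_width, 51)
```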

The visual display, which contributed to the perception
of self-motion, was composed of a fixation cross in the cen-
ter of the display and two tracks on a gray surface. Each
track consisted of a series of yellow dashes perpendicular
to the length of the track, drawn in perspective to a vanish-
ing point so that the track appeared to extend into the dis-
tance. One track veered right, whereas the other veered
left, both at 35°, starting at the lower center of the display.
Both tracks together subtended a horizontal visual angle
of 33.69°. A horizon line was created by a gray surface upon

which the tracks lay, and a blue sky with white clouds
above, accentuating the perception of traveling along a
track into the distance. The perception of self-motion
along the track was created via a first-person viewpoint ani-
mation that simulated a forward trajectory to align with the
acceleration and perceived velocity that result from the
physical-motion cues (see Figure 1B and C for two tempo-
ral snapshots). The duration of the visual-motion stimulus
on each trial was 700 msec, which included a 200-msec
acceleration period followed by 500 msec at a fixed veloc-
ity. This was followed by a 960-msec pause in the final posi-
tion at the end of the track. At the completion of the trial
(1660 msec), the visual stimulus was reset to the starting
position of the tracks.

Physical Motion Stimuli

A motion simulator provided physical-motion stimuli. The
motion simulator cabin was supported by a MOOG
Stewart platform with six-degrees-of-freedom motion
(Moog series 6DOF2000E). Participants were seated in a
bucket-style car seat fixed to the cabin floor.

Each physical-motion stimulus consisted of the cabin
moving in a forward linear translation, 35° left or right
for 330 msec at 0.01 G. This forward acceleration was
presented as a precomputed parabolic movement of the
platform. This surge was followed by a corresponding
1330 msec washout (see Figure 1A). During the washout
period, the cabin was slowly moved to the original position
below threshold for detecting the direction of movement.
Figure 1A also illustrates motion noise above 60 Hz, which
is because of mechanical vibrations of the simulator. We
also presented very small movements in random direc-
tions other than the forward motion that simulated the
feel and sound of wheels on the road, and which also
helped to mask mechanical vibrations and direction of
washout motion. As can be seen in the figure, the mechan-
ical vibrations and injected noise have very low energy,
which is experienced as a rumbling accompanying the per-
ception of forward motion. The acceleration intensity was
selected based on preliminary testing to achieve a clear
perception of forward motion within the spatial restric-
tions of the movement of the platform while minimizing
compensating movements of the head, neck, or upper
body (Townsend et al., 2019). Physical forward accelera-
tions were well above vestibular thresholds of .009 G, as
discussed by Kingma (2005). The motion force, s(t), was
described by:

s(t) =
    A1     if 0 ≤ t ≤ tp
    −A2    if tp ≤ t ≤ tb
    A2     if tb ≤ t ≤ te
    0      otherwise

where t represents time in seconds, tp represents present
Zeit, tb represents the breakpoint, and te represents the
end time. A1 describes the initial forward acceleration,
−A2 describes the initial (backward) acceleration of the



Figure 1. (A) An example of the profile of physical motion measured during a single trial by an accelerometer (red line); the high-frequency
component represents the high sensitivity of the accelerometer (sensitive to 0.0001 G, sampling at 1 kHz). Note that the frequencies above 60 Hz
represent mechanical vibrations of the motion system and simulator. The x axis represents time, and the y axis represents acceleration in g
(1 g = 9.81 m/sec²). The acceleration profile is similar for 35° left and 35° right physical-motion trials. (B) The visual display before the onset of
motion; at this point, the participant does not know whether visual motion will indicate travel along the left or right track. (C) A still screen capture
of the dynamic visual motion display at approximately 1 sec after visual onset of a left visual motion trial.

washout, and A2 describes the deceleration of the
washout. Acceleration was measured using an Endevco
accelerometer (model number 752A13), calibrated to
approximately 1-mV/g sensitivity.
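
The piecewise profile above can be sketched numerically. In the sketch below, the surge magnitude (0.01 G) and duration (330 msec) come from the text, whereas the washout magnitude a2 and the breakpoints tb and te are illustrative placeholders, not the platform's actual settings:

```python
def motion_force(t, a1=0.01, a2=0.002, tp=0.33, tb=0.9, te=1.66):
    """Piecewise acceleration s(t) in G: a forward surge (A1) followed by
    a two-phase, sub-threshold washout (-A2, then A2) back to the start
    position. a2, tb, and te are illustrative values only."""
    if 0 <= t <= tp:
        return a1       # initial forward acceleration (the 330-msec surge)
    if tp < t <= tb:
        return -a2      # backward acceleration starting the washout
    if tb < t <= te:
        return a2       # deceleration ending the washout
    return 0.0

# Sample the 1660-msec trial at 10-msec resolution.
profile = [motion_force(t / 1000) for t in range(0, 1661, 10)]
```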

Procedure

The entire session was between 1.5 and 2 hr in duration.
The timeline of the session included collection of demo-
graphic information, followed by completion of one
practice block (30 trials; ∼2 min), application of EEG elec-
trodes (25 min), completion of four experimental blocks
(60 min), and participant clean up and debriefing (15 min).
There were 796 experimental trials divided into four
blocks of 199 trials each. Participants fixated on the fixation
cross for the duration of each trial; a blink break was

provided every 15 trials. The attend-visual (AV) and
attend-physical (AP) tasks were blocked to avoid task
switching effects. The task required participants to direct
attention to the visual-motion stimulus and ignore the
physical-motion cues (AV task) or to direct attention to
the physical-motion stimulus and ignore the visual-motion
cues (AP task). They responded with a button press to
indicate whether the direction of the attended-modality
motion was left or right heading.

Given the importance of collecting enough clean data
with correct responses in each attention condition for
EEG analyses, and given that participants have a more dif-
ficult time ignoring the visual while attending the physical
stimulus (Townsend et al., 2019), we collected three AP
blocks compared with one AV block. Presentation order
was controlled so that the AV block was presented as the


first, second, or third of the four blocks. Moreover, to
ensure that participants maintained attention to the
intended modality (especially during AP blocks), each
block contained eight catch trials in which the ignored
modality heading was incongruent with the attended
modality heading.

SOA was manipulated to produce simultaneous (S),
visual-first ( V1st), and physical-first (P1st) Bedingungen. In
the simultaneous condition, visual and physical motion
cues were onset at the same time. In the V1st condition,
the visual motion stimulus was onset 100 msec before the
physical motion, and in the P1st condition, the physical
motion stimulus began 100 msec before the visual motion.
The duration of 100 msec was selected as the SOA based
on previous research that demonstrated a window in
which temporal alignment of visual–vestibular cues
speeds up the perception of self-motion (Kenney et al.,
2020; O’Malley, Townsend, von Mohrenschildt, &
Shedden, 2015). This research provided evidence that a
temporal misalignment of 100 msec delayed the responses
to the self-motion cues, relative to visual–vestibular cues
that were closer in temporal alignment. Thus, the benefits
of multisensory integration were weakened, which was the
case regardless of which motion cue was being attended.
There were an equal number of left and right heading
trials in each block, randomly presented.

EEG Data Acquisition

EEG data were collected using the BioSemi ActiveTwo
electrophysiological system (www.biosemi.com) with
128 sintered Ag/AgCl scalp electrodes. Four additional
electrodes recorded eye movements (two placed laterally
from the outer canthi and two below the eyes on the upper
cheeks). Continuous signals were recorded using an open
pass band from direct current to 150 Hz and digitized at
1024 Hz.

EEG-Vorverarbeitung

All processing was performed in MATLAB 2014a (The
MathWorks) using functions from EEGLAB (Delorme &
Makeig, 2004) on the Shared Hierarchical Academic
Research Computing Network (www.sharcnet.ca). EEG
data were band-pass filtered between 1 Und 50 Hz, Und
epoched from 1000 msec prestimulus to 2000 msec post-
stimulus. Each epoch was baseline corrected using the
whole-epoch mean (Groppe, Makeig, & Kutas, 2009).
Channels with a standard deviation exceeding 200 μV were
interpolated after referencing (on average, 0.97 channels
interpolated per participant, with a total of 35 channels
interpolated). Bad epochs were rejected if they had voltage
spikes exceeding 500 μV or violated EEGLAB’s joint prob-
ability functions (Delorme, Sejnowski, & Makeig, 2007).
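
The thresholding steps above can be illustrated as follows. This is a simplified NumPy/SciPy sketch, not the MATLAB/EEGLAB pipeline actually used: it applies the 1–50 Hz band-pass, whole-epoch baseline, the 200-μV channel criterion, and the 500-μV spike criterion, but omits referencing, channel interpolation, and the joint-probability rejection step:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1024  # Hz, the digitization rate reported above

def bandpass(data, low=1.0, high=50.0, fs=FS, order=4):
    """Zero-phase 1-50 Hz band-pass along the time axis (SOS form for
    numerical stability at the low cutoff)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

def preprocess(epochs):
    """epochs: (n_epochs, n_channels, n_samples) array in microvolts.
    Returns baseline-corrected epochs with spike epochs removed, plus the
    indices of channels whose standard deviation exceeds 200 uV (these
    would be interpolated in the full pipeline)."""
    epochs = bandpass(epochs)
    epochs = epochs - epochs.mean(axis=-1, keepdims=True)  # whole-epoch baseline
    bad_channels = np.where(epochs.std(axis=(0, 2)) > 200.0)[0]
    good = np.abs(epochs).max(axis=(1, 2)) <= 500.0        # spike rejection
    return epochs[good], bad_channels

# Synthetic demo: 10 epochs, 8 channels, 3 sec; one epoch carries a large
# in-band artifact so it gets rejected.
rng = np.random.default_rng(0)
demo = rng.normal(0.0, 20.0, size=(10, 8, 3072))
t = np.arange(demo.shape[-1]) / FS
demo[3, 2] += 800.0 * np.sin(2 * np.pi * 10 * t)
clean, bads = preprocess(demo)
```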

Single-subject EEG data were submitted to an extended
adaptive mixture independent component (IC) analysis
(Palmer, Kreutz-Delgado, & Makeig, 2012) with an n −

(1 + interpolated channels) principal components analy-
sis reduction (Makeig, Bell, Jung, & Sejnowski, 1995).
Decomposing an EEG signal into ICs allows for analysis
of each individual signal produced by the brain that would
otherwise be indistinguishable. Dipoles were then fit to
each IC using the fieldtrip plugin for EEGLAB following
adaptive mixture IC analysis (Oostenveld, Fries, Maris, &
Schoffelen, 2011). ICs for which dipoles were located out-
side the brain, or explained less than 85% of the weight
variance, were excluded from further analysis. On average,
20.47 ICs per participant were excluded from analysis.

ERSP Measure Projection Analysis

ERSP was computed for each of the remaining ICs. Fifty
log-spaced frequencies between 3 Und 50 Hz were com-
puted, with three cycles per wavelet at the lowest fre-
quency up to 25 at the highest. MPA was used to cluster
ICs across participants using the Measure Projection
Toolbox for MATLAB (Bigdely-Shamlo, Mullen, Kreutz-
Delgado, & Makeig, 2013). MPA is a method of categoriz-
ing the location and consistency of EEG measures, such as
ERSP, across single-subject data into 3-D domains. Each
domain is a subset of ICs that are identified as having
spatially similar dipole models, as well as similar cortical
Aktivität (measure-similarity). MPA fits the selected ICs into
a 3-D model of the brain, composed of a cubic space grid
with 8-mm spacing according to normalized Montreal
Neurological Institute space. The MPA toolbox identified
cortical regions of interest by incorporating the probabilis-
tic atlas of human cortical structures provided by the
Laboratory of Neuroimaging project (Shattuck et al.,
2008). Voxels that fell outside of the brain model (muscle
artifacts, etc.) were excluded from the analysis.

We then calculated local convergence values, using an
algorithm based on Bigdely-Shamlo et al. (2013), which
deals with the multiple comparisons problem. Local con-
vergence calculates the measure-similarity of dipoles
within a given domain and compares them with random-
ized dipoles. A pairwise IC similarity matrix was created by
estimating the signed mutual information between IC-pair
ERSP measure vectors, assuming a Gaussian distribution,
to compare dipoles. As explained in detail by Bigdely-
Shamlo et al. (2013), signed mutual information was esti-
mated to improve the spatial smoothness of the obtained
MPA significance value beyond determining similarity of
dipoles through correlation. Bootstrap statistics were used
to obtain a significance threshold for convergence at each
location of our 3-D brain model. Following past literature,
we set the raw voxel significance threshold to p < .001 (Chung, Ofori, Misra, Hess, & Vaillancourt, 2017; Bigdely- Shamlo et al., 2013). Our analyses focused on two relevant domains: the right motor area, with the greatest concentration of dipoles consistent with right premotor and SMA (BA 6), and the left motor area, with the greatest concentration of dipoles consistent with left premotor and SMA (BA 6). For the 1096 Journal of Cognitive Neuroscience Volume 35, Number 7 l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u / j / o c n a r t i c e - p d l f / / / 3 5 7 1 0 9 2 2 1 4 0 2 4 0 / / j o c n _ a _ 0 1 9 9 4 p d . f b y g u e s t t o n 0 7 S e p e m b e r 2 0 2 3 right motor area, each participant contributed, on average, 2.33 (±1.53) ICs, with each participant contributing at least one IC, with a range from 1–7 ICs. For the left motor area, each participant contributed, on average, 2.19 (±1.51) ICs. There were five participants who did not contribute to this domain. The range of contributed ICs was 0–6. ERSPs were computed for each experimental condition within each domain calculated by MPA. Bootstrap statistics were used to assess differences in ERSP between condi- tions to uncover main effects of task and SOA. Differences at each power band were computed by projecting the ERSP for each condition to each voxel in the domain. This projection was weighted by dipole density per voxel and then normalized by the total domain voxel density for each participant. Analysis of projected source measures were separated into discrete spatial domains by threshold- based affinity propagation clustering based on a similarity matrix of pairwise correlations between ERSP measure values for each position. Following Chung et al. (2017), we used the maximal exemplar-pair similarity, which ranges from 0–10 to set a value of 0.8 (Chung et al., 2017; Ofori, Coombes, & Vaillancourt, 2015; Bigdely-Shamlo et al., 2013). 
RESULTS

Behavioral Results

Behavioral data were analyzed with two 2 × 3 repeated-
measures ANOVAs for measures of judgment accuracy
and RT. Outliers were defined as trials with RTs greater
than 3 SDs above or below the mean in each condition
and were eliminated from all further analyses. The
Greenhouse–Geisser correction was applied to all effects
that violated Mauchly's test of sphericity. All behavioral
results are illustrated in Figure 2.

Accuracy

Participants were more accurate at discriminating direc-
tion in the attend-visual task (M = 99%, SE = .003) than
the attend-physical task (M = 95%, SE = .01), F(1, 35) =
10.50, p = .003, ηp² = .23. Moreover, there was a main
effect of SOA on accuracy (Greenhouse–Geisser cor-
rected), F(1.69, 59.02) = 5.77, p = .03, ηp² = .14, and a
Task × SOA interaction, F(2, 70) = 5.00, p = .009,
ηp² = .13. Bonferroni-corrected pairwise comparisons sup-
ported the observation that the SOA effects were apparent
during the attend-physical task only; there were no signif-
icant differences in accuracy between any of the SOA
conditions during the attend-visual task. More specifically,
participants were more accurate in the attend-physical
physical-first (AP(P1st)) condition (M = 95.9%, SE = .01)
than the attend-physical visual-first (AP(V1st)) condition
(M = 94.10%, SE = 0.02; p = .007).

Response Time

Participants were faster at discriminating direction in the
attend-visual task (M = 1018 msec, SE = 90.20) than the
attend-physical task (M = 1409 msec, SE = 78.72), F(1,
35) = 39.43, p < .001, ηp² = .53. There was a main effect
of SOA, F(2, 70) = 519.35, p < .001, ηp² = .94, such that
responses were fastest in the V1st conditions (M =
1189 msec, SE = 6.10), followed by the simultaneous
conditions (M = 1317 msec, SE = 5.88), and slowest in
the P1st conditions (M = 1451 msec, SE = 6.16).
There was a trend toward a Task × SOA interaction on RTs
(Greenhouse–Geisser corrected), F(1.52, 53.22) = 3.48,
p = .05, ηp² = .09, such that Bonferroni-corrected pairwise
comparisons revealed RT differences across conditions in
both attend-physical and attend-visual tasks. During the
attend-visual task, responses were faster for the visual-first
(AV(V1st)) trials (M = 899 msec, SE = 92.99) compared
with simultaneous (AV(S)) trials (M = 1020 msec, SE =
90.19; p < .001), which were in turn faster than physical-
first (AV(P1st)) trials (M = 1135 msec, SE = 88.36;
p < .001). Likewise, during the attend-physical task,
responses were faster for the AP(V1st) trials (M =
1269 msec, SE = 77.69) compared with simultaneous
(AP(S)) trials (M = 1406 msec, SE = 79.04; p < .001),
which were in turn faster than AP(P1st) trials (M =
1552 msec, SE = 80.12; p < .001). Thus, two important
observations are that (1) participants are faster overall
when attending to visual motion, but importantly, (2) both
attend-visual and attend-physical conditions are highly
sensitive to which stimulus was presented first.

Figure 2. Behavioral data. (A) Boxplots for accuracy data showing post hoc simple effects within each task. (B) Boxplots for RT data showing
post hoc simple effects within each task. All p values were corrected for multiple comparisons using Bonferroni correction (*p < .05, **p < .001).
AVV = attend-visual visual first; AVS = attend-visual simultaneous; AVP = attend-visual physical first; APV = attend-physical visual first;
APS = attend-physical simultaneous; APP = attend-physical physical first.
Exploring the ERSP results provides insights into how the
temporal order of stimuli may be affecting multisensory
integration and thus leading to differences in accuracy
and RTs.

Oscillatory Power

Effects of SOA in Attend-Visual Task

Figure 3 presents a comparison of the left and right motor
areas to illustrate the effect of the timing of the stimulus
onset on the cortical activity during the attend-visual
conditions in both MPA domains. All ERSP represents a
difference in oscillatory power compared with baseline
(pretrial) cortical activity, where an ERS represents more
spectral power than baseline and an ERD represents less
spectral power than baseline. The 1000-msec baseline
EEG was recorded during the ISI before each trial, while
the simulator was stationary and participants were fixating
on the fixation cross. Figure 3A shows the left motor area,
with the highest dipole density in the premotor and SMA
(BA 6), and Figure 3D shows the right motor area, with
the highest dipole density in the premotor and SMA
(BA 6). In Panels B (left motor) and E (right motor), we
show the associated ERSP plots for the AV(V1st), AV(S),
and AV(P1st) conditions. The ERSP plots are followed by
bootstrapped comparisons (α = .05) between each possible
pair of conditions for the left (Panel C) and right (Panel F)
motor areas. The following sections describe observations
of the activity changes associated with experimental condi-
tions across the theta, alpha, beta, and gamma frequency
bands. All of the comparisons outlined in the following
sections were significant at p < .05.

Theta-band latency differences. The AV(P1st) condition
elicited theta ERS significantly later than the AV(S) and
AV(V1st) conditions. Specifically, in both the left and right
motor areas (Panels C and F, respectively), AV(S) elicited
greater theta ERS from ∼100 msec to 200 msec post
stimulus and AV(P1st) elicited greater theta ERS later in
the trial, from ∼500 msec to 950 msec post stimulus.
Likewise, AV(V1st) elicited greater theta ERS from stimulus onset to 300 msec post stimulus and AV(P1st) elicited greater theta ERS from ∼500 msec to 1000 msec post stimulus. In the left and right Alpha-band power differences. motor areas (C and F, respectively), AV(P1st) elicited the strongest alpha ERD, compared with AV(S) (∼750– 1500 msec poststimulus) and AV( V1st) (∼600–1500 msec poststimulus), and AV(S) elicited stronger alpha ERD than AV( V1st) (∼550–1500 msec poststimulus). Thus, in general, alpha ERD AV(P1st) > AV(S) > AV( V1st).
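The baseline-normalization convention described above (ERSP as the change in spectral power relative to the pretrial baseline, with ERS above and ERD below it) can be sketched as follows. This is an illustrative reconstruction on simulated data using SciPy's spectrogram, not the authors' EEGLAB pipeline; the sampling rate and window parameters are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                                   # Hz, assumed sampling rate
rng = np.random.default_rng(0)
trial = rng.standard_normal(2 * fs)        # 1-s pretrial baseline + 1-s trial

# Time-frequency decomposition of the single simulated trial.
f, t, power = spectrogram(trial, fs=fs, nperseg=64, noverlap=48)

# Mean power per frequency during the pretrial baseline window.
baseline = power[:, t < 1.0].mean(axis=1, keepdims=True)

# ERSP in dB: positive values = ERS (more power than baseline),
# negative values = ERD (less power than baseline).
ersp_db = 10 * np.log10(power / baseline)

beta_band = (f >= 13) & (f <= 30)          # beta rows of the ERSP matrix
print(ersp_db[beta_band].mean())
```

In practice the ERSP is averaged over many trials per condition; the same baseline-division step is what makes the ERS/ERD sign convention meaningful.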

Beta-band power differences. Much like the results in the alpha band, we found that the earlier the physical motion was presented, the stronger the elicited beta-band ERD. In the left and right motor areas (C and F, respectively), AV(P1st) elicited the strongest beta ERD, compared with AV(S) (∼500–1500 msec poststimulus) and AV(V1st) (∼400–1500 msec poststimulus), and AV(S) elicited stronger beta ERD than AV(V1st) (∼300–1000 msec poststimulus). Thus, in general, beta ERD AV(P1st) > AV(S) > AV(V1st).

Gamma-band power differences. AV(V1st) elicited a more powerful gamma ERS than AV(P1st) from ∼600 to 1200 msec poststimulus in the right motor area (F).

Effects of SOA in Attend-Physical Task

Figure 4 presents a comparison of the same left and right motor areas as Figure 3 to illustrate the effect of stimulus onset timing on cortical activity during the attend-physical conditions in both MPA domains. All of the comparisons outlined in the following sections were significant at p < .05.

Theta-band latency differences. The AP(P1st) condition elicited theta ERS significantly later than the AP(S) and AP(V1st) conditions. Specifically, in both the left and right motor areas (C and F, respectively), AP(S) elicited greater theta ERS from stimulus onset to ∼300 msec post stimulus, and AP(P1st) elicited greater theta ERS later in the trial, from ∼500 msec to 600 msec post stimulus. Likewise, AP(V1st) elicited greater theta ERS from stimulus onset to ∼400 msec post stimulus, and AP(P1st) elicited greater theta ERS from ∼500 msec to 600 msec post stimulus.

Alpha-band power differences. In the left and right motor areas (C and F, respectively), AP(P1st) elicited the strongest alpha ERD, compared with AP(S) (∼700–1500 msec poststimulus) and AP(V1st) (∼600–1500 msec poststimulus), and AP(S) elicited stronger alpha ERD than AP(V1st) (∼600–1500 msec poststimulus). Thus, in general, alpha ERD AP(P1st) > AP(S) > AP(V1st).

Beta-band power differences. In the left and right motor areas (C and F, respectively), AP(P1st) elicited the strongest beta ERD, compared with AP(S) (∼550–1500 msec poststimulus) and AP(V1st) (∼500–1500 msec poststimulus), and AP(S) elicited stronger beta ERD than AP(V1st) (∼800–1200 msec poststimulus). Thus, in general, beta ERD AP(P1st) > AP(S) > AP(V1st).
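The pairwise "bootstrapped comparisons (p < .05)" reported throughout these sections can be sketched with a simple permutation-style resampling of trial-level ERSP values. This is an illustrative stand-in rather than EEGLAB's actual routine, and the data are simulated; the condition names mirror those in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated trial-level beta-band ERSP values (dB) for two conditions:
# in this sketch, AP(P1st) shows a deeper ERD (more negative) than AP(S).
ap_p1st = rng.normal(-2.0, 1.0, size=60)
ap_s = rng.normal(-1.0, 1.0, size=60)

observed = ap_p1st.mean() - ap_s.mean()

# Build a null distribution by shuffling condition labels.
pooled = np.concatenate([ap_p1st, ap_s])
n_boot = 5000
null = np.empty(n_boot)
for i in range(n_boot):
    shuffled = rng.permutation(pooled)
    null[i] = shuffled[:60].mean() - shuffled[60:].mean()

# Two-tailed p value: how often a label-shuffled difference is as
# extreme as the observed condition difference.
p = (np.abs(null) >= abs(observed)).mean()
print(p < .05)
```

The same logic extends to each time-frequency point of the ERSP, which is why the significant comparisons appear as boxed time-frequency regions in the figures.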

Journal of Cognitive Neuroscience, Volume 35, Number 7

Figure 3. Attend-visual task. Left motor area (A, B, and C) and right motor area (D, E, and F) identified by MPA and respective ERSP analysis. The ERSP plots show time (msec) across the x axis and frequency of the EEG signal along the y axis. Panels B (left) and E (right) show the associated ERSP plots for the attend-visual visual first (AV(V1st)), attend-visual simultaneous (AV(S)), and attend-visual physical first (AV(P1st)) conditions. Panels C (left motor area) and F (right motor area) show the bootstrapped comparisons (p < .05) between each possible pair of conditions. ERS power is depicted in yellow/red, ERD power is depicted in blue, and green shows no difference in spectral power compared with baseline. MPA motor areas: (A and D) 3-D representations of the brain, with the yellow region representing the left motor area and the blue region representing the right motor area. The greatest concentration of dipoles in the left and right regions was consistent with premotor and SMAs (BA 6). (B and E) ERSP plots for each condition. (C and F) Bootstrapped comparisons examine each possible pair of conditions; frequency and time of significant comparisons are shown by the colored boxes. Both left and right motor areas show similar conditional differences. Theta: AV(V1st) and AV(S) elicit theta ERS significantly earlier than AV(P1st) (white boxes). Alpha: AV(P1st) elicits stronger alpha ERD than AV(S) and AV(V1st), and AV(S) elicits stronger alpha ERD than AV(V1st) (black boxes). Beta: AV(P1st) elicits stronger beta ERD than AV(S) and AV(V1st), and AV(S) elicits stronger beta ERD than AV(V1st) (brown boxes). Gamma: Differences in gamma existed only in the right motor area: the AV(V1st) condition elicited significantly stronger gamma ERS than AV(P1st) (red boxes).

Figure 4. Attend-physical task. Left motor area (A, B, and C) and right motor area (D, E, and F) identified by MPA and respective ERSP analysis. The ERSP plots show time (msec) across the x axis and frequency of the EEG signal along the y axis. Panels B (left) and E (right) show the associated ERSP plots for the attend-physical visual first (AP(V1st)), attend-physical simultaneous (AP(S)), and attend-physical physical first (AP(P1st)) conditions. Panels C (left motor area) and F (right motor area) show the bootstrapped comparisons (p < .05) between each possible pair of conditions. ERS power is depicted in yellow/red, ERD power is depicted in blue, and green shows no difference in spectral power compared with baseline. MPA motor areas: (A) and (D) show 3-D representations of the brain, with the yellow region representing the left motor area and the blue region representing the right motor area. The greatest concentration of dipoles in the left and right regions was consistent with premotor and SMAs (BA 6). (B and E) ERSP plots for each condition. (C and F) Bootstrapped comparisons examine each possible pair of conditions; frequency and time of significant comparisons are shown by the colored boxes. Both left and right motor areas show similar conditional differences. Theta: AP(V1st) and AP(S) elicit theta ERS significantly earlier than AP(P1st) (white boxes). Alpha: AP(P1st) elicits stronger alpha ERD than AP(S) and AP(V1st), and AP(S) elicits stronger alpha ERD than AP(V1st) (black boxes). Beta: AP(P1st) elicits stronger beta ERD than AP(S) and AP(V1st), and AP(S) elicits stronger beta ERD than AP(V1st) (brown boxes).

Effects of Attention Allocation across SOA Conditions

Figure 5 presents the same right motor area as Figures 3 and 4 to illustrate the interaction of stimulus onset timing and attention allocation. We compared cortical activity between conditions of attention allocation at each level of the SOA condition (i.e., AV(S) vs. AP(S), AV(V1st) vs. AP(V1st), and AV(P1st) vs. AP(P1st)). Similar results were found in the left motor area. All of the comparisons outlined in the following sections were significant at p < .05.

Theta-band power differences. AV(S) elicited a more powerful theta ERS than AP(S) from ∼250 msec to 400 msec post stimulus (C).

Alpha-band power differences. In the right motor area (A), AV(S) elicited a stronger alpha ERD compared with AP(S) (∼50–550 msec poststimulus) (C). AP(V1st) elicited greater alpha ERD than AV(V1st) from ∼800 msec to the end of the trial (D).

Beta-band power differences. In the right motor area (A), AP(P1st) elicited a stronger beta ERD than AV(P1st) from ∼550 to 1500 msec poststimulus (B), AV(S) elicited a stronger beta ERS than AP(S) from ∼800 msec to the end of the trial (C), and AV(V1st) elicited a more powerful beta ERS than AP(V1st) from ∼700 msec to the end of the trial (D).

Figure 5. Right motor area identified by MPA and respective ERSP analysis. The ERSP plots show time (msec) across the x axis and frequency of the EEG signal along the y axis. (B), (C), and (D) show the associated ERSP plots for the attend-physical and attend-visual conditions at each level of the SOA condition, and the bootstrapped comparisons (p < .05) between each pair of conditions. ERS power is depicted in yellow/red, ERD power is depicted in blue, and green shows no difference in spectral power compared with baseline. MPA right motor area: (A) 3-D representation of the brain, with the blue region representing the right motor area. The greatest concentration of dipoles in the right region was consistent with premotor and SMAs (BA 6). (B, C, and D) Bootstrapped comparisons examine each possible pair of conditions; frequency and time of significant comparisons are shown by the colored boxes. Theta: AV(S) elicits stronger theta ERS than AP(S) (C; white box). Alpha: AV(S) elicits stronger alpha ERD than AP(S) (C), and AP(V1st) elicits stronger alpha ERD than AV(V1st) (D; black boxes). Beta: AP(P1st) elicits stronger beta ERD than AV(P1st) (B), AV(S) elicits stronger beta ERS than AP(S) (C), and AV(V1st) elicits stronger beta ERS than AP(V1st) (D; brown boxes).

DISCUSSION

Behavioral research has demonstrated a temporal binding window for visual–vestibular integration, in which multisensory integration affects heading perception, temporal order judgments, and attention allocation (e.g., Rodriguez & Crane, 2021; Shayman et al., 2018). Research exploring the cortical processes underlying this temporal window is currently scarce. To better understand the online processes related to multisensory temporal binding, we must look to literature focused on the integration of other senses, such as audiovisual or visuotactile integration. Studies such as Senkowski et al. (2007) have demonstrated that the closer audiovisual stimuli are presented temporally, the more powerful the elicited feature-binding gamma ERS response. Past multisensory research has demonstrated a Gaussian integration window, in which integration breaks down at a temporal asynchrony specific to the senses being integrated (e.g., Rodriguez & Crane, 2021). The present study explored how EEG oscillations related to attention and multisensory weighting in self-motion perception (theta, alpha, and beta; Townsend et al., 2019, 2022) and multisensory feature binding (gamma; Senkowski et al., 2007) were affected by varying conditions of SOA. All differences in cortical activity discussed are projected from the motor area (likely including integrative areas such as the ventral intraparietal area and medial superior temporal area) based on the MPA, which identified ROIs across participants.

The Effects of Timing Onset within an Attended Modality

Recent research by Townsend et al. (2019, 2022) showed that theta, alpha, and beta oscillations reveal brain networks involved in the perception of self-motion. Moreover, the power of these individual oscillations changed dynamically depending on which sensory inputs were attended to.
Taken together, our two previous studies demonstrated that the beta band is most sensitive to changes in visual–vestibular weighting. Specifically, these studies showed that a strong beta ERS is an electrophysiological signature of heavy visual weighting, and a strong beta ERD is a signature of vestibular weighting. The current study revealed changes in the same spectral bands as the previously mentioned studies and contributed additional key insights to the understanding of self-motion perception. One robust result we observed was that when an attended motion cue was presented before an ignored cue, the power of the beta oscillation associated with a weighting bias toward the attended modality (ERS for visual, ERD for vestibular) was greater than during simultaneous presentation of the attended and ignored cues. This result suggests that the power of weighting-related beta oscillations during self-motion perception is sensitive to the timing of stimulus onset, and not just attention allocation. Regardless of which modality is attended to, the earlier the attended motion cue is presented relative to the ignored cue, the more powerful the weighting-related ERSP. The inverse was true when the ignored cues were presented before the attended cues: beta ERS was less powerful in the AV(P1st) condition versus AV(S), and beta ERD was less powerful in the AP(V1st) condition versus AP(S).

The beta cycle has long been thought to reflect the initiation and termination of motor output (for a review, see Kilavik, Zaepffel, Brovelli, MacKay, & Riehle, 2013). Contrary to this hypothesis, Townsend et al. (2019, 2022) demonstrated a beta rebound during passive full-body motion that was induced by attention, and suggested that beta oscillations during motor processing may actually reflect perceptual weighting of the visual, vestibular, and proprioceptive systems. The beta rebound may reflect the inhibition of processing of the physical-motion stimuli, considering that visual–vestibular integration is a subadditive process. Subadditive inhibition typically occurs during integration when there is a discrepancy in the reliability of multiple sensory inputs (Angelaki, Gu, & DeAngelis, 2009). Townsend et al. (2022) showed that participants performed the heading discrimination task at 99% accuracy in both visual-only and physical-motion-only conditions (the same motion stimuli as in the current study). Considering that there were likely no significant differences in reliability between the two sensory inputs, we believe that the temporal advantage caused by the SOA led to strong inhibitory responses during integration.

Our behavioral and EEG results fall in line with Townsend et al. (2019, 2022). Similar to our previous research, participants' average accuracy on the heading discrimination task ranged between conditions from 98% to 100%. We believe the oscillatory differences in the beta band between the stimulus onset timing conditions may be a product of the perceptual weights being changed by the SOA. For example, the processing of the visual stimulus during the AV(V1st) condition began 100 msec before the processing of the physical-motion stimulus. This perceptual head start could have increased the weighting in favor of the visual stimulus, more so than in the AV(S) condition. A similar weighting bias may have taken place during the attend-physical conditions, as we found similar results (but in beta ERD). These power differences in ERSP did not result in differences in accuracy, however (attend-visual 99% accuracy, attend-physical 95% accuracy). We believe that the tasks may not have been sensitive enough to capture correlations between behavioral differences and oscillatory power. RTs, on the other hand, were affected by the SOA.
Keeping in mind that RTs were measured from the onset of the to-be-attended stimulus, RTs were fastest when the visual-motion cues were presented first, regardless of whether visual or physical cues were attended. In contrast, RTs were slowest when the physical-motion cues were presented first, regardless of which cue was attended. The visual system is dominant over the vestibular system, as reported in many studies (e.g., Angelaki et al., 2009), and it is not surprising that we see this RT effect with 100-msec SOAs. Visual cues also lead to faster perceptual processing compared with vestibular cues (Barnett-Cowan & Harris, 2013), and the visual cue would have provided stronger priming than the vestibular cue when attention was directed to the opposite cue. Thus, RTs benefited more when the visual-motion cue was presented first. The present study clearly demonstrates that the timing of stimulus onset is a critical component of the visual–vestibular weighting process and is indexed by dynamic changes in the beta band.

The Interaction of Stimulus Timing and Attentional Selection

Not only did we find that the timing of stimulus onsets affected ERSP, we also found an interaction between the timing of onsets and attention allocation. This result has a direct application to pilot training; for example, current policies of Transport Canada and the Federal Aviation Administration require physical cues to motion to precede visual cues to motion during pilot simulator training. Pilots are trained to attend to visual instruments and ignore vestibular inputs caused by forces such as turbulence, to avoid spatial disorientation (Braithwaite, 1997). One question that arises from this practice is how temporal asynchrony and selective attention interact to affect pilots' multisensory processing. We compared the visual- versus the physical-motion conditions at each SOA condition. Our comparison of AP(S) versus AV(S) was a replication of a condition in Townsend et al. (2019), and we found similar results in the present study, the most important observation being stronger beta ERS in attend-visual conditions and stronger beta ERD in attend-physical conditions. This comparison acted as a baseline, whereas the other two comparisons presented novel findings.

The comparisons of AP(P1st) versus AV(P1st) (contrasting attention conditions when the physical stimulus onset first) and AP(V1st) versus AV(V1st) (contrasting attention conditions when the visual stimulus onset first) demonstrated an interaction of attention allocation and SOA in the beta band. When the physical-motion cue was presented 100 msec before the visual cue, there were fewer ERSP differences between AP(P1st) and AV(P1st), compared with the baseline comparison. Most notably, the typical beta rebound elicited by attention to the visual-motion cue was not present in the AV(P1st) condition. Based on the findings of Townsend et al. (2019, 2022), the lack of a beta rebound in the AV(P1st) condition suggests that presenting the physical-motion cue before the visual-motion cue resulted in greater weighting of vestibular signals than if the motion cues were presented simultaneously. This finding is relevant to simulator training for pilots. If the vestibular cue to motion is presented before the visual cue, it may disrupt the operator's ability to down-weight potentially disorienting vestibular cues that pilots are trained to ignore. The lack of a beta rebound in the AV(P1st) condition resulted in relatively little difference in ERSP between AP(P1st) and AV(P1st).
However, when the visual-motion cue was presented 100 msec before the physical-motion cue, there was a robust beta ERS in the AV(V1st) condition versus a beta ERD in the AP(V1st) condition. This analysis revealed that visual–vestibular weighting is more sensitive to changes in the onset timing of the visual cues to motion than of the vestibular cues. This finding is supported by Barnett-Cowan and Harris (2013), who demonstrated that perception of visual stimuli is faster than perception of vestibular stimuli. Considering that the visual cue naturally has a temporal advantage (during simultaneous presentation), it is likely that the vestibular cue would need to be presented more than 100 msec before the visual cue to create the robust ERSP differences that were demonstrated between the conditions of attention allocation when the visual cue was presented first.

Feature-binding Gamma ERS in Visual–Vestibular Integration

We examined gamma ERS under varying conditions of SOA to test the temporal correlation hypothesis (Engel, Fries, & Singer, 2001; Singer & Gray, 1995) in the context of visual–vestibular integration. This hypothesis posits that synchronization of gamma-band oscillations is a key mechanism for integration across distributed cortical networks. Evidence supporting this hypothesis has been demonstrated in multiple studies (e.g., Senkowski et al., 2007; Sakowitz, Quiroga, Schürmann, & Başar, 2001) that typically focus on audiovisual integration. For example, Senkowski et al. (2007) presented human participants with audiovisual stimuli at varying degrees of temporal asynchrony and required them to attend to one modality-specific stimulus while ignoring the other. They found that gamma ERS did not differ significantly between modalities but that, for both modalities, significantly stronger gamma ERS was elicited when the temporal asynchrony was 25 msec or less, compared with longer SOAs. In the present study, the temporal correlation hypothesis predicts that the simultaneous conditions (AP(S) and AV(S)) would elicit stronger gamma ERS compared with the V1st and P1st conditions. Our results do not support this hypothesis. The present study found differences in the gamma band only when comparing the AV(V1st) and AV(P1st) conditions, such that AV(V1st) elicited stronger gamma ERS than AV(P1st). We are currently unaware of any literature directly explaining this finding. We offer two possible conclusions. First, visual–vestibular integration may not rely on gamma ERS to synchronize modality-specific information across cortical networks. This facilitation of gamma ERS could be specific to superadditive integration processes (e.g., audiovisual integration; Dias, McClaskey, & Harris, 2021) as opposed to subadditive integration processes (e.g., visual–vestibular integration; Angelaki et al., 2009). Second, visual–vestibular integration may have a broader temporal window than 100 msec for gamma facilitation (compared with the 25-msec window of Senkowski et al., 2007), in which case our experimental design was not sensitive enough to detect differences in gamma ERS due to SOA. A broader temporal window for visual–vestibular integration would be consistent with behavioral research (Rodriguez & Crane, 2021) and with research demonstrating that perception of vestibular inputs is relatively slower than that of other senses (Barnett-Cowan & Harris, 2013). More research needs to be conducted to better understand the role of stimulus timing in visual–vestibular feature binding.

Limitations and Future Directions

Our heading discrimination task required participants to push a button as quickly as possible to make a heading judgment. It is possible that the preparation and execution of thumb movements during the button press contributed to the recorded EEG signal in the motor areas.
Pilot studies revealed that participants had a tendency to attend only to visual cues to motion unless they were told that some physical-motion cues were spatially incongruent with the visual-motion cues. Collecting RTs during the heading judgment task was important to ensure that participants attended to the correct motion cues to elicit the appropriate cortical activity. Our previous research (Townsend et al., 2019, 2022) demonstrated that RT data were diagnostic of attention allocation, such that visual headings were judged faster than physical headings.

The somatosensory system detects pressure and stretch on the skin, muscles, and joints during self-motion (Lackner, 1992). The forces generated by acceleration that produce vestibular or proprioceptive cues would be strong signals of self-motion perception; however, forces generated by the acceleration of our motion simulator would have also stimulated receptors in the back, seat, and feet of the seated participants. Although there is evidence from patients with spinal lesions that the somatosensory system does not contribute significantly to our perception of self-motion (Walsh, 1961), we cannot completely rule out the somatosensory system's contribution to the EEG signal projecting from the motor areas.

Functional neuroimaging studies exploring the neural correlates of visual motion perception typically use optic flow to elicit cortical responses to vection, the illusion of inertial motion generated by visual-only stimuli. Some studies have compared coherent optic flow with control stimuli such as random (incoherent) dot motion (e.g., Cardin & Smith, 2010), static dot patterns (e.g., Deutschländer et al., 2004), or spatially scrambled versions of the original self-motion stimulus (e.g., Barry et al., 2014). In these studies, participants are not physically moved, so researchers commonly rely on self-report data to determine whether participants experienced the vection illusion. We did not collect self-report data to determine whether participants experienced vection from our visual-motion cues in the present study. Therefore, we cannot be completely certain that our visual-motion stimuli would have elicited vection on their own. However, a large body of research has shown that visually induced vection is strengthened when paired with vestibular stimulation (e.g., Gallagher, Dowsett, & Ferrè, 2019; Weech & Troje, 2017; Johnson, Sunahara, & Landolt, 1999). Our visual- and physical-motion stimuli were developed to combine into an immersive experience of self-motion similar to environments used in aviation and driving research and training.

Our research can be applied in the clinical space to better understand pathologies of self-motion perception and visual–vestibular integration. Patients with pathologies such as Mal de Débarquement Syndrome (Van Ombergen, Van Rompaey, Maes, Van de Heyning, & Wuyts, 2016), Persistent Postural-Perceptual Dizziness (Popkirov, Staab, & Stone, 2018), and Parkinson's disease (Yakubovich et al., 2020) show lower thresholds for self-motion perception. For example, a recent study has shown that, compared with healthy, age-matched controls, patients with Parkinson's disease perform worse on heading judgment tasks because of overweighting of impaired visual-motion cues (Yakubovich et al., 2020).
If we can establish electrophysiological biomarkers of healthy versus impaired self-motion perception, we will develop a better understanding of the integration and motor impairments that are common in pathologies such as Parkinson's disease. Identification of these biomarkers in the prediagnostic phase of the disease could lead to a greater time window for possible preventative measures and earlier treatments (Noyce, Lees, & Schrag, 2016).

Conclusion

The present study examined cortical activity elicited in response to self-motion cues that varied in attention allocation and stimulus onset synchrony. There were two main findings. First, SOA produced robust differences in cortical activity during attention to both visual and physical motion. The electrophysiological signatures of visual (strong beta ERS) versus vestibular (strong beta ERD) weighting bias were enhanced when the attended motion cue was presented 100 msec before the ignored cue. When comparing across conditions of attention allocation, presenting the visual-motion cue first created more robust conditional differences than presenting the physical-motion cue first. These results demonstrate that the timing of visual–vestibular stimuli plays a critical role in multisensory weighting during self-motion perception, and that this weighting process is more sensitive to temporal changes in visual stimuli than in vestibular stimuli. Second, contrary to the findings of several audiovisual and visuotactile studies, the temporal synchrony of visual- and physical-motion cues did not elicit gamma ERS beyond baseline. It is possible that the 100-msec SOA was not long enough to elicit these hypothesized differences. It could also be the case that visual–vestibular integration does not elicit processes indexed by gamma ERS.

Reprint requests should be sent to Ben Townsend, Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main St. West, Hamilton, Ontario, Canada L8S 4L8, or via e-mail: townsepb@mcmaster.ca.

Data Availability Statement

The data and code for all analyses are available online at https://github.com/bentownsend11/Stimulus-onset-asynchrony-affects-attention-related-ERSP-in-self-motion-perception.

Author Contributions

Ben Townsend: Conceptualization; Formal analysis; Investigation; Methodology; Project administration; Visualization; Writing—Original draft; Writing—Review & editing. Joey K. Legere: Formal analysis; Software. Martin v. Mohrenschildt: Funding acquisition; Methodology; Resources; Software; Supervision. Judith M. Shedden: Conceptualization; Funding acquisition; Methodology; Project administration; Resources; Supervision; Writing—Review & editing.

Funding Information

Funding for this study was provided to JMS and MvM by the Natural Sciences and Engineering Research Council of Canada, grant numbers RGPGP-2014-00051 and RGPIN-2020-07245, and by the Canada Foundation for Innovation (https://dx.doi.org/10.13039/501100000196), grant number 2009M00034. These funding sources had no involvement in the study design; the collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the article for publication.
Diversity in Citation Practices

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this article report its proportions of citations by gender category to be as follows: M/M = .675; W/M = .125; M/W = .15; W/W = .05.

REFERENCES

Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14, 257–262. https://doi.org/10.1016/j.cub.2004.01.029, PubMed: 14761661
Angelaki, D., Gu, Y., & DeAngelis, G. (2009). Multisensory integration: Psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19, 452–458. https://doi.org/10.1016/j.conb.2009.06.008, PubMed: 19616425
Barnett-Cowan, M., & Harris, L. R. (2009). Perceived timing of vestibular stimulation relative to touch, light and sound. Experimental Brain Research, 198, 221–231. https://doi.org/10.1007/s00221-009-1779-4, PubMed: 19352639
Barnett-Cowan, M., & Harris, L. R. (2013). Vestibular perception is slow: A review. Multisensory Research, 26, 387–403. https://doi.org/10.1163/22134808-00002421, PubMed: 24319930
Barry, R. J., Palmisano, S., Schira, M. M., De Blasio, F. M., Karamacoska, D., & MacDonald, B. (2014). EEG markers of visually experienced self-motion (vection). In Frontiers of Human Neuroscience Conference Abstract: Australasian Society for Psychophysiology, Inc. https://doi.org/10.3389/conf.fnhum.2014.216.00013
Bigdely-Shamlo, N., Mullen, T., Kreutz-Delgado, K., & Makeig, S. (2013). Measure projection analysis: A probabilistic approach to EEG source comparison and multi-subject inference. Neuroimage, 72, 287–303. https://doi.org/10.1016/j.neuroimage.2013.01.040, PubMed: 23370059
Braithwaite, M. G. (1997). The British Army Air Corps in-flight spatial disorientation demonstration sortie. Aviation, Space, and Environmental Medicine, 68, 342–345. PubMed: 9096833
Buzsáki, G., & Moser, E. I. (2013). Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature Neuroscience, 16, 130–138. https://doi.org/10.1038/nn.3304, PubMed: 23354386
Cardin, V., & Smith, A. T. (2010). Sensitivity of human visual and vestibular cortical regions to egomotion-compatible visual stimulation. Cerebral Cortex, 20, 1964–1973. https://doi.org/10.1093/cercor/bhp268, PubMed: 20034998
Chung, J. W., Ofori, E., Misra, G., Hess, C. W., & Vaillancourt, D. E. (2017). Beta-band activity and connectivity in sensorimotor and parietal cortex are important for accurate motor performance. Neuroimage, 144, 164–173. https://doi.org/10.1016/j.neuroimage.2016.10.008, PubMed: 27746389
DeAngelis, G. C., & Angelaki, D. E. (2012). Visual–vestibular integration for self-motion perception. In M. M. Murray & M. T. Wallace (Eds.), The neural bases of multisensory processes. CRC Press/Taylor & Francis. Available from: https://www.ncbi.nlm.nih.gov/books/NBK92839/
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009, PubMed: 15102499
Delorme, A., Sejnowski, T., & Makeig, S. (2007). Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. Neuroimage, 34, 1443–1449. https://doi.org/10.1016/j.neuroimage.2006.11.004, PubMed: 17188898
Deutschländer, A., Bense, S., Stephan, T., Schwaiger, M., Dieterich, M., & Brandt, T. (2004). Rollvection versus linearvection: Comparison of brain activations in PET. Human Brain Mapping, 21, 143–153. https://doi.org/10.1002/hbm.10155, PubMed: 14755834
Dias, J. W., McClaskey, C. M., & Harris, K. C. (2021). Audiovisual speech is more than the sum of its parts: Auditory–visual superadditivity compensates for age-related declines in audible and lipread speech intelligibility. Psychology and Aging, 36, 520–530. https://doi.org/10.1037/pag0000613, PubMed: 34124922
Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: Oscillations and synchrony in top–down processing. Nature Reviews Neuroscience, 2, 704–716. https://doi.org/10.1038/35094565, PubMed: 11584308
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160. https://doi.org/10.3758/BRM.41.4.1149, PubMed: 19897823
Fister, J. K., Stevenson, R. A., Nidiffer, A. R., Barnett, Z. P., & Wallace, M. T. (2016). Stimulus intensity modulates multisensory temporal processing. Neuropsychologia, 88, 92–100. https://doi.org/10.1016/j.neuropsychologia.2016.02.016, PubMed: 26920937
Gallagher, M., Dowsett, R., & Ferrè, E. R. (2019). Vection in virtual reality modulates vestibular-evoked myogenic potentials. European Journal of Neuroscience, 50, 3557–3565. https://doi.org/10.1111/ejn.14499, PubMed: 31233640
Groppe, D. M., Makeig, S., & Kutas, M. (2009). Identifying reliable independent components via split-half comparisons. Neuroimage, 45, 1199–1211. https://doi.org/10.1016/j.neuroimage.2008.12.038, PubMed: 19162199
Johnson, W. H., Sunahara, F. A., & Landolt, J. P. (1999). Importance of the vestibular system in visually induced nausea and self-vection. Journal of Vestibular Research, 9, 83–87. https://doi.org/10.3233/VES-1999-9202, PubMed: 10378179
Keil, J., & Senkowski, D. (2018). Neural oscillations orchestrate multisensory processing. Neuroscientist, 24, 609–626. https://doi.org/10.1177/1073858418755352, PubMed: 29424265
Kenney, D. M., O’Malley, S., Song, H. M., Townsend, B., von Mohrenschildt, M., & Shedden, J. M. (2020). Velocity influences the relative contributions of visual and vestibular cues to self-acceleration. Experimental Brain Research, 238, 1423–1432. https://doi.org/10.1007/s00221-020-05824-9, PubMed: 32367145
Kilavik, B. E., Zaepffel, M., Brovelli, A., MacKay, W. A., & Riehle, A. (2013). The ups and downs of beta oscillations in sensorimotor cortex. Experimental Neurology, 245, 15–26. https://doi.org/10.1016/j.expneurol.2012.09.014, PubMed: 23022918
Kingma, H. (2005). Thresholds for perception of direction of linear acceleration as a possible evaluation of the otolith function. BMC Ear, Nose and Throat Disorders, 5, 5. https://doi.org/10.1186/1472-6815-5-5, PubMed: 15972096
Klimesch, W. (2012). Alpha-band oscillations, attention, and controlled access to stored information. Trends in Cognitive Sciences, 16, 606–617. https://doi.org/10.1016/j.tics.2012.10.007, PubMed: 23141428
Lackner, J. R. (1992). Multimodal and motor influences on orientation: Implications for adapting to weightless and virtual environments. Journal of Vestibular Research, 2, 307–322. https://doi.org/10.3233/VES-1992-2405, PubMed: 1342405
Li, W., Piëch, V., & Gilbert, C. D. (2004). Perceptual learning and top–down influences in primary visual cortex. Nature Neuroscience, 7, 651–657. https://doi.org/10.1038/nn1255, PubMed: 15156149
Macaluso, E., Noppeney, U., Talsma, D., Vercillo, T., Hartcher-O’Brien, J., & Adam, R. (2016). The curious incident of attention in multisensory integration: Bottom–up vs. top–down. Multisensory Research, 29, 557–583. https://doi.org/10.1163/22134808-00002528
Makeig, S., Bell, A., Jung, T. P., & Sejnowski, T. J. (1995). Independent component analysis of electroencephalographic data. Advances in Neural Information Processing Systems, 8, 145–151.
Noyce, A. J., Lees, A. J., & Schrag, A. E. (2016). The prediagnostic phase of Parkinson’s disease. Journal of Neurology, Neurosurgery & Psychiatry, 87, 871–878. https://doi.org/10.1136/jnnp-2015-311890, PubMed: 26848171
O’Malley, S., Townsend, B., von Mohrenschildt, M., & Shedden, J. M. (2015). The integration of physical acceleration cues with visual acceleration cues. Canadian Journal of Experimental Psychology, 69, 349.
Ofori, E., Coombes, S. A., & Vaillancourt, D. E. (2015). 3D cortical electrophysiology of ballistic upper limb movement in humans. Neuroimage, 115, 30–41. https://doi.org/10.1016/j.neuroimage.2015.04.043, PubMed: 25929620
Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130, 466–478. https://doi.org/10.1037/0096-3445.130.3.466, PubMed: 11561921
Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J. M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011, 156869. https://doi.org/10.1155/2011/156869, PubMed: 21253357
Palmer, J. A., Kreutz-Delgado, K., & Makeig, S. (2012). AMICA: An adaptive mixture of independent component analyzers with shared components. Swartz Center for Computational Neuroscience, University of California San Diego (Technical Report).
Popkirov, S., Staab, J. P., & Stone, J. (2018). Persistent postural-perceptual dizziness (PPPD): A common, characteristic and treatable cause of chronic dizziness. Practical Neurology, 18, 5–13. https://doi.org/10.1136/practneurol-2017-001809, PubMed: 29208729
Rodriguez, R., & Crane, B. T. (2021). Effect of timing delay between visual and vestibular stimuli on heading perception. Journal of Neurophysiology, 126, 304–312. https://doi.org/10.1152/jn.00351.2020, PubMed: 34191637
Sakowitz, O. W., Quiroga, R. Q., Schürmann, M., & Başar, E. (2001). Bisensory stimulation increases gamma-responses over multiple cortical regions. Cognitive Brain Research, 11, 267–279. https://doi.org/10.1016/S0926-6410(00)00081-1, PubMed: 11275488
Senkowski, D., Schneider, T. R., Foxe, J. J., & Engel, A. K. (2008). Crossmodal binding through neural coherence: Implications for multisensory processing. Trends in Neurosciences, 31, 401–409. https://doi.org/10.1016/j.tins.2008.05.002, PubMed: 18602171
Senkowski, D., Talsma, D., Grigutsch, M., Herrmann, C. S., & Woldorff, M. G. (2007). Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma-band oscillations. Neuropsychologia, 45, 561–571. https://doi.org/10.1016/j.neuropsychologia.2006.01.013, PubMed: 16542688
Shattuck, D. W., Chiang, M.-C., Barysheva, M., McMahon, K. L., de Zubicaray, G. I., Meredith, M., et al. (2008). Visualization tools for high angular resolution diffusion imaging. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (Vol. 5242, pp. 298–305). https://doi.org/10.1007/978-3-540-85990-1_36
Shayman, C. S., Seo, J. H., Oh, Y., Lewis, R. F., Peterka, R. J., & Hullar, T. E. (2018). Relationship between vestibular sensitivity and multisensory temporal integration. Journal of Neurophysiology, 120, 1572–1577. https://doi.org/10.1152/jn.00379.2018, PubMed: 30020839
Sheppard, J. P., Raposo, D., & Churchland, A. K. (2013). Dynamic weighting of multisensory stimuli shapes decision-making in rats and humans. Journal of Vision, 13, 4. https://doi.org/10.1167/13.6.4, PubMed: 23658374
Siegel, M., Donner, T. H., & Engel, A. K. (2012). Spectral fingerprints of large-scale neuronal interactions. Nature Reviews Neuroscience, 13, 121–134. https://doi.org/10.1038/nrn3137, PubMed: 22233726
Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18, 555–586. https://doi.org/10.1146/annurev.ne.18.030195.003011, PubMed: 7605074
Slutsky, D. A., & Recanzone, G. H. (2001). Temporal and spatial dependency of the ventriloquism effect. NeuroReport, 12, 7–10. https://doi.org/10.1097/00001756-200101220-00009, PubMed: 11201094
Townsend, B., Legere, J. K., von Mohrenschildt, M., & Shedden, J. M. (2022). Beta-band power is an index of multisensory weighting during self-motion perception. Neuroimage: Reports, 2, 100102. https://doi.org/10.1016/j.ynirp.2022.100102
Townsend, B., Legere, J. K., O’Malley, S., von Mohrenschildt, M., & Shedden, J. M. (2019). Attention modulates event-related spectral power in multisensory self-motion perception. Neuroimage, 191, 68–80. https://doi.org/10.1016/j.neuroimage.2019.02.015, PubMed: 30738208
Van Ombergen, A., Van Rompaey, V., Maes, L. K., Van de Heyning, P. H., & Wuyts, F. L. (2016). Mal de debarquement syndrome: A systematic review. Journal of Neurology, 263, 843–854. https://doi.org/10.1007/s00415-015-7962-6, PubMed: 26559820
Walsh, E. G. (1961). Role of the vestibular apparatus in the perception of motion on a parallel swing. Journal of Physiology, 155, 506–513. https://doi.org/10.1113/jphysiol.1961.sp006643, PubMed: 13782902
Weech, S., & Troje, N. F. (2017). Vection latency is reduced by bone-conducted vibration and noisy galvanic vestibular stimulation. Multisensory Research, 30, 65–90. https://doi.org/10.1163/22134808-00002545
Yakubovich, S., Israeli-Korn, S., Halperin, O., Yahalom, G., Hassin-Baer, S., & Zaidel, A. (2020). Visual self-motion cues are impaired yet overweighted during visual–vestibular integration in Parkinson’s disease. Brain Communications, 2, fcaa035. https://doi.org/10.1093/braincomms/fcaa035, PubMed: 32954293