An Important Step toward Understanding the Role of
Body-based Cues on Human Spatial Memory
for Large-Scale Environments
Derek J. Huffman1
and Arne D. Ekstrom2

1Colby College, 2The University of Arizona
Abstract
■ Moving our body through space is fundamental to human
navigation; however, technical and physical limitations have
hindered our ability to study the role of these body-based cues
experimentally. We recently designed an experiment using novel
immersive virtual-reality technology, which allowed us to tightly
control the availability of body-based cues to determine how
these cues influence human spatial memory [Huffman, D. J., &
Ekstrom, A. D. A modality-independent network underlies the
retrieval of large-scale spatial environments in the human brain.
Neuron, 104, 611–622, 2019]. Our analysis of behavior and fMRI
data revealed a similar pattern of results across a range of body-
based cue conditions, thus suggesting that participants likely
relied primarily on vision to form and retrieve abstract, holistic
representations of the large-scale environments in our experi-
ment. We ended our paper by discussing a number of caveats
and future directions for research on the role of body-based cues
in human spatial memory. Here, we reiterate and expand on this
discussion, and we use a commentary in this issue by A. Steel,
C. E. Robertson, and J. S. Taube (Current promises and limita-
tions of combined virtual reality and functional magnetic reso-
nance imaging research in humans: A commentary on Huffman
and Ekstrom (2019). Journal of Cognitive Neuroscience, 2020)
as a helpful discussion point regarding some of the questions that
we think will be the most interesting in the coming years. We
highlight the exciting possibility of taking a more naturalistic
approach to study the behavior, cognition, and neuroscience of
navigation. Moreover, we share the hope that researchers who
study navigation in humans and nonhuman animals will syner-
gize to provide more rapid advancements in our understanding
of cognition and the brain. ■

This Review is part of a Special Focus, “Promises and Limitations of Virtual Reality-Based Studies of Human Navigation.”
INTRODUCTION
One of the major issues in cognitive neuroscience involves
relating neural signals to the types of behaviors we en-
counter during real-world situations, such as navigating to
our local supermarket to find our favorite foods. Steel,
Robertson, and Taube (2020) discuss one important issue
when considering modeling real-world navigation in the
laboratory: how we account for body movements in naviga-
tion experiments that use virtual reality (VR). In particular,
they critiqued one of our recent papers (Huffman &
Ekstrom, 2019a) in which we showed that the spatial repre-
sentations of well-learned environments were similar across a
range of body-based cue conditions. Thus, our results
suggested that participants might rely more strongly on
visual cues to form and retrieve memories of large-scale
spatial environments. Steel et al. (2020) challenged the
validity of our approach by arguing that (1) the use of immer-
sive VR is unnatural, (2) our results are limited because we
did not find a difference in behavioral performance during
fMRI scanning, (3) our behavioral tasks did not adequately
assess spatial representations, and (4) our theoretical aims
involve a false dichotomy. Careful evaluation of each of
their critiques, however, reveals that our findings are robust
to their concerns. Moreover, we aim to clarify the rationale
behind our experimental design and to highlight some of
the key differences between our study (in humans) and
studies on the neuroscience of navigation in rodents. In par-
ticular, we discuss several approaches that we think could
enhance our understanding of how we represent large-
scale, ecologically relevant spatial environments under nat-
uralistic behavioral demands. Thus, we will interleave our
discussion of Steel et al.’s (2020) criticisms within the
broader framework of experimental designs that seek to
understand how humans form and retrieve memories of
large-scale spatial environments, such as the towns in
which we live.
WHAT DID WE FIND IN OUR PREVIOUS PAPER?
In this section, we will briefly review our experimental
design as well as our results and discussion (Huffman &
Ekstrom, 2019a). In our experiment, participants learned
three virtual cities under three levels of body-based cues:
(1) impoverished: participants stood on an omnidirectional
treadmill and viewed the environment via a head-mounted
display, but they controlled all of their navigation via a joy-
stick; (2) limited: rotations and head movements were
yoked to real-world movements via a head-mounted display,
but they moved forward using a joystick; (3) enriched: rota-
tions and head movements were yoked to real-world move-
ments via a head-mounted display, and participants moved
forward in the environment by walking on an omnidirec-
tional treadmill. Participants learned three cities to criterion
performance (based on their abstract, holistic knowledge of
the spatial environment as assessed by performance of the
judgments of relative direction [JRD] task) before undergo-
ing fMRI scanning. During fMRI scanning, participants per-
formed the JRD task as well as a perceptually matched
active baseline task (a math task that looked visually similar
to the JRD task and involved similar button presses) and a
resting-state task.
We analyzed our data using a variety of approaches, in-
cluding (1) a Bayesian analysis of task performance across
multiple measures, (2) a classification analysis of putative
network interactions, (3) a classification analysis of single-
trial patterns of activity in ROIs (the hippocampus, para-
hippocampal cortex, and retrosplenial cortex), (4) an
activation analysis, (5) a whole-brain Bayesian analysis, and
(6) a pattern similarity analysis investigating distance-
related coding. Importantly, the results generated from
all of these analyses revealed that behavioral performance
and the patterns of brain activity were similar across the
three body-based cue conditions. Moreover, we used a
machine learning approach of “generalization testing” to
assess whether patterns of activity generalized between
different, but related, conditions (e.g., between different
JRD task conditions). We would like to emphasize that such an approach is beneficial because finding evidence for the similarity of conditions (i.e., similar generalization performance between conditions) provides stronger evidence that the brain treats different conditions similarly than does relying on Bayesian null evidence alone (e.g., a Bayes factor favoring the null).
Importantly, for (1) our network-based analysis and (2) our single-trial classification analysis within our ROIs and the whole brain, we found similar generalization performance (and strong correlations) between the different JRD task conditions (which varied in the body-based cues available during initial learning), as well as for our perceptually matched active baseline task and the resting state. Altogether, we concluded that participants likely relied primarily on vision when they retrieved information about large-scale spatial environments. We highlighted
caveats and future directions, which we will reiterate and
expand upon below.
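To make the logic of this generalization testing concrete, the sketch below trains a classifier on trials from one body-based cue condition and evaluates it on trials from another; high cross-condition accuracy would indicate that the two conditions evoke similar patterns of activity. The data, labels, and classifier here are illustrative placeholders only, not the features or pipeline used in our paper (Huffman & Ekstrom, 2019a).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative single-trial patterns (trials x features) from two body-based
# cue conditions, with a label to decode on each trial (synthetic data; a
# real analysis would use fMRI-derived features and task labels).
X_enriched = rng.standard_normal((60, 200))
y_enriched = rng.integers(0, 2, 60)
X_impoverished = rng.standard_normal((60, 200))
y_impoverished = rng.integers(0, 2, 60)

# Generalization testing: fit on one condition, evaluate on the other.
clf = LogisticRegression(max_iter=1000).fit(X_enriched, y_enriched)
print("cross-condition accuracy:", clf.score(X_impoverished, y_impoverished))
```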
VR TECHNOLOGY ALLOWS FOR TIGHT
EXPERIMENTAL CONTROL WHILE
ALSO OFFERING A HIGHLY
IMMERSIVE EXPERIENCE
In this section, we will discuss Steel et al.’s (2020) first criti-
cism: that the use of immersive VR with real-world head
rotations and leg movements is completely unnatural. In
our paper (Huffman & Ekstrom, 2019a), we used VR to
tightly control the amount of exposure and the level of
body-based cues with which a participant learned large-scale
virtual environments to determine the influence of body-
based cues on human spatial memory during retrieval.
Although we agree with Steel et al. (2020) that navigation
on an omnidirectional treadmill is not a complete substitu-
tion for real-world body movements, the use of this technol-
ogy provided critical scientific control for our experiment.
For example, we were able to directly match the visual
features and the immersive feeling of wearing the head-
mounted display across the three body-based cue conditions.
We tested the idea that body-based cues exist along a
continuum, including designs that (1) do not include any
body-based cues (e.g., verbal communication of spatial in-
formation), (2) include only optic flow (e.g., desktop-based
navigation), (3) use a head-mounted display but use a joy-
stick for all movements (e.g., our “impoverished” condi-
tion), (4) employ a head-mounted display and yoke head
and body movements to real-world body movements
(e.g., our “limited” condition), (5) employ a head-mounted
display and an omnidirectional treadmill to simulate many
of the relevant cues for walking (e.g., our “enriched” con-
dition), and (6) employ real-world body translations and ro-
tations. Thus, we assumed that participants could make
use of any (and all) of the body-based cues that were avail-
able to them, perhaps most critically, including real-world
body rotations (see the green plot in Figure 1A). In con-
trast, Steel et al. (2020) argued that our participants likely
ignored body-based cues in our paradigm, thus advocating
for a model in which body-based cues cannot be used by
participants unless they involve conditions with strictly
real-world locomotion (see the purple plot in Figure 1A).
As we discuss below, there are several lines of evidence
to suggest that our participants could have, in fact, used
the body-based cues that were available to them in our
limited and enriched conditions, contradicting arguments
from Steel et al. (2020).
Specifically, we included a condition (“limited”) in which
participants stood on the treadmill but they actively moved
their head and body to explore the virtual environment. In
particular, this condition would activate the semicircular canals in the same manner as real-world movement would. For example, previous research has shown similar head-direction coding under conditions of passive rotation at a constant location (e.g., Shinder & Taube,
2011), which would suggest that our active rotation con-
ditions should have been sufficient to activate the head-
direction system in humans. Moreover, Steel et al. (2020) cited
and discussed other studies (e.g., Robertson, Hermann,
Mynick, Kravitz, & Kanwisher, 2016; Shine, Valdés-Herrera,
Hegarty, & Wolbers, 2016) that are equivalent to our “limited” condition and that have also advanced our knowledge
of spatial representations. They suggested that these
designs could have allowed participants to reactivate
body-based cues during retrieval when the rotations in
these studies (with a head-mounted display) were identical
to those employed in our study. Therefore, it is not clear why they argue that participants in our “limited” (and “enriched”) condition could not have also used and reactivated these real-world rotational cues.

Figure 1. The influence of body-based cues might differ depending on the navigation interface, the nature of the behavioral task, and the spatial scale of the environment. (A) We tested a model that suggests that a participant’s ability to use body-based cues exists along a continuum. Specifically, we suggested that participants can use any body-based cues that are available to them; thus, our “limited” condition (rotations: real-world body rotations; translation: joystick) and “enriched” condition (rotations: real-world body rotations; translation: omnidirectional treadmill) would give participants access to useful information about real-world body rotations and about taking steps on the treadmill. In contrast, Steel et al. (2020) argue that participants cannot use body-based cues unless they are physically moving their bodies through space (i.e., real-world navigation). Thus, in their model, participants use body-based cues neither from real-world body rotations in immersive VR nor from taking steps on an omnidirectional treadmill. (B) We suggest that the influence of body-based cues on human spatial cognition might differ based on the nature of the behavioral task or on the spatial scale of the environment. Specifically, body-based cues might exert a stronger influence on tasks that emphasize a navigator’s ability to keep track of themselves relative to another object in the environment (i.e., more egocentric tasks) as opposed to tasks that encourage participants to form holistic, abstract representations of the environment (i.e., more allocentric tasks). In addition, because of the accumulation of error in the path integration system, body-based cues might exert a stronger effect in smaller-scale environments. HMD = head-mounted display.
Steel et al. (2020) also raised questions about how natu-
rally omnidirectional treadmills replicate real-world walking
in the “enriched” condition. Although we described our
approach in the methods of our paper, we would like to
emphasize that we used a detailed procedure in which we
attempted to train participants to walk as naturally as possi-
ble. We first trained participants to walk on the omnidirec-
tional treadmill without the head-mounted display. Once
we determined that they were walking naturally, we then
had them don the head-mounted display and perform
several practice tasks to teach them to be able to comfort-
ably navigate on the treadmill. Participants then returned
another day for the main task session. Thus, before partici-
pants began to learn the environments for the fMRI exper-
iment, they already had between 1.5 and 2.5 hr of
experience walking on the treadmill. We also required
participants to walk as naturally as possible throughout
the experiment. In addition, we ensured that our interface
between the treadmill and the VR software was set to
approximate average human walking speed. Finally, we
ensured that the treadmill responded accurately and dy-
namically to changes in the participant’s walking speed
(e.g., the faster a participant walked on the treadmill, the
faster they moved in the virtual environment).
Previous research with animals as varied as desert ants
(e.g., Dahmen, Wahl, Pfeffer, Mallot, & Wittlinger, 2017), rats
(e.g., Aronov & Tank, 2014), and humans (e.g., Harootonian,
Wilson, Hejtmánek, Ziskin, & Ekstrom, 2020) has shown
that animals can use information from their body-based cues
to navigate on omnidirectional treadmills. For example,
previous research showed that desert ants could
accurately walk the distance and direction to a home location
while walking on an omnidirectional treadmill (Dahmen
et al., 2017). Although this is an extreme example because
the desert ant is thought to rely heavily on body-based cues
(e.g., step counting; Wehner, 2020; Wittlinger, Wehner, &
Wolf, 2006), such findings suggest that if animals (e.g.,
humans, rats) make use of information from taking steps
(e.g., proprioceptive feedback and motor-efference copies)
and head rotations (e.g., vestibular information), then
omnidirectional treadmills can provide access to at least
some of these relevant cues. For example, previous research
has shown that rats can accurately navigate to target loca-
tions in a virtual version of the Morris Water Maze using
an omnidirectional treadmill (Aronov & Tank, 2014).
Moreover, previous research has suggested that human
participants can accurately perform a task that is thought
to measure path integration while they walked on the same
omnidirectional treadmill that we used in our experiment
(specifically, a triangle-completion task; Harootonian et al.,
2020). Importantly, the Harootonian et al. (2020) study
solely used body-based cues (i.e., no visual cues), thus pro-
viding direct support for the notion that participants can
and do use information from body-based cues to navigate
on omnidirectional treadmills.
More relevant for the discussion of the neuroscience of
navigation, previous research revealed the full complement
of spatially selective cells when rats walked on an omnidirec-
tional treadmill that was very similar to our treadmill (Aronov
& Tank, 2014). Specifically, this study reported similar place
cells, head-direction cells, and grid cells between their VR
condition and the real world. Moreover, they reported
evidence of border cells and of remapping of place cells
between similar environments. Although these cellular
findings in rodents clearly cannot speak directly to our study
in humans involving fMRI, these findings suggest that if
similar mechanisms are at play in their apparatus in rats
and in our apparatus in humans, then we might expect
that our enriched condition on the treadmill would reveal
similar spatial representations to real-world navigation in
the human brain. Of course, such a prediction awaits future
experimentation, for example, in patients with implanted
electrodes (e.g., Topalovic et al., 2020; Aghajan et al.,
2017; Bohbot, Copara, Gotman, & Ekstrom, 2017) or in
mobile scalp EEG studies (e.g., Djebbara, Fich, Petrini, &
Gramann, 2019; Jungnickel, Gehrke, Klug, & Gramann,
2019; Park & Donaldson, 2019; Liang, Starrett, & Ekstrom,
2018; Park, Dudchenko, & Donaldson, 2018). Thus, similar
to the arguments made by Steel et al. (2020), we agree that
such experiments will be fundamental to increasing our un-
derstanding of the role of body-based cues on human spatial
memory. On the basis of the evidence reviewed above,
however, we disagree with Steel et al. (2020) that partici-
pants could not have used any of the relevant body-based
cues in our experiment. Therefore, we argue that our
approach allowed us to study part of the larger continuum
of body-based cues, although undoubtedly future experi-
ments will be helpful in further testing these predictions
(i.e., the comparison between our models in Figure 1A).
We also want to highlight one of the main benefits of
using an omnidirectional treadmill in the laboratory in the
first place: It allowed us to fit a city-sized environment into a
small room. Importantly, as we will discuss later, these
larger-scale environments are the kinds of spaces we are
most interested in studying because they relate to our abil-
ity to navigate over ecologically and evolutionarily relevant
dimensions. In addition, similar to more traditional
laboratory-based tasks, VR allows tight experimental control
over the degree of exposure to an environment, and it al-
lows experimenters to gather detailed data regarding a par-
ticipant’s full exploration history within such environments.
Moreover, VR allows researchers to create experiences that
would be difficult or impossible to create in the real world
(e.g., to study the strategies underlying human navigation;
Warren, Rothman, Schnapp, & Ericson, 2017).
IT IS IMPERATIVE TO MATCH BEHAVIORAL
PERFORMANCE BETWEEN CONDITIONS TO
DRAW ANY CONCLUSIONS ABOUT THE
ROLE OF BODY-BASED CUES ON THE
NEURAL REPRESENTATION OF
SPATIAL INFORMATION
In this section, we will address Steel et al.’s (2020) second
criticism: that our results are limited because we did not
find a difference in behavioral performance during fMRI
scanning. In contrast to their view, we argue that it is imper-
ative that researchers match behavioral performance when
looking at the role of body-based cues on brain responses
to spatially guided behavior. In fact, this is one area that we
want to highlight as a seeming misunderstanding of our
overall approach. Specifically, had we sent participants into the scanner with differing levels of spatial memory performance and then observed differences in brain activity as a function of how they originally learned the environment, we could not have deconfounded whether these brain differences were caused by differences in the quality of spatial memory retrieval (i.e., a memory effect) or by body-based cues per se (i.e., the mode of locomotion through the environment). That is, such an effect could be an artifact of a failure to retrieve
spatial information. For example, if you were asked to
perform spatial memory tasks for an environment that
you have never visited, then, of course, your brain would
show little to no retrieval or spatial-like coding. If we want
to know, however, whether body-based cues matter per se, then we could study, for example, whether different networks support the retrieval of an environment that
you had walked versus an environment that you had
navigated via other mechanisms. Thus, we designed our
experiment using this logic of purposely matching perfor-
mance between the body-based cue conditions before
beginning fMRI data collection. Therefore, to ensure that
our point is perfectly clear, the fact that participants had
equal behavioral performance during the fMRI session
was an intended consequence of our training paradigm
and was in no way an artifact.
We would like to further highlight the importance of
matching behavioral performance between conditions in
studies in the rodent. For example, many studies of spatial
coding in the rodent brain are done in the complete
absence of any overt behavioral demands. Specifically,
rodents are often participating in a random foraging exper-
iment, where they are walking around without any specific
task. Moreover, in some studies of the role of body-based
cues, rodents are then put into stressful situations (e.g.,
being wrapped in a towel [i.e., a makeshift straitjacket] and then passively transported through the environ-
ment; e.g., Foster, Castro, & McNaughton, 1989). Other
conditions involve no explicit behavioral demands at all
(e.g., being passively moved in a cart [i.e., akin to being a
passenger in a car]; Winter, Clark, & Taube, 2015; Winter,
Mehlman, Clark, & Taube, 2015; Stackman & Taube, 1997).
Thus, under such conditions, it is difficult to know anything
about the animal’s mental processes. Is there any behav-
ioral benefit for them to keep track of their location in the
environment? We argue that it is paramount for researchers
to have animals perform the same behavioral task between
body-based cue conditions so that confounds such as behav-
ioral performance, strategy use, and attention to spatial
information can be mitigated. In fact, previous research in
humans has suggested that behavioral demands can shape
performance (and likely the underlying brain representa-
tions) to become more modality independent (e.g.,
between visual and proprioceptive conditions: Experiment 3
of Avraamides, Loomis, Klatzky, & Golledge, 2004).
Therefore, if brain differences are observed under condi-
tions of matched behavioral performance, then it would
provide more compelling evidence that body-based cues
(i.e., mode of locomotion) significantly contribute to spa-
tial coding (e.g., “Where am I?”).
More generally, we and others have highlighted the importance of disentangling active versus passive task performance from the role of body-based cues in spatial representations
(e.g., Huffman & Ekstrom, 2019a; Chrastil & Warren, 2012,
2013, 2015). Briefly, many rodent studies of the role of
body-based cues have also excluded active navigation
strategies. That is, active versus passive differences con-
found the possible contributions of decision-making with
the role of body-based cues. Thus, in our experiment, par-
ticipants in all three conditions were performing the same
active navigation task and had the same behavioral de-
mands to form stable, abstract spatial representations of
the environment. Therefore, we agree with Chrastil and
Warren (2012) that studies investigating the role of body-
based cues should seek to deconfound the role of these
cues from the role of active decision-making.
THE IMPORTANCE OF THE TYPE OF
BEHAVIORAL TASK
In this section, we will address Steel et al.’s (2020) third crit-
icism: that our behavioral tasks did not adequately assess
spatial representations. The JRD task is one of the most
well-characterized and widely used tasks in the human spa-
tial navigation literature because it provides access to holis-
tic representations of space (Vass & Epstein, 2017; Waller
& Hodgson, 2006; Mou, McNamara, Valiquette, & Rump,
2004; McNamara, Rump, & Werner, 2003; Shelton &
McNamara, 1997, 2001; Rieser, 1989). We recently com-
pleted a detailed experimental paper that supported the
construct validity of the JRD task (Huffman & Ekstrom,
2019b). For example, our results suggested that partici-
pants recruit similar underlying representations to solve
the JRD task and a map drawing task, which is perhaps
the strongest example of a task that taps into holistic spatial
knowledge. Another advantage of the JRD task in our par-
ticular case is that it is easily employed in the scanner (com-
pared to map drawing) and confers no evident advantage to visual over vestibular cues (or vice versa): All pointing
is done based on imagined heading, and the only cues are
text (reading) cues. A final advantage is that it allowed us to
match behavioral performance, as we discussed in the pre-
vious section. Therefore, we believe that the JRD task was
an appropriate and valid choice for spatial retrieval and one
that provided perhaps the most insight into what we think
of when we think of a “spatial representation” or “cognitive
map”: one that, like a map, is referenced to other land-
marks and provides insight into participants’ abstract,
holistic knowledge of the environment. We also want to
clarify that we examined participants’ behavior during
navigation and we reported similar patterns of changes in
excess path length between conditions, thus providing a
measure of spatial memory performance during navigation.
On the basis of our finding of the similarity both of be-
havioral performance and of our neuroimaging analyses,
we suggested (similar to others in the field) that tasks that
require more of a holistic, abstract representation of the
environment might place fewer demands on body-based
cues (e.g., the JRD task and map drawing tasks; Huffman
& Ekstrom, 2019a; Waller & Greenauer, 2007; Waller,
Loomis, & Haun, 2004). On the other hand, body-based
cues might play a stronger role in the performance of tasks
that require the navigator to keep track of themselves
relative to a salient landmark (or landmarks) in the environ-
ment (cf. Waller, Loomis, & Steck, 2003). These tasks
would place greater emphasis on path integration and
egocentric-based navigation (e.g., Ruddle, Volkova,
Mohler, & Bülthoff, 2011; Ruddle & Lessels, 2006; Waller
et al., 2004; Chance, Gaunet, Beall, & Loomis, 1998;
Klatzky, Loomis, Beall, Chance, & Golledge, 1998). In fact,
previous behavioral research has provided evidence of a
double dissociation between performance on tasks that
encourage participants to form an abstract, holistic repre-
sentation of the environment (e.g., the JRD task) and tasks
in which participants are asked to point to landmarks in the
environment from their current location and orientation
(e.g., Waller & Hodgson, 2006). Therefore, as we discussed
in our previous paper, it will be interesting for future
studies to test the role of body-based cues on the perfor-
mance of these different types of tasks (Figure 1B). For
example, Waller et al. (2004, p. 162) argued that “the effect
of body-based information on developing complex config-
ural knowledge of spatial layout (as opposed to knowledge
of self-to-object relations) may be minimal” (also see Waller
& Greenauer, 2007). To elaborate on this idea, we submit-
ted the F statistics from the results of their map drawing
tasks to a Bayes factor analysis (Faulkenberry, 2019), and
we found that BF01 = 16.7 (F(2, 69) = 1.43; Waller et al.,
2004, p. 161) and BF01 > 30 (F(2, 81) < 1; Waller &
Greenauer, 2007, p. 329), indicating that the observed data
are approximately 16 and more than 30 times more likely
under the null hypothesis than the alternative hypothesis
for these two studies, respectively. Therefore, we agree
with Steel et al. (2020) that it will be important for future
studies to determine the conditions under which body-
based cues contribute to human spatial memory, and we
think that tasks that vary along this continuum will be an
important topic for such studies. Our study nonetheless
provides an important boundary condition for under-
standing when body-based cues might not play a funda-
mental role, that is, for the retrieval of abstract, holistic
spatial knowledge.
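For transparency about how these Bayes factors were obtained, the conversion follows the BIC-based approximation described by Faulkenberry (2019), which requires only the F statistic, its degrees of freedom, and the total sample size N. To our understanding, it takes the form below; the N = 72 plugged in for Waller et al. (2004) is inferred from the reported degrees of freedom and is therefore our assumption rather than a value reported by those authors.

$$\mathrm{BF}_{01} \approx \sqrt{N^{\,df_1}\left(1 + \frac{F \cdot df_1}{df_2}\right)^{-N}}, \qquad \text{e.g., } \sqrt{72^{2}\left(1 + \frac{1.43 \cdot 2}{69}\right)^{-72}} \approx 16.7.$$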
THE MODALITY-DEPENDENT HYPOTHESIS VS.
THE MODALITY-INDEPENDENT HYPOTHESIS
In this section, we will address Steel et al.’s (2020) fourth
criticism, which argued that our distinction between the
modality-independent hypothesis and the modality-
dependent hypothesis was based on a false dichotomy.
In our paper, we outlined two theoretical constructs, both
of which have support within the field of spatial navigation.
The first is the modality-independent hypothesis, which
argues that human spatial representations do not depend on the manner in which they were encoded and that, at least in some instances, they distill to the same modality-independent spatial representation (Giudice, Betty, & Loomis, 2011; Wolbers, Klatzky,
Loomis, Wutte, & Giudice, 2011; Avraamides et al., 2004;
Loomis, Lippa, Klatzky, & Golledge, 2002; Bryant, 1997;
Taylor & Tversky, 1992). The second is what we termed
the modality-dependent hypothesis, which argues that
the encoding modality will ultimately affect the manner
in which participants encode and retrieve spatial represen-
tations (Taube, Valerio, & Yoder, 2013). As stated in Taube
et al. (2013), navigating in desktop-based virtual environ-
ments involves “conditions [in which] idiothetic cues and
the path integration system would not be activated,” which
they go on to say: “Only activate[s] a portion of the neural
network that is engaged during more naturalistic condi-
tions that involve active movement.” This is consistent
with similar arguments about the cognitive map pointing
out the fundamental importance of path integration
to how rodents represent space (Moser & Moser, 2008;
McNaughton, Battaglia, Jensen, Moser, & Moser, 2006;
O’Keefe & Nadel, 1978).
We appreciate Steel et al.’s (2020) consideration about
the viability of these competing hypotheses; however, the
question about whether we proposed a false dichotomy
comes down to a number of important questions about
the proposed nature of spatial representations. First, do
body-based cues play a fundamental role in the representa-
tion of spatial information? That is, is it the case that, without such cues, representations (and behavior) will not be stable, no matter how much experience an animal has with an environment?
Alternatively, do body-based cues contribute to spatial rep-
resentations by augmenting the representations or enhanc-
ing the rate of spatial learning? In this case, we would expect
that animals might take longer to learn an environment in
the absence of body-based cues; however, once the environ-
ment is well learned, we might expect the representations
(e.g., the “cognitive map”) to look similar. Second, which
sensory modalities are most relevant to the formation of
spatial representations, and how do these differ between
species? For example, it is possible that vision plays a more
predominant role in human cognition (e.g., Ekstrom, 2015;
Posner, Nissen, & Klein, 1976) and that rodents rely on
body-based cues (and other cues such as olfaction and
sensory input from their whiskers) to a greater extent.
Third, do humans and other animals have holistic, abstract,
and centralized spatial representations? Previous research
has suggested that this certainly need not be true. For
example, a neural network model of navigation in insects
(e.g., the desert ant, honeybees) suggests that these animals
do not have an integrated, coherent “cognitive map”
but instead have separate dedicated systems for different
modalities (e.g., path integration vs. landmark-based
memory; Cruse & Wehner, 2011; also see Collett, Chittka,
& Collett, 2013).
In addition to the evidence reviewed above, other VR
paradigms provide further support for the notion that
vision might be the predominant cue that humans use
during navigation. Briefly, a VR technique called redirected
walking is used to allow participants to visually navigate in
large-scale virtual environments while they physically walk
within smaller-scale spaces (for a review, see Nilsson et al.,
2018). These techniques vary in how they are imple-
mented, but the key idea is that small, subthreshold visual
rotations are induced and these cause participants to turn
their bodies as they walk; that is, this causes them to slowly
turn as they walk although they are often not aware of these
rotations. Interestingly, Hodgson, Bachmann, and Waller
(2008) found that participants performed similarly on the
JRD task for environments that they learned under natural
walking conditions and under redirected walking (i.e.,
when visual cues and body-based cues are in competition
with each other). To elaborate on this idea, we submitted
the F statistic from Hodgson et al. (2008, p. 19; within-
participant manipulation, two conditions with 49 partici-
pants: F(1, 47) = 0.18, p = .68) to a Bayes factor analysis
(Faulkenberry, 2019), and we found that BF01 = 6.32, indi-
cating that the observed data are approximately six times
more likely under the null hypothesis than the alternative
hypothesis. This provides further evidence that holistic,
abstract spatial representations are not strongly affected
by body-based cues.
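As a quick numerical check, the sketch below applies the same BIC-based approximation (Faulkenberry, 2019) to the three F statistics cited in this reply. The sample sizes passed to the function are inferred from the reported degrees of freedom and designs, so they should be read as our assumption rather than the exact inputs; the results nonetheless land close to the values reported above.

```python
import numpy as np

def bf01_from_f(f, df1, df2, n):
    """BIC-based approximation of the Bayes factor favoring the null,
    computed from an ANOVA F statistic (see Faulkenberry, 2019)."""
    return np.sqrt(n ** df1 * (1.0 + f * df1 / df2) ** (-n))

# Reported F statistics; sample sizes inferred from the degrees of freedom.
print(bf01_from_f(1.43, 2, 69, 72))   # Waller et al. (2004): ~16.7
print(bf01_from_f(1.00, 2, 81, 84))   # Waller & Greenauer (2007), F < 1: ~30
print(bf01_from_f(0.18, 1, 47, 48))   # Hodgson et al. (2008): ~6.3
```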
On the basis of the evidence above, we argue that we did
not propose a false dichotomy between the modality-
dependent hypothesis and the modality-independent
hypothesis (Huffman & Ekstrom, 2019a). We agree that
future research could elucidate whether different species
tend to exhibit more evidence for one of these hypotheses,
with our recent work providing some support for the
modality-independent hypothesis for large-scale spatial
memory in humans (Huffman & Ekstrom, 2019a). We do
not think, however, that there is enough compelling evidence
to outright reject the modality-dependent hypothesis in all
cases; therefore, we disagree that we proposed a false
dichotomy. Instead, we suggest that our results provide
an important boundary condition for when body-based
cues might not be expected to contribute to human spatial
memory.
THE IMPORTANCE OF SCALES OF SPACE
In addition to points addressed above, we would like to re-
iterate from our previous discussion (Huffman & Ekstrom,
2019a) that we think that it is important to consider the
scale of the environments that are used in the study of
the role of body-based cues. For example, it is possible that
body-based cues might be more important in smaller envi-
ronments. Previous research has suggested that staying oriented within an environment using body-based cues alone is typically unreliable, with accuracy diminishing rapidly because of error accumulation in the path integration system, even over
relatively short distances (e.g., Kim, Sapiurka, Clark, &
Squire, 2013; for similar arguments, see Eichenbaum &
Cohen, 2014). For example, blindfolded participants have
been shown to walk in circles within a relatively short diam-
eter (e.g., 20 m), thus suggesting that navigating large-scale
spaces with body-based cues alone is insufficient for accu-
rate wayfinding (Souman, Frissen, Sreenivasa, & Ernst,
2009). Therefore, we argue that it will be important for
future studies to investigate whether there is a difference
in the relative contribution of body-based cues at different
scales of space (see Figure 1B). Specifically, we predict that
body-based cues play a stronger role in smaller scale envi-
ronments (see also Warren et al., 2017; Chrastil & Warren,
2014; Foo, Warren, Duchon, & Tarr, 2005; Loomis et al.,
1993). This distinction is important, especially because
most neuroscience studies in rodents have taken place in
small-scale “vista” spaces in which the entire environment
is immediately visible to the navigator (e.g., a 1 m × 1 m
open arena during a random foraging task).
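To illustrate why error accumulation could make body-based cues less useful at larger spatial scales, the toy simulation below adds a small amount of heading noise to each step of an intended straight outbound walk and reports how far the resulting endpoint drifts from the intended endpoint as the walked distance grows. The step size and noise level are arbitrary illustrative values, not estimates from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_endpoint_error(path_length_m, step_m=0.5, heading_sd_deg=2.0, n_sims=1000):
    """Toy model of error accumulation: a navigator intends to walk straight,
    but each step adds a little heading noise. Returns the mean distance
    between the actual and intended endpoints (illustrative parameters)."""
    n_steps = int(path_length_m / step_m)
    noise = np.deg2rad(rng.normal(0.0, heading_sd_deg, size=(n_sims, n_steps)))
    headings = np.cumsum(noise, axis=1)          # heading drift accumulates
    x = step_m * np.cos(headings).sum(axis=1)    # actual endpoint (x)
    y = step_m * np.sin(headings).sum(axis=1)    # actual endpoint (y)
    return np.hypot(x - path_length_m, y).mean()

for d in (10, 100, 1000):  # vista-scale versus city-scale distances (meters)
    print(d, "m ->", round(mean_endpoint_error(d), 1), "m mean error")
```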
The idea of scales of space is actively considered in human
spatial cognition (e.g., Wolbers & Wiener, 2014; Meilinger,
2008; Montello, 1993) and in ecological studies of animals
such as rats (e.g., rats live in large, underground tunnels:
Calhoun, 1963; rats move an average of over 675 m per
night when searching for a new home: Russell, McMorland,
& MacKay, 2010), bats (e.g., up to 100 km: Harten, Katz,
Goldshtein, Handel, & Yovel, 2020; Toledo et al., 2020;
Tsoar et al., 2011), ants (e.g., Wehner, 2020; Wittlinger
et al., 2006), and honeybees (e.g., von Frisch, 1954). For
example, one important consideration about the scales of
space is whether we and other animals form globally
coherent spatial representations of large-scale spaces (e.g.,
Wolbers & Wiener, 2014; Meilinger, 2008; Hirtle & Jonides,
1985). Moreover, boundaries are thought to play a funda-
mental role in spatial and episodic memory (Sargent, Zacks,
Hambrick, & Lin, 2019; Brunec, Moscovitch, & Barense, 2018;
Horner, Bisby, Wang, Bogus, & Burgess, 2016; McNamara,
1986, 1991). Thus, understanding how we remember and
navigate in large-scale spaces, which are commonly separated
by several boundaries, is of fundamental importance.
Moreover, previous research in humans has suggested that,
although there is shared variance between small-scale spatial
abilities and large-scale navigation, there is also unique
variance (e.g., Hegarty, Montello, Richardson, Ishikawa, &
Lovelace, 2006). Thus, studying the similarities and differ-
ences between small- and large-scale navigation will provide
a more complete understanding of spatial cognition in
humans and nonhuman animals. Our understanding, how-
ever, of the neuroscience of spatial navigation over large-
scale spaces is currently limited (Peer, Ron, Monsa, &
Arzy, 2019; Geva-Sagiv, Las, Yovel, & Ulanovsky, 2015).
The issue of scales of space is also directly relevant to the
study of spatial coding in the rodent brain. Rats explore
large-scale spaces in the wild, including large underground
tunnels. For example, rats have been shown to explore very
long distances from their home environment (e.g., Russell
et al., 2010; Calhoun, 1963). Recent computational model-
ing research suggested that the path integration codes
afforded by grid cells would become severely disrupted
over large-scale, multicompartment environments, such as
rodents would encounter in their natural habitat (Stella,
Urdapilleta, Luo, & Treves, 2020). Thus, path integration
codes from grid cells could only enable accurate path inte-
gration over short distances. Although these predictions
remain to be tested in electrophysiological studies, these
findings again point to the importance of considering the
scale and complexity of the environment. Thus, we agree
with the conclusion that findings from small-scale, vista
spaces with regularly shaped, flat environments might not
scale to real-world environments with naturalistic behavioral
demands (Stella et al., 2020).
WHAT IS THE METRIC OF THE
“COGNITIVE MAP”?
We would also like to push back against the notion raised by
Steel et al. (2020) that the concept of the cognitive map is
universally accepted. In fact, behavioral experiments have
led to debates about the nature of spatial representations
in humans (e.g., Chrastil & Warren, 2014; Tversky, 1992,
1993; McNamara, 1991; Moar & Bower, 1983; for a compre-
hensive review, see Warren, 2019) and nonhuman animals
(e.g., Cheung et al., 2014; Collett et al., 2013; Cruse &
Wehner, 2011; Benhamou, 1996; Bennett, 1996). For exam-
ple, Warren et al. (2017) found that human spatial naviga-
tion performance is better accounted for by a labeled
graph than by Euclidean knowledge. Specifically, when
the very metric of the space was disrupted (via wormholes
that translated and rotated participants through the environ-
ment), participants failed to notice these inconsistencies in
the environment and readily incorporated them into their spatial
knowledge, thus suggesting that they formed non-Euclidean
representations of the environment. Importantly, these
experiments were conducted when participants had full
access to body-based cues while wearing a head-mounted
display. Moreover, previous research and theoretical views
have suggested that humans frequently rely on heuristics
rather than metric Euclidean representations of spatial
environments (e.g., the cognitive collage: Tversky, 1992,
1993; labeled graphs: Warren, 2019; Warren et al., 2017;
Chrastil & Warren, 2014).
Much of what we know about the neuroscience of navi-
gation has come from electrophysiological investigations of
the rodent brain (e.g., Hafting, Fyhn, Molden, Moser, &
Moser, 2005; Taube, Muller, & Ranck, 1990; O’Keefe &
Nadel, 1978; O’Keefe & Dostrovsky, 1971). These studies
have revealed an abundance of potential underlying mech-
anisms supporting navigation. For example, place cells,
head-direction cells, grid cells, and border cells (among
others) could potentially contribute to the structure of spa-
tial knowledge, that is, an underlying metric that allows
animals to navigate (e.g., Moser & Moser, 2008).
However, this leads to an important question: How do we
reconcile the seemingly disparate views of heuristic-based
behavior and the seemingly metric-like representations in
the brain (Figure 2)? In brief, we think the best approach
forward will be to place a strong emphasis on trying to
understand how the brain and behavior give rise to latent
cognitive states. Importantly, such investigations should
rely on the method of converging operations, in which
we seek to find similar results between multiple conditions
and approaches (e.g., McNamara, 1991).
Recent theoretical views have argued that cognitive neu-
roscientists often focus on the neuroscience at the expense
of understanding the cognition or the behavior of the organ-
ism (Poeppel & Adolfi, 2020; Krakauer, Ghazanfar, Gomez-
Marin, MacIver, & Poeppel, 2017). Many studies on the
neuroscience of navigation have focused on investigating
nonhuman animals under relatively constrained conditions
with little to no measures of behavior or performance. Such “reduced preparations” could leave a fundamental gap in our understanding of the processes that we really care about with respect to high-level cognitive tasks, namely, the mental processes of the animal. For example, Krakauer et al. (2017) argued that we should first conduct detailed analyses of behavior before seeking to understand the underlying neural implementation. They discuss the important difference between Marr’s (2010) algorithmic level (i.e., the computations supporting behavior; the software) and the implementational level (i.e., the neural processes supporting behavior; the hardware), and they argue that the algorithmic level can best be understood by designing clever behavioral experiments that are part of the natural repertoire of the organism in question. We argue that we should take a similar approach to understanding the connection between neural responses, latent cognitive representations, and spatially guided behavior, such as navigation. We further suggest that these three levels exist in a dynamic interplay in which each level can affect the other levels (see Figure 2). Because of length restrictions, we refer the interested reader to other papers in which we have discussed these issues in more detail (Ekstrom, Harootonian, & Huffman, 2020; Ekstrom, Huffman, & Starrett, 2017; for a more general discussion, see Poeppel & Adolfi, 2020; Krakauer et al., 2017).

Figure 2. It is important that future studies of spatial cognition consider the relationship between behavior, the brain, and latent cognitive states. Many neuroscience studies have focused on understanding relationships between the brain and the surrounding environment of the animal (e.g., place cells, head-direction cells, grid cells, border cells). Conversely, many studies have used behavioral measures to try to understand the latent underlying cognitive representations supporting human spatial memory. These studies have provided evidence that spatial knowledge is typically non-Euclidean. In contrast, spatial cells in the rodent brain appear to be more Euclidean in nature. Thus, it will be important for future research to determine how these seemingly disparate findings fit together. We argue that a more comprehensive understanding of spatial cognition can be obtained by considering the relationships between these various measures. (Note: fMRI figure from Huffman & Ekstrom, 2019a.)
OPEN QUESTIONS
We would also like to raise several remaining questions:
1. Does the role of body-based cues differ as a function of
behavioral task? For example, do body-based cues play
a stronger role for the performance of tasks that empha-
size self-to-object representations versus tasks that
emphasize a holistic, abstract spatial representation
(see Figure 1B; cf. Waller & Greenauer, 2007; Waller
et al., 2004)?
2. How do behavioral demands alter spatial representa-
tions in humans and rodents? Does the role of body-
based cues in the rodent still differ under conditions
in which behavioral performance is matched (e.g.,
allowing one to rule out confounds of behavioral per-
formance or spatial attention)?
3. Does the role of body-based cues differ at different scales
of space (see Figure 1B)? Specifically, do body-based
cues play a stronger role on memory for small-scale spa-
tial environments (i.e., because of error accumulation in
the path integration system over longer distances)?
Relatedly, how do spatial representations differ across
spatial scales?
4. What is the relationship between patterns of brain
activity (e.g., place cells, head-direction cells, grid cells,
and border cells), behavioral expression, and latent
cognitive states (see Figure 2)?
5. The study of human navigation has revealed that there
are substantial individual differences in navigation ability
more generally (e.g., Hegarty et al., 2006; Ishikawa &
Montello, 2006; for similar findings in bats, see Harten
et al., 2020). Are there similar individual differences in
the use of body-based cues (e.g., professional 3-D video
game players vs. orienteers)?
6. What is the role of active decision-making versus the role
of body-based cues per se (cf. Chrastil & Warren, 2012)?
As we discussed, many rodent studies conflate these two
variables; thus, a better understanding of the role of
these processes can be obtained by separating the task
demands from the mode of locomotion.
7. We agree that it is unlikely that fMRI technology will
advance in the near future to allow the acquisition of
data while participants actively navigate. Thus, future
research can focus on studying the role of body-based
cues by using existing mobile technology such as
mobile EEG (e.g., Djebbara et al., 2019; Jungnickel
et al., 2019; Park & Donaldson, 2019; Park et al.,
2018), functional near-infrared spectroscopy, or intra-
cranial recordings in human patients (e.g., Topalovic
et al., 2020; Aghajan et al., 2017; Bohbot et al., 2017).
In the future, mobile PET and MEG (Boto et al., 2018)
might provide solutions. What kinds of codes can we
obtain with such methods? Will they allow us to under-
stand the brain areas involved in spatial cognition? One
potentially exciting approach could be to first find
evidence of neural differences using mobile EEG and
to then test those participants using fMRI (e.g., to de-
termine the brain regions that are involved in such
differences).
8. What is the relationship between laboratory-based ex-
periments in the rodent (e.g., random foraging within
a 1 m × 1 m open arena) and ecologically valid, large-
scale navigation under naturalistic behavioral demands
(cf. Wehner, 2020, Chapter 7; Jacobs & Menzel, 2014)?
CONCLUSION
In conclusion, we have many points of agreement with Steel
et al. (2020). In fact, we made many similar arguments in our
original paper (Huffman & Ekstrom, 2019a). We also raised
several points of disagreement here. Therefore, we clarified
points of potential misunderstanding, and we aimed to pro-
vide a constructive discussion of how future research with
humans and nonhuman animals can answer interesting
questions about the neuroscience of spatial cognition. Moreover, because humans and rodents are different species, in addition to replicating findings between species, we
should also design tasks that tap into the specific skills and
cues that different species use to navigate. Accordingly, we
think that the path forward is a wide set of behavioral and
neural assays under varying levels of body-based cues and
naturalistic designs. With such designs and experiments,
we can better delineate the boundary conditions under
which vision versus body-based cues are fundamental to
how we navigate.
Reprint requests should be sent to Derek J. Huffman, Department
of Psychology, Colby College, 5550 Mayflower Hill Drive,
Waterville, ME 04901, or via e-mail: derek.huffman@colby.edu.
Author Contributions
Derek J. Huffman: Conceptualization; Visualization; Writing-
original draft; Writing-review & editing. Arne D. Ekstrom:
Conceptualization; Writing-review & editing.
Funding Information
Arne D. Ekstrom, National Institute of Neurological Disorders
and Stroke (http://dx.doi.org/10.13039/100000065), grant
numbers: NIH/NINDS NS076856, NIH/NINDS NS120237.
Division of Behavioral and Cognitive Sciences (http://dx
.doi.org/10.13039/100000169), grant number: NSF BCS-
1630296.
REFERENCES
Aghajan, Z. M., Schuette, P., Fields, T. A., Tran, M. E., Siddiqui,
S. M., Hasulak, N. R., et al. (2017). Theta oscillations in the
human medial temporal lobe during real-world ambulatory
movement. Current Biology, 27, 3743–3751. DOI: https://
doi.org/10.1016/j.cub.2017.10.062, PMID: 29199073,
PMCID: PMC5937848
Aronov, D., & Tank, D. W. (2014). Engagement of neural
circuits underlying 2D spatial navigation in a rodent virtual
reality system. Neuron, 84, 442–456. DOI: https://doi.org
/10.1016/j.neuron.2014.08.042, PMID: 25374363, PMCID:
PMC4454359
Avraamides, M. N., Loomis, J. M., Klatzky, R. L., & Golledge, R. G.
(2004). Functional equivalence of spatial representations
derived from vision and language: Evidence from allocentric
judgments. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 30, 804–814. DOI: https://doi.org
/10.1037/0278-7393.30.4.804, PMID: 15238025
Benhamou, S. (1996). No evidence for cognitive mapping in
rats. Animal Behaviour, 52, 201–212. DOI: https://doi.org
/10.1006/anbe.1996.0165
Bennett, A. T. (1996). Do animals have cognitive maps? Journal
of Experimental Biology, 199, 219–224. PMID: 8576693
Bohbot, V. D., Copara, M. S., Gotman, J., & Ekstrom, A. D.
(2017). Low-frequency theta oscillations in the human
hippocampus during real-world and virtual navigation.
Nature Communications, 8, 14415. DOI: https://doi.org
/10.1038/ncomms14415, PMID: 28195129, PMCID:
PMC5316881
Boto, E., Holmes, N., Leggett, J., Roberts, G., Shah, V., Meyer, S. S.,
et al. (2018). Moving magnetoencephalography towards
real-world applications with a wearable system. Nature, 555,
657–661. DOI: https://doi.org/10.1038/nature26147, PMID:
29562238, PMCID: PMC6063354
Brunec, I. K., Moscovitch, M., & Barense, M. D. (2018).
Boundaries shape cognitive representations of spaces and
events. Trends in Cognitive Sciences, 22, 637–650. DOI:
https://doi.org/10.1016/j.tics.2018.03.013, PMID: 29706557
Bryant, D. J. (1997). Representing space in language and
perception. Mind & Language, 12, 239–264. DOI: https://doi
.org/10.1111/1468-0017.00047
Calhoun, J. B. (1963). The ecology and sociology of the Norway
rat. Bethesda, MD: U.S. Department of Health, Education,
and Welfare, Public Health Service. DOI: https://doi.org
/10.5962/bhl.title.112283
Chance, S. S., Gaunet, F., Beall, A. C., & Loomis, J. M. (1998).
Locomotion mode affects the updating of objects encountered
during travel: The contribution of vestibular and proprioceptive
inputs to path integration. Presence: Teleoperators and Virtual
Environments, 7, 168–178. DOI: https://doi.org/10.1162
/105474698565659
Cheung, A., Collett, M., Collett, T. S., Dewar, A., Dyer, F., Graham,
P., et al. (2014). Still no convincing evidence for cognitive
map use by honeybees. Proceedings of the National Academy
of Sciences, U.S.A., 111, E4396–E4397. DOI: https://doi.org
/10.1073/pnas.1413581111, PMID: 25277972, PMCID:
PMC4210289
Chrastil, E. R., & Warren, W. H. (2012). Active and passive
contributions to spatial learning. Psychonomic Bulletin &
Review, 19, 1–23. DOI: https://doi.org/10.3758/s13423-011
-0182-x, PMID: 22083627
Chrastil, E. R., & Warren, W. H. (2013). Active and passive
spatial learning in human navigation: Acquisition of survey
knowledge. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 39, 1520–1537. DOI: https://doi
.org/10.1037/a0032382, PMID: 23565781
Chrastil, E. R., & Warren, W. H. (2014). From cognitive maps to
cognitive graphs. PLoS One, 9, e112544. DOI: https://doi.org
/10.1371/journal.pone.0112544, PMID: 25389769, PMCID:
PMC4229194
Chrastil, E. R., & Warren, W. H. (2015). Active and passive
spatial learning in human navigation: Acquisition of graph
knowledge. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 41, 1162–1178. DOI: https://doi
.org/10.1037/xlm0000082, PMID: 25419818
Collett, M., Chittka, L., & Collett, T. S. (2013). Spatial memory in
insect navigation. Current Biology, 23, R789–R800. DOI:
https://doi.org/10.1016/j.cub.2013.07.020, PMID: 24028962
Cruse, H., & Wehner, R. (2011). No need for a cognitive
map: Decentralized memory for insect navigation. PLoS
Computational Biology, 7, e1002009. DOI: https://doi.org
/10.1371/journal.pcbi.1002009, PMID: 21445233, PMCID:
PMC3060166
Dahmen, H., Wahl, V. L., Pfeffer, S. E., Mallot, H. A., &
Wittlinger, M. (2017). Naturalistic path integration of
Cataglyphis desert ants on an air-cushioned lightweight
spherical treadmill. Journal of Experimental Biology, 220,
634–644. DOI: https://doi.org/10.1242/jeb.148213, PMID:
28202651
Djebbara, Z., Fich, L. B., Petrini, L., & Gramann, K. (2019).
Sensorimotor brain dynamics reflect architectural affordances.
Proceedings of the National Academy of Sciences, U.S.A., 116,
14769–14778. DOI: https://doi.org/10.1073/pnas.1900648116,
PMID: 31189596, PMCID: PMC6642393
Eichenbaum, H., & Cohen, N. J. (2014). Can we reconcile the
declarative memory and spatial navigation views on hippocampal
function? Neuron, 83, 764–770. DOI: https://doi.org/10.1016
/j.neuron.2014.07.032, PMID: 25144874, PMCID: PMC4148642
Ekstrom, A. D. (2015). Why vision is important to how we navigate.
Hippocampus, 25, 731–735. DOI: https://doi.org/10.1002
/hipo.22449, PMID: 25800632, PMCID: PMC4449293
Ekstrom, A. D., Harootonian, S. K., & Huffman, D. J. (2020).
Grid coding, spatial representation, and navigation: Should
we assume an isomorphism? Hippocampus, 30, 422–432.
DOI: https://doi.org/10.1002/hipo.23175, PMID: 31742364,
PMCID: PMC7409510
Ekstrom, A. D., Huffman, D. J., & Starrett, M. (2017). Interacting
networks of brain regions underlie human spatial navigation:
A review and novel synthesis of the literature. Journal of
Neurophysiology, 118, 3328–3344. DOI: https://doi.org/10
.1152/jn.00531.2017, PMID: 28931613, PMCID: PMC5814720
Faulkenberry, T. J. (2019). Estimating evidential value from
analysis of variance summaries: A comment on Ly et al. (2018).
Advances in Methods and Practices in Psychological
Science, 2, 406–409. DOI: https://doi.org/10.1177
/2515245919872960
Foo, P., Warren, W. H., Duchon, A., & Tarr, M. J. (2005). Do
humans integrate routes into a cognitive map? Map- versus
landmark-based navigation of novel shortcuts. Journal of
Experimental Psychology: Learning, Memory, and Cognition,
31, 195–215. DOI: https://doi.org/10.1037/0278-7393.31.2.195,
PMID: 15755239
Foster, T. C., Castro, C. A., & McNaughton, B. L. (1989). Spatial
selectivity of rat hippocampal neurons: Dependence on
preparedness for movement. Science, 244, 1580–1582. DOI:
https://doi.org/10.1126/science.2740902, PMID: 2740902
Geva-Sagiv, M., Las, L., Yovel, Y., & Ulanovsky, N. (2015). Spatial
cognition in bats and rats: From sensory acquisition to
multiscale maps and navigation. Nature Reviews Neuroscience,
16, 94–108. DOI: https://doi.org/10.1038/nrn3888, PMID:
25601780
Giudice, N. A., Betty, M. R., & Loomis, J. M. (2011). Functional
equivalence of spatial images from touch and vision: Evidence
from spatial updating in blind and sighted individuals. Journal
of Experimental Psychology: Learning, Memory, and Cognition,
37, 621–634. DOI: https://doi.org/10.1037/a0022331, PMID:
21299331, PMCID: PMC5507195
Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., & Moser, E. I.
(2005). Microstructure of a spatial map in the entorhinal
cortex. Nature, 436, 801–806. DOI: https://doi.org/10.1038
/nature03721, PMID: 15965463
Harootonian, S. K., Wilson, R. C., Hejtmánek, L., Ziskin, E. M., &
Ekstrom, A. D. (2020). Path integration in large-scale space
and with novel geometries: Comparing vector addition and
encoding-error models. PLoS Computational Biology, 16,
e1007489. DOI: https://doi.org/10.1371/journal.
pcbi.1007489, PMID: 32379824, PMCID: PMC7244182
Harten, L., Katz, A., Goldshtein, A., Handel, M., & Yovel, Y.
(2020). The ontogeny of a mammalian cognitive map in the
real world. Science, 369, 194–197. DOI: https://doi.org/10
.1126/science.aay3354, PMID: 32647001
Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., &
Lovelace, K. (2006). Spatial abilities at different scales:
Individual differences in aptitude-test performance and
spatial-layout learning. Intelligence, 34, 151–176. DOI:
https://doi.org/10.1016/j.intell.2005.09.005
Hirtle, S. C., & Jonides, J. (1985). Evidence of hierarchies in
cognitive maps. Memory & Cognition, 13, 208–217. DOI:
https://doi.org/10.3758/BF03197683, PMID: 4046821
Hodgson, E., Bachmann, E., & Waller, D. (2011). Redirected
walking to explore virtual environments: Assessing the potential
for spatial interference. ACM Transactions on Applied
Perception, 8, 1–22. DOI: https://doi.org/10.1145/2043603
.2043604
Horner, A. J., Bisby, J. A., Wang, A., Bogus, K., & Burgess, N. (2016).
The role of spatial boundaries in shaping long-term event
representations. Cognition, 154, 151–164. DOI: https://doi.org
/10.1016/j.cognition.2016.05.013, PMID: 27295330, PMCID:
PMC4955252
Huffman, D. J., & Ekstrom, A. D. (2019a). A modality-independent
network underlies the retrieval of large-scale spatial environments
in the human brain. Neuron, 104, 611–622. DOI: https://doi
.org/10.1016/j.neuron.2019.08.012, PMID: 31540825, PMCID:
PMC6842116
Huffman, D. J., & Ekstrom, A. D. (2019b). Which way is the
bookstore? A closer look at the judgments of relative directions
task. Spatial Cognition & Computation, 19, 93–129. DOI:
https://doi.org/10.1080/13875868.2018.1531869, PMID:
31105466, PMCID: PMC6519130
Ishikawa, T., & Montello, D. R. (2006). Spatial knowledge
acquisition from direct experience in the environment:
Individual differences in the development of metric
knowledge and the integration of separately learned places.
Cognitive Psychology, 52, 93–129. DOI: https://doi.org
/10.1016/j.cogpsych.2005.08.003, PMID: 16375882
Jacobs, L. F., & Menzel, R. (2014). Navigation outside of the box:
What the lab can learn from the field and what the field can
learn from the lab. Movement Ecology, 2, 3. DOI: https://doi
.org/10.1186/2051-3933-2-3, PMID: 25520814, PMCID:
PMC4267593
Jungnickel, E., Gehrke, L., Klug, M., & Gramann, K. (2019).
MoBI—Mobile Brain/Body Imaging. In H. Ayaz & F. Dehais
(Eds.), Neuroergonomics: The brain at work and in
everyday life (pp. 59–63). Cambridge, MA: Academic Press.
DOI: https://doi.org/10.1016/B978-0-12-811926-6.00010-5
Kim, S., Sapiurka, M., Clark, R. E., & Squire, L. R. (2013).
Contrasting effects on path integration after hippocampal
damage in humans and rats. Proceedings of the National
Academy of Sciences, U.S.A., 110, 4732–4737. DOI: https://
doi.org/10.1073/pnas.1300869110, PMID: 23404706,
PMCID: PMC3606992
Klatzky, R. L., Loomis, J. M., Beall, A. C., Chance, S. S., &
Golledge, R. G. (1998). Spatial updating of self-position and
orientation during real, imagined, and virtual locomotion.
Psychological Science, 9, 293–298. DOI: https://doi.org
/10.1111/1467-9280.00058
Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A.,
& Poeppel, D. (2017). Neuroscience needs behavior: Correcting
a reductionist bias. Neuron, 93, 480–490. DOI: https://doi.org
/10.1016/j.neuron.2016.12.041, PMID: 28182904
Liang, M., Starrett, M. J., & Ekstrom, A. D. (2018). Dissociation
of frontal–midline delta–theta and posterior alpha oscillations:
A mobile EEG study. Psychophysiology, 55, e13090. DOI: https://
doi.org/10.1111/psyp.13090, PMID: 29682758
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G.,
Pellegrino, J. W., & Fry, P. A. (1993). Nonvisual navigation by
blind and sighted: Assessment of path integration ability.
Journal of Experimental Psychology: General, 122, 73–91.
DOI: https://doi.org/10.1037/0096-3445.122.1.73, PMID:
8440978
Loomis, J. M., Lippa, Y., Klatzky, R. L., & Golledge, R. G. (2002).
Spatial updating of locations specified by 3-D sound and
spatial language. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 28, 335–345. DOI:
https://doi.org/10.1037/0278-7393.28.2.335, PMID: 11911388
Marr, D. (2010). Vision: A computational investigation into the
human representation and processing of visual information.
Cambridge, MA: MIT Press. DOI: https://doi.org/10.7551
/mitpress/9780262514620.001.0001
McNamara, T. P. (1986). Mental representations of spatial relations.
Cognitive Psychology, 18, 87–121. DOI: https://doi.org/10.1016
/0010-0285(86)90016-2, PMID: 3948491
McNamara, T. P. (1991). Memory’s view of space. Psychology of
Learning and Motivation, 27, 147–186. DOI: https://doi.org
/10.1016/S0079-7421(08)60123-1
McNamara, T. P., Rump, B., & Werner, S. (2003). Egocentric and
geocentric frames of reference in memory of large-scale
space. Psychonomic Bulletin & Review, 10, 589–595. DOI:
https://doi.org/10.3758/BF03196519, PMID: 14620351
McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., &
Moser, M.-B. (2006). Path integration and the neural basis of the
“cognitive map.” Nature Reviews Neuroscience, 7, 663–678.
DOI: https://doi.org/10.1038/nrn1932, PMID: 16858394
Meilinger, T. (2008). The network of reference frames theory: A
synthesis of graphs and cognitive maps. In C. Freksa, N. S.
Newcombe, P. Gärdenfors, & S. Wölfl (Eds.), Spatial cognition
VI: Learning, reasoning, and talking about space (Vol. 5248,
pp. 344–360). Berlin, Heidelberg: Springer. DOI: https://doi
.org/10.1007/978-3-540-87601-4_25
Moar, I., & Bower, G. H. (1983). Inconsistency in spatial knowledge.
Memory & Cognition, 11, 107–113. DOI: https://doi.org/10.3758
/BF03213464, PMID: 6865743
Montello, D. R. (1993). Scale and multiple psychologies of space.
In A. U. Frank & I. Campari (Eds.), Spatial information theory:
A theoretical basis for GIS (Vol. 716, pp. 312–321). Berlin,
Heidelberg: Springer. DOI: https://doi.org/10.1007/3-540
-57207-4_21
Moser, E. I., & Moser, M.-B. (2008). A metric for space. Hippocampus,
18, 1142–1156. DOI: https://doi.org/10.1002/hipo.20483,
PMID: 19021254
Mou, W., McNamara, T. P., Valiquette, C. M., & Rump, B. (2004).
Allocentric and egocentric updating of spatial memories.
Journal of Experimental Psychology: Learning, Memory,
and Cognition, 30, 142–157. DOI: https://doi.org/10.1037
/0278-7393.30.1.142, PMID: 14736303
Nilsson, N. C., Peck, T., Bruder, G., Hodgson, E., Serafin, S.,
Whitton, M., et al. (2018). 15 years of research on redirected
walking in immersive virtual environments. IEEE Computer
Graphics and Applications, 38, 44–56. DOI: https://doi.org
/10.1109/MCG.2018.111125628, PMID: 29672255
O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a
spatial map. Preliminary evidence from unit activity in the
freely-moving rat. Brain Research, 34, 171–175. DOI: https://
doi.org/10.1016/0006-8993(71)90358-1, PMID: 5124915
O’Keefe, J., & Nadel, L. (1978). The hippocampus as a
cognitive map. Oxford: Clarendon Press.
Park, J. L., & Donaldson, D. I. (2019). Detecting the neural
correlates of episodic memory with mobile EEG: Recollecting
objects in the real world. Neuroimage, 193, 1–9. DOI:
https://doi.org/10.1016/j.neuroimage.2019.03.013, PMID:
30862534
Park, J. L., Dudchenko, P. A., & Donaldson, D. I. (2018).
Navigation in real-world environments: New opportunities
afforded by advances in mobile brain imaging. Frontiers in
Human Neuroscience, 12, 361. DOI: https://doi.org/10.3389
/fnhum.2018.00361, PMID: 30254578, PMCID: PMC6141718
Peer, M., Ron, Y., Monsa, R., & Arzy, S. (2019). Processing of
different spatial scales in the human brain. eLife, 8, e47492.
DOI: https://doi.org/10.7554/eLife.47492, PMID: 31502539,
PMCID: PMC6739872
Poeppel, D., & Adolfi, F. (2020). Against the epistemological primacy of the hardware: The brain from inside out, turned upside down. eNeuro, 7, ENEURO.0215-20.2020. DOI: https://doi.org/10.1523/ENEURO.0215-20.2020, PMID: 32769167, PMCID: PMC7415919
Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83, 157–171. DOI: https://doi.org/10.1037/0033-295X.83.2.157, PMID: 769017
Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 1157–1165. DOI: https://doi.org/10.1037/0278-7393.15.6.1157, PMID: 2530309
Robertson, C. E., Hermann, K. L., Mynick, A., Kravitz, D. J., & Kanwisher, N. (2016). Neural representations integrate the current field of view with the remembered 360° panorama in scene-selective cortex. Current Biology, 26, 2463–2468. DOI: https://doi.org/10.1016/j.cub.2016.07.002, PMID: 27618266
Ruddle, R. A., & Lessels, S. (2006). For efficient navigational search, humans require full physical movement, but not a rich visual scene. Psychological Science, 17, 460–465. DOI: https://doi.org/10.1111/j.1467-9280.2006.01728.x, PMID: 16771793
Ruddle, R. A., Volkova, E., Mohler, B., & Bülthoff, H. H. (2011). The effect of landmark and body-based sensory information on route knowledge. Memory & Cognition, 39, 686–699. DOI: https://doi.org/10.3758/s13421-010-0054-z, PMID: 21264583
Russell, J. C., McMorland, A. J. C., & MacKay, J. W. B. (2010). Exploratory behaviour of colonizing rats in novel environments. Animal Behaviour, 79, 159–164. DOI: https://doi.org/10.1016/j.anbehav.2009.10.020
Sargent, J. Q., Zacks, J. M., Hambrick, D. Z., & Lin, N. (2019). Event memory uniquely predicts memory for large-scale space. Memory & Cognition, 47, 212–228. DOI: https://doi.org/10.3758/s13421-018-0860-2, PMID: 30229479
Shelton, A. L., & McNamara, T. P. (1997). Multiple views of spatial memory. Psychonomic Bulletin & Review, 4, 102–106. DOI: https://doi.org/10.3758/BF03210780
Shelton, A. L., & McNamara, T. P. (2001). Visual memories from nonvisual experiences. Psychological Science, 12, 343–347. DOI: https://doi.org/10.1111/1467-9280.00363, PMID: 11476104
Shinder, M. E., & Taube, J. S. (2011). Active and passive movement are encoded equally by head direction cells in the anterodorsal thalamus. Journal of Neurophysiology, 106, 788–800. DOI: https://doi.org/10.1152/jn.01098.2010, PMID: 21613594, PMCID: PMC3154800
Shine, J. P., Valdés-Herrera, J. P., Hegarty, M., & Wolbers, T. (2016). The human retrosplenial cortex and thalamus code head direction in a global reference frame. Journal of Neuroscience, 36, 6371–6381. DOI: https://doi.org/10.1523/JNEUROSCI.1268-15.2016, PMID: 27307227, PMCID: PMC5321500
Souman, J. L., Frissen, I., Sreenivasa, M. N., & Ernst, M. O. (2009). Walking straight into circles. Current Biology, 19, 1538–1542. DOI: https://doi.org/10.1016/j.cub.2009.07.053, PMID: 19699093
Stackman, R. W., & Taube, J. S. (1997). Firing properties of head direction cells in the rat anterior thalamic nucleus: Dependence on vestibular input. Journal of Neuroscience, 17, 4349–4358. DOI: https://doi.org/10.1523/JNEUROSCI.17-11-04349.1997, PMID: 9151751, PMCID: PMC1489676
Steel, A., Robertson, C. E., & Taube, J. S. (2020). Current promises and limitations of combined virtual reality and functional magnetic resonance imaging research in humans: A commentary on Huffman and Ekstrom (2019). Journal of Cognitive Neuroscience, 1–8. DOI: https://doi.org/10.1162/jocn_a_01635, PMID: 33054553
Stella, F., Urdapilleta, E., Luo, Y., & Treves, A. (2020). Partial coherence and frustration in self-organizing spherical grids. Hippocampus, 30, 302–313. DOI: https://doi.org/10.1002/hipo.23144, PMID: 31339190
Taube, J. S., Muller, R. U., & Ranck, J. B., Jr. (1990). Head-
direction cells recorded from the postsubiculum in freely
moving rats. I. Description and quantitative analysis. Journal
of Neuroscience, 10, 420–435. DOI: https://doi.org/10.1523
/JNEUROSCI.10-02-00420.1990, PMID: 2303851, PMCID:
PMC6570151
Taube, J. S., Valerio, S., & Yoder, R. M. (2013). Is navigation
in virtual reality with fMRI really navigation? Journal of
Cognitive Neuroscience, 25, 1008–1019. DOI: https://doi
.org/10.1162/jocn_a_00386, PMID: 23489142
Taylor, H. A., & Tversky, B. (1992). Spatial mental models
derived from survey and route descriptions. Journal of
Memory and Language, 31, 261–292. DOI: https://doi.org
/10.1016/0749-596X(92)90014-O
Toledo, S., Shohami, D., Schiffner, I., Lourie, E., Orchan, Y.,
Bartan, Y., et al. (2020). Cognitive map-based navigation in
wild bats revealed by a new high-throughput tracking system.
Science, 369, 188–193. DOI: https://doi.org/10.1126/science
.aax6904, PMID: 32647000
Topalovic, U., Aghajan, Z. M., Villaroman, D., Hiller, S., Christov-
Moore, L., Wishard, T. J., et al. (2020). Wireless programmable
recording and stimulation of deep brain activity in freely moving
humans. Neuron, 108, 322–334. DOI: https://doi.org/10.1016
/j.neuron.2020.08.021, PMID: 32946744
Tsoar, A., Nathan, R., Bartan, Y., Vyssotski, A., Dell’Omo, G., &
Ulanovsky, N. (2011). Large-scale navigational map in a mammal.
Proceedings of the National Academy of Sciences, U.S.A., 108,
E718–E724. DOI: https://doi.org/10.1073/pnas.1107365108,
PMID: 21844350, PMCID: PMC3174628
Tversky, B. (1992). Distortions in cognitive maps. Geoforum, 23,
131–138. DOI: https://doi.org/10.1016/0016-7185(92)90011-R
Tversky, B. (1993). Cognitive maps, cognitive collages, and
spatial mental models. In A. U. Frank & I. Campari (Eds.),
Spatial information theory: A theoretical basis for GIS
(Vol. 716, pp. 14–24). Berlin, Heidelberg: Springer. DOI:
https://doi.org/10.1007/3-540-57207-4_2
Vass, L. K., & Epstein, R. A. (2017). Common neural representations
for visually guided reorientation and spatial imagery. Cerebral
Cortex, 27, 1457–1471. DOI: https://doi.org/10.1093/cercor
/bhv343, PMID: 26759482, PMCID: PMC5964444
von Frisch, K. (1954). The dancing bees: An account of the life
and senses of the honey bee. Vienna: Springer. DOI: https://
doi.org/10.1007/978-3-7091-4697-2
Waller, D., & Greenauer, N. (2007). The role of body-based
sensory information in the acquisition of enduring spatial
representations. Psychological Research, 71, 322–332.
DOI: https://doi.org/10.1007/s00426-006-0087-x, PMID:
16953434
Waller, D., & Hodgson, E. (2006). Transient and enduring
spatial representations under disorientation and self-rotation.
Journal of Experimental Psychology: Learning, Memory,
and Cognition, 32, 867–882. DOI: https://doi.org/10.1037
/0278-7393.32.4.867, PMID: 16822154, PMCID: PMC1501085
Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based
senses enhance knowledge of directions in large-scale
environments. Psychonomic Bulletin & Review, 11, 157–163.
DOI: https://doi.org/10.3758/BF03206476, PMID: 15117002
Waller, D., Loomis, J. M., & Steck, S. D. (2003). Inertial cues do not
enhance knowledge of environmental layout. Psychonomic
Bulletin & Review, 10, 987–993. DOI: https://doi.org/10.3758
/BF03196563, PMID: 15000550
Warren, W. H. (2019). Non-Euclidean navigation. Journal of
Experimental Biology, 222(Suppl. 1), jeb187971. DOI: https://
doi.org/10.1242/jeb.187971, PMID: 30728233
Warren, W. H., Rothman, D. B., Schnapp, B. H., & Ericson, J. D.
(2017). Wormholes in virtual space: From cognitive maps to
cognitive graphs. Cognition, 166, 152–163. DOI: https://doi
.org/10.1016/j.cognition.2017.05.020, PMID: 28577445
Wehner, R. (2020). Desert navigator: The journey of the ant.
London: Harvard University Press. DOI: https://doi.org/10
.4159/9780674247918
Winter, S. S., Clark, B. J., & Taube, J. S. (2015). Disruption of the
head direction cell network impairs the parahippocampal grid
cell signal. Science, 347, 870–874. DOI: https://doi.org/10.1126
/science.1259591, PMID: 25700518, PMCID: PMC4476794
Winter, S. S., Mehlman, M. L., Clark, B. J., & Taube, J. S. (2015).
Passive transport disrupts grid signals in the parahippocampal
cortex. Current Biology, 25, 2493–2502. DOI: https://doi.org
/10.1016/j.cub.2015.08.034, PMID: 26387719, PMCID:
PMC4596791
Wittlinger, M., Wehner, R., & Wolf, H. (2006). The ant odometer:
Stepping on stilts and stumps. Science, 312, 1965–1967. DOI:
https://doi.org/10.1126/science.1126912, PMID: 16809544
Wolbers, T., Klatzky, R. L., Loomis, J. M., Wutte, M. G., &
Giudice, N. A. (2011). Modality-independent coding of spatial
layout in the human brain. Current Biology, 21, 984–989.
DOI: https://doi.org/10.1016/j.cub.2011.04.038, PMID:
21620708, PMCID: PMC3119034
Wolbers, T., & Wiener, J. M. (2014). Challenges for identifying
the neural mechanisms that support spatial navigation: The
impact of spatial scale. Frontiers in Human Neuroscience, 8,
571. DOI: https://doi.org/10.3389/fnhum.2014.00571, PMID:
25140139, PMCID: PMC4121531