Viewing Oneʼs Own Face Being Touched Modulates
Tactile Perception: An fMRI Study
Flavia Cardini1, Marcello Costantini2, Gaspare Galati3,4,
Gian Luca Romani2, Elisabetta Làdavas1,
and Andrea Serino1
Abstract
■ The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect, termed visual remapping of touch (VRT), is maximal when observing one's own face. In the present fMRI study, we investigated the neural basis of the VRT effect. Participants in the scanner received tactile stimuli, near the perceptual threshold, on their right, left, or both cheeks. Concurrently, they watched movies depicting their own face, another person's face, or a ball that could be touched or only approached by human fingers. Participants were requested to distinguish between unilateral and bilateral tactile stimulation. Behaviorally, perception of tactile stimuli was modulated by viewing a tactile stimulation, with a stronger effect when viewing one's own face being touched. In terms of brain activity, viewing touch was related to enhanced activity in the ventral intraparietal area. The specific effect of viewing touch on oneself was instead related to reduced activity in both the ventral premotor cortex and the somatosensory cortex. The present findings suggest that VRT is supported by a network of fronto-parietal areas. The ventral intraparietal area might remap visual information about touch onto tactile processing. Ventral premotor cortex might specifically modulate multisensory interaction when sensory information is related to one's own body. This activity might then back-project to the somatosensory cortices, thus affecting tactile perception. ■
INTRODUCTION
Viewing another person or even an object being touched
activates brain regions normally recruited during tactile per-
ception, even if the observerʼs body is not directly tactilely
stimulated. Such visually evoked somatosensory activity
involves a network of fronto-parietal areas distributed along
the postcentral gyrus, the supramarginal gyrus, and the
precentral gyrus (premotor cortex) (Ebisch et al., 2008;
Blakemore, Bristow, Bird, Frith, & Ward, 2005; Keysers et al., 2004). This overlap of brain activity for perceiving and viewing touch has been taken as evidence for the
existence of a “tactile mirror system,” a neural mechanism
remapping tactile sensation seen on the body of others
onto oneʼs own somatosensory system.
This visually dependent somatosensory activity does not
normally result in an actual tactile percept, as most subjects
do not report feeling touch when observing touch on the
body of others. Visuotactile synesthetes represent an inter-
esting exception, in that they report feeling touch on their
body when they view the body of others being touched
(Banissy & Ward, 2007). A neuroimaging study run on a
single synesthetic subject showed that the brain activity
1University of Bologna, Cesena, Italy, 2University "G. d'Annunzio," Chieti, Italy, 3La Sapienza University, Rome, Italy, 4Santa Lucia Foundation, Rome, Italy
evoked by the observation of touch in the aforementioned
fronto-parietal areas was stronger in this subject than that
in nonsynesthetic controls (Blakemore et al., 2005). These findings suggest that a modulation of tactile processing due to the vision of touch occurs in all subjects, but only in synesthetes is this effect sufficient to overcome the threshold of conscious experience. In line with this view,
we have recently shown that if perceptual thresholds are
experimentally manipulated, an effect of viewing touch
on tactile perception can be behaviorally unmasked also
in nonsynesthetes (Serino, Pizzoferrato, & Ladavas, 2008).
The perception of near-threshold tactile stimuli on the face
of nonsynesthetic subjects was modulated if they observed
a face being touched by two fingers in comparison with
when they observed the same face being just approached
by the fingers. This effect, called visual remapping of touch
(VRT), was specific for viewing a bodily stimulus because
the effect of vision on touch disappeared if the subjects ob-
served the picture of an object instead of a face. Moreover, the effect of vision on touch was maximal when subjects observed their own face being touched instead of the face of another person, suggesting that the VRT effect increases the more the observer's and the observed body match. To remap a sensation from one sensory modality to another—namely, from vision to touch—the remapping could be favored if the two modalities share a common reference system, in this case the same body. As a consequence,
visual information about the self may modulate the sense
of touch.
This experimental finding raises an intriguing question. On the one hand, multisensory integration has typically been studied at low levels of sensory processing. On the other hand, the study of self-representation usually concerns high levels of information processing. In the case of the results of Serino et al. (2008), high-order visual information concerning the representation of oneself, as distinct from others, modulates the perception of tactile stimuli. How does this effect occur? What are the neural underpinnings of such a complex form of multisensory interaction?
When viewing a face, high-order visual areas in the extra-
striate cortex, connected to portions of the middle and in-
ferior frontal gyrus (Platek, Wathne, Tierney, & Thomson,
2008), signal whether that face belongs to oneself or to
another individual. In the case of viewing oneʼs own face,
this complex visual judgment might activate different re-
presentations of the self. Cognitive neuroscience litera-
ture (Stamenov, 2005) distinguishes at least two levels of
representations of the self: a semantic, conceptual repre-
sentation, the narrative self (DʼArgembeau et al., 2007;
Buckner & Carroll, 2006), and a sensory motor represen-
tation of oneʼs own body, the embodied self (Blanke &
Metzinger, 2009; Tsakiris, Hesse, Boy, Haggard, & Fink,
2007; Ehrsson, Holmes, & Passingham, 2005). A pool of brain structures in the ventromedial pFC is thought to
support the representation of the narrative self because
those areas are engaged during a number of tasks re-
quiring the processing of self-knowledge, self-referencing
(DʼArgembeau et al., 2007; Heatherton et al., 2006; Northoff
& Bermpohl, 2004), mentalizing, or judgments about one-
self relative to other people in general (Jenkins, Macrae, & Mitchell, 2008; Mitchell, Macrae, & Banaji, 2006). On
the other hand, a network of fronto-parietal areas is sup-
posed to underlie the representation of the embodied
self because those areas are involved in integrating multi-
sensory information pertaining to one's own body and are engaged when people experience a sense of ownership of a body-like stimulus, such as in the so-called rubber hand illusion (RHI; Tsakiris et al., 2007; Ehrsson et al., 2005; Botvinick & Cohen, 1998). In the present study,
we asked which kind of self-representation could mod-
ulate tactile perception and how such high-level repre-
sentation could directly influence low-level perceptual
processing.
To answer these questions, in the present work we
adapted the paradigm from Serino et al. (2008) for fMRI
scanning. Subjects received an electrical stimulation on
their right, left, or both cheeks and were requested to discriminate between unilateral and bilateral stimulation. To manipulate perceptual thresholds, the stimulus on the left cheek was stronger than that on the right cheek. In this way, under bilateral stimulation, the stronger stimulus would frequently extinguish the weaker one (Serino, Giovagnoli, & Ladavas, 2009; Serino et al., 2008). During the task, subjects watched a movie showing, in different trials, the image of their own face, of another person's face, or of a nonbody stimulus, namely, a ball. The image could be touched or just approached bilaterally by two human fingers (one on its left and one on its right side) in different trials. Subjects were instructed to respond only on the basis of tactile stimulation and not of visual stimulation. We studied neural activity evoked in different brain
areas as a function of the different experimental conditions
and in relationship to subjectsʼ perceptual reports.
The first question was whether the modulation of VRT
due to viewing oneʼs own face relies on the activation of
a conceptual or of a physical representation of self. If the
narrative self is responsible for the effect, a specific modu-
lation of brain activity in ventromedial prefrontal areas
should be found when subjects view their own face being
touched in comparison with viewing another personʼs face
or an object. In contrast, if the embodied self is the ori-
gin of the effect, such modulation of brain activity should
be found in fronto-parietal multisensory areas and not in
ventromedial frontal areas.
Second, once either representation of the self is acti-
vated, we asked how such representation could affect the
perception of touch. A possible explanation is that visual
information about the self modulates tactile processing
because the activity in high-order self-related areas projects
to somatosensory cortices, where the tactile stimulus is
procesado. If this is the case, the same modulation of neu-
ral activity for the different experimental conditions found
in the brain network underlying the self representation
should be found also in somatosensory cortices within
the parietal lobe.
METHODS
Participants
Fifteen healthy young adults (10 women) were included
in the present study (mean age = 23.6 years, range = 19–30 years). All participants were right-handed, had normal or corrected-to-normal vision, had normal touch, and were naive as to the purposes of the experiment. Participants gave their written informed consent to participate in the study and were paid (€25) for their participation.
The study was approved by the ethics committee of the
“G. dʼAnnunzio” University, Chieti, and was conducted
in accordance with the ethical standards of the 1964
Declaration of Helsinki.
fMRI Data Acquisition
All images were collected with a 1.5-T Philips Achieva
scanner operating at the Institute of Advanced Biomedical
Technologies (I.T.A.B. Fondazione G. dʼAnnunzio, Chieti,
Italy). T1-weighted anatomical images were collected using
a multiplanar rapid acquisition gradient-echo sequence
(230 sagittal slices, voxel size = 0.5 × 0.5 × 0.8 mm, repetition time = 8.08 msec, echo time = 3.7 msec). Functional images were collected with a gradient-echo EPI sequence. Each subject underwent four acquisition runs, each including 198 consecutive volumes comprising 25 consecutive 4-mm-thick slices oriented parallel to the anterior–posterior commissure and covering the whole brain (repetition time = 2.3 sec, echo time = 60 msec, 64 × 64 image matrix, 4 × 4-mm in-plane resolution).
Stimuli and Conditions
The experimental stimuli consisted of both tactile and vi-
sual stimuli.
Tactile stimuli were delivered via a pair of miniaturized
screen electrodes placed on the subjectsʼ cheeks (stimulus
duration = 5 msec). In different trials, a tactile stimulus was
administered to the right, izquierda, or both cheeks. The tactile
stimulus on the left cheek was calibrated to be more in-
tense than that on the right cheek. Before the experiment,
while the subject was lying in the fMRI scanner, el en-
tensity of the electrical stimuli was titrated for each sub-
ject in the absence of visual information. Using a staircase
procedimiento, stimulus intensity was titrated at a threshold of
100% of detection for the stronger stimulus (mean thresh-
old = 20 ± 3 mA) y de 60% for the weaker stimulus
(mean threshold = 13 ± 4 mA). Thresholds were recali-
brated before each experimental block.
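The staircase parameters themselves (starting intensity, step size, stopping rule) are not reported. The sketch below only illustrates the general logic of such a titration with a simple one-up/one-down rule, which converges near 50% detection; the 60% and 100% criteria used in the study would require adapted rules, and all names and values here are illustrative assumptions.

```python
import random

def staircase_threshold(detects, start_ma=15.0, step_ma=0.5, n_trials=40):
    """Illustrative one-up/one-down staircase: intensity is lowered after a
    detection and raised after a miss, so it converges near the detection
    threshold. `detects(intensity)` must return True when the subject
    reports feeling the stimulus. Parameters are illustrative, not the
    values used in the study."""
    intensity = start_ma
    reversals = []
    last_seen = None
    for _ in range(n_trials):
        seen = detects(intensity)
        if last_seen is not None and seen != last_seen:
            reversals.append(intensity)        # record reversal points
        last_seen = seen
        intensity += -step_ma if seen else step_ma
        intensity = max(intensity, 0.0)
    tail = reversals[-6:] or [intensity]       # fall back if no reversals occurred
    return sum(tail) / len(tail)               # mean of the last reversals

# toy usage: simulate a subject whose detection threshold is about 13 mA
if __name__ == "__main__":
    import math
    true_thr = 13.0
    subject = lambda i: random.random() < 1.0 / (1.0 + math.exp(-(i - true_thr)))
    print(round(staircase_threshold(subject), 1))
```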
Visual stimuli consisted of three sets of grayscale movies, one depicting the subject's own face (self), the second depicting the face of another person (of the same age and sex as the subject; other), and the third depicting a ball (object). A ball has a perceptual configuration similar to a face but is categorized as a nonbodily stimulus.
The movie also showed two fingers initially positioned
on the lower part of the screen, one on the right and
one on the left. During the movie, both fingers moved
toward the centrally presented image and then backward
to their starting position. In different trials, the motion
followed one of two trajectories: in the touch condition,
the fingers actually touched the central image, and in the
no-touch condition, the fingers stopped about 5 cm away
from the image.
Visual and tactile stimuli were synchronized so that when
the fingers reached the image, a tactile input (a bilateral
or a unilateral tactile stimulation) was delivered to the sub-
ject's face. Each movie lasted 1000 msec in total, and tactile stimulation was delivered at ∼500 msec from the beginning of the movie. Each movie was preceded by a fixation stimulus lasting a variable, unpredictable interval of either 2000, 2500, or 3000 msec (see Figure 1).
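For concreteness, the sketch below lays out one run's trial timing as described: a jittered 2000-, 2500-, or 3000-msec fixation drawn with equal probability, a 1000-msec movie, and the tactile pulse roughly 500 msec into the movie. The function and variable names are illustrative, not taken from the study's stimulation code.

```python
import random

def build_run_timing(n_trials, fixation_ms=(2000, 2500, 3000),
                     movie_ms=1000, touch_offset_ms=500):
    """Return (fixation_onset, movie_onset, touch_onset) in msec for each
    trial of one run, with an equiprobable jittered fixation interval."""
    events, t = [], 0
    for _ in range(n_trials):
        fix = random.choice(fixation_ms)             # 2000/2500/3000 msec
        movie_onset = t + fix                        # movie follows fixation
        touch_onset = movie_onset + touch_offset_ms  # tactile pulse ~500 msec in
        events.append((t, movie_onset, touch_onset))
        t = movie_onset + movie_ms                   # next fixation starts here
    return events

# example: 130 trials per run (108 bilateral + 22 unilateral trials) gives a
# run length of roughly 7-8 min, consistent with the duration reported below
timing = build_run_timing(130)
print((timing[-1][1] + 1000) / 60000.0, "min")
```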
Subjects lay supine in the scanner with their arms out-
stretched beside their abdomen. Visual stimuli were pro-
jected onto a back-projection screen situated behind the
subjectʼs head and were visible via a mirror (10 × 15 cm).
Sound-attenuating headphones were used to muffle
scanner noise. The presentation of the stimuli and the re-
cording of the participantsʼ responses were controlled by
a PC running Cogent 2000 (developed by the Cogent 2000
Figure 1. Top: Visual stimuli
used in the tactile confrontation
tarea. Bottom: A typical
experimental trial. In randomized
blocks, subjects receive either a
unilateral or a bilateral tactile
stimulation on their cheeks.
Concurrently, they are required
to pay attention to the screen in
front of them showing a movie
where an image is touched or
only approached by two human
fingers. The shown image is
either the subjectʼs own face,
another personʼs face, or a ball,
in different conditions.
team at the FIL and the ICN, University College London, Reino Unido)
and Cogent Graphics (developed by John Romaya at the
LON at the Wellcome Department of Imaging Neuroscience,
University College London, Reino Unido) under Matlab (The Math-
works Company, Natick, MAMÁ) on the Microsoft Windows
XP operating system.
Design and Procedure
The event-related paradigm consisted of four acquisition
runs of the tactile confrontation task. Each run presented
six unique stimuli representing all combinations of type of
imagen (self, otro, and object) and fingers movement tra-
jectory (touch and no touch), synchronized with a bilateral
tactile stimulation. De este modo, the experimental design was a 3
(imagen: self, otro, and object) × 2 (trajectory: touch and
no touch) within-subjects factorial design. The six unique
stimuli were repeated 18 veces, for a total of 108 trials per
run, presented in pseudorandom order. Tactile stimulation
was also presented simultaneously with visual stimulation.
In each run, 22 unilateral tactile stimuli were also included.
In total, the experiment consisted of 520 ensayos.
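To make the trial counts explicit, the following sketch enumerates one run of this design as described: 18 repetitions of each of the six bilateral conditions plus 22 unilateral catch trials, shuffled together. The exact pseudorandomization constraints and the way image and trajectory were assigned on catch trials are not reported, so a plain shuffle and random assignment are assumed, and all names are illustrative.

```python
import itertools
import random

IMAGES = ("self", "other", "object")
TRAJECTORIES = ("touch", "no_touch")

def build_run(n_reps=18, n_unilateral=22, seed=None):
    """One run of the 3 (image) x 2 (trajectory) design: 6 x 18 = 108
    bilateral trials plus 22 unilateral catch trials, in shuffled order."""
    rng = random.Random(seed)
    bilateral = [{"image": img, "trajectory": traj, "touch_side": "bilateral"}
                 for img, traj in itertools.product(IMAGES, TRAJECTORIES)
                 for _ in range(n_reps)]
    unilateral = [{"image": rng.choice(IMAGES),
                   "trajectory": rng.choice(TRAJECTORIES),
                   "touch_side": rng.choice(("left", "right"))}
                  for _ in range(n_unilateral)]
    run = bilateral + unilateral
    rng.shuffle(run)
    return run

runs = [build_run(seed=i) for i in range(4)]
print(len(runs[0]), "trials per run;", sum(map(len, runs)), "trials in total")  # 130; 520
```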
Before scanning, participants were told that electrical
stimuli would be delivered either to one or to both cheeks
and that concurrently they would be presented with short
movies with different content. They were instructed to press a button with the right hand whenever they perceived a unilateral tactile stimulus and to refrain from responding when they perceived a bilateral tactile stimulus. Participants were instructed to look at the visual information but to respond only on the basis of the tactile stimulation.
The fMRI design differs from the behavioral study by
Serino et al. (2008) in two important respects. First, in the present study, subjects actively responded only to unilateral tactile stimuli, which were rare in the total number of trials, whereas in the study of Serino et al., subjects were requested to respond differently to unilateral left, unilateral right, and bilateral stimuli. Second, in the present study, visual information always signaled a bilateral stimulation, whereas in Serino et al., the side of tactile and visual stimulations
was completely crossed. These modifications were neces-
sary to study the neural basis of the VRT effect. The current
paradigm indeed was designed to maximize the number
of trials critical to show the modulation of the effect (i.e.,
bilateral tactile stimulation), to minimize the number of
possible combinations of visuotactile stimuli (using only
bilateral visual stimulation), and to minimize possible brain
activations not directly involved in the effect, such as those
derived from motor responses. For these reasons, subjects
received far fewer unilateral than bilateral tactile stimuli,
viewed only bilateral stimuli, and were requested to ac-
tively respond only to trials with unilateral tactile stimula-
ción (which were not included in fMRI analyses).
The experimental design was a rapid event-related fMRI design alternating a state of stimulation—that is, 1000-msec movies plus electrical stimulation—with a baseline state consisting of the fixation interval lasting 2000, 2500, or 3000 msec; each of the three different baseline durations had the same probability of occurrence. Each run lasted about 7 min. A pause of 5 min, during which tactile stimuli were recalibrated, was interspersed between runs.
Data Analysis
fMRI data were analyzed using SPM5 ( Wellcome Trust
Centre for Neuroimaging, University College, London).
Functional images were first corrected for head movement
using a least-squares approach and a six-parameter rigid
body spatial transformation (Friston et al., 1995) and for
difference in acquisition timing between slices. The high-
resolution anatomical image and the functional images
were coregistered and stereotactically normalized to the
Montreal Neurological Institute brain template used in
SPM5 (Mazziotta, Toga, evans, Fox, & Lancaster, 1995).
Functional images were resampled with a voxel size of 4 ×
4 × 4 mm and spatially smoothed with a three-dimensional
Gaussian filter of 8-mm FWHM (Friston et al., 1995).
The time series of functional MR images obtained from
each participant was then analyzed on a voxel-by-voxel basis
using the principles of the general linear model extended
to allow the analysis of fMRI data as a time series (Worsley
& Friston, 1995). The onset of each trial constituted a neural
evento, which was modeled through a canonical hemody-
namic response function, chosen to represent the relation-
ship between neuronal activation and BOLD signal changes
(Friston et al., 1998). Unilateral catch trials (20%) and false
alarm trials (i.e., when participants had pressed the button
in the presence of a bilateral tactile stimulus; 18%) eran
modeled as separate conditions and then excluded from
further analyses, which concentrated on correct responses
(i.e., no response to bilateral stimulation).
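The modeling step described here amounts to convolving per-condition onset vectors with a canonical hemodynamic response function and fitting the resulting design matrix voxel by voxel. A generic sketch is below; the double-gamma parameterization and TR handling follow common conventions and are not SPM5's exact implementation.

```python
import numpy as np

TR = 2.3  # sec, from the acquisition parameters

def canonical_hrf(tr=TR, duration=32.0):
    """Double-gamma HRF sampled at the TR (a common parameterization,
    not necessarily identical to SPM5's internal one)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t)
    undershoot = t ** 15 * np.exp(-t)
    hrf = peak / peak.max() - 0.35 * undershoot / undershoot.max()
    return hrf / hrf.sum()

def design_matrix(onsets_per_condition, n_scans, tr=TR):
    """One column per condition: a stick function at each trial onset
    (seconds) convolved with the canonical HRF, plus a constant term."""
    hrf = canonical_hrf(tr)
    X = np.zeros((n_scans, len(onsets_per_condition)))
    for j, onsets in enumerate(onsets_per_condition):
        sticks = np.zeros(n_scans)
        sticks[(np.asarray(onsets) / tr).astype(int)] = 1.0
        X[:, j] = np.convolve(sticks, hrf)[:n_scans]
    return np.column_stack([X, np.ones(n_scans)])

def fit_glm(Y, X):
    """Ordinary least-squares betas for voxel time series Y (scans x voxels)."""
    return np.linalg.pinv(X) @ Y
```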
Group analysis was performed in two steps. First, we used a conventional voxel-by-voxel group random-effects analysis, which allowed us to test hypotheses relative to the whole population and to identify brain regions responding during the experimental trials relative to the baseline condition of the study, that is, the intertrial fixation interval. This was done through an omnibus F test comparing each of the six conditions resulting from the combination of the image and trajectory factors with the intertrial fixation. The resulting statistical parametric maps of the F statistics were thresholded at p < .01, corrected for multiple comparisons over the total amount of acquired brain volume using the false discovery rate (Genovese, Lazar, & Nichols, 2002). The resulting regions are listed in Table 2 and rendered in Figure 2 and
include all voxels showing a reliable BOLD response evoked
by the onset of the experimental trials, irrespective of the
somatosensory stimulus, visual image, and fingers move-
ment trajectory delivered in any particular trial and of the
sign (positive or negative) of the evoked BOLD response.
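The FDR correction cited here (Genovese et al., 2002) is, in essence, the Benjamini-Hochberg step-up procedure applied to the voxel-wise p values of the omnibus F test; a minimal, generic sketch follows (this is not SPM5's code).

```python
import numpy as np

def fdr_threshold(p_values, q=0.01):
    """Benjamini-Hochberg step-up rule: return the largest p value that
    survives false-discovery-rate control at level q, or None if none do."""
    p = np.sort(np.asarray(p_values, dtype=float).ravel())
    m = p.size
    boundary = q * np.arange(1, m + 1) / m       # BH comparison line
    passing = np.nonzero(p <= boundary)[0]
    return p[passing.max()] if passing.size else None

# usage sketch: keep voxels whose p value falls at or below the threshold
# thr = fdr_threshold(p_map, q=0.01)
# suprathreshold = (p_map <= thr) if thr is not None else np.zeros_like(p_map, bool)
```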
The second step consisted in searching for modulation
of BOLD responses in these voxels as a function of the type
of image (image factor: self, other, and object) and finger
movement trajectory (trajectory factor: touch and no touch).

Table 1. Proportion of Correct Responses to Bilateral, Unilateral Left, and Unilateral Right Stimuli for the 3 (Image: Self Face, Other Face, and Object) × 2 (Trajectory: Touch and No Touch) Experimental Conditions

                      Self Face (%)        Other Face (%)       Object (%)
                      Touch    No Touch    Touch    No Touch    Touch    No Touch
Bilateral
  Average             84       81          82       79          80       80
  SEM                 4.1      3.9         4.1      4.1         4.8      4.2
Unilateral Left
  Average             30       34          29       35          39       42
  SEM                 6        7           5        8           8        8
Unilateral Right
  Average             15       17          16       14          15       18
  SEM                 5        6           6        6           5        7
To increase sensitivity of the analysis, this step was per-
formed on regionally averaged data as follows: Voxels
resulting from the first step were grouped into regions, that
is, clusters of adjacent significant voxels. For each subject
and region, we computed a regional estimate of the am-
plitude of the hemodynamic response in each experimental
condition by entering a spatial average (across all voxels in
the region) of the preprocessed time series into the indi-
vidual general linear models. Such regional hemodynamic
response estimates, which are shown in the plots in Fig-
ure 2, were then analyzed through a 3 × 2, Image × Trajec-
tory, repeated measures ANOVA. For bilaterally activated
regions, the hemisphere factor was added to the ANOVA.
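In code, this second step is a 3 × 2 repeated-measures ANOVA on the regionally averaged response estimates, run separately per region; a sketch using statsmodels is shown below. The long-format data layout and column names are illustrative assumptions, and the original analysis was not necessarily run this way.

```python
from statsmodels.stats.anova import AnovaRM

def region_anova(df):
    """3 (image) x 2 (trajectory) repeated-measures ANOVA on regional
    percent-signal-change estimates. `df` is a long-format pandas DataFrame
    with one row per subject and condition and columns: subject, image,
    trajectory, psc (column names are illustrative)."""
    model = AnovaRM(df, depvar="psc", subject="subject",
                    within=["image", "trajectory"])
    # for bilaterally activated regions, add "hemisphere" to the within list
    return model.fit().anova_table  # F, numerator/denominator df, p per effect

# usage sketch, one call per region:
# print(region_anova(vip_df))   # e.g., main effect of trajectory in VIP
# print(region_anova(vpm_df))   # e.g., Image x Trajectory interaction in VPM
```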
Figure 2. Regions showing different activation (and percentage signal change) during observation of any of the six conditions (self face touched,
self face no touch, other face touched, other face no touch, object touched, and object no touch) compared with the intertrial baseline. Group
activation data are rendered on the cortical surface of a “canonical” brain (Mazziotta et al., 1995).
Note that although the first and the second steps in this
analysis procedure use the same data set, they are inherently
independent because the first step tests for the presence
of any neural response regardless of the identity of the
delivered stimulus, whereas the second step tests for mod-
ulations induced by the kind of visual stimulus on the
responsive regions, thus avoiding the risk of “double dip-
ping” (Kriegeskorte, Simmons, Bellgowan, & Baker, 2009).
RESULTS
Behavioral Results
The behavioral effect of visual stimulation on tactile per-
ception was studied by comparing subjectsʼ accuracy in
responding to bilateral tactile stimuli when the fingers
touched or did not touch the different images. In light of
the results from Serino et al. (2008), we expected that
the perception of bilateral tactile stimuli would be higher when subjects saw their own face being touched rather than approached. To verify that the behavioral data from the present
fMRI experiment confirmed this critical prediction, for
each image condition (self, other, and object), subjectsʼ ac-
curacy was compared between the two fingers movement
trajectories (touch and no touch) by means of t tests (one-
tailed). To prevent the risk of inflating the Type I error rate, a Bonferroni correction was applied; thus, only p values < .025 were considered significant. When viewing one's own face, tactile perception was higher when the fingers touched the face (accuracy = 84%; SEM = 4.1%) than when they just approached the face (81%; SEM = 3.9%), t(14) = 2.28, p < .019. A similar, nearly significant pattern, t(14) = 1.57, p = .06, was found for viewing the other face: The accuracy was 82% (SEM = 4.1%) in the touch condition and 79% (SEM = 4.1%) in the no-touch condition. No modulation of tactile perception was found for the object condition: The same accuracy was found for the touch (80%; SEM = 4.8%) and the no-touch (80%; SEM = 4.2%), t(14) = 0.13, p = .44, conditions. Behavioral data for responses to bilateral and unilateral weak and strong stimulation are reported in Table 1.
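The behavioral comparison described above reduces to, per image condition, a one-tailed paired t test of touch versus no-touch accuracy evaluated against the Bonferroni-adjusted criterion of .025; a sketch is below. The per-subject accuracy arrays are illustrative placeholders.

```python
import numpy as np
from scipy import stats

def touch_vs_no_touch(acc_touch, acc_no_touch, alpha=0.025):
    """One-tailed paired t test (touch > no touch) on per-subject accuracy,
    judged against the Bonferroni-adjusted criterion used in the text."""
    acc_touch, acc_no_touch = np.asarray(acc_touch), np.asarray(acc_no_touch)
    t, p_two_sided = stats.ttest_rel(acc_touch, acc_no_touch)
    p_one_tailed = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_tailed, p_one_tailed < alpha

# usage sketch: acc[cond] holds one accuracy value per subject (n = 15), e.g.
# results = {img: touch_vs_no_touch(acc[(img, "touch")], acc[(img, "no_touch")])
#            for img in ("self", "other", "object")}
```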
fMRI Results
From the group-level whole-brain analysis of functional MR
images, we identified six different cortical regions where
BOLD signal was significantly different during any of the
six conditions resulting from the combination of type of
image (self, other, and object) and finger movement tra-
jectory (touch and no touch), relative to the intertrial fixa-
tion intervals. The six regions were located in the bilateral
occipital cortex, ventral intraparietal area ( VIP), somato-
sensory cortex, ventral premotor cortex (VPM), right insula,
and dorsomedial pFC (see Table 2 and Figure 2). To study
the modulation of neural activity within these areas as a
function of the experimental conditions, for each area, we
ran an ANOVA on the estimated percent BOLD signal
change with the factors image (self, other, and object) and
trajectory (touch and no touch). A factor hemisphere (right
and left) was added when both left and right activation of
homologue areas was found. Post hoc comparisons were
conducted, when necessary, by means of the Duncan test.
Occipital Cortex
The activation cluster in the occipital cortex included a wide
portion of the occipital lobe encompassing Brodmannʼs
areas (BAs) 17, 18, and 19. To functionally characterize this
cluster, we created three different anatomical masks en-
compassing BAs 17, 18, and 19, respectively, and we com-
puted the BOLD percent signal change in each area and in
each condition. Anatomical masks were created by means
of the AAL toolbox available with SPM (Tzourio-Mazoyer et al.,
2002). Results showed no functional difference between
the three areas, so the results will be discussed for the whole
cluster.
The ANOVA showed that BOLD response in this cluster
was modulated only by the type of image viewed by the
subject because only the effect of image was significant,
F(2, 28) = 4.00, p < .05. Post hoc comparisons showed
that BOLD signal was higher when subjects viewed both
their own face (0.30% increase relative to the intertrial fixa-
tion baseline) and another personʼs face (0.29%) than an
object (0.25%, p < .05 in both cases; see Figure 2). Thus,
BOLD signal in this area discriminates between bodily and
nonbodily visual stimuli.
Ventral Intraparietal Area
An activation cluster was bilaterally found at the confluence
of the postcentral and intraparietal sulci, consistent with
the location of the human VIP (Sereno & Huang, 2006).
In both hemispheres, VIP activation was mainly centered
within BA 40. Neither the main effect of hemisphere nor
any interaction between hemisphere and the other factors
were significant; thus, the results for both hemispheres will
be presented together (see Figure 2). Only the main effect
of trajectory was significant, F(1, 14) = 4.56, p < .05, show-
ing a higher activation during observation of touch (0.19%)
than of no-touch (0.16%) trajectory (see Figure 2). There-
fore, neural activity in this area discriminates visual infor-
mation specifically related to touch from that related to
no-touch stimulation.
Somatosensory Cortices (SI/SII)
An activation cluster was bilaterally found in the ventral
postcentral gyrus. For both hemispheres, this activation
site includes the face area in the primary somatosensory
cortex (Eickhoff, Grefkes, Fink, & Zilles, 2008) and the
secondary somatosensory cortex (Eickhoff et al., 2008).
Face representations in the primary and secondary somato-
sensory cortices are very close to each other, both en-
compassing the ventral aspect of the postcentral gyrus
(Eickhoff et al., 2008; Sereno & Huang, 2006).

Table 2. Montreal Neurological Institute (MNI) Coordinates of Peaks of Relative Activation in the Cortical Regions Where BOLD Signal Was Significantly Different during Observation of Any of the Six Conditions Compared with the Intertrial Baseline

Region                            Side   Extent (Voxels)   Anatomical Subdivisions                                  Main Local Maxima: x, y, z (F)
Occipital cortex                  L      849               Middle occipital gyrus; inferior occipital gyrus         −12, −104, 4 (25.28); −48, −76, 4 (20.62); −20, −88, −20 (14.78); −24, −84, −4 (6.83)
                                  R                        Cuneus; calcarine cortex; inferior occipital gyrus       12, −96, 12 (17.35); 16, −96, 0 (15.48); 4, −88, 4 (15.20); 32, −84, −4 (7.36)
VIP                               L      21                Inferior parietal lobule                                 −40, −36, 36 (7.21)
                                  R      48                Inferior parietal lobule                                 32, −52, 44 (6.62); 48, −36, 48 (4.85)
Somatosensory cortices (SI/SII)   L      54                Postcentral gyrus (inferior); superior temporal gyrus    −60, −20, 20 (12.24); −52, −36, 20 (5.86)
                                  R      29                Postcentral gyrus (inferior)                             60, −16, 20 (8.15)
VPM                               L      44                Precentral gyrus                                         −44, −4, 60 (8.77); −36, −6, 68 (7.56)
                                  R      25                Precentral gyrus                                         52, 8, 36 (6.66)
Insula                            R      43                Insula; inferior frontal gyrus                           48, 16, −4 (7.93); 60, 12, 4 (6.91)
Dorsomedial pFC                   L      25                Superior frontal gyrus                                   −6, 58, 24 (7.88)

The table shows local maxima more than 4 mm apart.

Although our
cluster clearly falls within this region, the present results do
not discriminate any neural activity selectively related to
either SI or SII. Thus, we will refer to this activation cluster
with the comprehensive term “somatosensory cortices.”
The main effect of Image was significant, F(2, 28) = 8.05,
p < .01, with a weaker activation for oneʼs own face (0.17%)
than for the otherʼs face (0.20%, p < .01) and for the ob-
ject (0.20%, p < .01). These results should be interpreted
in the light of the significant two-way interaction Image ×
Trajectory, F(2, 28) = 4.03, p < .05. In the touch condi-
tion, viewing oneʼs own face (0.17%) resulted in weaker
activity than viewing both another personʼs face (0.21%,
p < .01) and an object (0.21%, p < .01). In contrast, in
the no-touch condition, no difference was found between
oneʼs own face (0.18%), the otherʼs face (0.18%), and
the object (0.18%, p > .45 in both cases) (ver figura 2).
Such modulation also resulted in a different pattern of results when the effect of touch and no touch was compared
across the three images: Although for the object and for
the otherʼs face, neural activity in the touch condition was
higher than that in the no-touch condition (p < .05 in both
comparisons), this difference was not found for oneʼs own
face ( p = .22), where rather a nonsignificant opposite
trend was found. Thus, in summary, viewing oneʼs own
face being touched resulted in a reduction of the activity
in right and left somatosensory cortices within the post-
central gyrus.
Ventral Premotor Cortex
An activation cluster was found bilaterally in the precentral
gyrus. Although the cluster on the right hemisphere was
more ventral than that on the left hemisphere, both clus-
ters were located in the ventral half of the precentral gyrus
and fell within BA 6, according to the cytoarchitectonic
atlas (Eickhoff et al., 2005). Neither the main effect of hemi-
sphere nor any interaction between hemisphere and the
other factors were significant; thus, we will present the re-
sults for both hemispheres together (see Figure 2). The
critical Image × Trajectory interaction was significant, F(2,
28) = 7.04, p < .01. Post hoc comparisons showed that in
the touch condition, BOLD response for the observation
of oneʼs own face (0.21%) was reduced in comparison with
that for the observation of the otherʼs face (0.24%) and of
the object (0.24%, p < .05 in both cases). Conversely, in
the no-touch condition, BOLD response was enhanced
for the observation of oneʼs own face (0.25%) in compari-
son with that for the observation of the otherʼs face (0.21%,
p < .03) and of the object (0.22%, p < .05). When neural
response between touch and no-touch condition was com-
pared for the different images, we found an opposite pat-
tern of activity for viewing oneʼs own and the otherʼs face:
for the self condition, neural activity was lower in the touch
(0.21%) than that in the no-touch condition (0.25%, p <
.05), whereas for the other condition, BOLD response
was higher in the touch (0.24%) than that in the no-touch
condition (0.21%, p < .05) (see Figure 2). For the object
condition, the pattern of results showed a trend similar
to that for the other condition ( p = .09). Thus, BOLD re-
sponse in the left and right precentral gyrus seems able to discriminate the effect of viewing touch on one's own face from that of viewing touch on another person's face or on an object. The self-specific effect consists in
a reduction of metabolic activity when viewing oneʼs own
face being touched.
Right Insula
The activation cluster in the right insula was centered on
BA 47. The ANOVA performed on the percent BOLD sig-
nal change in this cluster (see Figure 2) showed a signifi-
cant Image × Trajectory interaction, F(2, 28) = 10.53, p <
.01. Post hoc comparisons showed that in the touch con-
dition, the BOLD response for the otherʼs face (0.22%) was
higher than that for the object (0.19%, p < .05). In the no-
touch condition, the BOLD response for the otherʼs face
(0.15%) was weaker than that for oneʼs own face (0.19%,
p < .05) and for the object (0.20%, p < .01). Finally, for
the otherʼs face condition, the effect of touch (0.22%)
was higher than that of no touch (0.15%, p < .01) (see
Figure 2).
Dorsomedial pFC
A deactivated cluster was found in the dorsomedial pFC.
The cluster was mainly centered within BA 10. The ANOVA
performed on the percent BOLD signal change in this clus-
ter showed no main effect or interaction (see Figure 2).
DISCUSSION
Viewing oneʼs own face being touched affects tactile per-
ception on the face more than viewing another personʼs
face or a nonbody stimulus (Serino et al., 2008). Here we
studied which brain areas underlie this effect. In particular,
we asked how a high-level representation of the self con-
veyed by visual stimulation may interact with the process-
ing of tactile sensation.
To this aim, we used fMRI to measure brain activity in
subjects involved in a tactile sensory discrimination task
on their face (discriminating between a unilateral and a
bilateral stimulation) while they viewed three different im-
ages, namely, their own face, another personʼs face, or an
object, being touched bilaterally or just approached by
fingers. The experimental paradigm was designed to maxi-
mize brain activity specifically related to the effect of inter-
est (i.e., the modulation of touch due to visual information
about the self ) rather than to study the cognitive mecha-
nism underlying the effect (see Serino et al., 2008, 2009).
Nevertheless, the behavioral data replicate the main finding on VRT: Subjects more frequently reported feeling a bilateral stimulation on their face when
they viewed a picture of their own face being touched bi-
laterally in comparison with when they viewed their own
face being only approached. We will now relate these be-
havioral findings to neural activity recorded by fMRI.
Neural Activity Related to Viewing a Face
In a wide area of the occipital cortex, involving BAs 17, 18,
and 19, BOLD signal was modulated as a function of the
shown image: Neural activity was higher when subjects
viewed a face, both their own and another personʼs face,
than when they viewed a picture of a ball. Thus, this neural
modulation may reflect the processing of complex visual
information, such as that pertaining to a face, as compared
with the processing of a simpler visual stimulus, such as a
ball. These findings are in keeping with several previous
data showing that the human body and its parts are spe-
cially relevant visual stimuli, processed by dedicated high-
order visual areas (e.g., the so-called extrastriate body
area; Downing, Jiang, Shuman, & Kanwisher, 2001) for
the body, the occipital face area (Pitcher, Walsh, Yovel,
& Duchaine, 2007; Gauthier et al., 2000; Haxby, Hoffman,
& Gobbini, 2000), and the so-called fusiform face area
(Kanwisher & Yovel, 2006).
Neural Activity Related to Viewing Touch
Neural activity in visual cortex did not discriminate visual in-
formation specifically related to touch from that not related
to touch because the modulation of BOLD signal due to
viewing different images was independent of whether the image was touched or just approached by the fingers. Conversely, information pertaining to finger movement trajectories affected neural activity in a portion of
the parietal cortex, probably corresponding to the VIP
(Sereno & Huang, 2006). VIP activity was enhanced when
subjects received a tactile stimulation on their face and
viewed two fingers touching an image rather than point-
ing beside that image.
Neurons in the monkey VIP respond to both visual and
somatosensory information directed toward the animalʼs
face (Avillac, Deneve, Olivier, Pouget, & Duhamel, 2005;
Grefkes & Fink, 2005; Duhamel, Colby, & Goldberg, 1998;
Colby, Duhamel, & Goldberg, 1993). Analogously, in
humans, VIP contains a visuotactile somatotopic map of
the face (Sereno & Huang, 2006). However, unlike in the above-cited studies, in the present experiment, visual stimulation was not directed toward the subject's real face but toward an image facing the subject. Thus, information derived from viewing touch was remapped as if the touch were directed toward one's own face and integrated with the actual tactile stimulation received on the face. We sug-
gest that the modulation of VIP activity found in the pres-
ent study actually reflects such integrative and remapping
process. This suggestion is supported by recent neuro-
physiological data on monkeys showing that some VIP
neurons respond not only to visual and tactile stimulation
administered on or close to a part of the animalʼs body but
also when a stimulus is directed toward a part of the body
of an experimenter facing the animal (Ishida, Nakajiama,
Inase, & Murata, 2010). This response property of VIP cells
allows to link the representation of an individualʼs body
with that of the body of others. We believe that a similar
mechanism might underlie the VRT effect in humans, as
shown by the present fMRI results.
Neural Activity Related to Viewing Touch on Oneʼs
Own Face
Therefore, neural activity in the occipital cortex and in VIP may discriminate viewing a face from viewing an object and viewing touch from viewing no touch, respectively.
However, the critical information strongly modulating sub-
jectʼs perception, that is, viewing touch on oneʼs own face,
is processed elsewhere. A significant interaction between
the viewed image and the fingers movement trajectory was
found bilaterally in the VPM. In the VPM, BOLD signal when
viewing oneʼs own face being touched was significantly dif-
ferent from that when viewing oneʼs own face not being
touched and when viewing another personʼs face and an
object being touched. In particular, a reduction of VPM ac-
tivity was found for oneʼs own face in the touch condition.
Thus, neural activity in VPM may specifically represent in-
formation about touch on oneʼs own face.
VPM is a well-known multisensory area, integrating visual,
somatosensory, and proprioceptive information about the
body and the space immediately surrounding the body. In
the monkey, the homologous VPM area contains motor
neurons with sensory properties, in that they also respond to visual, acoustic, and tactile stimulation administered on the monkey's body or within the monkey's peripersonal space
(Graziano & Cooke, 2006; Rizzolatti, Fogassi, & Gallese,
2002). VPM neurons are also active when the monkey sees
a part of its body (Graziano, Cooke, & Taylor, 2000). In
humans, VPM is activated when processing tactile informa-
tion on the face and visual or acoustic information moving
toward the face (Huang & Sereno, 2007; Bremmer et al.,
2001). VPM is largely interconnected with VIP (Luppino,
Murata, Govoni, & Matelli, 1999) and receives important
projections from visual and somatosensory cortices (Matelli,
Camarda, Glickstein, & Rizzolatti, 1986; Godschalk, Lemon,
Kuypers, & Ronday, 1984). Thus, VPM together with VIP
represents an ideal candidate for integrating visual and tac-
tile information related to face stimulation. The new find-
ing from the present study is that, differently from VIP,
VPM activity discriminated when the observed touch was
administered to the observerʼs face rather than to another
personʼs face or an object. In other words, VPM processed
and integrated visuotactile information specifically pertain-
ing to the self.
Previous fMRI findings have shown that VPM is directly
involved in the feeling of body ownership (Ehrsson et al.,
2005; Ehrsson, Spence, & Passingham, 2004). In the so-
called RHI, viewing touch on a fake hand and feeling syn-
chronously touch on oneʼs own hidden hand result in
an illusory percept of the fake hand as oneʼs own hand
(Botvinick & Cohen, 1998). During synchronous visuo-
tactile stimulation causing the RHI, VPM is active. Moreover,
brain lesions involving VPM are related to disorders of body
ownership, such as anosognosia for hemiplegia (Pia, Neppi-
Modona, Ricci, & Berti, 2004) and asomatognosia (Arzy,
Overney, Landis, & Blanke, 2006). Thus, VPM together with
other regions in the inferior parietal cortex (Berlucchi &
Aglioti, 1997, 2010) is a key area in subserving the feeling
of ownership of one's own body, that is, the embodied self.
It is worth noting that no activation specifically related to
the present experimental manipulations was found in me-
dial pFC, in areas processing more abstract and semantic
representations of oneself, that is, the narrative self ( Jenkins
et al., 2008; Mitchell et al., 2006). The cluster of activation
change recorded in the dorsomedial pFC, indeed, did not
vary as a function of the kind of visuotactile stimulation
the subject was processing. Thus, coming back to the first
questions of the present study, namely, which brain areas
and which representation of the self underlie the self-related
enhancement of the VRT effect, we might conclude that VPM
and the embodied self are the respective answers.
It remains to explain why such self-related VPM modu-
lation is characterized by a reduction of neural activity,
instead of by an enhancement, as one might more simply
expect. Neural activity in VPM during the RHI positively
correlates with the subjective feeling of body ownership.
It has been proposed that the strength of VPM activation
reflects the effort of integrating different modalities into a unique body representation; according to this view, VPM plays a specific role in embodying a nonbody
object (Tsakiris et al., 2007). So, the higher activation of
this area, in the case of viewing another personʼs face
and an object being touched, might reflect the effort in
the embodying process, whereas viewing oneʼs own face
being touched facilitates embodiment and thus less VPM
activity is recorded.
Neural Activity Related to the Modulation of Touch
Perception When Viewing Oneʼs Own Face
Finally, how does visuotactile integration related to one-
self modulate tactile perception? The pattern of neural
response shown in the premotor cortex is mirrored in the somatosensory areas. In particular, reduced activity in
the somatosensory cluster including the face area of SI
and SII was found for viewing oneʼs own face being
touched in comparison with all other conditions. It is al-
ready known that visual information modulates tactile pro-
cessing within somatosensory cortices (Macaluso, Frith,
& Driver, 2005), probably via feedback projections from
multimodal fronto-parietal areas (Macaluso & Driver, 2005;
Bremmer et al., 2001; Macaluso, Frith, & Driver, 2000). In
line with this view, we suggest that VPM exerts a modula-
tion on the somatosensory cortex (Macaluso, 2006). On
the other hand, a direct modulation of the somatosensory areas by the different images is implausible, because somatosensory cortices cannot directly process visual features concerning face identity. Indeed, it
has already been shown that these areas are not sensitive
to the identity of the object being touched (Keysers et al.,
2004). Thus, the most likely interpretation is that VPM in-
tegrates information about viewing touch on oneself with
tactile information and then differently modulates the so-
matosensory areas where tactile information is processed.
To support this model, it remains to explain how a re-
duction in the activity of somatosensory areas results in
an increase of reported bilateral tactile percept when view-
ing oneʼs own face being touched. We suggest that when
viewing oneself, visuotactile integration is favored, and
therefore visual information might be taken into account
in perceiving tactile stimulation. In other words, perception
of touch while viewing oneself being touched might rely
more strongly on what is seen and less on what is felt. As a
consequence, a weaker bilateral activation in the somato-
sensory cortices might be sufficient to evoke a bilateral tac-
tile percept because this percept is supported by bilateral
visual information. In contrast, when the fingers just ap-
proach oneʼs own face or when subjects view another per-
son or an object, visuotactile integration is less effective,
and therefore tactile perception more strongly depends
on unisensory tactile signals: As a consequence, a stronger
bilateral activity in the somatosensory areas is necessary to
elicit a bilateral tactile percept.
To sum up, VRT is defined as a modulation of tactile per-
ception felt on one's own body when viewing touch on an external stimulus, this effect being maximal when viewing touch on one's own body. The present results show that the neuronal counterpart of this effect relies on an extended network of fronto-parietal structures representing multisensory information pertaining to the bodily self.
Acknowledgments
The authors are grateful to Dr. Mauro Gianni Perrucci for his
help in collecting data. This work was supported by grants from
MURST to E. L.
Reprint requests should be sent to Andrea Serino, Centro Studi e
Ricerche in Neuroscienze Cognitive, Università di Bologna, Via
Brusi, 20, 47023 Cesena, Italy, or via e-mail: andrea.serino@
unibo.it.
REFERENCES
Arzy, S., Overney, L. S., Landis, T., & Blanke, O. (2006).
Neural mechanisms of embodiment: Asomatognosia due
to premotor cortex damage. Archives of Neurology, 63,
1022–1025.
Avillac, M., Deneve, S., Olivier, E., Pouget, A., & Duhamel,
J. R. (2005). Reference frames for representing visual and
tactile locations in parietal cortex. Nature Neuroscience,
8, 941–949.
Banissy, M. J., & Ward, J. (2007). Mirror-touch synaesthesia is
linked with empathy. Nature Neuroscience, 10, 815–816.
Berlucchi, G., & Aglioti, S. M. (1997). The body in the
brain: Neural bases of corporeal awareness. Trends
in Neurosciences, 20, 560–564.
Berlucchi, G., & Aglioti, S. M. (2010). The body in the brain
revisited. Experimental Brain Research, 200, 25–35.
Blakemore, S. J., Bristow, D., Bird, G., Frith, C., & Ward, J.
(2005). Somatosensory activations during the observation
of touch and a case of vision-touch synaesthesia. Brain,
128, 1571–1583.
Blanke, O., & Metzinger, T. (2009). Full-body illusions and
minimal phenomenal selfhood. Trends in Cognitive
Sciences, 13, 7–13.
Botvinick, M., & Cohen, J. (1998). Rubber hands “feel” touch
that eyes see. Nature, 391, 756.
Bremmer, F., Schlack, A., Shah, N. J., Zafiris, O., Kubischik, M.,
Hoffmann, K., et al. (2001). Polymodal motion processing
in posterior parietal and premotor cortex: A human fMRI
study strongly implies equivalencies between humans and
monkeys. Neuron, 29, 287–296.
Buckner, R. L., & Carroll, D. C. (2006). Self-projection and
the brain. Trends in Cognitive Sciences, 11, 49–57.
Colby, C. L., Duhamel, J. R., & Goldberg, M. E. (1993).
Ventral intraparietal area of the macaque: Anatomic
location and visual response properties. Journal of
Neurophysiology, 69, 902–914.
DʼArgembeau, A., Ruby, P., Collette, F., Degueldre, C.,
Balteau, E., Luxen, A., et al. (2007). Distinct regions
of the medial prefrontal cortex are associated with
self-referential processing and perspective taking.
Journal of Cognitive Neuroscience, 19, 935–944.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N.
(2001). A cortical area selective for visual processing
of the human body. Science, 293, 2470–2473.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1998).
Ventral intraparietal area of the macaque: Congruent
visual and somatic response properties. Journal of
Neurophysiology, 79, 126–136.
Ebisch, S. J., Perrucci, M. G., Ferretti, A., Del Gratta, C.,
Romani, G. L., & Gallese, V. (2008). The sense of touch:
Embodied stimulation in a visuotactile mirroring
mechanism for observed animate or inanimate touch.
Journal of Cognitive Neuroscience, 20, 1611–1623.
Ehrsson, H. H., Holmes, N. P., & Passingham, R. E. (2005).
Touching a rubber hand: Feeling of body ownership is
associated with activity in multisensory brain areas.
Journal of Neuroscience, 25, 10564–10573.
Ehrsson, H. H., Spence, C., & Passingham, R. E. (2004). Thatʼs
my hand! Activity in premotor cortex reflects feeling of
ownership of a limb. Science, 305, 875–877.
Eickhoff, S. B., Grefkes, C., Fink, G. R., & Zilles, K. (2008).
Functional lateralization of face, hand, and trunk
representation in anatomically defined human
somatosensory areas. Cerebral Cortex, 18, 2820–2830.
Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C.,
Fink, G. R., Amunts, K., et al. (2005). A new toolbox for
combining probabilistic cytoarchitectonic maps and
functional imaging data. Neuroimage, 25, 1325–1335.
Friston, K., Ashburner, J., Poline, J., Frith, C., Heather, J., &
Frackowiak, R. (1995). Spatial registration and normalization
of images. Human Brain Mapping, 2, 165–189.
Friston, K. J., Fletcher, P., Josephs, O., Holmes, A., Rugg, M. D.,
& Turner, R. (1998). Event-related fMRI: Characterizing
differential responses. Neuroimage, 7, 30–40.
Gauthier, I., Tarr, M. J., Moylan, J., Skudlarski, P., Gore, J. C.,
& Anderson, A. W. (2000). The fusiform “face area” is part
of a network that processes faces at the individual level.
Journal of Cognitive Neuroscience, 12, 495–504.
Genovese, C. R., Lazar, N. A., & Nichols, T. (2002). Thresholding
of statistical maps in functional neuroimaging using the
false discovery rate. Neuroimage, 15, 870–878.
Godschalk, M., Lemon, R. N., Kuypers, H. G., & Ronday,
H. K. (1984). Cortical afferents and efferents of monkey
postarcuate area: An anatomical and electrophysiological
study. Experimental Brain Research, 56, 410–424.
Graziano, M. S., & Cooke, D. F. (2006). Parieto-frontal
interactions, personal space, and defensive behavior.
Neuropsychologia, 44, 2621–2635.
Graziano, M. S. A., Cooke, D. F., & Taylor, C. S. R. (2000). Coding
the location of the arm by sight. Science, 290, 1782–1786.
Grefkes, C., & Fink, G. R. (2005). The functional organization
of the intraparietal sulcus in humans and monkeys.
Journal of Anatomy, 207, 3–17.
Haxby, J. V., Hoffman, E. A., & Gobbini, I. M. (2000). The
distributed human neural system for face perception.
Trends in Cognitive Sciences, 4, 223–233.
Heatherton, T. F., Wyland, C. L., Macrae, N. C., Demos, K. E.,
Denny, B. T., & Kelley, W. M. (2006). Medial prefrontal
activity differentiates self from close others. Social
Cognitive and Affective Neuroscience, 1, 18–25.
Huang, R., & Sereno, M. I. (2007). Dodecapus: An
MR-compatible system for somatosensory stimulation.
Neuroimage, 34, 1060–1073.
Ishida, H., Nakajima, K., Inase, M., & Murata, A. (2010).
Shared mapping of own and othersʼ bodies in visuotactile
bimodal area of monkey parietal cortex. Journal of
Cognitive Neuroscience, 22, 83–96.
Luppino, G., Murata, A., Govoni, P., & Matelli, M. (1999).
Largely segregated parietofrontal connections linking
rostral intraparietal cortex (areas AIP and VIP) and the
ventral premotor cortex (areas F5 and F4). Experimental
Brain Research, 128, 181–187.
Macaluso, E. (2006). Multisensory processing in sensory-specific
cortical areas. Neuroscientist, 12, 327–338.
Macaluso, E., & Driver, J. (2005). Multisensory spatial
interactions: A window onto functional integration in the
human brain. Trends in Neurosciences, 28, 264–271.
Macaluso, E., Frith, C. D., & Driver, J. (2000). Modulation
of human visual cortex by crossmodal spatial attention.
Science, 289, 1206–1208.
Macaluso, E., Frith, C. D., & Driver, J. (2005). Multisensory
stimulation with or without saccades: fMRI evidence for
crossmodal effects on sensory-specific cortices that reflect
multisensory location congruence rather than task-relevance.
Neuroimage, 26, 414–425.
Matelli, M., Camarda, R., Glickstein, M., & Rizzolatti, G.
(1986). Afferent and efferent projections of the inferior
area 6 in the macaque monkey. Journal of Comparative
Neurology, 251, 281–298.
Mazziotta, J. C., Toga, A. W., Evans, A., Fox, P., & Lancaster, J.
(1995). A probabilistic atlas of the human brain: Theory
and rationale for its development. Neuroimage, 2, 89–101.
Mitchell, J. P., Macrae, N. C., & Banaji, M. R. (2006). Dissociable
medial prefrontal contributions to judgments of similar
and dissimilar others. Neuron, 50, 655–663.
Northoff, G., & Bermpohl, F. (2004). Cortical midline structures
and the self. Trends in Cognitive Sciences, 8, 102–107.
Pia, L., Neppi-Modona, M., Ricci, R., & Berti, A. (2004). The
anatomy of anosognosia for hemiplegia: A meta-analysis.
Cortex, 40, 367–377.
Pitcher, D., Walsh, V., Yovel, G., & Duchaine, B. (2007). TMS
evidence for the involvement of the right occipital face area
in early face processing. Current Biology, 17, 1568–1573.
Platek, S. M., Wathne, K., Tierney, N. G., & Thomson, J. W.
(2008). Neural correlates of self-face recognition: An
effect-location meta-analysis. Brain Research, 1232,
173–184.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2002). Motor and
cognitive functions of the ventral premotor cortex.
Current Opinion in Neurobiology, 12, 149–154.
Sereno, M. I., & Huang, R. S. (2006). A human parietal face
area contains aligned head-centered visual and tactile
maps. Nature Neuroscience, 9, 1337–1343.
Serino, A., Giovagnoli, G., & Ladavas, E. (2009). I feel what
you feel if you are similar to me. PLoS One, 4, e4930.
Serino, A., Pizzoferrato, F., & Ladavas, E. (2008). Viewing a
face (especially oneʼs own face) being touched enhances
tactile perception on the face. Psychological Science, 19,
434–438.
Jenkins, A. C., Macrae, C. N., & Mitchell, J. P. (2008). Repetition suppression of ventromedial prefrontal activity during judgments of self and others. Proceedings of the National Academy of Sciences, U.S.A., 105, 4507–4512.
Kanwisher, N., & Yovel, G. (2006). The fusiform face area:
A cortical region specialized for the perception of faces.
Philosophical Transactions of the Royal Society of
London, Series B, Biological Sciences, 361, 2109–2128.
Keysers, C., Wicker, B., Gazzola, V., Anton, J. L., Fogassi, L.,
& Gallese, V. (2004). A touching sight: SII/PV activation
during the observation and experience of touch. Neuron,
42, 335–346.
Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S., &
Baker, C. I. (2009). Circular analysis in systems neuroscience:
The dangers of double dipping. Nature Neuroscience,
12, 535–540.
Stamenov, M. I. (2005). Body schema, body image, and mirror neurons. In H. De Preester & V. Knockaert (Eds.), Body image and body schema: Interdisciplinary perspectives on the body (pp. 21–43). Portland, OR: John Benjamins Publishing Co.
Tsakiris, M., Hesse, M. D., Boy, C., Haggard, P., & Fink, G. R.
(2007). Neural signatures of body ownership: A sensory
network for bodily self-consciousness. Cerebral Cortex,
17, 2235–2244.
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D.,
Crivello, F., Etard, O., Delcroix, N., et al. (2002).
Automated anatomical labeling of activations in SPM
using a macroscopic anatomical parcellation of the
MNI MRI single-subject brain. Neuroimage, 15, 273–289.
Worsley, K., & Friston, K. (1995). Analysis of fMRI time-series revisited-again. Neuroimage, 2, 173–181.