Structural Similarity Exerts Opposing Effects on Perceptual
Differentiation and Categorization: An fMRI Study

Christian Gerlach1, Xun Zhu2, and Jane E. Joseph2

Abstract

■ We manipulated the degree of structural similarity between
objects that had to be matched either according to whether they
represented the same object (perceptual matching) or belonged
to the same category (conceptual matching). Behaviorally, per-
formance improved as a linear function of increased structural
similarity during conceptual matching but deteriorated as a lin-
ear function of increased structural similarity during perceptual
matching. These effects were mirrored in fMRI recordings where
activation in several ventral posterior areas exhibited a similar

interaction between match type and structural similarity. Our
findings provide direct support for the notion that structural
similarity exerts opposing effects on classification depending
on whether objects are to be perceptually differentiated or
categorized—a notion that has been based on rather circum-
stantial evidence. In particular, the finding that structural similar-
ity plays a major role in categorization of instances according to
taxonomy challenges the view that the organization of super-
ordinate categories is not driven by shared structural features.

INTRODUCTION

Similarity plays a central part in classification of instances
because objects are often assigned category membership
based on their shared characteristics (Sloutsky, 2009).
Similarity in terms of shape (structural similarity [SS])
seems especially important in this respect as reflected,
for example, by the fact that shape similarity forms one
of the ontogenetically earliest and dominating bases for
classification (Sloutsky, 2009; Mandler, 2000).

Evidence supporting the notion that SS is an important
parameter in classification has also come from studies of
brain-damaged patients with so-called category-specific
disorders. Typically, these disorders affect the recogni-
tion or comprehension of natural objects (e.g., animals
and plants), whereas the recognition or comprehension
of artifacts (e.g., furniture and tools) is relatively pre-
served. Although less frequently, the reverse pattern has
also been reported (for reviews, see Capitani, Laiacona,
Mahon, & Caramazza, 2003; Humphreys & Forde, 2001;
Gainotti, 2000; Caramazza, 1998). Such observations sug-
gest that natural objects and artifacts may be processed
differently, and it has been proposed that the underlying
cause of at least some cases of category-specific disorders
relates directly to differences in similarity between mem-
bers belonging to the categories of natural objects and
artifacts (Gerlach, 2009; Humphreys, Riddoch, & Quinlan,
1988), with natural objects being more visually and seman-
tically similar to each other than artifacts (McRae & Cree,

1University of Southern Denmark, 2Medical University of South
Carolina

© 2015 Massachusetts Institute of Technology

Journal of Cognitive Neuroscience 27:5, pp. 974–987
doi:10.1162/jocn_a_00748

2002; Tranel, Logan, Frank, & Damasio, 1997; Humphreys
et al., 1988). This difference in similarity is likely to cause
different category effects depending on the task at hand.
To appreciate this, consider two cases of classification:
object identification, where the object has to be classified
as a particular instance (say “a fox terrier”), and object
categorization, where the object has to be classified as
member of a broader class of objects (say “animals”). These
two cases clearly differ in how demanding they are in terms
of object differentiation. During categorization, the stimu-
lus need not be individuated with respect to other mem-
bers of its category; you need not decide whether the
target item is a cow, a dog, or a horse to categorize it as
an animal. On the contrary, the more similar the stimulus
is to other members of its category, and the less similar it is
to members of other categories, the higher the probability
that it belongs to that particular category compared with
other categories. During object identification, however,
the stimulus (e.g., a fox terrier) needs to be differentiated
from other members of its category; now, you must decide
not only whether the stimulus is a cow, dog, horse, or
something else but also which particular dog you are
presented with. In this case, high similarity is harmful for
performance. If natural objects are more similar than arti-
facts, we should thus expect natural objects to be identified
less efficiently than artifacts but also expect artifacts to be
categorized less efficiently than natural objects. Although
these effects have been reported across several studies
(see, e.g., Gerlach, 2009), it remains unclear whether the
observed effects were indeed driven by differences in
similarity, as similarity was typically not under experi-
mental control. Rather, an effect of similarity was inferred
post hoc to explain the obtained results. This introduces
the risk of circularity, for example, that objects which are
categorized fast belong to categories of objects with high
similarity, with the degree of similarity being inferred by
whether the objects are quickly categorized. Neither has
it been demonstrated convincingly that it is similarity in
terms of structure rather than similarity in terms of, for ex-
ample, function, an arguably semantic aspect of an object,
which has been driving these effects. In this study, we ad-
dress both of these outstanding questions directly.

Participants were presented with stimuli composed of
two line drawings, which had to be compared. SS was
manipulated parametrically in that the pairs could be of
low, intermediate, or high similarity. In addition to SS, we
also manipulated the type of matching to be performed
so that it was either conceptual (“Do the stimuli belong
to the same category?”) or perceptual (“Do the stimuli
represent the same object?”; see Figure 1).

To judge whether two images represent the same object
(perceptual matching), it is, in principle, sufficient to
examine whether the two images map onto the same struc-
tural representation stored in visual long-term memory
(VLTM). If they do, the images represent the same object.
If they do not, the images are likely to represent different
objects. In conceptual matching conditions, images also
need to be matched with VLTM representations; however,
this will not suffice to judge whether the objects belong to
the same category as members of the same category can
vary in structural composition and thus do not map onto
the same VLTM representation. Evidence supporting these
assumptions comes from studies of patients with visual
agnosia. Some of these patients may have difficulties rec-

ognizing objects because of impaired VLTM representa-
tions (perceptual matching; Humphreys, Riddoch, &
Boucart, 1992), whereas other patients seem capable of
accessing such representations, for example, by being
capable of matching objects seen from different view-
points, although they cannot match objects according to
functional similarity within the visual modality (conceptual
matching; Ptak, Lazeyras, Di Pietro, Schnider, & Simon,
2014). Hence, conceptual matching necessitates access to
a more abstract level of representation, which can be com-
mon even for objects that are dissimilar in shape. Typically,
this level is considered semantic in nature and is often
referred to as semantic memory (Gerlach, 2009). On the
basis of this, we predicted that RTs would be longer on
conceptual matching trials compared with perceptual
matching trials.

If we assume that structurally similar objects are located
nearer each other in psychological space than are struc-
turally dissimilar objects and that discriminability increases
as a function of distance in psychological space, then struc-
turally dissimilar objects are easier to tell apart than struc-
turally similar objects (Nosofsky, 1986). This “distance
assumption” regarding similarity and psychological space
may also apply to neural space. Imagine that (a) every per-
ceived object gives rise to a unique pattern of activation in
the brain; (b) the more similar two objects are, the more
will their respective brain activations overlap; and (c) dis-
criminability depends on how overlapping activation
patterns are, with less overlap increasing the discriminabil-
ità. If so, discriminability can be understood as distance in
neural rather than psychological space, and it so happens
that visually similar objects do give rise to more overlapping

Figura 1. Examples of three
stimulus pairs characterized as
either low, intermediate,
or high in SS. Also illustrated is
the correct answer to each of
four types of the stimulus
pairs depending on whether
they were presented during
the perceptual or conceptual
matching condition.

activation patterns in ventral posterior brain areas than less
similar objects do (Weber, Thompson-Schill, Osherson,
Haxby, & Parsons, 2009). On the basis of this distance
assumption, we predicted that the efficiency of perceptual
matching would decrease as a linear function of increased
SS between image pairs representing different objects. As
an example, it should be easier to decide that an “apple”
and a “banana” are different objects than that a “dog”
and a “fox” are different objects. With respect to conceptual
matching, we predicted that increasing SS would have the
reverse effect on performance. This prediction is based on
the observation that structurally similar objects often share
similar functional features and cluster categorically (Hills,
Maouene, Maouene, Sheya, & Smith, 2009; Rosch, 1999).
For animals, for example, similar shape is a product of
similar evolutionary constraints; four legs are good for
movement on land but apparently rather hopeless for
flying. For artifacts, similar shape also often implies similar
function (Randall, Moss, Rodd, Greer, & Tyler, 2004; Rogers
& McClelland, 2004). Hence, similarity in shape often
conveys similarity in other respects too. This means that
SS can act as a proxy to semantic category membership,
making the conceptual matching process more efficient
for structurally similar objects than for structurally dissimilar
objects. As an example, because a dog and a fox are highly
structurally similar, chances are that they belong to the
same superordinate category “animals.” Hence, it is not
even necessary to identify each object on a structural level
as a dog and a fox before they can be categorized as ani-
mals. It is sufficient to see that they are similar in overall
shape or share a couple of features (e.g., legs). On the other
hand, if two objects are not structurally similar, they must
be structurally individuated before category assignment. As
an example, because an apple and a banana are structurally
dissimilar but still belong to the same category “fruits,"
each object must be identified structurally as a particular
object before they can be properly categorized. These pre-
dictions regarding opposing effects of SS on perceptual
and conceptual matching are entirely consistent with the
literature mentioned above concerning category effects
in visual object processing.
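
To make the opposing roles of similarity concrete, here is a minimal
numerical sketch (the feature vectors, object names, and values below are
invented for illustration and are not taken from the study): treating
objects as points in a feature space, a small distance works against a
quick "different objects" decision in perceptual matching but, by the
proxy logic above, argues for shared category membership in conceptual
matching.

    import numpy as np

    # Hypothetical feature vectors in a toy "psychological space"
    # (dimensions and values are arbitrary illustrations).
    objects = {
        "dog":    np.array([1.0, 0.9, 0.8, 0.1, 0.0]),
        "fox":    np.array([1.0, 0.8, 0.9, 0.1, 0.0]),
        "apple":  np.array([0.0, 0.1, 0.0, 1.0, 0.2]),
        "banana": np.array([0.1, 0.0, 0.0, 0.2, 1.0]),
    }

    def distance(a, b):
        """Euclidean distance between two objects' feature vectors."""
        return float(np.linalg.norm(objects[a] - objects[b]))

    # High structural similarity (small distance): perceptual differentiation
    # is hard, but the small distance alone already hints at a shared category.
    print(distance("dog", "fox"))       # ~0.14

    # Low structural similarity (large distance): "different objects" is easy,
    # but category membership must be established semantically instead.
    print(distance("apple", "banana"))  # ~1.14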

In addition to behavior, we also explored the neural
correlates of perceptual and conceptual matching by
means of fMRI recordings during the tasks. Given that
both perceptual and conceptual matching necessitate
structural processing of the stimuli and given that the
same stimuli were presented during perceptual and con-
ceptual matching, we did not expect perceptual matching
to cause greater activation than conceptual matching
across all SS levels in ventral posterior areas of the brain
(defined here as regions posterior to y = −40 and infe-
rior to z = 15; Evans et al., 1993; including the calcarine
sulcus, lingual gyrus, inferior and middle occipital gyri,
and the posterior/middle fusiform gyrus). Tuttavia, be-
cause conceptual matching necessitates access to a more
abstract level of representation (semantic memory) di
perceptual matching, we did expect conceptual matching

to cause greater activation than perceptual matching
across all similarity levels in areas associated with semantic
processing such as the inferotemporal cortex (Gerlach,
Law, Gade, & Paulson, 2000) and/or left inferior frontal
gyrus (Thompson-Schill, 2003; see also Binder, Desai,
Graves, & Conant, 2009).

Besides this effect of conceptual matching, we also ex-
pected to find a positive relationship between increasing
levels of SS and activation in ventral posterior parts of the
brain during perceptual matching. This follows from the
behavioral predictions and is further based on the assump-
tion that ventral posterior parts of the brain are involved
in structural rather than semantic processing. This assump-
tion is supported by several lines of evidence: (a) these
regions are not involved in semantic processing of both
words and pictures—as would be assumed if they sup-
ported conceptual knowledge—but seem to respond to
the similarity structure of pictures only (Devereux, Clarke,
Marouchos, & Tyler, 2013), (b) they are sensitive to changes
in shape but not to changes in basic-level semantic catego-
ries associated with those shape changes (Kim, Biederman,
Lescroart, & Hayworth, 2009), (c) they are implicated in
visual agnosia (e.g., Ptak et al., 2014)—a disorder charac-
terized by impaired processing of object structure con-
currently with preserved semantic knowledge, (d) they
are involved in structural differentiation of objects (Gerlach,
2009; Liu, Steinmetz, Farley, Smith, & Joseph, 2008), E
(e) they do indeed exhibit a positive correlation between
SS and activation (Collins, Zhu, Bhatt, Clark, & Joseph,
2012; Liu et al., 2008; Joseph & Farley, 2004; Joseph &
Gathers, 2003). In the present formulation, the ventral pos-
terior regions process structural information about objects
and accumulate evidence for differences in structure to
make perceptual decisions but accumulate evidence for
similarity in structure to make conceptual decisions. For
low similarity pairs, evidence for differences in structure is
high so the decision that they are different objects is easier
compared with high similarity pairs for which evidence
about differences is low. In this case, additional structural
processing is needed to differentiate the objects. Therefore,
the posterior ventral regions will be engaged more for
high similarity than for low similarity pairs for perceptual
matching.

In contrast, we expected to find the inverse relation-
ship between SS and activation in the same areas during
conceptual matching, that is, increasing degrees of activa-
tion as SS diminishes. This hypothesis is based on the
rationale that, although conceptual matching is ultimately
a semantic task, it also necessitates structural processing.
Consequently, if high SS can act as a proxy to category
membership, in that structurally similar objects are more
likely to belong to the same category than structurally dis-
tinct objects, then conceptual matching can largely be
based on structural information. However, as described
above, if conceptual matching cannot be based largely on
SS, then objects will need to be identified at the semantic
level. In turn, this will require additional access to structural
information, which will engage posterior ventral regions
even more strongly. Consequently, the low similarity pairs will
induce more posterior ventral activation than high simi-
larity pairs for conceptual matching.

In summary: In terms of behavior, we predicted that
performance would deteriorate as a function of increased
SS during perceptual matching but improve as a function
of increased SS during conceptual matching. In terms of
brain activation, we predicted that both perceptual and
conceptual matching would activate ventral posterior
parts of the brain because processing of VLTM represen-
tations is required for both types of matching. In addi-
tion, we expected activation in these areas to (a) increase
during perceptual matching as the SS of the objects to be
compared increased and (b) decrease during conceptual
matching as the SS of the objects increased (an interaction
between task and SS; see Figure 2). This outcome would
indicate not only that these regions are integral for pro-
cessing VLTM representations (i.e., activation is modulated
by the similarity manipulation) but also that these regions
process VLTM representations based on higher-level task
demands of perceptual versus conceptual matching (i.e.,
differential modulation by similarity in the same region).
An alternative account is that the processing in these re-
gions is involved in comparing image-based descriptions,
that is, the physical similarity between stimuli rather than
the similarity between representations stored in VLTM.
Two images that are not similar will not have as many
features in common as two images that are more similar,
which will have more image features in common. If these
ventral posterior regions are simply computing similarity
based on image information, then similarity modulation
of fMRI signal in these regions would be the same regard-
less of the higher-level task demand given that the same
images are used in the perceptual and conceptual con-
ditions (see Figure 2). Finally, we expected the left infero-
lateral temporal cortex and/or left inferior frontal gyrus to
be associated with a main effect of conceptual matching, as
an index of semantic processing.

Figure 2. (A) Primary and (B) alternative hypotheses for modulation
of ventral posterior activation in this study. The primary hypothesis
predicts that fMRI signal in ventral posterior regions will increase as a
function of SS for the perceptual matching task and will decrease as
a function of similarity for the conceptual matching task. The alternative
hypothesis predicts that ventral posterior activation will increase as a
function of SS for both perceptual and conceptual matching tasks.

Although our prediction concerning a negative influ-
ence of SS on perceptual differentiation is based on many
prior findings, we are unaware of any prior attempts to
directly test whether SS can affect superordinate catego-
rization positively. If it can, it will challenge the standard
view that the organization of basic level categories may
be driven by shared structural features among their mem-
bers, but superordinate categories are not (Hills et al.,
2009; Cutzu & Tarr, 1997). Indeed, Hills et al. (2009) note
that the existence of superordinate categories has been
taken as prima facie evidence in favor of more abstract
and theory-like representations of categories over repre-
sentations in terms of mere feature distributions.

METHODS

Participants

Twenty-four right-handed healthy adults participated.
They all had normal or corrected-to-normal vision, E
none of them reported neurological or psychiatric diag-
noses or pregnancy. All participants provided informed
consent before participating, and all procedures were
approved by the local institutional review board.

Because of excessive head movement, data from two
participants had to be excluded from the analyses of
behavioral and imaging data. The mean age of the remain-
ing 22 participants was 22.5 years (SD = 4.1 years, range =
18–31 years; 12 men). Inoltre, RTs were not recorded
appropriately for six participants because of technical
failure. Hence, these participants also had to be excluded
from the RT analyses (but not the analyses based on error
rates), causing RT analyses to be based on 16 participants
only.

Tasks and Stimuli

In all tasks, the participants had to compare two stimuli.
In the conceptual matching task, they had to decide
whether the stimuli came from the same category (animals
that were not birds [mammals, fish, reptiles, amphibians,
or insects], birds, fruits, or vegetables), whereas in the
perceptual matching task, the participants had to decide
whether the stimuli represented the same object (e.g.,
two different images of a dog). For conceptual matching
same-response trials (e.g., apple and banana) and per-
ceptual matching different-response trials (e.g., apple and
banana), the object pairs were identical. For conceptual
matching different-response trials (e.g., apple and broccoli)
and perceptual matching same-response trials (e.g., dog
and dog), all stimulus pairs differed. For conceptual match-
ing same-trials and perceptual matching different-trials,
object pairs were characterized by different degrees of
SS: either low SS1 (e.g., banana and apple), intermediate
SS2 (e.g., toad and alligator), or high SS3 (e.g., elephant
and rhinoceros; see Figure 1). The stimuli were black
and white line drawings of common objects, presented
two at a time, one above and one below a fixation cross,
which appeared in the middle of the screen. They were
selected from previous norming studies ( Joseph, 1997;
Joseph & Proffitt, 1996) in which participants rated the
similarity of the two items of a pair in terms of 3-D volumetric
structure. In these prior studies, participants were instructed
to consider the general volumetric configuration of the
objects as opposed to simply the outline. They were also
explicitly told to ignore stored knowledge about the ob-
jects such as texture, color, size, and taxonomic category.
The rating scale was a horizontal bar spanning a length of
1000 pixels at the top of the screen anchored by the labels
“least similar” at the left end and “most similar” at the right
end. A vertical marker appeared in the center of this bar
(at 500 pixels from the left end) at the start of each rating
trial. Participants moved this marker along the scale using
the mouse and then clicked a mouse button to indicate the
degree of similarity between the two objects. The number
of pixels that the marker was displaced from the left end of
the scale served as the similarity rating, and this value could
range from 0 to 1000 pixels; thus, high similarity was asso-
ciated with values closer to 1000.

The distribution of SS ratings from the prior studies
determined the assignment of object pairs to each of the
three similarity levels (SS1–SS3) that characterized the
conceptual matching same-response trials/perceptual match-
ing different-response trials. A one-way ANOVA conducted
across items revealed that the similarity ratings for the
SS1, SS2, and SS3 stimuli were indeed significantly different
from each other (F(2, 48) = 51.3, p = .0001) with SS1 hav-
ing the lowest rating (M = 400.1, SD = 18.7), SS2 having a
higher rating (M = 544, SD = 19.2), and SS3 having the
highest rating (M = 677.0, SD = 19.9). There were no sim-
ilarity ratings available for conceptual matching different-
response trials or for perceptual matching same-response
trials; however, SS can be assumed to be quite high for
perceptual matching same-response trials as these stimuli
pairs depict the same object although in different versions.
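
The item-level check described above can be outlined as follows (a sketch
only; the rating arrays are placeholders rather than the actual values from
Joseph, 1997, and Joseph & Proffitt, 1996):

    import numpy as np
    from scipy import stats

    # Placeholder similarity ratings on the 0-1000 pixel scale, one value per
    # object pair, grouped by the SS level each pair was assigned to.
    ss1 = np.array([395.0, 410.0, 388.0, 402.0])   # low similarity pairs
    ss2 = np.array([540.0, 551.0, 538.0, 547.0])   # intermediate pairs
    ss3 = np.array([670.0, 681.0, 675.0, 682.0])   # high similarity pairs

    # One-way ANOVA across items, analogous to the reported check that the
    # three predefined levels differ in rated similarity.
    f_stat, p_value = stats.f_oneway(ss1, ss2, ss3)
    print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
    print("level means:", [round(float(g.mean()), 1) for g in (ss1, ss2, ss3)])
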
The assignment of objects to the four categories was
based on the similarity ratings available from the original
rating experiments (Joseph, 1997; Joseph & Proffitt,
1996). In these studies, participants completed pairwise
ratings for subsets of the object pairs but not for all pos-
sible stimulus pairings. So there were only ratings avail-
able for birds paired with other birds but not birds with
fruits or other kinds of animals, and so forth.

Design and Procedures

The experiment was composed of six experimental con-
ditions: conceptual matching SS1–SS3 and perceptual
matching SS1–SS3. Each of the six experimental condi-
tions was composed of 72 trials. In each of the three
conceptual matching conditions, 48 of the 72 trials were
same-response trials (e.g., apple and banana), and the
remaining 24 trials were different-response trials (e.g.,
apple and broccoli). In each of the three perceptual match-
ing conditions, 48 of the 72 trials were different-response
trials (apple and broccoli), and the remaining 24 trials were
same-response trials (e.g., dog and dog). The 432 trials
were distributed across three functional runs, con 12 task
blocks (2 different matching conditions × 3 SS levels ×
2 repetitions) interleaved with 11 rest blocks per run.
The order of conditions within a run was counterbalanced
across participants. Task blocks lasted 36 sec, and rest
blocks lasted 12 sec each. Each task block included 12 trials
(either eight perceptual matching different-response trials
and four perceptual matching same-response trials or
eight conceptual matching same-response trials and four
conceptual matching different-response trials) presented
in random order.
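
The trial bookkeeping described in this paragraph can be laid out as in the
sketch below (the condition labels and the shuffling step are illustrative
stand-ins; the actual counterbalancing scheme is only specified at the level
stated in the text):

    import random

    conditions = [f"{task}_SS{level}"
                  for task in ("perceptual", "conceptual")
                  for level in (1, 2, 3)]

    TRIALS_PER_CONDITION = 72      # 6 conditions x 72 trials = 432 trials
    RUNS = 3
    BLOCKS_PER_RUN = 12            # 2 tasks x 3 SS levels x 2 repetitions
    TRIALS_PER_BLOCK = 12

    assert len(conditions) * TRIALS_PER_CONDITION == RUNS * BLOCKS_PER_RUN * TRIALS_PER_BLOCK

    def make_run(rng: random.Random):
        """One functional run: each condition appears in two task blocks."""
        blocks = conditions * 2
        rng.shuffle(blocks)        # stand-in for counterbalancing across participants
        return blocks              # 12 task blocks, interleaved with rest blocks in practice

    print(make_run(random.Random(0)))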

Each trial lasted for 3000 msec. It began with a query
question (for 1 sec), either “Same category?” or “Same
object?” depending on the matching condition. This
was followed by the target object pairs, which were dis-
played for 400 msec, followed by a screen with a centrally
presented “?” displayed for 1600 msec. The objects were
presented one above the other with a fixation crosshair
in the center. Each object subtended a vertical visual
angle of 4°. Responses were collected via an MR-compatible
response pad held in the right hand. Participants were
instructed to press the “yes” button with their index finger
if the objects in a pair came from the same category
(conceptual matching trials) or depicted the same object
(perceptual matching trials) or else press the “no” button
with their middle finger.
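
The timeline of a single trial, as described above, amounts to the
following schematic (not the E-Prime script used in the study):

    # Phases of one 3000-msec trial: cue question, stimulus pair, response prompt.
    TRIAL_PHASES_MSEC = [
        ("query ('Same category?' or 'Same object?')", 1000),
        ("target object pair, one above and one below fixation", 400),
        ("'?' response screen", 1600),
    ]

    assert sum(duration for _, duration in TRIAL_PHASES_MSEC) == 3000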

Before entering the scanner, individuals were trained
to identify which pairs of objects constituted a match
or mismatch trial in the context of the conceptual and
perceptual matching conditions and also which stimuli
belonged to which categories (e.g., that tomato belonged
to the category vegetables). Primo, participants were told
that there were four categories (“animals” that were not
birds, “birds,” “fruits,” and “vegetables”). They were then
given practice viewing each object in each of its versions
and were told the category assignment for those objects
so that they could learn the association of each object
with its category for the purposes of this experiment.
Then, they practiced making decisions (“same object”
and “same category”) with unlimited time to respond
for 16 trials. Then, they completed 36 timed practice trials
before the actual experiment in the scanner. During train-
ing and during the actual scanning session, participants
were asked to respond as accurately and quickly as pos-
sible. No feedback was given on performance. During
the scanning session, stimuli were presented using a
high-resolution rear-projection system, and participants
viewed the stimuli via a reflection mirror mounted on the
head coil. A desktop computer running E-Prime (Version 1.1
SP3; Psychology Software Tools, Pittsburgh, PA) controlled
stimulus presentation and the recording of responses.

The timing of stimulus presentation was synchronized
with the magnet trigger pulses.

Image Acquisition

A 3-T Siemens Trio magnetic resonance imaging system at
the University of Kentucky Medical Center equipped for
EPI was used for data acquisition. Four hundred forty-
eight EPI images were acquired (repetition time = 3000 msec,
echo time = 30 msec, flip angle = 81°), each consisting of
40 contiguous axial slices (matrix = 64 × 64, in-plane
resolution = 3.5 × 3.5 mm², thickness = 3.5 mm, gap =
0.6 mm). A high-resolution T1-weighted magnetization
prepared rapid gradient echo anatomical set (192 sagittal
slices, matrix = 224 × 256, field of view = 224 × 256 mm²,
slice thickness = 1 mm, no gap, echo time = 2.93 msec,
inversion time = 1100 msec, repetition time = 2100 msec)
was collected for each participant.

Analysis of Behavioral Data

RT and error rates were recorded from participants per-
forming the tasks in the scanner. To ensure that the RT
variable was normally distributed to meet the assump-
tions of a multivariate approach, the log transformation
of individual RTs was used (LogRTs). LogRTs from indi-
vidual trials more than 3 SDs from the overall group
mean were considered outliers (no outliers emerged).
Only correct LogRTs were submitted to analyses (89%
of the data). Each dependent variable was subjected to
a two-way repeated-measures ANOVA using a multi-
variate approach (OʼBrien & Kaiser, 1985), with repeated
factors Task (conceptual vs. perceptual matching) and SS
level (low vs. intermediate vs. high SS). Because SS levels
were only truly comparable across conceptual matching
same-response trials and perceptual matching different-
response trials, data from conceptual matching different-
response trials and perceptual matching same-response
trials were not included in these ANOVAs. Following Hertzog
and Rovineʼs (1985) recommendation, we use multivariate
planned comparisons to test whether the effect of similar-
ity for each matching condition shows a significant linear
trend. When the linear trend is not significant, we also
report any quadratic trends.
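
In outline, this behavioral pipeline corresponds to something like the
sketch below. The data-frame layout, column names, and use of
statsmodels/scipy are assumptions made for illustration; the authors used a
multivariate approach, and the planned linear-trend test is implemented
here as a [−1, 0, 1] contrast on each participant's condition means tested
against zero, which is one standard way to realize such a comparison.

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.anova import AnovaRM

    def analyze_logrt(df: pd.DataFrame):
        """df: one row per correct trial with columns 'subject', 'task'
        ('conceptual'/'perceptual'), 'ss' (1, 2, 3), and 'rt' in msec."""
        df = df.copy()
        df["logrt"] = np.log10(df["rt"])                     # log-transform RTs

        # Exclude trials more than 3 SDs from the overall group mean.
        m, s = df["logrt"].mean(), df["logrt"].std()
        df = df[(df["logrt"] - m).abs() <= 3 * s]

        # Per-subject cell means for the 2 (task) x 3 (SS level) design.
        cell = (df.groupby(["subject", "task", "ss"])["logrt"]
                  .mean().reset_index())

        # Two-way repeated-measures ANOVA (Task x SS level).
        anova = AnovaRM(cell, depvar="logrt", subject="subject",
                        within=["task", "ss"]).fit()

        # Planned linear trend across SS levels within one matching task.
        def linear_trend(task):
            wide = (cell[cell["task"] == task]
                    .pivot(index="subject", columns="ss", values="logrt"))
            scores = wide[[1, 2, 3]].to_numpy() @ np.array([-1.0, 0.0, 1.0])
            return stats.ttest_1samp(scores, 0.0)

        return anova, linear_trend("conceptual"), linear_trend("perceptual")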

Analysis of fMRI Data

Preprocessing and statistical analysis used FMRIB Software
Library (v. 4.1.7; FMRIB, Oxford University, Oxford, United
Kingdom). For each participant, preprocessing included
motion correction with MCFLIRT, brain extraction using
BET, spatial smoothing with a 7-mm FWHM Gaussian
kernel, and temporal high-pass filtering (cutoff = 100 sec).
Statistical analyses were then performed at the single-
subject level (general linear model, FEAT v. 5.98). Each
scan was modeled with six explanatory variables (EVs; per-
ceptual and conceptual matching × 3 similarity levels)
versus baseline, with the height of each EV determined
by the average accuracy for that block, individualized for
each participant. Each EV was then convolved with a
double gamma hemodynamic response function and a
temporal derivative. Inoltre, six head motion param-
eters (three translations and three rotations) were also
included to control for head motion confounds.
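
As a rough illustration of this first-level model (a sketch with invented
block onsets; it is not the FEAT configuration itself), each condition's
regressor can be thought of as an accuracy-weighted boxcar convolved with a
double-gamma hemodynamic response function:

    import numpy as np
    from scipy.stats import gamma

    TR = 3.0          # sec, matching the acquisition
    N_VOLS = 448

    def double_gamma_hrf(tr, duration=32.0):
        """Canonical double-gamma HRF sampled at the TR (SPM-style parameters)."""
        t = np.arange(0.0, duration, tr)
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    def accuracy_weighted_boxcar(onsets_sec, block_dur_sec, accuracy):
        """Boxcar for one condition, scaled by that block's average accuracy."""
        reg = np.zeros(N_VOLS)
        for onset in onsets_sec:
            start, stop = int(onset / TR), int((onset + block_dur_sec) / TR)
            reg[start:stop] = accuracy
        return reg

    # One example EV: two 36-sec blocks with block accuracy 0.85 (onsets invented).
    ev = np.convolve(accuracy_weighted_boxcar([12.0, 108.0], 36.0, 0.85),
                     double_gamma_hrf(TR))[:N_VOLS]

    # A full design matrix would stack six such EVs (2 tasks x 3 SS levels),
    # their temporal derivatives, and the six head-motion parameters as columns.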

For each participant, contrast maps were registered via
the participantʼs high-resolution T1-weighted anatomical
image to the adult Montreal Neurological Institute 152
template (12-parameter affine transformation; FLIRT)
yielding images with spatial resolution of 2 mm³. A
mixed-effects group analysis (using FLAME) yielded the
group-level statistical parametric map of each contrast.
Higher level maps were thresholded by p < .01 (false discovery rate [FDR]-corrected). In accordance with the hypotheses presented in the Introduction, the following analyses were performed:

(1) To test the prediction that ventral posterior regions are strongly recruited for both perceptual and conceptual matching, we examined the overlap in activation between conceptual matching versus fixation and perceptual matching versus fixation.

(2) To examine main effects of Task type (conceptual vs. perceptual matching), we contrasted the perceptual matching task with the conceptual matching task and vice versa.

(3) To test the prediction that brain activation in ventral posterior areas would increase with increased SS during perceptual matching but decrease with increased SS during conceptual matching, we looked for interactions between task type and SS level. Because opposing effects of SS on conceptual and perceptual matching should be greatest at the most extreme ends of the similarity dimension, that is, at SS Levels 1 and 3, we set the weights for SS Level 2 to zero for each contrast when examining interaction effects. This should maximize our ability to detect interaction effects. Hence, interaction effects were modeled with the following contrast weights: [−1, 0, 1, 1, 0, −1] for perceptual matching SS Levels 1, 2, and 3 and conceptual matching SS Levels 1, 2, and 3, respectively (see the sketch following this list). For completeness, we also looked for areas where activation increased during conceptual matching but decreased during perceptual matching ([1, 0, −1, −1, 0, 1]), although such interaction effects were not anticipated. In regions associated with interaction effects, we conducted post hoc trend analyses (IBM Statistics, Chicago, IL) examining the BOLD signal across all SS levels. For ROIs isolated from the interaction contrasts, percent signal change relative to fixation was extracted for each event type in each participantʼs first-level analysis (using FMRIB Software Libraryʼs Featquery tool). Percent signal change for 3 similarity levels × 2 matching types (conceptual and perceptual) for each participant and region was then submitted to repeated-measures ANOVAs. The motivation for this analysis was that previous evidence suggests that parametric manipulations are not necessarily linearly related to BOLD signal changes (Birn, Saad, & Bandettini, 2001). Such nonlinear effects have also been observed in manipulations of SS in prior studies (Liu et al., 2008; Joseph & Gathers, 2003). Consequently, although we expected that the voxelwise approach would be most sensitive in isolating differential activation between the two extreme ends of the similarity scale, the post hoc analysis would enable us to confirm that similarity trends were linear or quadratic. In other words, the voxelwise approach did not model the intermediate similarity level, but the post hoc analysis examined the full effect of all three similarity levels.
(4) To test the alternative hypothesis that the ventral posterior cortex is involved in processing image-based similarity (i.e., SS would exert the same effect in ventral posterior regions regardless of task type), we used a contrast that reflected both increasing similarity for perceptual matching and increasing similarity for conceptual matching ([−1, 0, 1, −1, 0, 1] for perceptual matching SS Levels 1, 2, and 3 and conceptual matching SS Levels 1, 2, and 3, respectively). If the ventral posterior cortex only processes image-based similarity, then this contrast will isolate activation in these regions, but the contrast described in (3) will not.

To ensure that the contrasts described above (2–4) reflected activations rather than deactivations, we made the additional requirement that activation during the experimental conditions (perceptual and/or conceptual matching) should be significantly higher than the activation during fixation (p < .01, FDR corrected) by masking activation maps with maps of perceptual > fixation and/or
with maps of conceptual > fixation depending on the spe-
cific contrast, using fslmaths.
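
For concreteness, the sketch below (illustrative only; the actual contrasts
were specified within the FEAT/FLAME analyses) shows how the weight vectors
from analyses (3) and (4) behave when applied to per-condition parameter
estimates; the toy beta values are invented to mimic the predicted
interaction pattern.

    import numpy as np

    # Condition order assumed here: perceptual SS1-SS3, then conceptual SS1-SS3,
    # matching the weight vectors quoted in the text.
    c_interaction = np.array([-1, 0, 1, 1, 0, -1])   # analysis (3)
    c_reverse     = -c_interaction                   # opposite interaction, tested for completeness
    c_image_based = np.array([-1, 0, 1, -1, 0, 1])   # analysis (4)

    def contrast_value(betas, weights):
        """Contrast estimate for one voxel/ROI given its six condition betas."""
        return float(np.dot(weights, betas))

    # Toy betas following the primary hypothesis: activation rises with SS for
    # perceptual matching and falls with SS for conceptual matching.
    betas = np.array([0.2, 0.4, 0.6,   # perceptual SS1, SS2, SS3
                      0.6, 0.4, 0.2])  # conceptual SS1, SS2, SS3

    print(contrast_value(betas, c_interaction))   # 0.8  -> interaction detected
    print(contrast_value(betas, c_image_based))   # 0.0  -> no image-based effect

A pattern matching the alternative hypothesis (activation rising with SS in
both tasks) would do the opposite: it would load on the image-based
contrast and cancel out in the interaction contrast.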

RESULTS

Behavioral Data

The analysis of errors revealed a main effect of Task (F(1,
21) = 5.3, p < .05) and a main effect of SS level (F(2, 20) = 15.4, p < .0001). These main effects were qualified by an interaction between task and SS level (F(2, 20) = 56.2, p < .0001). As trend analysis revealed a significant linear interaction (F = 108.9, p < .0001), simple trend tests were performed for perceptual and conceptual matching conditions separately. These analyses revealed significant linear trends across SS level for both matching types (F = 159, p < .0001, and F = 16.2, p < .001, for perceptual and conceptual matching, respectively). As seen in Figure 3A (left) and Table 1, the Task × SS level interaction reflects that the error rate increased as SS level increased for perceptual matching, whereas the error rate decreased as SS level increased for conceptual matching.

The analysis of RT revealed a main effect of Task (F(1, 15) = 6.2, p < .05) and a main effect of SS level (F(2, 14) = 9.2, p < .001). These main effects were qualified by a Task × SS level interaction (F(2, 14) = 14.1, p < .0001). As trend analysis revealed a significant linear interaction (F = 23.9, p < .0001), simple trend tests were performed for perceptual and conceptual matching conditions separately. These analyses revealed a significant linear trend across SS level for conceptual (F = 25.8, p < .0001) but not for perceptual (F = 0.9, p = .34) matching. As seen in Figure 3B (left) and Table 1, the interaction reflects that RT decreased across conceptual matching conditions as SS level increased, whereas RTs were more constant across perceptual matching conditions and, in fact, did not differ significantly across SS level (F = 1.6, p = .22).

Figure 3. Error rate and LogRTs for each SS level (low, intermediate, and high) for each match condition (perceptual or conceptual). For each figure, the left shows error rate and RTs for perceptual-different and conceptual-same conditions, whereas the right shows error rate and RTs for the perceptual-same and conceptual-different conditions. Error bars represent 95% within-subject confidence intervals (Cousineau, 2005).

Table 1. Mean Percentage Error Rates and Mean RT (Log Transformed) for the Conceptual and Perceptual Match Conditions

                                           % Errors    RT (Log)
Conceptual matching
  Same responses, SS Level 1               21 (4)      2.834 (.020)
  Same responses, SS Level 2               16 (3)      2.825 (.012)
  Same responses, SS Level 3               12 (3)      2.779 (.016)
  Different responses across all levels    26 (3)      2.860 (.008)
Perceptual matching
  Different responses, SS Level 1          10 (3)      2.770 (.021)
  Different responses, SS Level 2          18 (4)      2.785 (.019)
  Different responses, SS Level 3          31 (3)      2.780 (.018)
  Same responses across all levels         15 (2)      2.727 (.010)

Within-subject 95% confidence intervals for error rates and RTs are given in parentheses.

To test whether error rate or RT differed for conceptual matching different-response trials or for perceptual matching same-response trials as a function of SS level, trials from these conditions were subjected to four separate repeated-measures ANOVAs. As expected, none of these comparisons approached significance (all ps > .35),
because the assignment of pairs in these conditions to
similarity levels was arbitrary (see Figure 3A and B, right).

Imaging Data

We predicted that ventral posterior regions would be
strongly recruited for both perceptual and conceptual

matching. Figure 4A shows that, indeed, a large expanse
of ventral posterior cortex was activated by both per-
ceptual and conceptual matching, according to the fMRI
voxelwise analysis. Of course, many other regions were
also activated as would be expected from other task
demands such as response selection and execution. We
also predicted that conceptual matching would activate
additional regions involved in semantic processing com-
pared with perceptual matching. Tuttavia, no areas were
significantly more activated during conceptual than per-
ceptual matching.

The primary hypothesis was that brain activation in
ventral posterior areas would increase with increased
SS during perceptual matching but decrease with in-
creased SS during conceptual matching. This hypothesis
was confirmed according to the voxelwise analysis using
the contrast that represented the interaction of SS and
task (i.e., [−1, 0, 1, 1, 0, −1] for perceptual matching
SS Levels 1, 2, and 3 and conceptual matching SS Levels 1,
2, and 3, respectively), as shown in Table 2 and Figure 4B.
These large and bilateral posterior and ventral activations
were separated into fusiform, inferior, middle, and supe-
rior portions using regions in the automated anatomical
labeling atlas (Tzourio-Mazoyer et al., 2002) as masks.
In addition to these areas, the interaction was also asso-
ciated with bilateral activation in the cuneus, bilateral
activations in the parietal cortex (precuneus), and activa-
tion of the left paracingulate cortex. The opposite inter-
action (increasing fMRI signal as a function of increasing
similarity for conceptual matching and decreasing fMRI
signal as a function of increasing similarity for perceptual

Figura 4. Activation maps
illustrating (UN) the regions
associated with perceptual (red)
and conceptual (blue) matching
versus baseline and the overlap
of the two task types (purple)
E (B) the regions associated
with the interaction between
task type (perceptual vs.
conceptual matching) E
SS level (low, intermediate,
and high). All activation was
significant at p < .01, FDR corrected. Yellow arrows indicate the ROIs that were not further masked anatomically. The inset to the right shows the four major divisions of the left occipito-temporal cortex after masking by anatomical regions defined in the automated anatomical labeling atlas. These four regions were then analyzed separately (see results in Table 2). The same masking procedure was also used in the right occipito-temporal cortex to yield four ROIs (not shown in figure). Table 2. Areas Associated with the Interaction between Task Type and SS Level: Areas Where Activation Increased with Increased SS during Perceptual Matching and Decreased with Increased SS during Conceptual Matching Coordinates (x, y, z) BA Cluster Volumeb F Valuec Perceptual Matching (Trend) Conceptual Matching (Trend) F Valued for Simple Trends Regiona L. inferior occipital L. middle occipital 18 −30, −90, 5 L. fusiform 37 −36, −60, −16 L. inferior occipital 19 −39, −76, −9 L. superior occipital 18 −16, −99, 10 R. inferior occipital R. fusiform R. inferior occipital R. middle occipital R. superior occipital R. precuneus L. precuneus R. cuneus L. cuneus 37 19 18 18 7 37, −58, −17 40, −77, −9 33, −87, 6 23, −93, 9 30, −55, 44 7 −27, −56, 45 17 3, −76, 13 17 −10, −73, 10 L. paracingulate 32 −1, 9, 48 773 651 480 13 581 461 327 62 87 83 53 45 44 8.4*** 6.4** 5.9** 12.9*** 7.9** 5.6** 5.2** 6.2** 2.22 7.5** 10.5*** 5.1** 1.8 6.5** 3.9e,* 1.4 9.3*** 0.74 4.7e,*** 3.3e,* 3.2e,* 0.62 3.1* 2.6 3.1* 0.008 3.3* 8.1*** 5.8** 5.1** 13.0*** 6.4** 6.0** 8.5*** 2.1 4.3* 9.3*** 2.6 1.8 Threshold was set at p < .01, FDR corrected. Also shown are the results from post hoc multivariate planned comparisons performed on percent signal change in the areas showing significant interactions in BOLD signal. The degree of freedom for all F values is (1, 21). L = left; R = right. aRegions written in boldface designate the main peak activation within an area, and regions written in roman designate peaks within the region when separated into subregions. bNumber of voxels comprising the region. cThe F value associated with post hoc ANOVAs examining linear trend interactions (Task type × Structural similarity level). dF value associated with post hoc multivariate planned comparisons examining simple linear trends across the three structural similarity levels for perceptual and conceptual matching, respectively. eThe simple trend is quadratic. *p < .10. **p < .05. ***p < .01. D o w n l o a d e d f r o m l l / / / / j t t f / i t . : / / h t t p : / D / o m w i n t o p a r d c e . d s f i r o l m v e h r c p h a d i i r r e . c c t . o m m / j e d o u c n o / c a n r a t r i t i c c l e e - p - d p d 2 f 7 / 5 2 7 9 / 7 5 4 / 1 9 9 7 4 4 8 / 5 1 9 7 2 8 o 2 c 8 n 2 _ 2 a / _ j 0 o 0 c 7 n 4 8 _ a p _ d 0 0 b 7 y 4 g 8 u . e p s t d o f n b 0 y 8 S M e I p T e m L i b b e r r a 2 r 0 i 2 3 e s / j f t / . matching; i.e., the contrast [1, 0, −1, −1, 0, 1]) revealed no activation. Because the voxel-level analysis revealed an interaction of SS and task but did not determine whether the SS trend was significant for perceptual matching, conceptual matching, or both, post hoc trend analyses were con- ducted in the regions in Table 2. Trend analyses were conducted in the context of the 3 (Similarity levels) × 2 (Matching tasks: conceptual and perceptual) repeated- measures ANOVAs. These analyses were based on per- cent signal change for each SS level relative to fixation, for each of the tasks. 
Most regions showed significant linear interactions, as expected, given that the voxel-level interaction contrast was used to isolate the regions. Two exceptions were the left paracingulate cortex and the right precuneus. However, most importantly, in 8 of the 13 regions, activation decreased significantly linearly as SS increased during conceptual matching. During perceptual matching, activation also generally increased as SS increased. This effect was significantly linear for the left middle and left superior occipital gyri; significantly quadratic for the right inferior occipital cortex; and marginally quadratic in the left fusiform, right middle occipital and right superior occipital gyri (see Table 2 and Figure 5).

The alternative hypothesis was that the ventral posterior cortex is involved in processing image-based similarity leading to the prediction that SS would exert the same effect in ventral posterior regions regardless of task
For perceptual matching different-response trials characterized by some degree of SS (SS Levels 2 and 3), some uncertainty regarding whether the images represent the same object may exist, as initial processing may yield ac- tivation of more closely located points in psychological D o w n l o a d e d f r o m l l / / / / j t t f / i t . : / / h t t p : / D / o m w i n t o p a r d c e . d s f i r o l m v e h r c p h a d i i r r e . c c t . o m m / j e d o u c n o / c a n r a t r i t i c c l e e - p - d p d 2 f 7 / 5 2 7 9 / 7 5 4 / 1 9 9 7 4 4 8 / 5 1 9 7 2 8 o 2 c 8 n 2 _ 2 a / _ j 0 o 0 c 7 n 4 8 _ a p _ d 0 0 b 7 y 4 g 8 u . e p s t d o f n b 0 y 8 S M e I p T e m L i b b e r r a 2 r 0 i 2 3 e s / j . / t f u s e r o n 1 7 M a y 2 0 2 1 Figure 5. Plots of percent signal change for each of the six conditions in each region that showed a significant Task × Similarity interaction: Solid lines indicate significant linear or quadratic trends based on the simple effect of similarity for each task type. Dashed lines indicate marginal linear or quadratic trends based on the simple effect of similarity for each task type. Dotted lines indicate insignificant trends. Error bars represent 95% within-subject confidence intervals (Cousineau, 2005). Gerlach, Zhu, and Joseph 983 space. This uncertainty can only be resolved by sampling more visual information, which will cause RT to increase. If sufficient information cannot be sampled, for example, because of short stimulus exposure duration (in the pres- ent experiment: 400 msec) or limited response time (in the present experiment: 2 sec), the consequence will be increased error rates. This interpretation is in accordance with the finding that RTs and error rates, as predicted, generally increased with increased SS level on perceptual matching trials. If high SS can act as a proxy to category membership, as we argue is the case, the interpretation offered above can also account for the finding that RTs and error rates decreased with increased SS level on con- ceptual matching trials. As predicted, ventral posterior regions were strongly recruited for both perceptual and conceptual matching. On the basis of prior findings (Devereux et al., 2013; Kim et al., 2009; Liu et al., 2008; Joseph & Gathers, 2003) and the present finding that these regions were modulated by SS (discussed more below), we suggest that this activation reflects structural processing. We also found no areas that were more activated during percep- tual matching than during conceptual matching across all SS levels. Although care should be exercised in conclud- ing anything based on a null finding, we do note that this lack of effect is compatible with the assumption that perceptual matching draws on the same initial cognitive operations as does conceptual matching (access to VLTM representations). On the other hand, because conceptual matching, as opposed to perceptual matching, does re- quire access to semantic knowledge in addition to VLTM representations, we did expect to find some areas (left inferolateral temporal cortex and/or left inferior frontal gyrus, see the Introduction) to be more activated during conceptual matching than during perceptual matching across all SS levels. This expectation was not borne out as no regions were associated with higher activation dur- ing conceptual compared with perceptual matching. 
This is so despite the fact that conceptual matching generally did take longer time than perceptual matching, com- patible with the assumption that conceptual matching requires an additional step of (semantic) processing com- pared with perceptual matching. Although we do not want to place much weight on this lack of effect—as it also constitutes a null finding—it may reflect that people cannot refrain from semantic processing during percep- tual matching although such processing is not required ( Joseph, 1997; Joseph & Proffitt, 1996). In other words, semantic knowledge may have been accessed automati- cally following the operations necessary and sufficient for performing a perceptual match. As opposed to the null findings reported above for the main effects of Task type, we found several areas exhibit- ing an interaction between task type and SS consistent with increased activation as a function of increased simi- larity for perceptual matching and decreased activation as a function of increased similarity for conceptual matching. As expected, most of these activations were located in ventral posterior brain regions (Brodmannʼs areas [BA] 17, 18, 19, and 37). However, we also found activations in more dorsal parts of the brain (the precuneus, BA 7) and the paracingulate cortex (BA 32), which were not anticipated. Post hoc trend analysis revealed significant linear inter- actions between task type and SS level in all areas reported above except for the right precuneus and the left para- cingulate cortex. Hence, the activations associated with this interaction generally reflected areas where activation de- creased as a function of increasing SS across the three SS levels during conceptual matching but increased as a func- tion of increasing SS across the three SS levels during per- ceptual matching. We note that the linear effects were not significant for all simple main effects, especially not for the perceptual matching conditions. However, in some cases, the simple effect of similarity for perceptual matching re- flected quadratic trends that were also increasing, which is consistent with prior findings of similarity effects in some brain regions (Liu et al., 2008; Joseph & Gathers, 2003). In terms of function, the ventral posterior areas are likely to mediate structural processing, that is, the buildup of visual representations and matching of these with rep- resentations stored in VLTM (see Gerlach, 2009; Liu et al., 2008). Indeed, these areas have been found to exhibit a positive correlation between SS and degree of activation in tasks demanding perceptual differentiation (Liu et al., 2008; Joseph & Gathers, 2003). What we find in addition is that activation in these areas is also inversely affected by SS when objects are to be conceptually matched. With respect to the precuneus, this area is not commonly asso- ciated with structural processing. It has, however, been implicated in both visuospatial processing and spatial attention (Cavanna & Trimble, 2006) and especially in visuospatial tasks, which require shifts in attention be- tween different object features (Nagahama et al., 1999) or attentional shifts to different levels of processing (global shape or details) of complex visual stimuli (Fink et al., 1997). These suggestions can rather easily account for our findings. When participants are to compare two visual stimuli, which are only presented for a limited duration, these stimuli must be kept in visual STM (imagery) if a decision cannot be made before they disappear. 
Moreover, as the comparisons become harder (during perceptual matching when SS increases and during conceptual matching when SS decreases), more information must be sampled to pass a judgment, a process that is likely to require attentional shifts from a global level (outline shape) to details or from some features to others. We note here that only the left, and not the right, precuneus exhibited a significant linear trend interaction. Although we can offer no explanation for this difference, it is interesting that Fink et al. (1997) found that only the left precuneus exhibited a significant positive correlation between regional CBF and the number of switches from one processing level to another (global/local).

The last area to be accounted for is the left paracingulate cortex (BA 32). According to the post hoc trend analysis, there were no significant linear or quadratic trends across SS levels during either perceptual or conceptual matching in this region. In addition, this area is usually activated in tasks that demand a high level of attention or executive control (Niendam et al., 2012) and error monitoring (Bush, Luu, & Posner, 2000). Therefore, it seems likely that this region was recruited for more general aspects of attention and performance monitoring rather than being engaged specifically in processing structural VLTM representations.

We found no support for the alternative hypothesis that ventral posterior regions would simply be computing image-based similarity. Given that the exact same pairs of objects were used for perceptual and conceptual matching, this hypothesis was designed to isolate regions involved in comparing the image-based similarity of the two objects. However, no regions survived the contrast that predicted increasing activation for increasing similarity in both perceptual and conceptual matching. In other words, there was no evidence that image-based information was processed independently of the task at hand. Ventral posterior regions, instead, were apparently driven by processing the similarity of VLTM representations, which had a differential effect depending on whether the similarity facilitated the decision (as in conceptual matching) or interfered with it (as in perceptual matching). This suggests that the process of matching image-based representations is strongly influenced by task demands in a top-down manner.

Similarity among objects plays an important role in object classification because objects are often assigned category membership based on shared characteristics (Sloutsky, 2009; Nosofsky, 1986), and similarity in object structure seems especially important in this respect. Indeed, evidence suggests that SS can be used as a proxy for category assignment (Gerlach et al., 2000; Rosch, 1999), although categorization is ultimately a conceptual task.
Hence, SS is beneficial for categorization in that objects belonging to categories with structurally similar members are categorized faster than objects belonging to categories with structurally dissimilar members (Gale, Laws, & Foley, 2006; Kiefer, 2001; Price & Humphreys, 1989). However, when objects need to be differentiated from similar objects, as is required during identification, SS is a disadvantage (Gerlach & Toft, 2011; Gerlach, 2009). Although prior studies suggest that similarity exerts opposing effects on categorization and identification, much of this evidence has been circumstantial. Instead of being the target of direct experimental manipulation, similarity has been invoked post hoc as an explanation for observed effects. This introduces a risk of circularity, where the degree of similarity among stimuli is inferred from the task effects, which in turn are explained with reference to underlying differences in similarity among the stimuli. Moreover, because similarity has typically not been under experimental control, it is also unclear which type of similarity may have been driving the effects. Is it similarity in terms of structure, semantics, or both? This is difficult to disentangle on the basis of behavioral studies alone because objects that are similar in shape often also have similar functions (Randall et al., 2004).

In this study, we addressed both of these limitations by parametrically manipulating the degree of SS of the objects that were to be matched and by examining which areas were modulated by task type (perceptual or conceptual matching). On the basis of the evidence considered above, we predicted that objects with high SS would be categorized more efficiently than objects with low SS but would be differentiated less efficiently than objects with low SS during perceptual matching. If such an interaction between task type and similarity level should indeed reflect structural rather than functional similarity, we would expect similar interaction effects in brain regions associated with structural processing. On the basis of prior studies implicating ventral processing stream regions as sites of structural rather than semantic processing (Ptak et al., 2014; Devereux et al., 2013; Gerlach, 2009; Kim et al., 2009; Liu et al., 2008), we predicted that interactions between task type and similarity level would be associated with ventral posterior brain regions. The results of the present experiment support both predictions. In terms of behavior, performance improved as a function of increased SS during categorization (conceptual matching) but deteriorated as a function of increased SS during perceptual differentiation (perceptual matching). This interaction was mirrored in the imaging data, where activation in several ventral posterior areas (BAs 17, 18, 19, and 37) generally increased as a function of increasing SS during perceptual matching but decreased as a function of increasing SS during conceptual matching. Although some other areas also showed this interaction (cuneus, precuneus, and paracingulate cortex), the simple effects of similarity in these regions were, for the most part, not significant or only marginally significant (Figure 5). We found no activation in areas usually associated with conceptual/semantic processing, such as the anterior/lateral temporal lobes or the left inferior frontal gyrus, neither for the main effect of conceptual > perceptual matching
nor for the interaction between match type and SS level.
Although we do not want to place too much weight on these null findings, they stand in stark contrast to the clear interaction effects found in ventral posterior brain regions. Given that these latter regions are associated with structural rather than conceptual/semantic processing, this suggests that it is indeed similarity in terms of structure that plays the dominant role in driving the behavioral effects we observed. What is also striking about the present results is that the same images were used for the perceptual and conceptual matching tasks, yet the similarity modulation of the fMRI signal was in opposite directions for the two tasks. Therefore, an account based solely on
the processing of image-based similarity cannot explain the present findings.
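To make the logic of these two competing predictions concrete, the short sketch below builds the corresponding contrast vectors over the six conditions (two match types × three SS levels). The condition ordering, the numpy-based formulation, and the illustrative signal-change values are our own assumptions for exposition; this is a minimal sketch, not the contrast specification used in the actual analysis.

```python
import numpy as np

# Assumed condition ordering (2 match types x 3 SS levels):
# [percept_SS1, percept_SS2, percept_SS3, concept_SS1, concept_SS2, concept_SS3]
linear = np.array([-1.0, 0.0, 1.0])  # linear trend over the three SS levels

# Task x SS interaction: activation rises with SS during perceptual matching
# but falls with SS during conceptual matching.
c_interaction = np.concatenate([linear, -linear])

# Image-based similarity account: activation rises with SS in both tasks alike.
c_image_based = np.concatenate([linear, linear])

# Hypothetical mean percent signal change for a region showing the pattern
# reported here (increasing for perceptual, decreasing for conceptual matching).
psc = np.array([0.20, 0.30, 0.40, 0.45, 0.35, 0.25])

print("interaction contrast:", c_interaction @ psc)       # 0.40, clearly positive
print("image-based contrast:", c_image_based @ psc)       # 0.0 for this pattern
print("orthogonal:", c_interaction @ c_image_based == 0)  # True
```

Because the two contrast vectors are orthogonal, a region could in principle load on either one independently; in this illustration, a region with opposite-direction similarity modulation loads only on the interaction contrast.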

In conclusion, the present findings provide strong support for the notion that SS among objects is a highly significant factor underlying visual object processing performance and that it exerts opposing effects on classification depending on whether objects are to be perceptually differentiated or categorized. Although the negative impact of SS on perceptual matching concurs with previous findings, we are unaware of any prior studies demonstrating unequivocally that SS can have a positive impact on superordinate categorization. This suggests that there is a tight coupling between SS and taxonomic structure.

A similar finding has recently been reported by Dilkina and Lambon Ralph (2012), who examined feature lists distilled from four different domains: perceptual, functional, encyclopedic, and verbal. They found that, although these domains gave rise to different organizations of conceptual space, based on how different concepts (e.g., dog, tree, knife, boat) tend to cluster within a given domain in terms of shared features, clustering based on perceptual (mainly visual) features correlated highly with taxonomic structure (i.e., superordinate categories such as animal and vehicle), and much more so than did clustering based on features within the functional, encyclopedic, and verbal domains. Our finding concerning the role of SS in superordinate categorization takes this further by showing that the arguably static association between shared perceptual features and taxonomic structure found by Dilkina and Lambon Ralph (2012) is not only correlational but causal and dynamic, in the sense that SS is used during "online" categorization of instances according to taxonomy (superordinate categories). Although we do not suggest that SS is all there is to (visually based) superordinate categorization, the present findings clearly indicate that SS is a major determinant in this process. Our findings thus challenge the standard view that the organization of superordinate categories is not driven by shared structural features (cf. Hills et al., 2009; Cutzu & Tarr, 1997).
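As a schematic illustration of the feature-based clustering logic referred to above, the sketch below clusters a handful of concepts by shared perceptual features and measures how well the resulting partition lines up with superordinate category labels. The concept list, binary feature vectors, and library choices (scipy, scikit-learn) are hypothetical stand-ins for exposition, not the Dilkina and Lambon Ralph (2012) materials or analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score

# Hypothetical binary perceptual-feature vectors for eight concepts
# (columns might stand for features such as "has legs", "has wheels", ...).
concepts = ["dog", "cat", "horse", "cow", "car", "boat", "knife", "hammer"]
taxonomy = ["animal"] * 4 + ["artifact"] * 4   # superordinate labels
features = np.array([
    [1, 1, 0, 0, 1],   # dog
    [1, 1, 0, 0, 1],   # cat
    [1, 1, 0, 0, 0],   # horse
    [1, 1, 0, 0, 0],   # cow
    [0, 0, 1, 1, 0],   # car
    [0, 0, 1, 0, 0],   # boat
    [0, 0, 0, 1, 0],   # knife
    [0, 0, 1, 1, 0],   # hammer
])

# Cluster concepts by perceptual-feature overlap (Jaccard distance).
Z = linkage(pdist(features, metric="jaccard"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# Agreement between the feature-driven partition and the taxonomy:
# 1.0 for these toy vectors (perfect recovery of the animal/artifact split).
print("adjusted Rand index:", adjusted_rand_score(taxonomy, labels))
```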

Acknowledgments
C. G. would like to express his gratitude to F. F. Fakutsi for the
stay at Nexø Neuroscience. This research was sponsored by
the National Institutes of Health (R01 HD052724 and R01
MH063817).

Reprint requests should be sent to Jane E. Joseph, Department of
Neurosciences, Medical University of South Carolina, 96 Jonathan
Lucas St., CSB 325, MSC 606, Charleston, SC 29425-6160, or via
e-mail: josep@musc.edu.

REFERENCES

Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.

Birn, R. M., Saad, Z. S., & Bandettini, P. A. (2001). Spatial heterogeneity of the nonlinear dynamics in the fMRI BOLD response. Neuroimage, 14, 817–826.

Bush, G., Luu, P., & Posner, M. I. (2000). Cognitive and emotional influences in anterior cingulate cortex. Trends in Cognitive Sciences, 4, 215–222.

Capitani, E., Laiacona, M., Mahon, B., & Caramazza, A. (2003). What are the facts of semantic category-specific deficits? A critical review of the clinical evidence. Cognitive Neuropsychology, 20, 213–261.

Caramazza, A. (1998). The interpretation of semantic category-specific deficits: What do they reveal about the organization of conceptual knowledge in the brain? Neurocase, 4, 265–272.

Cavanna, A. E., & Trimble, M. R. (2006). The precuneus: A review of its functional anatomy and behavioural correlates. Brain, 129, 564–583.

Collins, H. R., Zhu, X., Bhatt, R. S., Clark, J. D., & Joseph, J. E. (2012). Process and domain specificity in regions engaged for face processing: An fMRI study of perceptual differentiation. Journal of Cognitive Neuroscience, 24, 2428–2444.

Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology, 1, 42–45.

Cutzu, F., & Tarr, M. J. (1997). The representation of three-dimensional object similarity in human vision. SPIE Proceedings from Electronic Imaging: Human Vision and Electronic Imaging II, 3106, 460–471.

Devereux, B. J., Clarke, A., Marouchos, A., & Tyler, L. K. (2013). Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. The Journal of Neuroscience, 33, 18906–18916.

Dilkina, K., & Lambon Ralph, M. A. (2012). Conceptual structure within and between modalities. Frontiers in Human Neuroscience, 6, 333.

Evans, A. C., Collins, D. L., Mills, S. R., Brown, E. D., Kelly, R. L., & Peters, T. M. (1993). 3D statistical neuroanatomical models from 305 MRI volumes. Proceedings of IEEE-Nuclear Science Symposium and Medical Imaging Conference 1993, 3, 1813–1817.

Fink, G. R., Halligan, P. W., Marshall, J. C., Frith, C. D., Frackowiak, R. S. J., & Dolan, R. J. (1997). Neural mechanisms involved in the processing of global and local aspects of hierarchically organized visual stimuli. Brain, 120, 1779–1791.

Gainotti, G. (2000). What the locus of brain lesion tells us about the nature of the cognitive defect underlying category-specific disorders: A review. Cortex, 36, 539–559.

Gale, T. M., Laws, K., & Foley, K. (2006). Crowded and sparse domains in object recognition: Consequences for categorization and naming. Brain and Cognition, 60, 139–145.

Gerlach, C. (2009). Category-specificity in visual object recognition. Cognition, 111, 281–301.

Gerlach, C., Law, I., Gade, A., & Paulson, O. B. (2000). Categorization and category effects in normal object recognition: A PET study. Neuropsychologia, 38, 1693–1703.

Gerlach, C., & Toft, K. O. (2011). Now you see it, now you don't: The context dependent nature of category-effects in visual object recognition. Visual Cognition, 19, 1262–1297.

Hertzog, C., & Rovine, M. (1985). Repeated-measures analysis of variance in developmental research: Selected issues. Child Development, 56, 787–809.

Hills, T. T., Maouene, M., Maouene, J., Sheya, A., & Smith, L. (2009). Categorical structure among shared features in networks of early-learned nouns. Cognition, 112, 381–396.

Humphreys, G. W., & Forde, E. M. (2001). Hierarchies, similarity, and interactivity in object recognition: "Category-specific" neuropsychological deficits. Behavioral and Brain Sciences, 24, 453–476.

Humphreys, G. W., Riddoch, M. J., & Boucart, M. (1992). The breakdown approach to visual perception: Neuropsychological studies of object recognition. In G. W. Humphreys (Ed.), Understanding vision: An interdisciplinary perspective (pp. 104–125). Oxford, UK: Blackwell.

Humphreys, G. W., Riddoch, M. J., & Quinlan, P. T. (1988). Cascade processes in picture identification. Cognitive Neuropsychology, 5, 67–104.

Joseph, J. E. (1997). Color processing in object verification. Acta Psychologica, 97, 95–127.

Joseph, J. E., & Farley, A. B. (2004). Cortical regions associated with different aspects of object recognition performance. Cognitive, Affective, & Behavioral Neuroscience, 4, 364–378.

Joseph, J. E., & Gathers, A. D. (2003). Effects of structural similarity on neural substrates for object recognition. Cognitive, Affective, & Behavioral Neuroscience, 3, 1–16.

Joseph, J. E., & Proffitt, D. R. (1996). Semantic versus perceptual influences of color in object recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 407–429.

Kiefer, M. (2001). Perceptual and semantic sources of category-specific effects: Event-related potentials during picture and word categorization. Memory & Cognition, 29, 100–116.

Kim, J. G., Biederman, I., Lescroart, M. D., & Hayworth, K. J. (2009). Adaptation to objects in the lateral occipital complex (LOC): Shape or semantics? Vision Research, 49, 2297–2305.

Liu, X., Steinmetz, N. A., Farley, A. B., Smith, C. D., & Joseph, J. E. (2008). Mid-fusiform activation during object discrimination reflects the process of differentiating structural descriptions. Journal of Cognitive Neuroscience, 20, 1711–1726.

Mandler, J. M. (2000). Perceptual and conceptual processes in infancy. Journal of Cognition and Development, 1, 3–36.

McRae, K., & Cree, G. S. (2002). Factors underlying category-specific semantic deficits. In E. M. E. Forde & G. W. Humphreys (Eds.), Category-specificity in mind and brain (pp. 211–249). East Sussex, UK: Psychology Press.

Nagahama, Y., Okada, T., Katsumi, Y., Hayashi, T., Yamauchi, H., Sawamoto, N., et al. (1999). Transient neural activity in the medial superior frontal gyrus and precuneus time locked with attention shift between object features. Neuroimage, 10, 193–199.

Niendam, T. A., Laird, A. R., Ray, K. L., Dean, Y. M., Glahn, D. C., & Carter, C. S. (2012). Meta-analytic evidence for a superordinate cognitive control network subserving diverse executive functions. Cognitive, Affective, & Behavioral Neuroscience, 12, 241–268.

Nosofsky, R. M. (1986). Attention, similarity and the identification–categorization relationship. Journal of Experimental Psychology: General, 115, 39–57.

O'Brien, R. G., & Kaiser, M. K. (1985). MANOVA method for analyzing repeated measures designs: An extensive primer. Psychological Bulletin, 97, 316–333.

Price, C. J., & Humphreys, G. W. (1989). The effects of surface detail on object categorization and naming. Quarterly Journal of Experimental Psychology: Section A, Human Experimental Psychology, 41, 797–827.

Ptak, R., Lazeyras, F., Di Pietro, M., Schnider, A., & Simone, S. R. (2014). Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex. Neuropsychologia, 60, 10–20.

Randall, B., Moss, H. E., Rodd, J. M., Greer, M., & Tyler, L. K. (2004). Distinctiveness and correlation in conceptual structure: Behavioral and computational studies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 393–406.

Rogers, T. T., & McClelland, J. L. (2004). Semantic cognition: A parallel distributed processing approach. Cambridge, MA: MIT Press.

Rosch, E. (1999). Principles of categorization. In E. Margolis & S. Laurence (Eds.), Concepts: Core readings (pp. 189–206). Cambridge, MA: MIT Press.

Sloutsky, V. M. (2009). Similarity, induction, naming and categorization: A bottom–up approach. In S. Johnson (Ed.), Neoconstructivism: The new science of cognitive development (pp. 274–292). Oxford, UK: Oxford University Press.

Thompson-Schill, S. L. (2003). Neuroimaging studies of semantic memory: Inferring "how" from "where." Neuropsychologia, 41, 280–292.

Tranel, D., Logan, C. G., Frank, R. J., & Damasio, A. R. (1997). Explaining category-related effects in the retrieval of conceptual and lexical knowledge for concrete entities: Operationalization and analysis of factors. Neuropsychologia, 35, 1329–1339.

Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15, 273–289.

Weber, M., Thompson-Schill, S. L., Osherson, D., Haxby, J., & Parsons, L. (2009). Predicting judged similarity of natural categories from their neural representations. Neuropsychologia, 47, 859–868.
