RESEARCH ARTICLE

Bibliometrics-based decision trees (BBDTs) based
on bibliometrics-based heuristics (BBHs):
Visualized guidelines for the use of
bibliometrics in research evaluation

an open access journal
Lutz Bornmann

Administrative Headquarters of the Max Planck Society, Division for Science and Innovation Studies, Hofgartenstraße 8,
80539 Munich, Germany

Keywords: bibliometrics, heuristics, bibliometrics-based heuristic (BBH), bibliometrics-based decision tree (BBDT)

ABSTRACT

Fast-and-frugal heuristics are simple strategies that base decisions on only a few predictor variables. In so doing, heuristics may not only reduce complexity but also boost the accuracy of decisions, their speed, and their transparency. In this paper, bibliometrics-based decision trees (BBDTs) are introduced for research evaluation purposes. BBDTs visualize bibliometrics-based heuristics (BBHs), which are judgment strategies solely using publication and citation data. The BBDT exemplar presented in this paper can be used as guidance to find an answer to the question of in which situations simple indicators, such as mean citation rates, are reasonable and in which situations more elaborate indicators (i.e., [sub-]field-normalized indicators) should be applied.

Citation: Bornmann, L. (2020). Bibliometrics-based decision trees (BBDTs) based on bibliometrics-based heuristics (BBHs): Visualized guidelines for the use of bibliometrics in research evaluation. Quantitative Science Studies, 1(1), 171–182. https://doi.org/10.1162/qss_a_00012

DOI:
https://doi.org/10.1162/qss_a_00012

Received: 08 April 2019
Accepted: 17 November 2019

Corresponding Author:
Lutz Bornmann
bornmann@gv.mpg.de

Handling Editor:
Ludo Waltman

Copyright: © 2020 Lutz Bornmann. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

The MIT Press

1. INTRODUCTION

Bibliometrics are frequently used in research evaluation. In some situations, peer review and bibliometrics are combined in an informed peer review process. According to Jappe, Pithan, and Heinze (2018), for example, 11 of 36 assessment units in the 2014 UK Research Excellence Framework (REF) were allowed to see citation benchmarks. Bibliometrics are considered “to break open peer review processes, and stimulate peers to make the foundation and justification of their judgments more explicit” (Moed, 2017, p. 13). In other situations, “desktop bibliometrics” are relied upon. The term desktop bibliometrics describes the application of bibliometrics by decision makers (e.g., deans or administrators) without involving experts (i.e., scientists) from the evaluated fields (Leydesdorff, Wouters, & Bornmann, 2016). Another characteristic of “desktop bibliometrics” is the application of inappropriate indicators for measuring performance, since bibliometrics experts are not involved. Informed peer review processes and “desktop bibliometrics” exist side by side in the research evaluation landscape.

Although many literature overviews of bibliometrics and guidelines for their use exist (see, e.g., Hicks, Wouters, Waltman, de Rijcke, & Rafols, 2015), the recommendations are frequently not very clearly formulated, leaving ample room for ambiguity. At the same time, research evaluation situations themselves come with ambiguities and uncertainties: What counts as an indicator of quality in a given field, which indicators serve to measure quality, and how precise are those measurements in a given field (e.g., machine learning) and for a given task (e.g., evaluating an individual scientist as opposed to a university)? What are the consequences if certain indicators are used, including politically, at the level of the department or university concerned? In practice, ad hoc decisions, local preferences and politics, as well as the demands produced by a specific evaluation situation, often dictate how bibliometrics are relied upon.

In the current paper, a decision tree is presented that can be used to decide how to use
bibliometrics in research evaluation situations (with or without peer review). The decision tree
is precisely formulated and transparent, making it easy to understand, communicate about,
and actually use. The tree is grounded in available empirical evidence and applicable across
fields.

2. BIBLIOMETRICS-BASED HEURISTICS (BBHs)

The decision trees introduced in this paper are fast-and-frugal heuristics. Such heuristics are simple decision strategies that ignore available information, basing decisions on only a few relevant predictor variables. In so doing, fast-and-frugal heuristics can not only aid in reducing complexity but also make decisions fast and transparent; systematically ignoring (irrelevant) information also aids in making accurate decisions (Gigerenzer & Goldstein, 1996). Decision trees are grounded in the fast-and-frugal heuristics framework (Gigerenzer, Todd, & ABC Research Group, 1999). That framework, originally developed within the cognitive and decision sciences, has fueled a large number of studies indicating that heuristics can help people make smart decisions in business, law, medicine, and many other task environments. For example, Luan and Reb (2017) show that a significant proportion of managers use fast-and-frugal decision trees (lexicographic heuristics) to make performance-based personnel decisions. In these environments, the strategies achieved performance competitive with more complex approaches (e.g., multiple regression analyses).

Recently, Bornmann and Marewski (2019) extended the fast-and-frugal heuristics framework to research evaluation and formulated a research program for investigating bibliometrics-based heuristics (BBHs). BBHs characterize decision strategies in research evaluations based on bibliometric data (publications and citations). Other data (indicators) besides bibliometrics are not considered. BBHs might be especially well suited for research evaluation purposes, because citations and other bibliometric data are deeply rooted in the research process of nearly every researcher: Researchers are prompted to make all their results publicly available and to embed the results in published research by citing the corresponding publications—research stands on the shoulders of giants (see Merton, 1965).

BBHs may or may not be integrated in peer review processes (Bornmann, Hug, & Marewski, 2019). Initiatives such as the San Francisco Declaration on Research Assessment (DORA; https://sfdora.org) demonstrate that the use of bibliometrics is prevalent in science evaluation.1 Decision makers in science (e.g., reviewers) do not have unlimited time. Moreover, just like all other humans, scientific decision makers have limited information-processing capacities, putting natural constraints on their ability to tackle computationally demanding evaluation tasks. At the same time, evaluators often have limited knowledge of the subject area at hand, and even if a decision maker is an expert in a field (e.g., decision making), he or she might not be an expert in the target area of research (e.g., decisions with the take-the-best heuristic; see below)—a truism, partially fueled by extreme specialization tendencies in some fields.

1 The general recommendation of DORA is not to use “journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions” (see https://sfdora.org/read).

In short, decisions in science are made in the context of limited information-processing capacity, time, and knowledge (see Marewski, Schooler, & Gigerenzer, 2010). To use a term coined by Nobel Laureate Herbert Simon, decision makers’ rationality is bounded. Heuristics are models of bounded rationality. They are rules of thumb that perform well under conditions of limited time, knowledge, and information-processing capacity (Katsikopoulos, 2011). They do not use all the information available in a given decision environment, but a selection that suffices for making reasonable decisions that are ecologically adequate (e.g., bibliometric data in the case of BBHs). Thus, heuristics “involve partial ignorance” (Mousavi & Gigerenzer, 2017, p. 376). The more redundancy and intercorrelation there is in the complete information, the better are the decisions based on selected information (Marewski et al., 2010).

Heuristics frequently consist of search rules, stopping rules, and decision rules: “A search rule that specifies what information (e.g., predictor variables) is searched for and how (i.e., in what order), a stopping rule that delineates when information search comes to an end, and a decision rule that determines how the acquired information is employed (e.g., combined) to classify objects (e.g., patients)” (Bornmann & Marewski, 2019, p. 424). If we transfer these rules to the area of research evaluation, a one-cue heuristic (BBH) could be as follows: Imagine a funding organization in biomedicine with the goal of selecting exceptional scientists for a group leader position. The organization is especially interested in scientists with an excellent publication record, who are selected in an informed peer review process: The reviewers decide based on an extensive bibliometric report and by reading selected publications. Some years ago, however, the organization was confronted with the problem of receiving too many applications. The informed peer review process did not have the capacity to (properly) review all applications. Thus, the organization decided to introduce the following one-cue BBH for a preselection of applications. The smaller pool of preselected applications is then reviewed by the peers.

The preselection is based on a single indicator that targets an important goal of the organization: research excellence (expressed by an exceptional publication record). The three building blocks for the one-cue BBH are as follows (a sketch of the decision rule in code follows the list):

1. Search rule: Search all publications (articles and reviews) published by the applicants in Web of Science (WoS, Clarivate Analytics). Download the publications.

2. Stopping rule: Send the publication lists to the applicants for validation. If publications are missing, improve the search rules. Add to the validated lists information about whether the publications belong to the 10% most frequently cited publications in the corresponding subject category and publication year (i.e., whether they are highly cited publications).

3. Decision rule: Divide the number of highly cited publications by the number of years since the first publication appeared (to generate age-normalized numbers; see Bornmann & Marx, 2014). Sort the applicants by the age-normalized number of highly cited publications in descending order and select the top x% of applicants. These are the applicants to be reviewed by the peers.
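As an illustration only, the following Python sketch implements this decision rule; the data structure, the field names, and the cutoff parameter are hypothetical choices, not part of the heuristic’s original specification.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    first_pub_year: int     # year of the applicant's first publication
    highly_cited_pubs: int  # publications in the top 10% of their subject category/year

def preselect(applicants, current_year, top_share=0.2):
    """One-cue BBH decision rule: rank applicants by the age-normalized
    number of highly cited publications and keep the top x% for peer review."""
    def age_normalized_rate(a):
        career_years = max(current_year - a.first_pub_year, 1)
        return a.highly_cited_pubs / career_years

    ranked = sorted(applicants, key=age_normalized_rate, reverse=True)
    cutoff = max(int(len(ranked) * top_share), 1)  # keep at least one applicant
    return ranked[:cutoff]

# Hypothetical usage: keep the top 20% of a small applicant pool
pool = [Applicant("A", 2010, 12), Applicant("B", 2015, 8), Applicant("C", 2005, 10)]
shortlist = preselect(pool, current_year=2020, top_share=0.2)
```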

The organization involved scientometric experts, experts from biomedicine, and representatives of the organization to (empirically) check that the BBH fulfills the desired objective. The BBH is evaluated annually as to whether it should be improved (reformulated) or not.


The application of this and similar BBHs does not mean that decisions based on bibliometrics are recommended for all evaluation contexts. Rather, the fast-and-frugal heuristics research program assumes that any given heuristic is suitable only for certain, but not all, task environments: Heuristics are ecologically rational, not globally rational (Todd, Gigerenzer, & ABC Research Group, 2012). That is no different for BBHs, which can and should only be used in selected situations (e.g., for the selection process of a specific funding organization—see above). These situations can be identified, for example, by studies that compare assessments based on bibliometrics and peer review. To illustrate this point, Traag and Waltman (2019) analyzed the agreement between metrics and peer review in the UK REF 2014. Their model suggests that “for some fields, the agreement between metrics and peer review is similar to the internal agreement of peer review. This is the case for three fields in particular: Clinical Medicine, Physics, and Public Health, Health Services & Primary Care.”

Using the same data set, Rodriguez Navarro and Brito (2019) conclude as follows: “the present results indicate that in many research areas the substitution of a top percentile indicator for peer review is possible.” Similar results have been published by Pride and Knoth (2018) and Harzing (2017). Comparing scores from international university rankings and bibliometrics, Robinson-Garcia, Torres-Salinas, Herrera-Viedma, and Docampo (2019) conclude that “ranking scores from whichever of the seven league tables under study can be explained by the number of publications and citations received by the institution” (p. 232). Thus, ranking scores might be substituted by bibliometrics. The results of all these studies point out that simplified decision rules based on bibliometrics may provide fast and accurate decisions that do not deviate much from peers’ decisions or university rankings.

From this ecological view, BBHs in general are neither good nor bad. They can be assessed only with respect to the evaluation situation in which they are applied (see Moed, Burger, Frankfort, & van Raan, 1985). The better the functional match between a certain BBH and its evaluation situation, the higher the level of its ecological rationality (see Mousavi & Gigerenzer, 2017; Waltman, 2018). Specifically, the fast-and-frugal heuristics research program assumes that people select between different heuristics as a function of the task environment at hand. Selecting the adequate heuristic for a given task can aid in making clever decisions. Hence, in addition to describing how people make decisions, models of heuristics can also be interpreted prescriptively: What heuristics should decision makers, ideally, use in a given environment in order to produce desirable outcomes? In which environments does a certain heuristic perform well with respect to criteria such as accuracy, frugality, speed, or transparency of decision making—and in which does it not?

Bibliometrics-based decision trees (BBDTs), which are introduced in this study, are visualized BBHs (heuristics are usually formulated as text only). BBDTs consist of a sequence of nodes with questions that are answered for a specific evaluation situation (see Katsikopoulos, 2011). Exits at the nodes lead to appropriate bibliometrics for the situation. BBDTs can be seen as adaptive bibliometrics toolboxes (including selected BBHs) that functionally match certain evaluation tasks. The goal of BBDTs is prescriptive: to recommend when (and how) one should use which BBH or bibliometric indicator, respectively (see Raab & Gigerenzer, 2015).

The bibliometrics literature provides many hints as to which bibliometric data and indicators should be applied (or not) in certain environments. The following sections draw on that literature and explain decision trees (BBDTs) that can be used to decide on the proper use of appropriate heuristics (BBHs) in concrete evaluation situations. Although these BBDTs try to include only rules that might be standard in the bibliometrics field, the standards are frequently questioned (e.g., Opthof & Leydesdorff, 2010) and lead to internal discussions (e.g., van Raan, van Leeuwen, Visser, van Eck, & Waltman, 2010).


3. DECISION TREES

Decision trees can be defined as visualized lexicographic heuristics that are used for the categorization of cases (Kurz-Milcke & Gigerenzer, 2007; Martignon, Vitouch, Takezawa, & Forster, 2011). The term “lexicographic” has its roots in the term “lexicon,” in which entries are sorted by the order of their letters. Kelman (2011) defines lexical decisions as follows: “A decision is made lexically when a subject chooses A over B because it is judged to be better along a single, most crucial dimension, without regard to the other ‘compensating’ virtues that B might have relative to A. Thus, for example, one would have chosen some restaurant A over B in a lexical fashion if one chose it because it were cheaper and did not care about other traits, like its quality, proximity, level of service, and so on” (p. 8). Take-the-best heuristics are typical representatives of lexicographic heuristics that work with the following building blocks (see Scheibehenne & von Helversen, 2009):

Search rule: The most important cue is searched for among the available cues.
Stopping rule: The search is stopped when the option with the highest cue value is found.
Decision rule: This option is selected. If several options with equal cue values exist, the next most important cue is considered (see Marewski et al., 2010; Raab & Gigerenzer, 2015).

In a series of computer simulations with 20 real-world data sets, Czerlinski, Gigerenzer, and Goldstein (1999) showed that lexicographic heuristics can outperform more complex inference methods, such as multiple linear regression. At the same time, lexicographic take-the-best heuristics can be illustrated well with the use of bibliometric reports on single scientists in research evaluation. Suppose the report includes the results of many analyses concerning productivity, citation impact, collaboration, and theoretical roots. Because the decision makers are interested in an outstanding scientist (most important cue) rooted in a specific theoretical tradition (second most important cue) with frequent international collaborations (third most important cue), the decision makers select the scientists with the most papers belonging to the 1% most frequently cited within their field and publication year (Ptop 1%, targeting the most important cue). If two scientists perform similarly on this cue, the decision makers select the scientist who is active in the desired theoretical tradition (and reject the other, who is not). Because considering the theoretical tradition allows the decision makers to discriminate between the scientists, the selection process finishes (and does not consider the third aspect, international collaborations).
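As an illustration only, here is a minimal Python sketch of this take-the-best selection; the candidate records and cue functions are hypothetical, chosen to mirror the example in the text.

```python
def take_the_best(candidates, cues):
    """Lexicographic selection: compare candidates on the most important cue
    first and move to the next cue only when the current one does not
    discriminate (i.e., the leading candidates are tied)."""
    remaining = list(candidates)
    for cue in cues:                      # cues ordered by importance
        best = max(cue(c) for c in remaining)
        remaining = [c for c in remaining if cue(c) == best]
        if len(remaining) == 1:           # the cue discriminates: stop the search
            break
    return remaining[0]                   # tied on all cues: pick the first

# Hypothetical records and cue order from the example in the text:
# Ptop 1% papers first, desired theoretical tradition second,
# international collaborations third.
scientists = [
    {"name": "A", "p_top1": 7, "tradition": True,  "intl_collab": 12},
    {"name": "B", "p_top1": 7, "tradition": False, "intl_collab": 30},
]
chosen = take_the_best(
    scientists,
    cues=[lambda s: s["p_top1"],
          lambda s: int(s["tradition"]),
          lambda s: s["intl_collab"]],
)
# Both tie on Ptop 1%; the second cue (tradition) discriminates, so "A" is chosen.
```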

Decision trees are processes that can be described in terms of tree-structured decision rules (Martignon et al., 2011). Decision trees consist of three elements: (a) nodes represent cue-based questions, (b) branches represent answers to these questions, and (c) exits represent decisions and the leaving of the tree (Phillips, Neth, Woike, & Gaissmaier, 2017).2 Decision trees are always lexicographic; once a decision has been made based on a certain piece of information, no other information is used in the decision-making process. In the example given above (the selection of a single scientist), no indicators are used besides citation impact and publication output (Phillips et al., 2017). Fast-and-frugal decision trees (FFTs)—as a subgroup of decision trees—are defined by Martignon, Katsikopoulos, and Woike (2008) as trees that have exactly two branches from each node, whereby one or both branches lead to exits.

2 Phillips et al. (2017) define decision trees as “supervised learning algorithms used to solve binary classification tasks.” In this study, another kind of decision tree is proposed that does not accord with this definition.

The root node is the starting point in the use of a decision tree. Various further levels follow, with one cue processed at each level. Two types of nodes exist: (a) The first type is branch oriented: A node contains a question about the evaluated case. The answer then leads
to another node at a further level. (b) The second type is exit oriented: The evaluated case is categorized and the decision process stops. In principle, decision trees may consist of dozens of nodes within a complex network of branches, which can be complicated to read. Katsikopoulos (2011) therefore recommends that “for trees to be easy for people to understand and apply, they should not have too many levels, nodes, or attributes” (p. 13).
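To make these definitions concrete, here is a minimal, hypothetical Python sketch of an FFT as a chain of cue-based nodes, each offering an exit and the final node exiting on both branches. It illustrates the structure described above; it is not software from the paper or from the FFTrees package mentioned below.

```python
from typing import Callable, Optional

class FFTNode:
    """One level of a fast-and-frugal tree: a yes/no question with exactly
    two branches, where one or both branches are exits (final decisions)."""
    def __init__(self, question: Callable[[dict], bool],
                 exit_if_true: Optional[str] = None,
                 exit_if_false: Optional[str] = None):
        self.question = question
        self.exit_if_true = exit_if_true
        self.exit_if_false = exit_if_false

def classify(case: dict, nodes: list) -> str:
    """Process one cue per level and leave the tree at the first exit branch."""
    for node in nodes:
        answer = node.question(case)
        exit_label = node.exit_if_true if answer else node.exit_if_false
        if exit_label is not None:
            return exit_label
    raise ValueError("Malformed FFT: the final node must exit on both branches")

# Hypothetical two-level tree: exit early on "reject", otherwise decide at the end
tree = [
    FFTNode(lambda c: c["highly_cited"] == 0, exit_if_true="reject"),
    FFTNode(lambda c: c["tradition"], exit_if_true="shortlist", exit_if_false="reject"),
]
decision = classify({"highly_cited": 3, "tradition": True}, tree)  # "shortlist"
```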

Because decision trees are visualizations with clearly understandable rules for application, they are useful tools for situations in which application errors must be avoided or where it is important that all stakeholders are aware of the decision process (e.g., candidates for tenure should know, in advance, on what dimensions and how they will be evaluated). Along the same lines, one might also argue that decision trees are useful tools for making decisions that come with important consequences. Finally, trees are also particularly suitable for situations of time pressure, because they simplify decision processes and hence allow speeding them up. For FFTs, specific software—written in the open-source R language—has been developed for creating, visualizing, and evaluating decision trees (Phillips et al., 2017).

According to Phillips et al. (2017), FFTs have three important advantages: “First, FFTs tend to be both fast and frugal as they typically use very little information … FFTs are heuristics by virtue of ignoring information … Second, FFTs are simple and transparent, allowing anyone to easily understand and use them … FFTs can make good predictions even on the basis of a small amount of noisy data because they are relatively robust against a statistical problem known as overfitting” (p. 347). As is usual for heuristics in general, decision trees provide an adaptive toolbox whereby each decision tree is appropriate for a given evaluation situation. Thus, it is necessary to specify for each decision tree the relevant situations, that is, the environments in which it allows successful decisions (Marewski et al., 2010).

4. METHODS

The BBDT exemplar presented in this study has been developed based on literature overviews of bibliometrics (e.g., de Bellis, 2009; Todeschini & Baccini, 2016; van Raan, 2004; Vinkler, 2010). The author of this paper works as a professional bibliometrician in research evaluation, and the development of the BBDT is also based on his practical experience. He has, furthermore, published extensively on standards of good practice for using bibliometric data in research evaluation (e.g., Bornmann, in press; Bornmann et al., 2014; Bornmann & Marx, 2014; Bornmann, Mutz, Neuhaus, & Daniel, 2008). Some of these standards have been used to develop the decision tree. The general idea of the BBDT is to guide the use of bibliometric indicators in specific situations of research evaluation.

5. RESULTS

The BBDT presented in the following is a tool for making decisions in bibliometrics-based research evaluations. The development of the BBDT followed the key question raised by Phillips et al. (2017) in the context of decision making: “How to make good classifications, and ultimately good decisions, based on cue information” (p. 345)? The BBDT is prescriptive and oriented toward ecological rationality (see Marewski et al., 2010). In the process of developing the BBDT, the experience has been that this process was interesting not only in view of the application by the later user but also for the developer, because he had to think about the evaluation situation, available indicators, evaluation goals, etc. These aspects are usually not considered in the development of new indicators in scientometrics, because such work focuses on technical improvements of indicators.


The BBDT focuses on the use of citation impact indicators in research evaluation. Much research in bibliometrics deals with the development of field-normalized indicators (Waltman, 2016). These indicators have been introduced because citation rates depend not only on the quality of papers but also on field-specific publication and citation practices (Bornmann, in press). By considering expected field-specific citation rates as reference standards, field-normalized citation indicators provide information on citation impact that is standardized by field and can be used in cross-field comparisons (e.g., for the comparison of different countries). For example, the PPtop 10% indicator—the recommended field-normalized indicator for institutional citation impact measurements (Hicks et al., 2015; Waltman et al., 2012)—is the proportion of papers (published by an institution) that belong to the 10% most frequently cited papers in the corresponding subject categories (and publication years). Since many units in science publish research in different fields, field-normalized indicators are indispensable. The most important disadvantages of field-normalized indicators are, however, their complexity (because they are more complex than simple citation rates, their results are more difficult to interpret) and their lost link to the underlying citation impact data (the field-normalized data can differ from the citation data that can be found in the Scopus or WoS databases).
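The following Python sketch illustrates the PPtop 10% computation described above under a simplifying assumption: each paper carries a precomputed flag marking whether it belongs to the top 10% of its subject category and publication year. It is an illustration, not the reference implementation from the cited sources.

```python
def pp_top10(papers):
    """PPtop 10%: share of an institution's papers that belong to the 10%
    most frequently cited papers of their subject category and year.
    `papers` is a list of dicts with a boolean 'in_top10' flag, assumed to
    have been derived from the citation distribution of each
    (subject category, publication year) reference set."""
    if not papers:
        return 0.0
    return sum(p["in_top10"] for p in papers) / len(papers)

# Hypothetical example: 2 of 4 papers are highly cited -> PPtop 10% = 0.5
papers = [{"in_top10": True}, {"in_top10": False},
          {"in_top10": True}, {"in_top10": False}]
share = pp_top10(papers)
```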

In a recent conference paper, Waltman (2018) argues for using the difference between the micro and macro levels to decide whether field-normalized indicators should be used or not. Only field-normalized indicators would have the necessary validity to be used at the macro level, at which experts view the world exclusively through indicators. The most important validity criterion mentioned by Waltman (2018) is the question of whether the units are active in multiple fields (or not). This question concerns not only the research of different units but also the research within the units. The use of simple citation rates for research evaluation might lead to distorted world views if units are active in multiple fields. For example, universities with a focus on biomedical research might have an advantage in research impact measurements over universities with other focuses (e.g., engineering and technology), simply because they are active in fields with high publication activity and—on average—many cited references listed in the papers (and not because of the high quality of their papers).

Figure 1 visualizes a BBDT that considers both aspects—(1) micro/macro level and (2) research orientation in single or multiple fields—to decide whether field-normalized or nonnormalized indicators should be used in a research evaluation situation. Because we can assume that evaluations at the country and university levels always target research that has been done in multiple fields, field-normalized indicators should be used in all such situations (based on multidisciplinary classification systems such as the journal sets proposed by Clarivate Analytics or Elsevier). Research-focused institutions and single researchers can be active in single or multiple fields. Some research-focused institutions are specialized in research topics of certain fields (e.g., the European Molecular Biology Laboratory, EMBL), and others have a broader research spectrum. The same is true of single scientists, although to a limited extent, since scientists are as a rule focused on research in a single field. Thus, for both kinds of units, the decision in the concrete evaluation situation must be made as to whether the focus is on one field only or on several fields.

In Figure 1, a further distinction is made between research (at research-focused institutions or by single researchers) that is done in multiple subfields or not (given that these units do research in only one field). For example, economists can be active in various subfields, such as financial economics or industrial organization (see Bornmann & Wohlrabe, 2019). These subfields can show different citation rates, which is why Bornmann, Marx, and Barth (2013) propose to account for these differences in research evaluation by calculating subfield-normalized citation rates (see Narin, 1987). In recent years, subfield-normalized citation rates have been calculated based on the following monodisciplinary classification systems: Medical Subject Headings (MeSH) (Boyack, 2004), the Physics and Astronomy Classification Scheme (PACS) (Radicchi & Castellano, 2011), sections of the American Sociological Association (ASA) (Teplitskiy & Bakanic, 2016), and Journal of Economic Literature (JEL) codes (Bornmann & Wohlrabe, 2019).


Figure 1. Decision tree (BBDT) for selecting (sub-)field-normalized or nonnormalized citation impact indicators in an evaluation situation.
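Since the figure itself is not reproduced here, the following Python sketch reconstructs the tree’s decision logic from the textual description above; the exact wording and ordering of the nodes in the published figure may differ.

```python
def choose_indicator(unit_level: str, multiple_fields: bool,
                     multiple_subfields: bool = False) -> str:
    """BBDT from Figure 1 (as described in the text): decide between
    nonnormalized, field-normalized, and subfield-normalized indicators."""
    if unit_level in ("country", "university"):
        # Macro-level units are assumed to be active in multiple fields
        return "field-normalized indicators (multidisciplinary classification)"
    # Research-focused institutions and single researchers
    if multiple_fields:
        return "field-normalized indicators (multidisciplinary classification)"
    if multiple_subfields:
        return "subfield-normalized indicators (monodisciplinary classification)"
    return "nonnormalized indicators (e.g., mean citation rates)"

# Hypothetical example: an economics department active in several subfields
print(choose_indicator("institution", multiple_fields=False, multiple_subfields=True))
```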

6. DISCUSSION

The “royal road” in research evaluation can be summarized as follows: Judgments are based on peer review processes that include a complete search and processing of information in decision making. All information about a unit (e.g., an institution or a research group) is made available to decision makers, who use all the information to make a preliminary recommendation or final decision (e.g., on funding or hiring). The information is usually weighted according to its predictive value for the evaluation task. The problem with these processes is, however, that they incur high costs and absorb the valuable time of evaluating researchers, reviewers, and decision makers. For example, the 2014 UK REF panel for physics consisted of 20 members. According to Pride and Knoth (2018), 6,446 papers were submitted as outputs by the universities. Because each paper should be read by two reviewers—which increases the number of papers for reading to 6,446 × 2 = 12,892 paper instances—more than 600 papers had to be read by each member, on average, within less than one year.

According to Hertwig and Todd (2003), the tendency to use complex procedures that include all information seems to follow a certain belief: “Scientific theorizing, visions of rationality and common wisdom alike appear to share a mutual belief: the more information that is used and the more it is processed, the better (or more rational) the choice, judgment or decision will be” (p. 220). However, the successful actions of people in daily life call this belief into question: “On a daily basis, people make both important and inconsequential decisions under uncertainty, often under time pressure with limited information and cognitive resources. The fascinating phenomenon is that most of the time, a majority of people operate surprisingly well despite not meeting all requirements of optimization, be they internal (calculative abilities) or external (information access)” (Mousavi & Gigerenzer, 2017). Hertwig and Todd (2003) conclude therefore that “making good decisions need not rely on the standard rational approach of collecting all available information and combining it according to the relative importance of each cue—simply betting on one good reason, even one selected at random, can provide a competitive level of accuracy in a variety of environments” (p. 223).

Bibliometrics combines methods and data that can be used to make performance decisions in science by focusing on only a part of the available information. Since its introduction decades ago, bibliometrics has become more and more popular in research evaluation. For example, the results of Hammarfelt and Haddow (2018) show that about one-third of their respondents in a survey stated that “they had used metrics for evaluation or self-promotion in applications and CVs” (p. 927). This large share of respondents is a surprising result, since the respondents identified themselves as being in the humanities, where the poor coverage of the literature in bibliometric databases makes the use of bibliometric indicators problematic. It seems that even in environments in which the use of bibliometrics is highly problematic, its use is popular.

Because bibliometrics is based on a partial and nonrandom ignorance of other data or indicators and can be applied in a time-efficient and effortless way, Bornmann and Marewski (2019) made a connection between bibliometrics and heuristics and introduced BBHs. Heuristics are defined as “simple decision-making strategies, also called ‘rules of thumb’, that make use of less than complete information … more and more researchers are beginning to realise, especially in fundamentally uncertain domains such as medicine, that expertise and good decision making involve the ignoring of some information” (Wegwarth, Gaissmaier, & Gigerenzer, 2009, p. 722). The heuristics research program introduced by Gigerenzer, Todd, and ABC Research Group (1999) has already studied the use of heuristics in various fields, including psychology, demography, economics, health, transportation, and biology. The program is based on the bounded rationality view of Simon (1956, 1990), who argues that people use simple strategies in situations where resources are sparse. Simon’s view of problem solving is known as satisficing: People search for workable real-world solutions and avoid complex solutions (which are time consuming and difficult to apply; see Tuccio, 2011).

BBHs are decision strategies that are solely based on publication and citation data. These strategies ignore other information about performance (e.g., the amount of third-party funds raised, or assessments of single publications by experts), which allows quick decisions in research evaluation. An “ideal” BBH is an empirically validated strategy for performance decisions on certain units (e.g., researchers or papers) in a specific evaluation environment, using a clearly defined set of bibliometric indicators integrated in certain (search, stopping, and decision) rules. The introduction of BBHs by Bornmann and Marewski (2019) should not be understood as a general push for using bibliometrics in research evaluation. On the contrary, it is a call to investigate the evaluative use of bibliometrics more extensively. Answers are needed to the following (and similar) questions: In which situations is it reasonable to use bibliometrics? When should bibliometrics be combined with peer review? Are there situations in which bibliometrics should be avoided? For example, if one has identified situations in which bibliometrics comes to the same results as peer review, one might think about the replacement of peer review by bibliometrics (e.g., in the UK REF). It does not help to demonize or push the use of bibliometrics in general. We need research that shows in which situations it is useful and in which it is not.

This study introduces an exemplar BBDT that can be used in specific research evaluation situations. Decision trees are prototypical noncompensatory algorithms that are applied sequentially until a final decision is reached. They are graphical representations of a set of rules that operate as lexicographic classifiers (see Martignon et al., 2011). Decision trees consist of nodes (cue-based questions), branches (answers to questions), and exits (decisions) (Phillips et al., 2017). According to Luan and Reb (2017), empirical studies in many domains have shown that people often decide with noncompensatory strategies that are similar to decision trees. One reason might be that compensatory strategies work better in “small world” situations in which much information is known and everything is calculable. However, many decisions (especially in research evaluation) are not “small world” situations but “are characterized by unclear utilities, unknown probabilities, and often multiple goals. These conditions severely restrict the effectiveness of compensatory strategies in finding the optimal solutions” (Luan & Reb, 2017, p. 31). “Ideal” BBDTs are appropriate visualizations of one or several “ideal” BBHs (as explained above).

The BBDT presented in this study is practically oriented toward deciding, in specific evaluation situations, which indicator should be used. The decision tree can be characterized as a kind of checklist, which is—according to Tuccio (2011)—“the simplest form of a heuristic as they specify a procedure or rule” (p. 42). The proposed BBDT can be used as guidance to find an answer to the question of in which situations simple indicators, such as mean citation rates, are reasonable and in which situations more elaborate indicators (normalized indicators at the field or subfield level) should be applied.

Most BBDTs are valid only for a certain time period: New bibliometric indicators are proposed that improve on established indicators, and more appropriate statistical methods are proposed for analyzing bibliometric data. Thus, the improvement of BBDTs is an ongoing task that should involve as many professional bibliometricians (and scientists concerned) as possible (Marewski et al., 2010). In my opinion, the generation and continuous revision of BBDTs could be handled by the International Society for Scientometrics and Informetrics (ISSI)—the international association of scholars and professionals active in the interdisciplinary study of the science of science, science communication, and science policy (see www.issi-society.org). ISSI could implement BBDTs in software tools with well-designed interactive user interfaces that guide the user through the different choices that need to be made.

To be clear, BBDTs are not introduced in this paper as new tools for application in “desktop bibliometrics.” As outlined in section 1, the term desktop bibliometrics describes the application of bibliometrics by decision makers (e.g., deans or administrators) with the help of “click the button” tools, without involving experts in scientometrics and the evaluated fields. In general, the goal should be to include these experts in the processes of developing and establishing BBDTs, to decide what is to be assessed, what analyses make sense, and what data sources should be used. In principle, both groups could also be involved in the interpretation of the results of BBDTs. BBDTs are intended to structure available standards in the field of scientometrics and to facilitate the application of these standards.

Because bibliometrics-based research evaluations refer to a broad spectrum of data, analyses, and tasks, further BBDTs should be developed for the practical application of bibliometrics (or BBHs). For example, BBDTs could be developed that help a user choose which bibliometric data source (WoS, Scopus, Dimensions, PubMed, Google Scholar, Microsoft Academic, Crossref, etc.) to work with. Another option is to think of a BBDT that helps a user choose, based on the aim one has, the type of unit one is focusing on, and the field in which this unit is active, how to carry out a research evaluation (e.g., based only on bibliometrics, based only on peer review, or based on some combination of the two). A third option could be a BBDT that helps a user choose a bibliometric indicator for evaluating the impact of journals (2-year journal impact factor, 5-year journal impact factor, Eigenfactor, Article Influence Score, CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, etc.). A fourth option could be a BBDT for the choice of indicators that can be used for the evaluation of individual researchers (see Bornmann & Marx, 2014).

ACKNOWLEDGMENTS

I thank Julian Marewski for helpful suggestions that improved an earlier version of this paper.

COMPETING INTERESTS

The author declares that there are no competing interests.

FUNDING INFORMATION

No funding has been received for this research.

DATA AVAILABILITY

Not applicable.

REFERENCES

Bornmann, L. (in press). Bibliometric indicators—methods for measuring science. In R. Williams (Ed.), Encyclopedia of Research Methods. Thousand Oaks, CA, USA: Sage.

Bornmann, L., Bowman, B. F., Bauer, J., Marx, W., Schier, H., & Palzenberger, M. (2014). Bibliometric standards for evaluating research institutes in the natural sciences. In B. Cronin & C. Sugimoto (Eds.), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact (pp. 201–223). Cambridge, MA, USA: MIT Press.

Bornmann, L., Hug, S., & Marewski, J. N. (2019). Bibliometrics-based heuristics: What is their definition and how can they be studied? Retrieved November 5, 2019, from https://arxiv.org/abs/1810.13005

Bornmann, L., & Marewski, J. N. (2019). Heuristics as conceptual lens for understanding and studying the usage of bibliometrics in research evaluation. Scientometrics, 120(2), 419–459.

Bornmann, L., & Marx, W. (2014). How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics, 98(1), 487–509. https://doi.org/10.1007/s11192-013-1161-y

Bornmann, L., Marx, W., & Barth, A. (2013). The normalization of citation counts based on classification systems. Publications, 1(2), 78–86.

Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Use of citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8, 93–102. https://doi.org/10.3354/esep00084

Bornmann, L., & Wohlrabe, K. (2019). Normalisation of citation impact in economics. Scientometrics, 120(2), 841–884. https://doi.org/10.1007/s11192-019-03140-w

Boyack, K. W. (2004). Mapping knowledge domains: Characterizing PNAS. Proceedings of the National Academy of Sciences of the United States of America, 101, 5192–5199.

Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer, P. M. Todd, & ABC Research Group (Eds.), Simple Heuristics That Make Us Smart (pp. 97–118). Oxford, UK: Oxford University Press.

de Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to cybermetrics. Lanham, MD, USA: Scarecrow Press.

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669. https://doi.org/10.1037/0033-295x.103.4.650

Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). Simple heuristics that make us smart. Oxford, UK: Oxford University Press.

Hammarfelt, B., & Haddow, G. (2018). Conflicting measures and values: How humanities scholars in Australia and Sweden use and react to bibliometric indicators. Journal of the Association for Information Science and Technology, 69(7), 924–935. https://doi.org/10.1002/asi.24043

Harzing, A.-W. (2017). Running the REF on a rainy Sunday afternoon: Do metrics match peer review? Retrieved August 5, 2018, from https://harzing.com/publications/white-papers/running-the-ref-on-a-rainy-sunday-afternoon-do-metrics-match-peer-review; https://openaccess.leidenuniv.nl/handle/1887/65202

Hertwig, R., & Todd, P. M. (2003). More is not always better: The benefits of cognitive limits. In D. Hardman & L. Macchi (Eds.), Thinking: Psychological Perspectives on Reasoning, Judgment and Decision Making (pp. 213–231). Hoboken, NJ, USA: Wiley.

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431.

Jappe, A., Pithan, D., & Heinze, T. (2018). Does bibliometric research confer legitimacy to research assessment practice? A sociological study of reputational control, 1972–2016. PLOS ONE, 13(6), e0199031. https://doi.org/10.1371/journal.pone.0199031


Katsikopoulos, K. V. (2011). Psychological heuristics for making inferences: Definition, performance, and the emerging theory and practice. Decision Analysis, 8(1), 10–29. https://doi.org/10.1287/deca.1100.0191

Kelman, M. (2011). The heuristics debate. Oxford, UK: Oxford University Press.

Kurz-Milcke, E., & Gigerenzer, G. (2007). Heuristic decision making. Marketing, 3(1), 48–56.

Leydesdorff, L., Wouters, P., & Bornmann, L. (2016). Professional and citizen bibliometrics: Complementarities and ambivalences in the development and use of indicators—a state-of-the-art report. Scientometrics, 109(3), 2129–2150. https://doi.org/10.1007/s11192-016-2150-8

Luan, S. H., & Reb, J. (2017). Fast-and-frugal trees as noncompensatory models of performance-based personnel decisions. Organizational Behavior and Human Decision Processes, 141, 29–42. https://doi.org/10.1016/j.obhdp.2017.05.003

Marewski, J. N., Schooler, L. J., & Gigerenzer, G. (2010). Five principles for studying people’s use of heuristics. Acta Psychologica Sinica, 42(1), 72–87. https://doi.org/10.3724/SP.J.1041.2010.00072

Martignon, L., Katsikopoulos, K. V., & Woike, J. K. (2008). Categorization with limited resources: A family of simple heuristics. Journal of Mathematical Psychology, 52(6), 352–361. https://doi.org/10.1016/j.jmp.2008.04.003

Martignon, L., Vitouch, O., Takezawa, M., & Forster, M. (2011). Naive and yet enlightened: From natural frequencies to fast and frugal decision trees. In G. Gigerenzer, R. Hertwig, & T. Pachur (Eds.), Heuristics: The Foundations of Adaptive Behavior (pp. 4–15). Oxford, UK: Oxford University Press.

Merton, R. K. (1965). On the shoulders of giants. New York, NY, USA: Free Press.

Moed, H. F. (2017). Applied evaluative informetrics. Heidelberg, Germany: Springer.

Moed, H. F., Burger, W. J. M., Frankfort, J. G., & van Raan, A. F. J. (1985). The use of bibliometric data for the measurement of university research performance. Research Policy, 14(3), 131–149.

Mousavi, S., & Gigerenzer, G. (2017). Heuristics are tools for uncertainty. Homo Oeconomicus, 34(4), 361–379. https://doi.org/10.1007/s41412-017-0058-z

Narin, F. (1987). Bibliometric techniques in the evaluation of research programs. Science and Public Policy, 14(2), 99–106. https://doi.org/10.1093/spp/14.2.99

Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423–430.

Phillips, N. D., Neth, H., Woike, J. K., & Gaissmaier, W. (2017). FFTrees: A toolbox to create, visualize, and evaluate fast-and-frugal decision trees. Judgment and Decision Making, 12(4), 344–368.

Pride, D., & Knoth, P. (2018). Peer review and citation data in predicting university rankings, a large-scale analysis. In E. Méndez, F. Crestani, C. Ribeiro, G. David, & J. Lopes (Eds.), Digital Libraries for Open Knowledge. TPDL 2018. Lecture Notes in Computer Science, Vol. 11057 (pp. 195–207). Basel, Switzerland: Springer.

Raab, M., & Gigerenzer, G. (2015). The power of simplicity: A fast-and-frugal heuristics approach to performance science. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.01672

Radicchi, F., & Castellano, C. (2011). Rescaling citations of publications in physics. Physical Review E, 83(4). https://doi.org/10.1103/PhysRevE.83.046116

Robinson-Garcia, N., Torres-Salinas, D., Herrera-Viedma, E., & Docampo, D. (2019). Mining university rankings: Publication output and citation impact as their basis. Research Evaluation, 28, 232–240. https://doi.org/10.1093/reseval/rvz014

Rodriguez Navarro, A., & Brito, R. (2019). Like-for-like bibliometric substitutes for peer review: Advantages and limits of indicators calculated from the ep index. Retrieved June 26, 2019, from https://arxiv.org/abs/1903.11119

Scheibehenne, B., & von Helversen, B. (2009). Useful heuristics. In T. Williams, K. Samset, & K. J. Sunnevåg (Eds.), Making Essential Choices with Scant Information: Front-end Decision Making in Major Projects (pp. 194–211). Basingstoke, UK: Palgrave Macmillan.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138. https://doi.org/10.1037/h0042769

Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41, 1–19. https://doi.org/10.1146/annurev.ps.41.020190.000245

Teplitskiy, M., & Bakanic, V. (2016). Do peer reviews predict impact? Evidence from the American Sociological Review, 1978 to 1982. Socius, 2, 1–13. https://doi.org/10.1177/2378023116640278

Todd, P. M., Gigerenzer, G., & ABC Research Group. (2012). Ecological rationality: Intelligence in the world. New York, NY, USA: Oxford University Press.

Todeschini, R., & Baccini, A. (2016). Handbook of bibliometric indicators: Quantitative tools for studying and evaluating research. Weinheim, Germany: Wiley.

Traag, V. A., & Waltman, L. (2019). Systematic analysis of agreement between metrics and peer review in the UK REF. Palgrave Communications, 5(1).

Tuccio, W. A. (2011). Heuristics to improve human factors performance in aviation. Journal of Aviation/Aerospace Education & Research, 20(3), 39–53.

van Raan, A. F. J. (2004). Measuring science. Capita selecta of current main issues. In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems (pp. 19–50). Dordrecht, The Netherlands: Kluwer Academic Publishers.

van Raan, A. F. J., van Leeuwen, T. N., Visser, M. S., van Eck, N. J., & Waltman, L. (2010). Rivals for the crown: Reply to Opthof and Leydesdorff. Journal of Informetrics, 4, 431–435.

Vinkler, P. (2010). The evaluation of research by scientometric indicators. Oxford, UK: Chandos Publishing.

Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10(2), 365–391.

Waltman, L. (2018). Responsible metrics: One size doesn’t fit all. In P. Wouters (Ed.), Proceedings of the Science and Technology Indicators Conference 2018 in Leiden “Science, Technology and Innovation Indicators in Transition” (pp. 526–531). Leiden, The Netherlands: University of Leiden.

Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., van Eck, N. J., … Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432.

Wegwarth, O., Gaissmaier, W., & Gigerenzer, G. (2009). Smart strategies for doctors and doctors-in-training: Heuristics in medicine. Medical Education, 43(8), 721–728. https://doi.org/10.1111/j.1365-2923.2009.03359.x
