RESEARCH ARTICLE
Counting methods introduced into the bibliometric
research literature 1970–2018: A review
Marianne Gauffriau
Copenhagen University Library/ The Royal Danish Library, Copenhagen, Denmark
an open access journal
Keywords: argument for introduction, counting method, internal validity, mathematical property,
research evaluation
Citation: Gauffriau, M. (2021). Counting
methods introduced into the bibliometric
research literature 1970–2018: A review.
Quantitative Science Studies, 2(3),
932–975. https://doi.org/10.1162/qss_a
_00141
DOI:
https://doi.org/10.1162/qss_a_00141
Peer Review:
https://publons.com/publon/10.1162
/qss_a_00141
Supplementary Information:
https://doi.org/10.1162/qss_a_00141
Received: 10 January 2021
Accepted: 25 April 2021
Corresponding Author:
Marianne Gauffriau
mgau@kb.dk
Handling Editor:
Ludo Waltman
Copyright: © 2021 Marianne Gauffriau.
Published under a Creative Commons
Attribution 4.0 International
(CC BY 4.0) license.
The MIT Press
ABSTRACT
This review investigates (a) the number of unique counting methods, (b) to what extent
counting methods can be categorized according to selected characteristics, (c) methods and
elements to assess the internal validity of counting methods, and (d) to what extent and with
which characteristics counting methods are used in research evaluations. The review identifies
32 counting methods introduced from 1981 to 2018. Two frameworks categorize these
counting methods. Framework 1 describes selected mathematical properties and Framework 2
describes arguments for choosing a counting method. Twenty of the 32 counting methods are
rank dependent, fractionalized, and introduced to measure contribution, participation, etc., of
an object of study. Next, three criteria for internal validity are used to identify five methods that
test the adequacy, two elements that test the sensitivity, and three elements that test the
homogeneity of counting methods. Finally, a literature search finds that only three of the 32
counting methods are used by four research evaluations or more. Two counting methods are
used with the same characteristics as defined in the studies that introduced the counting
methods. The review provides a detailed foundation for working with counting methods, and
many of the findings provide bases for future investigations of counting methods.
1. INTRODUCTION
The topic of the present review is counting methods in bibliometrics. The bibliometric research
literature has discussed counting methods for at least 40 years. However, the topic remains
relevant, as the findings in the review show. Section 1.1 provides the background for the review,
and is followed by the study aims (Section 1.2) and research questions (Section 1.3).
1.1. Background
The use of counting methods in the bibliometric research literature is often reduced to the
choice between full and fractional counting. Full counting gives the authors of a publication
1 credit each, whereas fractional counting shares 1 credit between the authors of a publication.
However, several studies document that there are not just two but many counting methods in
the bibliometric research literature, and that the distinction between full and fractional counting
is too simple to cover these many counting methods (for examples, see Todeschini & Baccini,
2016, pp. 54–74; Waltman, 2016, pp. 378–380; and Xu, Ding et al., 2016).
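In code terms, the basic full/fractional distinction can be sketched as follows (a minimal illustration with invented author names; the function names are not from the literature):

```python
# Full counting: every author of a publication receives 1 credit.
# Fractional counting: the authors share 1 credit equally.

def full_credits(authors):
    return {a: 1.0 for a in authors}

def fractional_credits(authors):
    share = 1.0 / len(authors)
    return {a: share for a in authors}

paper = ["Smith", "Jones", "Lee"]
print(full_credits(paper))        # each author receives 1.0
print(fractional_credits(paper))  # each author receives 1/3
```

Note that under fractional counting the credits for a publication always sum to 1, whereas under full counting they sum to the number of authors.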
A counting method functions as one of the core elements of a bibliometric indicator,
such as the h-index fractionally counted (Egghe, 2008) or by first-author counting (Hu,
Rousseau, & Chen, 2010). For some indicators, the counting method is the only element,
such as the number of publications (full counting) on a researcher’s publication list.
Counting methods are not only relevant for how to construct and understand indicators but
also for bibliometric network analyses (Perianes-Rodriguez, Waltman, & Van Eck, 2016), field-
normalization of indicators (Waltman & Van Eck, 2015), and rankings (Centre for Science and
Technology Studies, n.d.).
There are bibliometric analyses where the choice of counting method makes no difference.
For sole-authored publications, the result is the same whether the authors are credited by full
counting (1 credit) or fractional counting (1/1 credit). However, coauthorship is the norm in
most research fields, and the average number of coauthors per publication has been increasing
(Henriksen, 2016; Lindsey, 1980, p. 152; Price, 1986, pp. 78–79; Wuchty, Jones, & Uzzi,
2007). Other objects of study reflect this trend; for example, the number of countries per
publication has also been increasing (Gauffriau, Larsen et al., 2008, p. 152; Henriksen, 2016).
Choosing a counting method is essential for the majority of bibliometric analyses that evaluate
authors, institutions, countries, or other objects of study. A study of a sample of 99
bibliometric studies finds that, for two-thirds of the studies, the choice of counting method can
affect the results (Gauffriau, 2017, p. 678). The effect of shifting between counting methods
can be seen in the Leiden Ranking, where it is possible to choose between full and fractional
counting for the indicators on scientific impact (Centre for Science and Technology Studies,
2019). A change from one counting method to the other alters the scores, and the order of the
institutions in the ranking may change.
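A toy example can illustrate how switching between full and fractional counting may reorder two institutions. The publication data below are invented solely for illustration, and "full counting" here credits an institution 1 per publication it participates in:

```python
# Toy data: each publication records how many of its author addresses belong
# to an institution and the total number of addresses. Invented numbers.
pubs = [
    {"total": 5, "A": 1},  # Institution A holds 1 of 5 addresses
    {"total": 5, "A": 1},
    {"total": 5, "A": 1},
    {"total": 1, "B": 1},  # Institution B is the sole address
    {"total": 1, "B": 1},
]

def score(inst, fractional):
    s = 0.0
    for p in pubs:
        n = p.get(inst, 0)
        if n:
            s += n / p["total"] if fractional else 1.0
    return s

print(score("A", False), score("B", False))  # full: 3.0 vs 2.0 -> A ahead
print(score("A", True), score("B", True))    # fractional: 0.6 vs 2.0 -> B ahead
```

With full counting A outranks B; with fractional counting the order flips, because A's publications are heavily coauthored.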
Nevertheless, many bibliometric studies do not explicitly justify the choice of counting
method. More broadly, in bibliometric practice, there are no common standards for how to
describe counting methods in the methods section of studies (Gauffriau, 2017, p. 678;
Gauffriau et al., 2008, pp. 166–169). This implicit status of counting methods is also reflected
in bibliometric textbooks and handbooks from the most recent decade. Many do not have a
chapter or larger section dedicated to counting methods (for examples, see Ball, 2018; Cronin
& Sugimoto, 2014; Gingras, 2016; Glänzel, Moed et al., 2019). Others have a chapter or larger
section dedicated to counting methods but include only common counting methods and/or
lack well-defined frameworks to describe the counting methods (for examples, see
Sugimoto & Larivière, 2018, pp. 54–56; Todeschini & Baccini, 2016, pp. 54–74; Vinkler,
2010, Chapter 10; Vitanov, 2016, pp. 27–29).
The present review demonstrates that a consistent analysis of the majority of bibliometric
counting methods can reveal new knowledge about those counting methods. Given this, I
argue for the explicit use, analysis, and discussion of counting methods in bibliometric
practice and research as well as in bibliometric textbooks and handbooks.
1.2. Aims and Relevance
The topic of this review is bibliometric counting methods. The aims are to investigate counting
methods in the bibliometric research literature and to provide insights into their common char-
acteristics, the assessment of their internal validity, and how they are used. The review presents
different categorizations of counting methods and discusses the counting methods based on
these categorizations. Thus, the review does not focus on counting methods individually
but rather on the general characteristics found across counting methods. The general character-
istics provide insights into the counting methods and how they overlap or differ from each other.
Three previous reviews of counting methods (Todeschini & Baccini, 2016, pp. 54–74;
Waltman, 2016, pp. 378–380; Xu et al., 2016) are fairly comprehensive; however, the present
review includes still more counting methods. In their handbook, Todeschini and Baccini present
a list of counting methods and provide a definition for each counting method, but they do not in a
consistent manner analyze characteristics across counting methods. Waltman uses a division
into full, fractional, and other counting methods. The present review develops Waltman’s
approach further, resulting in a more well-defined categorization of counting methods. Xu
et al.’s categorization of counting methods analyzes data distributions. Although the review does
not use this categorization, it discusses the categorization as one route for future research.
Counting methods are often categorized with regard to their mathematical properties (for
examples, see Rousseau, Egghe, & Guns, 2018, sec. 5.6.3; Waltman, 2016, pp. 378–380; Xu
et coll., 2016). In addition to a categorization based on selected mathematical properties
(Gauffriau, Larsen et al., 2007), the review applies another approach (Gauffriau, 2017), which
builds on qualitative text analysis. I adopt this approach to describe why counting methods
are introduced into the bibliometric research literature.
Previous studies either have documented that many different counting methods exist in the
bibliometric research literature or have analyzed selected counting methods using well-
defined frameworks. The present review does both by covering more counting methods than
previous reviews and by providing detailed insight into the general characteristics of these
counting methods. Furthermore, the review considers three criteria for assessing the internal
validity of bibliometric indicators (Gingras, 2014), applying these to identify methods and
elements that can be used to assess the internal validity of the counting methods. Finally,
the review investigates the use of the counting methods in research evaluations. The results
of the review are a unique resource for informing the use of counting methods and inspiring
further investigations of counting methods.
1.3. Research Questions: RQs 1–4
The aims presented in Section 1.2 lead to four interconnected research questions (RQs):
RQ 1: How many unique counting methods are there and when were they introduced
into the bibliometric research literature?
RQ 1 is useful to understand the magnitude and timeliness of this review’s aims. As
discussed in Section 1.1, counting methods often remain implicit in bibliometric analyses, even
though there are many counting methods to choose from and a change from one counting
method to another may alter the scores for the objects of study. The review provides an over-
view of how many counting methods there are in the bibliometric research literature. Making
this information available is the first step in facilitating the explicit and informed choice of
counting methods in bibliometric analyses.
RQ 2: To what extent can the counting methods identified by RQ 1 be categorized
according to selected frameworks that focus on characteristics of the counting methods?
RQ 2 explores whether the counting methods identified by RQ 1 share characteristics. As
Section 1.1 mentions, the simplified dichotomy of full or fractional counting is often seen in
the bibliometric research literature. The analysis for RQ 2 uses two fine-grained frameworks to
provide both a more detailed categorization of the counting methods’ mathematical properties
and a categorization of why the counting methods were introduced into the bibliometric
research literature. These categorizations do not focus on only a few counting methods, but
rather, provide knowledge about a large number of counting methods.
RQ 3: Which methods and elements from the studies that introduce the counting
methods identified by RQ 1 can be used to assess the internal validity of those counting
méthodes?
Where RQ 2 focuses on the shared characteristics of the counting methods identified by
RQ 1, RQ 3 supplements this with information about the assessment of the internal validity
of counting methods (i.e., as drawn from the studies that introduce the counting methods). As
discussed in Section 1.1, the counting method is a core element in the construction of a
bibliometric indicator. Therefore, not only the characteristics of the counting methods but
also the internal validity of the counting methods are important.
RQ 4: To what extent are the counting methods identified by RQ 1 used in research
evaluations and to what extent is this use compliant with the definitions in the studies
that introduce the counting methods?
As mentioned previously, the use of counting methods is often reduced to a choice be-
tween full and fractional counting. RQ 4 investigates the use of counting methods in more
detail. The use of the counting methods should comply with the design of the counting
methods. If one or more of the characteristics identified under RQ 2 changes between the
point of introduction and the point of use of a counting method then the internal validity of
the counting method may be compromised.
2. METHODS
Section 2 presents the methods used to address the RQs presented in Section 1.3. Organized
by RQ, Table 1 summarizes the methods, as well as the related data, tools, and results.
Sections 2.1–2.4 provide detailed presentations of the methods, discuss the rationale for
applying the methods, and show how the results from each RQ inform the subsequent RQs.
2.1. RQ 1: Literature Search for Counting Methods
RQ 1 serves to illustrate the magnitude and timeliness of the present review. The results of RQ 1
go on to form the basis for RQs 2–4.
RQ 1: How many unique counting methods are there and when were they introduced
into the bibliometric research literature?
To answer RQ 1, a literature search is employed to identify counting methods in the bib-
liometric research literature. The literature search concentrates on studies that introduce
counting methods rather than studies that use counting methods.
To be included in the review, studies must introduce counting methods defined by an equa-
tion or similar to guide calculation. Where a study presents only minor variations to the equa-
tion of an existing counting method, that variant approach is not included as a separate
counting method. Section 3.2 gives a few examples of such variations. Variations of existing
counting methods are also not included in cases where the variations add weights to
Table 1. Summary of RQs and their research methods, data, tools, and results

RQ 1
  Method: Literature search based on citing and cited studies.
  Data: Peer-reviewed studies in English published 1970–2018.
  Tools: Google Scholar.
  Results: Thirty-two counting methods introduced into the bibliometric research literature over the period 1981–2018. No unique counting methods are introduced in the period 1970–1980.

RQ 2
  Method: Categorizations of counting methods.
  Data: The 32 counting methods identified by RQ 1.
  Tools: Two frameworks: the first describes selected mathematical properties of counting methods, and the second describes arguments for choosing a counting method.
  Results: Thirty counting methods are categorized according to the first framework, and all 32 counting methods are categorized according to the second framework.

RQ 3
  Method: Identification of methods and elements useful for assessing internal validity of counting methods.
  Data: The 32 counting methods identified by RQ 1. The results from RQ 2.
  Tools: Three internal validity criteria for bibliometric indicators.
  Results: Five methods and five elements related to the assessment of the internal validity of counting methods are identified.

RQ 4
  Method: Literature search based on citing studies.
  Data: The 32 counting methods identified by RQ 1. The results from RQ 2. A sample of research evaluations that use counting methods identified by RQ 1.
  Tools: Google Scholar.
  Results: Three of the 32 counting methods are each used in a minimum of four research evaluations. For one counting method, the use does not comply with the characteristics of the counting method as defined by the study that introduced the counting method.
publication counts in the form of citations, Journal Impact Factors, publication type weights,
etc. Furthermore, the counting methods must be applicable to publications with any number
of authors. For counting methods introduced with different objects of study in different studies—
for example, institutions in one study and countries in another study—only the first study is
included.
In addition to the counting methods included in this review, “hybrids” also exist in the literature.
Hybrids are counting methods that sit somewhere between two known counting methods. For
example, Combined Credit Allocation sits between complete-fractionalized and Harmonic
counting (Liu & Fang, 2012, p. 41) and the First and Others credit-assignment schema sits
between straight and Harmonic counting (Weigang, 2017, p. 187). Hybrids are not included
in the review.
The literature search is restricted to peer-reviewed studies in English from the period 1970–
2018. Prior to 1970, discussions about counting methods in bibliometrics seem to be in the
context of specific studies; however, from approximately 1970 onwards, some of the discus-
sions about counting methods start to offer general recommendations in relation to choosing a
counting method for a bibliometric analysis. These general recommendations derive primarily
from changing norms for coauthorship and the launch of the Science Citation Index (Cole &
Cole, 1973, pp. 32–33; Lindsey, 1980; Price, 1976), but other factors such as institutional and
national research policies and programs may have had an effect as well. Counting methods
introduced after 2018 are not included in the review as the use of these is difficult to assess
(RQ 4) at the present time (i.e., less than two years after their introduction into the bibliometric
research literature).
Studies that introduce counting methods into the bibliometric research literature are found
via a search of cited and citing studies (Harter, 1986, pp. 58–60; 186). The search begins with
studies cited by or citing Gauffriau et al.’s work on counting methods (Gauffriau, 2017;
Gauffriau et al., 2007, 2008; Gauffriau & Larsen, 2005). From these cited and citing studies,
studies are selected for the review. The cited and citing studies of these selected studies are
then searched to add yet more studies to the review, and so on. This search approach is chosen
because the terminology for counting methods is not well defined. Many terms are general,
such as “count” or “number of,” making adequate keywords difficult to identify.
Citations to 10 selected studies are searched using Google Scholar, Scopus, and Web of
Science. Google Scholar proves to have the best coverage. This finding is supported by
large-scale studies of Google Scholar (Delgado López-Cózar, Orduña-Malea, & Martín-Martín, 2019,
sec. 4.3).
Two sources are used to find cited and citing studies, respectively. Cited studies are found
via the reference lists in the selected studies. Citing studies are found via Google Scholar. The
titles and the first lines of the abstracts as presented in Google Scholar’s list of results are
skimmed to find relevant studies among the citing studies. Furthermore, a search in the citing
studies for the terms “count,” “counting,” “fraction,” “fractionalized,” and “fractionalised” is
conducted via Google Scholar’s “Search within citing articles.” The results are skimmed to find
relevant studies. The final search for citing studies was completed in December 2018.
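The cited/citing snowball procedure described above can be sketched as a simple worklist loop. The `fetch_cited`, `fetch_citing`, and `is_relevant` functions are hypothetical stand-ins for the manual reference-list and Google Scholar lookups and the skimming described in the text; Google Scholar offers no official API:

```python
# Sketch of a cited/citing snowball search. Each neighbor of an included
# study is screened for relevance; relevant neighbors join the worklist,
# and already-included studies are skipped (the "redundancy" in the text).
def snowball(seed_studies, fetch_cited, fetch_citing, is_relevant):
    included = set()
    frontier = list(seed_studies)
    while frontier:
        study = frontier.pop()
        if study in included:
            continue  # already screened and included
        included.add(study)
        for neighbor in fetch_cited(study) + fetch_citing(study):
            if neighbor not in included and is_relevant(neighbor):
                frontier.append(neighbor)
    return included
```

The loop terminates once no new relevant studies appear, which mirrors the observation that repeated checks eventually return mostly studies already included.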
The result of the literature search cannot claim a 100% coverage of all counting methods
that exist in the bibliometric research literature; however, when citations to and from the
studies included in this review are checked, significant redundancy is encountered (i.e.,
studies already included in the review). Thus, it is assumed that the review covers the
majority of counting methods discussed in the bibliometric research literature during the period
1970–2018.
2.2. RQ 2: Categorizations of Counting Methods According to Their Characteristics
RQ 1 identifies counting methods in the bibliometric research literature. RQ 2 explores whether
the counting methods identified under RQ 1 share characteristics. Shared characteristics can
facilitate general knowledge about counting methods rather than specific knowledge of only
a few counting methods. The results of RQ 2 are used in the analyses related to RQs 3 and 4.
RQ 2: To what extent can the counting methods identified by RQ 1 be categorized ac-
cording to selected frameworks that focus on characteristics of the counting methods?
The list of 32 counting methods created under RQ 1 does not provide information about the
types of counting methods. Therefore, RQ 2 applies two frameworks to categorize the
counting methods identified by RQ 1. “Framework” is a unifying term for generalizable
approaches that describe and facilitate the use, analysis, and discussion of counting
méthodes. These approaches compile elements such as consistent terminology, definitions,
and categorizations of counting methods.
The first framework (Framework 1) describes selected mathematical properties of the count-
ing methods (Gauffriau et al., 2007). The second framework (Framework 2) is based on a qual-
itative text analysis and describes four groups of arguments for the choice of counting method
in a bibliometric analysis (Gauffriau, 2017). As such, the two frameworks have different
foundations and they address different characteristics of the counting methods. The frameworks are
described in detail in Sections 2.2.1 and 2.2.2.
2.2.1. Framework 1: Selected mathematical properties of counting methods
“Framework 1: Selected mathematical properties of counting methods” builds on measure
theory (Halmos, 1950) and provides well-defined definitions and a detailed terminology for
counting methods (Gauffriau et al., 2007). Thus, the framework offers a more precise
terminology compared to the dichotomy of full and fractional counting. Framework 1 was developed
in 2007, and the definitions and terminology were applied to common counting methods from
the bibliometric research literature (Gauffriau et al., 2007). The name “Framework 1: Selected
mathematical properties of counting methods” is introduced for the context of the present
review.
A foundation for measure theory is sets and subsets (Halmos, 1950, pp. 9–15). In relation to
counting methods, sets are, for example, authors, institutions, or countries. These sets can be
subsets of each other, such as institutions in countries. And the sets are subsets of the set
“the world” as represented by a database (Gauffriau et al., 2007, pp. 180–183). Thus, measure
theory facilitates an explicit analysis of aggregation levels of counting methods. Furthermore, a
counting method is additive if the score calculated as a sum of two or more sets is equal to the
score for the union of the same sets. The sets must be disjoint (Gauffriau et al., 2007, pp. 185–186;
Halmos, 1950, p. 30). And counting methods can be normalized (i.e., fractionalized), meaning
that a publication is equal to 1 (Gauffriau et al., 2007, pp. 187–188; Halmos, 1950, p. 171).
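The additivity and fractionalization properties can be checked numerically, here for complete-fractionalized counting on an invented publication list (a sketch for illustration, not part of the framework itself):

```python
# Numerical check of two Framework 1 properties, using
# complete-fractionalized counting. Each publication is a list of
# country codes, one per address (invented data).
pubs = [["X", "Y", "Y"], ["X", "Z"]]

def cf_score(countries, publications):
    """Complete-fractionalized score of a set of countries."""
    total = 0.0
    for addresses in publications:
        share = 1.0 / len(addresses)
        total += share * sum(1 for a in addresses if a in countries)
    return total

# Fractionalized: each publication distributes exactly 1 credit in total,
# so the score of "the world" equals the number of publications.
assert abs(cf_score({"X", "Y", "Z"}, pubs) - len(pubs)) < 1e-9

# Additive: for disjoint sets, score(union) == score(set1) + score(set2).
union = cf_score({"X", "Y"}, pubs)
assert abs(union - (cf_score({"X"}, pubs) + cf_score({"Y"}, pubs))) < 1e-9
```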
The review uses the framework to categorize counting methods according to two mathe-
matical properties: whether a counting method is rank dependent or rank independent, and
whether it is fractionalized or nonfractionalized. The mathematical properties are discussed in
more detail later in this section.
Below, firstly, the framework’s terminology for counting methods is introduced (Gauffriau
et al., 2007). Secondly, the mathematical properties of the counting methods are presented
(Gauffriau et al., 2007). And lastly, assumptions made about the mathematical properties are
discussed to enable the use of the framework in the present review.
Terminology
Counting method
A “counting method is defined by the choice of basic unit [of analysis], object [of study], and
score function” (Gauffriau et al., 2007, p. 178; square brackets added).
“Objects of study,” “basic units of analysis,” and “score function” are described below.
Table 2 illustrates how five score functions work in counting methods, where countries are
both units of analysis and objects of study. The basic units of analysis are credited, and the
objects of study are scored by collecting the credits from the basic units of analysis assigned to
the object of study. For the illustration, the table presents a publication with three addresses: one
from Country X and two from Country Y.
I restrict the literature search to unique score functions rather than unique counting
methods introduced into the bibliometric research literature. One score function can be applied
in several counting methods. That is, score functions can be used with different combinations of
basic units of analysis and objects of study. This means that a score function can describe a class
of counting methods. Cependant, from the literature search, it is clear that score functions are
introduced as counting methods with specific basic units of analysis and objects of study, often
authors (i.e., at the microlevel). Therefore, the term counting method is used for the score
functions identified in the literature search.
Objects of study and basic units of analysis
Objects of study are the objects presented in the results of bibliometric analyses, such as
researchers, institutions, or countries. Objects of study can be found in publications, but they
may also be objects not directly visible in the publication, such as unions of countries (e.g., the
European Union (EU) and the United Kingdom).
Basic units of analysis are found in publications.
Objects of study are scored by collecting credits from the basic units of analysis. It is common
to find bibliometric analyses in which the basic units of analysis and the objects of study are at
the same aggregation level, such as authors (microlevel). Cependant, this is not the case in the
calculation of the score for the EU as an object of study. Credits are given to countries
belonging to the EU; thus, countries are the basic units of analysis. Hence, the objects of study
and the basic units of analysis are at different aggregation levels: unions of countries (supralevel)
and countries (macrolevel), respectively.
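A minimal sketch of the EU example, assuming an illustrative (and deliberately incomplete) membership set and complete counting at the country level:

```python
# Countries are the basic units of analysis; the EU, a union of countries,
# is the object of study. The membership set is illustrative, not complete.
EU_MEMBERS = {"DE", "FR", "DK"}

# One publication with addresses from three countries, complete counting:
# each country (basic unit) present in the publication is credited 1.
country_credits = {"DE": 1, "DK": 1, "US": 1}

# The object of study collects the credits of its basic units of analysis.
eu_score = sum(credit for country, credit in country_credits.items()
               if country in EU_MEMBERS)
print(eu_score)  # -> 2 (DE and DK count; US is not an EU member)
```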
Table 2. Illustration of five score functions applied in counting methods with countries as units of analysis and objects of study
Score function             Address 1 (Country X)   Address 2 (Country Y)   Address 3 (Country Y)   Score, Country X   Score, Country Y
Complete                   1 credit                1 credit                1 credit                1                  2
Complete-fractionalized    1/3 credit              1/3 credit              1/3 credit              1/3                2/3
Straight                   1 credit                0 credit                0 credit                1                  0
Whole                      1 credit                1/2 credit              1/2 credit              1                  1
Whole-fractionalized       1/2 credit              1/4 credit              1/4 credit              1/2                1/2
Score function
A score function describes how the objects of study are scored. The basic units of analysis are
credited individually before the objects of study collect the credits. Five common score func-
tions are presented below:
• Complete
A credit of 1 is given to each basic unit of analysis in a publication. An object of study
collects the credits from the basic units of analysis assigned to the object of study.
• Complete-fractionalized
A credit of 1/n is given to each basic unit of analysis where n is the number of basic units
of analysis in a publication. An object of study collects the credits from the basic units
of analysis assigned to the object of study.
• Straight
A credit of 1 is given to the basic unit of analysis ranked first in a publication. All other
basic units of analysis in the publication are credited 0. An object of study collects the
credits from the basic units of analysis assigned to the object of study.
Instead of first authors (i.e., the basic unit of analysis ranked first in the publication),
last authors or reprint authors can also be credited (for examples, see Gauffriau et al.,
2007, p. 676). The review does not discuss these alternatives further.
• Whole
A credit of 1 is given to each basic unit of analysis, assigned one-to-one to a unique object of
étude, in a publication. If a unique object of study is represented by more basic units of anal-
ysis in a publication, these basic units of analysis share 1 credit in whatever way. An object of
study collects the credits from the basic units of analysis assigned to the object of study.
• Whole-fractionalized
A credit of 1/m is given to each basic unit of analysis, assigned one-to-one to a unique
object of study, where m is the number of unique objects of study related to a publica-
tion. If a unique object of study is represented by more basic units of analysis in a pub-
lication, these basic units of analysis share 1/m credit in whatever way. An object of
study collects the credits from the basic units of analysis assigned to the object of study.
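Under the assumptions that address order determines rank (for straight counting) and that shared whole-counting credit is split equally, the five score functions can be condensed into one sketch that reproduces the Country X / Country Y example of Table 2 (function and variable names are illustrative):

```python
from collections import Counter

def scores(addresses, method):
    """Score objects of study for one publication.

    addresses: ordered list of objects of study, one per basic unit of
    analysis (e.g., the country of each address). Returns {object: score}.
    """
    n = len(addresses)           # number of basic units of analysis
    m = len(set(addresses))      # number of unique objects of study
    counts = Counter(addresses)  # basic units per object of study
    if method == "complete":
        return {o: float(c) for o, c in counts.items()}
    if method == "complete-fractionalized":
        return {o: c / n for o, c in counts.items()}
    if method == "straight":
        return {o: (1.0 if o == addresses[0] else 0.0) for o in counts}
    if method == "whole":
        return {o: 1.0 for o in counts}      # shared credit sums to 1
    if method == "whole-fractionalized":
        return {o: 1.0 / m for o in counts}  # shared credit sums to 1/m
    raise ValueError(method)

pub = ["X", "Y", "Y"]  # one address from Country X, two from Country Y
for method in ["complete", "complete-fractionalized", "straight",
               "whole", "whole-fractionalized"]:
    print(method, scores(pub, method))
```

Running the sketch reproduces the scores of the worked example: for instance, complete counting gives Country X a score of 1 and Country Y a score of 2, while whole-fractionalized counting gives each country 1/2.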
When the terminology is reduced to full and fractional counting, the differences between
complete and whole score functions are not immediately visible. Both are called full counting.
Neither are the differences between complete-fractionalized, straight, and whole-fractionalized
score functions. All three are variations of fractional counting.
A note on terminology
In measure theory, which is the theoretical basis for Framework 1, the term normalized is used
(Halmos, 1950, p. 171) for the property where the credit of 1 is shared (i.e., divided
among the basic units of analysis of a publication). The present review uses the alternative
term fractionalized because this has become the norm in the bibliometric research literature
discussing counting methods. The term normalized typically refers to field-normalization
(Waltman, 2016, secs. 6 et 7).
Mathematical properties
The five score functions above have definitions based on five mathematical properties
introduced below. Table 3 shows how the mathematical properties form score functions and,
thus, classes of counting methods (Gauffriau et al., 2007, p. 198). Detailed explanations follow
Quantitative Science Studies 940
Counting methods introduced into the bibliometric research literature 1970–2018
Table 3. Decision tree for the different score functions and classes of counting methods1

Classes of counting methods described in the literature | Defined for all objects | Based on a fixed crediting scheme | Additive | Rank-independent | Fractionalized
Complete | Yes | Yes | Yes | Yes | No
Complete-fractionalized | Yes | Yes | Yes | Yes | Yes
Straight | Yes | Yes | Yes | No | Yes
Whole | Yes | No | No | Not applicable | No
Whole-fractionalized | No | No | No | Not applicable | Yes
below the table. The five mathematical properties are used to form assumptions, which are
necessary for the analysis undertaken in relation to RQ 2:
• Defined for all objects/not defined for all objects
All classes of counting methods in Table 3 except whole-fractionalized counting are
defined for all objects of study. To test whether a counting method is defined for all
objects of study, some of the objects of study can be merged to form a union. If this
does not change the score for the objects of study not included in the union, then the
score function is defined for all objects of study. The United Kingdom as an object of
study can be used as an illustration. To find all publications affiliated with the United
Kingdom in Web of Science, it is necessary to search for publications from England,
Scotland, Wales, and Northern Ireland. Take a publication with 10 unique country
affiliations in which the UK is represented by only one of England, Scotland, Wales,
or Northern Ireland. Following whole-fractionalized counting, the score for each country
affiliated with the publication is 1/10. Now take another publication, again with 10
unique country affiliations. In this publication, the UK is represented by three countries,
such as England, Scotland, and Wales. In this case, the three countries are merged, and
the score for each country affiliated with the publication becomes 1/8.
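The union test above can be sketched in a few lines. This is an illustration only; the seven co-affiliated countries are hypothetical placeholders.

```python
# Union test for "defined for all objects", sketched on the UK example:
# a publication with 10 unique country affiliations.
def whole_fractionalized(countries):
    m = len(set(countries))
    return {c: 1 / m for c in set(countries)}

other = [f"Country{i}" for i in range(7)]  # hypothetical co-affiliated countries

pub = ["England", "Scotland", "Wales"] + other  # 10 unique countries
scores_before = whole_fractionalized(pub)       # each country gets 1/10

# Merge England, Scotland, and Wales into the union "UK":
merged = ["UK" if c in {"England", "Scotland", "Wales"} else c for c in pub]
scores_after = whole_fractionalized(merged)     # only 8 unique objects remain

# The union changes the score of objects NOT included in it (1/10 -> 1/8),
# so whole-fractionalized counting is not defined for all objects of study.
```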
• Based on a fixed crediting scheme/not based on a fixed crediting scheme
All classes of counting methods in Table 3 except whole and whole-fractionalized
counting have fixed crediting schemes. Whole and whole-fractionalized counting are
not based on fixed crediting schemes, as a change of objects of study may also change
the credits given to basic units of analysis. If the objects of study and the basic units of
1 (Gauffriau et al., 2007, p. 189). The term “normalized” is changed to “fractionalized” and the column “In
Section” is removed. Reprinted by permission from Springer Nature Customer Service Centre GmbH:
Springer Nature, Scientometrics, Publication, cooperation and productivity measures in scientific research,
Marianne Gauffriau et al., 2007.
analysis are institutions, then unique institutions in the affiliation section of a publication
will be credited. If the basic units of analysis are kept and the objects of studies changed
to countries, then unique countries in a publication will be credited via their institutions.
If more than one institution from a country contributes to the publication, then the
institutions share the credit for that country. Thus, the basic units of analysis cannot
be credited independently of the objects of study. If a counting method is based on a
fixed crediting scheme, then the counting method is additive (see next item).
• Additive/nonadditive
Complete, complete-fractionalized, and straight counting are additive. The score for the
objects of study can be calculated via credits to basic units of analysis at the same
aggregation level (for example, the macrolevel) or to basic units of analysis at lower
aggregation levels (for example, the meso- or microlevel), and the score will remain the same
given that there is a one-to-one relation between the aggregation levels. If countries
are objects of study, then it makes no difference whether the basic units of analysis
are institutions or countries, provided that each address in the affiliation section of a
publication has only one institution and one country. If a counting method is additive,
then the counting method is defined for all objects (see first item).
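The additivity test can be sketched as follows, using complete-fractionalized counting. The addresses are hypothetical; each address carries one institution and one country, giving the one-to-one relation between aggregation levels that the property requires.

```python
from collections import Counter

# Additivity sketch: country scores are the same whether credits go first to
# institutions and are then aggregated, or directly to countries.
addresses = [("Uni A", "DK"), ("Uni B", "DK"), ("Uni C", "UK")]
n = len(addresses)

# Route 1: credit institutions, then aggregate institution scores to countries.
institution_scores = Counter()
for institution, _ in addresses:
    institution_scores[institution] += 1 / n
via_institutions = Counter()
for institution, country in set(addresses):
    via_institutions[country] += institution_scores[institution]

# Route 2: credit countries directly.
direct = Counter()
for _, country in addresses:
    direct[country] += 1 / n

assert via_institutions == direct  # additive: both routes give DK 2/3, UK 1/3
```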
• Rank-independent/rank-dependent
Complete and complete-fractionalized counting are rank independent. The order of the
basic units of analysis (for example, the order of countries in the affiliation section of a
publication) does not influence how the basic units of analysis are credited. All basic
units of analysis get the same credit. Straight counting is rank dependent because only
the first basic unit of analysis in the affiliation section of a publication is credited. All other
basic units of analysis get 0 credit. This property of rank independency/rank dependency is not
applicable to whole and whole-fractionalized counting, as these counting methods are
not based on fixed crediting schemes. For example, if countries are the objects of study,
then for a publication with 10 country affiliations, where affiliations two, six, and
seven are Denmark, the credit can be attributed to the affiliation ranked second, sixth, or
seventh in whatever way. Thus, rank dependency cannot be applied. Neither can rank
independency be applied (i.e., where all basic units of analysis receive the same credit).
• Fractionalized/nonfractionalized
Complete-fractionalized, straight, and whole-fractionalized counting are fractionalized
because, with these methods, the basic units of analysis in a publication share a total
credit of 1. The rationale is that a publication equals 1 credit. Complete and whole
counting are not fractionalized (i.e., the credits for the basic units of analysis of a
publication can sum to more than 1). Note that fractionalized and additive are two different
properties. For example, whole-fractionalized counting is fractionalized and nonadditive,
whereas complete counting is nonfractionalized and additive.
In the review, the use of the framework with these five mathematical properties to categorize
counting methods incorporates the assumptions below.
Assumptions about mathematical properties for counting methods
The analyses presented in the review focus on two of the five properties: rank independent/
rank dependent and fractionalized/nonfractionalized. The following assumptions explain why
the review focuses on these two properties.
As already mentioned, score functions are introduced into the bibliometric research litera-
ture as counting methods, often with authors as basic units of analysis and objects of study
(i.e., at the microlevel). Without information about how the score functions work at, for
example, the meso- or macrolevel, it is difficult to decide the score functions’ status for the
first three mathematical properties: defined for all objects/not defined for all objects, based on
a fixed crediting scheme/not based on a fixed crediting scheme, and additive/nonadditive. For
example, complete-fractionalized and whole-fractionalized counting differ regarding the three
properties (see Table 3), but at the microlevel, the calculations of scores are identical. At other
aggregation levels, the calculations differ for the two counting methods.
Using Framework 1, however, the first three mathematical properties can help when making
assumptions about the counting methods at the microlevel that use rank to determine
credits for the basic units of analysis. As mentioned in the introduction to the five mathematical
properties, counting methods with rank-dependent score functions have a fixed crediting
scheme. If based on a fixed crediting scheme, the score functions are additive. If additive,
the score functions are defined for all objects.
For score functions introduced as counting methods at the microlevel that are not rank
dependent, it is difficult to decide if the score function is rank independent (for example,
complete and complete-fractionalized counting) or, rather, if the rank independent/rank
dependent property is not applicable (for example, whole and whole-fractionalized counting).
In the review, such counting methods are assumed to be rank independent and, thus, based
on a fixed crediting scheme, additive, and defined for all objects.
For all counting methods included in the review, the status for the property fractionalized/
nonfractionalized is explicitly evident in the studies that introduce the counting methods.
Based on the above assumptions, the results of the present review focus on the properties
rank dependent/rank independent and fractionalized/nonfractionalized. Thus, the categorization
of counting methods is rank dependent and fractionalized (see Section 3.2.1), rank
dependent and nonfractionalized (see Section 3.2.2), rank independent and nonfractionalized
(see Section 3.2.3), and rank independent and fractionalized (see Section 3.2.4).
2.2.2. Framework 2: Four groups of arguments for choosing a counting method for a study
“Framework 2: Four groups of arguments for choosing a counting method for a study”
proposes a categorization of arguments for choosing a counting method for a study. The
categorization is developed from the arguments for counting methods in a sample of 32 studies
published in 2016 in peer-reviewed journals and supplemented with arguments for counting
methods from three older studies (Gauffriau, 2017). The name “Framework 2: Four groups of
arguments for choosing a counting method for a study” is introduced for the context of the
present review.
I use Framework 2 to categorize counting methods according to the arguments for why a
counting method is introduced into the bibliometric research literature. The studies found in
relation to RQ 1 that introduce counting methods argue for why the new counting methods are
needed. These arguments are assigned to the four groups of arguments in Framework 2. This
use is a slight modification compared to the original intention of Framework 2, in which the
arguments relate to choosing a counting method for a study, not to introducing a new counting
method. However, I assume that a counting method is introduced with the aim of being used
in other studies. Thus, the arguments for the introduction and for the use of a counting
method are seen as compatible.
Limited resources made it impossible to engage two people to assign arguments to Groups 1–4,
which would allow a calculation of intercoder reliability. Instead, the arguments and assignment
Table 4. Categorization of arguments for counting methods for publication and citation indicators2

Category | Counting method(s)
Group 1: The indicator measures the (impact of )… |
… participation of an object of study | Whole
… production of an object of study | Whole, complete-fractionalized
… contribution of an object of study | Whole, complete-fractionalized (rank independent and rank dependent)
… output/volume/creditable to/performance of an object of study | Whole, complete-fractionalized
… the role of authors affiliated with an object of study | Straight, last author, reprint author
Group 2: Additivity of counting method |
Additivity of counting method | Whole, complete-fractionalized
Group 3: Pragmatic reasons |
Availability of data | Whole, straight, reprint author
Prevalence of counting method | Whole
Simplification of indicator | Whole
Insensitive to change of counting method | Whole
Group 4: Influence on/from the research community |
Incentive against collaboration | Complete-fractionalized
Comply with researchers’ perceptions of how their publications and/or citations are counted | Whole
to groups are reported in the Supplementary Material, Section 3, to make the categorization as
transparent as possible.
Table 4 presents the categorization with Groups 1–4 (Gauffriau, 2017, p. 679). Descriptions
of the four groups follow Table 4.
The four groups of arguments for choosing a counting method:
• Group 1: The indicator measures the (impact of ) contribution/participation/… of an
object of study
The arguments for counting methods relate to the concept that the study attempts to
measure by using the counting method to design an indicator. For example, some
studies in the sample argue that whole counting is suitable for indicators measuring
the object of study’s participation in a research endeavor.
• Group 2: Additivity of counting method
The arguments for counting methods relate to mathematical properties of the counting
method itself: namely, to ensure that the counting method is additive and to avoid
double counting of publications.
2 (Gauffriau, 2017, p. 679)—postprint version: https://arxiv.org/abs/1610.02547v2.
• Group 3: Pragmatic reasons
The conceptual/methodological arguments included in Groups 1 and 2 are not taken
into account; instead, pragmatic reasons guide the choice of a counting method.
Whole counting is quite common in Group 3. This may be explained by this counting
method being the readily available approach in the databases often used to calculate
bibliometric indicators (i.e., Web of Science and Scopus). In these databases, a search
for publications from Denmark returns the number of publications in which Denmark
appears at least once in the list of affiliations. This corresponds to whole counting.
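The correspondence between an at-least-once affiliation search and whole counting can be sketched as follows; the publication lists are invented for the illustration.

```python
# Sketch: a database-style search for Denmark ("DK") counts a publication once
# if DK appears at least once among its affiliations. This equals the sum of
# the publications' whole-counting scores for DK.
pubs = [["DK", "DK", "UK"], ["DK"], ["UK", "SE"]]

search_hits = sum(1 for affiliations in pubs if "DK" in affiliations)
whole_count = sum(1 if "DK" in affiliations else 0 for affiliations in pubs)

assert search_hits == whole_count == 2
```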
• Group 4: Influence on/from the research community
The arguments in Group 4 are not related to what an indicator measures (i.e., as in
Group 1), but rather to the impact of the indicator on the research community under
evaluation (and vice versa). For example, one of the studies analyzed to create the
framework argued for whole counting when the objects of study are researchers because
a researcher should have 1 credit for each publication in his or her publication list
(Waltman & Van Eck, 2015, p. 891). The argument is that this is how a researcher
intuitively counts his or her publications, and that this intuitive counting approach should be
reflected in the evaluation.
In the review, the categorization of counting methods uses all four groups of arguments.
The analysis focuses on arguments for why the counting methods are introduced into the bib-
liometric research literature.
2.3. RQ 3: Internal Validity of Counting Methods
RQ 2 focuses on shared characteristics of the counting methods identified by RQ 1. RQ 3 adds
information about the assessment of internal validity of the counting methods by the studies
that introduce the counting methods.
RQ 3: Which methods and elements from the studies that introduce the counting
methods identified by RQ 1 can be used to assess the internal validity of those counting
méthodes?
To answer RQ 3, methods for and elements of the assessment of internal validity of the
counting methods in the studies that introduce the counting methods (RQ 1) are identified.
There are no standards commonly applied for such assessments of internal validity, and
only a few of the studies that introduce counting methods explicitly include assessments of
the internal validity of the counting methods. However, all the studies include analyses of
the counting methods. These analyses set out methods that may be used to assess the
internal validity of the counting methods. In addition, the counting methods may have
elements that themselves can indicate weak internal validity of the counting methods.
It is not possible to evaluate in a consistent and manageable manner how well these
methods and elements work as assessments of internal validity of counting methods.
Instead, RQ 3 evaluates how well each of the methods and elements corresponds to three
internal validity criteria for well-constructed bibliometric indicators: adequacy, sensitivity,
and homogeneity (Gingras, 2014, pp. 112–116).
Internal validity is defined as follows: “… I concentrate on criteria directly related to the
internal validity of the indicator evaluated through its adequacy to the reality behind the
concept it is supposed to measure” (Gingras, 2014, p. 112). Internal validity is one facet
of the concept “validity,” which in this review focuses on the validity of the counting
method itself and not on external conditions when applying the counting method. Other
facets of validity take external conditions into account, such as sampling error,
operationalization, and population properties (Fidler & Wilcox, 2018, sec. 1.2). I analyze studies that
introduce counting methods and, therefore, do not take validity related to external conditions
into account.
In the review, the three internal validity criteria are used on counting methods instead of
indicators. Counting methods, however, function as core elements or the only element in
bibliometric indicators. Thus, if a counting method does not comply with the criteria for internal
validity, the same conclusion could be reached for bibliometric indicators using that counting
method.
Guidance is provided for how to apply the criteria at an overarching level (Gingras, 2014,
pp. 112–116). However, implementation in a specific case, such as the present review,
requires several choices, as described below. Apart from the study introducing the three criteria,
two studies have applied the three criteria to evaluate bibliometric indicators (Wildgaard,
2015, sec. 6.3, 2019, sec. 14.4.1). The present review is the first to use the three criteria to
evaluate counting methods.
Three validity criteria for well-constructed bibliometric indicators:
• Adequacy
According to the adequacy criterion, an indicator should be an adequate proxy for the
object that the indicator is designed to measure. The indicator and the object should
have the same characteristics, such as order of magnitude. The relationship between
object and indicator is tested via an independent and accepted measure for the object
(Gingras, 2014, pp. 112–115).
The counting methods identified by RQ 1 are assigned, using Framework 2, to
arguments for the introduction of the counting methods. In the implementation of the
adequacy criterion, the methods below may be used to document that the counting methods
are adequate proxies for their aims, that is, the arguments for the counting methods:
◦ Compare to other counting methods or bibliometric indicators: The scores obtained by
the counting method are compared to scores obtained by existing counting methods or
bibliometric indicators when applied to empirical publication sets or publication sets
constructed for exemplification. Some publication sets are as small as one publication.
◦ Principles to guide the definitions of the counting methods: A list of principles is
stated explicitly and used in the definition of the counting method.
◦ Quantitative models for distributions of scores: A quantitative model is used to test
whether the counting method gives scores, fitting the model, to the objects of study.
◦ Surveys or other empirical evidence: Surveys or other empirical evidence about
coauthorship practice are used to define target values for the credits for basic units of analysis.
◦ Compare groups of objects of study: The scores obtained by the counting method are
compared for groups of objects of study with different characteristics.
There are more methods for the assessment of the adequacy of a counting method, but
each of these methods was found in only one study and, therefore, is not included in the
list above. One example is the comparison of scores obtained by the counting method
where the order of the authors in a publication is kept versus where the order is shuffled
(Trueba & Guerrero, 2004, Fig. 4).
As mentioned, the present review’s analysis does not assess how well these methods
work in the studies that introduce counting methods. Instead, the analysis assesses
whether the methods are appropriate to test the counting methods as adequate proxies
for their aims.
• Sensitivity
According to the sensitivity criterion, an indicator should reflect changes over time in
the object that the indicator is designed to measure (Gingras, 2014, pp. 115–116).
Section 1 explains that the increasing average number of coauthors per publication
is a driver behind the discussion about counting methods. In relation to the sensitivity
criterion, two elements are defined. Where present, these elements highlight counting
methods that are less flexible to an increasing number of authors per publication:
◦ Time-specific evidence: Surveys or other empirical evidence about coauthorship
practice that are not updated over time and, therefore, do not reflect changes over time in
the average number of coauthors per publication.
◦ Fixed credits for selected basic units of analysis: A fixed share of the credit for the selected
basic unit of analysis, such as the first author. As the number of authors per publication
increases, such fixed credits leave less credit for the nonselected authors of the publication.
This is only true for fractionalized counting methods, where 1 credit is shared
among the authors of a publication. Therefore, the analysis considers this element in
relation to fractionalized counting methods only.
Counting methods with one or both of the above elements are less flexible to an increasing
number of authors per publication and, therefore, do not comply with the sensitivity
criterion.
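The fixed-credits element can be illustrated with a short sketch. The 0.5 first-author share is a hypothetical value chosen for the illustration, not a scheme from a specific study.

```python
# Sketch of the "fixed credits" element: in a fractionalized scheme that fixes
# the first author's share at 0.5 (hypothetical), the remaining authors split
# the other 0.5, so each coauthor's credit shrinks as author counts grow.
def credit_per_coauthor(n_authors, first_author_share=0.5):
    if n_authors == 1:
        return 1.0  # a sole author keeps the full credit of 1
    return (1 - first_author_share) / (n_authors - 1)

# 2 authors -> 0.5 per coauthor; 50 authors -> roughly 0.01 per coauthor,
# while the first author's share stays fixed at 0.5.
shrinking = [credit_per_coauthor(n) for n in (2, 5, 50)]
```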
• Homogeneity
According to the homogeneity criterion, an indicator should measure only one
dimension and avoid heterogeneous indicators, such as the h-index, which combines
publication and citation counts in one indicator. When a heterogeneous indicator
increases/decreases, it is not immediately clear whether one or more elements cause
the change. Thus, the indicator becomes difficult to interpret (Gingras, 2014, p. 116).
Some of the counting methods are homogeneous, whereas others are complex, mixing
many elements. A mix of elements can make it difficult to instantly understand how the
scores of the counting method are obtained and which elements account for how much
of the score. The implementation of the homogeneity criterion investigates elements that
work against the criterion:
◦ Parameter values selected by bibliometrician: The equation for the counting method
has parameter(s) where the bibliometrician selects the values of the parameter(s) pour
each analysis individually.
◦ External elements: The equation for the counting method is dependent on elements
external to the publications that are included in an analysis (for example, an author’s
position as principal investigator, an author’s h-index, or an author’s number of
publications).
◦ Conditional equations: To calculate credits for all basic units of analysis, a conditional
equation is needed. One part of the equation is dedicated to specific basic units of
analysis or specific publications (for example, first authors or publications with local authors),
and another part of the equation is dedicated to the remaining basic units of analysis or
publications.
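A conditional equation of the kind flagged here can be sketched as follows. The scheme and its 0.4 parameter are hypothetical, constructed only to show two of the elements above in one place: a branch for a selected basic unit of analysis and a parameter value selected by the bibliometrician.

```python
# Sketch of a conditional crediting equation: one branch for first authors,
# another for the remaining authors; first_share is a bibliometrician-chosen
# parameter (hypothetical value 0.4).
def conditional_credit(rank, n_authors, first_share=0.4):
    if rank == 1:
        return first_share                       # branch for first authors
    return (1 - first_share) / (n_authors - 1)   # branch for the remainder

credits = [conditional_credit(rank, 4) for rank in range(1, 5)]
# first author gets 0.4; each of the three remaining authors gets about 0.2;
# the credits still sum to 1, but two dimensions are mixed in one method.
assert abs(sum(credits) - 1) < 1e-9
```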
Counting methods with one or more of the above elements are heterogeneous, as the elements
lead to several dimensions being present in the same counting method.
Of the three validity criteria, the homogeneity criterion is the most difficult to apply to
counting methods. A mix of different elements that have the same measure unit does not
count as heterogeneous but as composite (Gingras, 2014, p. 122). This said, the difference
between heterogeneous and composite is described by Gingras using an example, which
makes an exact interpretation difficult. It is a matter for debate whether some of the
conditional equations included in the review’s analysis use the same measure unit (for example,
author contributions for first authors and other authors, respectively) and, therefore, whether
these equations identify truly heterogeneous counting methods.
2.4. RQ 4: Use of the Counting Methods in Research Evaluations
RQ 4 investigates to what extent the counting methods identified by RQ 1 are used in re-
search evaluations. The research evaluations should comply with the design of the counting
methods in the studies that introduce the counting methods. If one or more of the character-
istics identified under RQ 2 change from the introduction to the use of the counting methods,
then the introducing study’s guidance about how to use the counting method may be
compromised.
RQ 4: To what extent are the counting methods identified by RQ 1 used in research
evaluations and to what extent is this use compliant with the definitions in the studies
that introduce the counting methods?
RQ 4 is addressed through a literature search aimed at identifying research evaluations that
use the counting methods identified by RQ 1. The literature search is restricted to peer-
reviewed studies in English. The peer-review criterion ensures some level of quality check
and increases the likelihood that researchers have authored the studies. Thus, reports from
university management, PowerPoint presentations, sales materials, etc. are not included.
The literature search does not distinguish between studies where the research evaluations
are the primary result and studies where the research evaluations are part of the results.
Counting methods can be used in many contexts, such as in the development of new counting
methods or investigations of the mathematical properties of the counting methods. The
focus for the present review is research evaluations covering a minimum of 30 researchers
where researchers are the objects of study. If institutions or countries are the objects of study,
the institutions or countries cannot be represented by fewer than 30 researchers. Counting
methods that are difficult to apply to larger publication sets are probably less well suited
for research evaluations. In other words, the emphasis is on scalable counting methods.
To find studies that use the counting methods identified by RQ 1, citations in Google Scholar
to the counting methods are searched. As discussed in Section 2.1, Google Scholar covers more
publications relevant to the review than either Web of Science or Scopus. In addition, to avoid
research evaluations with almost identical implementations of a counting method, the same
author cannot represent several research evaluations for the same counting method. Some
counting methods are used in several studies by the same author (for example, see Abramo,
D’Angelo, & Rosati, 2013, p. 201; Abramo, Aksnes, & D’Angelo, 2020, p. 7). Including all of
these research evaluations would give this implementation more weight than implementations
represented by one research evaluation.
The number of research evaluations that use each counting method is reported using the
following intervals: Zero research evaluations use the counting method, one to three research
evaluations use the counting method, and four or more research evaluations use the counting
method.
For counting methods with four or more research evaluations, samples of five research
evaluations, if available, are selected randomly for inclusion in an analysis of the use of the
counting methods. With five research evaluations per counting method, it is possible to get an
indication of whether or not the characteristics from the introduction of the counting methods,
as identified under RQ 2, are kept in the research evaluations. With one to three research
evaluations per counting method, the results of the analysis would not be sufficiently robust.
3. RESULTS
Section 3 reports the results for RQs 1–4 based on the methods presented in Section 2.
Sections 3.1–3.4 present the results for each of the RQs 1–4. The Supplementary Material,
Section 1, offers a schematic overview of results under all RQs.
3.1. RQ 1: Thirty-Two Unique Counting Methods in the Bibliometric Research Literature
RQ 1: How many unique counting methods are there and when were they introduced
into the bibliometric research literature?
Four score functions are introduced prior to 1970 and fall outside the time frame covered by the
literature search. Recall that score functions are counting methods where different basic units of
analysis and objects of study can be applied. The four score functions are complete,
complete-fractionalized, straight, and whole counting (see definitions in Section 2.2.1). The review uses
these pre-1970 score functions as reference points in some of the following analyses.
Beyond the four pre-1970 score functions, another 32 unique score functions are identified.
These were introduced into the bibliometric research literature during the period 1981–2018³. There were no unique score functions introduced during the period 1970–1980. The majority, or 17, of the score functions were introduced in the most recent decade (2010–2018), as illustrated in Figure 1.
All the score functions are introduced as counting methods, which are score functions with specific units of analysis and objects of study. Thus, the term counting methods is used for the score functions identified in the literature search.
3.2. RQ 2: Categorizations of Counting Methods According to Frameworks 1 and 2
RQ 2: To what extent can the counting methods identified by RQ 1 be categorized according to selected frameworks that focus on characteristics of the counting methods?
The RQ 2 categorizations build on two frameworks. The frameworks are independent of each
other. In the presentation of the results, Framework 1 with selected mathematical properties takes
priority. This framework can be seen as a further development of the binary division into full and
3 Two counting methods were published online first in 2018 and, thus, included in the period covered by the review. The two studies introducing the counting methods were assigned to journal issues in 2019 (Bihari & Tripathi, 2019; Steinbrüchel, 2019).
Quantitative Science Studies
Figure 1. Number of unique counting methods introduced into the bibliometric research literature 1970–2018.
fractional counting—a division often seen in discussions about counting methods. Framework 2 describes arguments for choosing a counting method. With Framework 2 as secondary, the framework adds extra information to the categories created via Framework 1. However, it is possible for either of the frameworks to be given priority or for the categorizations of the counting methods to be presented separately for each framework. To support different categorizations, the Supplementary Material, Section 1, simply lists all 32 counting methods chronologically.
In the presentation below, beginning with the largest category, the counting methods are
divided into four categories based on Framework 1: rank dependent and fractionalized
(Section 3.2.1), rank dependent and nonfractionalized (Section 3.2.2), rank independent
and nonfractionalized (Section 3.2.3), and rank independent and fractionalized (Section 3.2.4).
Two counting methods do not fit these properties (Section 3.2.5) and two arguments for introducing counting methods do not currently comply with Framework 2 (Section 3.2.6).
Most of the counting methods have a name. Counting methods without a name are named in the review after the author(s) of the study introducing them (i.e., [author, publication year]).
3.2.1. Rank-dependent and fractionalized counting methods
Twenty-one of the 32 counting methods identified by RQ 1 are rank dependent and fractionalized, meaning that the basic units of analysis in a publication share 1 credit but do not receive equal shares. Among the pre-1970 counting methods, straight counting has these properties.
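The rank-dependent and fractionalized properties can be made concrete with harmonic counting, one of the methods in this category (Hagen, 2008). As commonly described in the literature, the author at rank i receives a share proportional to 1/i, normalized so that all shares sum to 1; the sketch below follows that common description and is not taken verbatim from this review.

```python
def harmonic_credits(n_authors):
    """Harmonic counting sketch: the author at rank i (1-based) receives
    a credit proportional to 1/i, normalized so all shares sum to 1
    (rank dependent and fractionalized)."""
    weights = [1 / rank for rank in range(1, n_authors + 1)]
    total = sum(weights)
    return [w / total for w in weights]

credits = harmonic_credits(10)  # 10-author example, as in Figure 2
```

With 10 authors the first author receives roughly a third of the credit and the tenth author roughly 3%, illustrating how sharply rank-dependent methods can differentiate coauthors.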
In addition to the rank-dependent counting methods, the results include counting methods
where the credits for basic units of analysis are shared unevenly based on characteristics other
than rank, such as an author’s position as principal investigator, an author’s h-index, or an
author’s number of publications.
Figure 2 and List 1 present the 21 counting methods. In Figure 2, a 10-author publication example illustrates the 14 counting methods⁴ where rank determines the credits for the basic
4 The counting method Weighted fractional output is in Figure 2 with its two versions: Intramural is used
for publications where first and last author are from the same institution. Otherwise, extramural is used
(Abramo et al., 2013, p. 201). In the counting method Network-based model, the bibliometrician must
select a distribution factor (Kim & Diesner, 2014). Two distribution factors (d = 0.25 and d = 0.59) are selected for the illustration in Figure 2.
Figure 2. How authors of a publication with 10 authors share the credit. Rank-dependent and fractionalized counting methods.
units of analysis. Figure 2 is followed by List 1, with seven counting methods where characteristics other than rank determine the credits. For the counting methods in List 1, more information than the number and rank of authors in a publication is needed to calculate the credits. This extra information is, for example, an author’s position as principal investigator, an author’s h-index, or an author’s number of publications. Thus, it is not possible to do a generic calculation for a publication with 10 authors and show the seven counting methods in Figure 2. Instead, List 1 describes these counting methods.
Eighteen of the 21 counting methods are defined with authors as both basic units of analysis
and objects of study. One counting method is defined with authors as basic units of analysis
and institutions as objects of study (Howard, Cole, & Maxwell, 1987, p. 976). One study,
which introduces two counting methods, has authors and countries as basic units of analysis
and objects of study (Egghe, Rousseau, & Van Hooydonk, 2000, p. 146).
The arguments for 18 of the 21 counting methods can be linked to Group 1 in Framework 2:
“The indicator measures the (impact of ) contribution/participation/… of an object of study.”
Furthermore, two of the 21 counting methods (Assimakis & Adam, 2010; Howard et al., 1987, p. 976) aim to measure productivity⁵, an approach that is not included but can
5 The term “productivity” is debated. Often, it is used as a simple concept, as is the case in the two referenced studies. This simple interpretation of productivity can be added to Group 1 in Framework 2. However, Abramo and D’Angelo (2014) argue for productivity as a complex concept requiring input and output indicators to calculate the productivity of the objects of study. They introduce a counting method relating to output (see Figure 2). The argument for their counting method is to measure author contributions (Abramo et al., 2013, p. 200). This argument is assigned to Group 1 in Framework 2: “The indicator measures the (impact of ) contribution/participation/… of an object of study.” Thus, in Abramo and D’Angelo’s interpretation of productivity the counting method is one of the steps in calculating productivity (Abramo & D’Angelo, 2014, pp. 1135–1136).
be added to Group 1 in Framework 2. The final study of the 21 studies argues “Credit is allocated among scientists based on their perceived contribution rather than their actual contribution” (Shen & Barabasi, 2014, p. 12,329). This argument is assigned to Group 4 in Framework 2: “Comply with researchers’ perceptions of how their publications and/or citations are counted.”
As mentioned above, in addition to the 14 counting methods comprising Figure 2, there are seven counting methods where the credits for basic units of analysis are shared unevenly based on characteristics other than rank. List 1 describes these counting methods.
All but one of the counting methods in List 1 were introduced after Framework 1 was published. In the framework, rank is determined based on the information in a publication (Gauffriau et al., 2007, pp. 179; 188). Thus, the definition of rank in the framework does not cover the counting methods where the credits are distributed based on characteristics other than rank. This review assumes credits distributed based on characteristics other than rank to be a variation of the property rank dependent. However, the counting methods in List 1 are defined with authors as both basic units of analysis and objects of study. It would require information about how the counting methods are defined at other aggregation levels to find out whether or not credits distributed based on characteristics other than rank can be seen as a variation of the property rank dependent for these counting methods.
In List 1, there are some examples of studies that add small changes to existing counting
méthodes. As discussed in Section 2.1, the review does not consider these studies as presenting
distinct representations of counting methods.
List 1: Fractionalized counting methods. The credits to the basic units of analysis are distributed based on characteristics other than rank.
• [Boxenbaum et al., 1987]
The credit of 1 for a publication is divided between the authors in such a way that the
senior author receives twice the credit of nonsenior authors (Boxenbaum, Pivinski, &
Ruberg, 1987, pp. 566–568).
• Pareto weights
The credit of 1 for a publication is divided between the authors. An author receives greater credit if the number of actual citations is more in line with the author’s average number of citations per publication (i.e., neither higher nor lower; Tol, 2011, pp. 292–293; 296–297).
Persson suggests a modification of Tol’s counting method where the weight assigned to
an author can change from one publication to the next (Persson, 2017). The review does
not discuss these alternatives further.
• Shapley value approach
The credit of 1 for a publication is divided between the authors according to their
Shapley value, a concept from game theory. An author’s weight is calculated by averaging the marginal contribution of the author in all possible coauthor combinations. The marginal contribution is based on the author’s number of citations or on other impact
scores for the author (Papapetrou, Gionis, & Mannila, 2011).
• Contribution to an article’s visibility, first approach
The credit of 1 for a publication is divided between the authors, with weights dependent on an indicator (for example, the h-index) calculated for each author (Egghe, Guns, & Rousseau, 2013, pp. 57–59).
• [Shen & Barabasi, 2014]
The credit of 1 for a publication is divided between the authors, with weights dependent
on the author’s share of authorships in the cocitation network of the publication and also
on the number of cocitations. The more publications and citations an author has in the
research field, the more credit will be assigned to her/him (Shen & Barabasi, 2014).
Other studies suggest modifications to Shen and Barabasi’s counting method. In one study, author ranks in the publications are taken into account (Wang, Guo et al., 2017); in another, publication years and whether or not publications are highly cited are taken into account (Bao & Zhai, 2017). The review does not discuss these alternatives further.
• Relative intellectual contribution
In publications where the authors state their contributions guided by the CRediT⁶ taxonomy, the types of contributions can be weighted and these weights credited to the contributing authors. In total, all author contributions to a publication sum to 1 (Rahman, Regenstein et al., 2017).
• [Steinbrüchel, 2019]
The credit of 1 for a publication is divided equally between those authors who are principal investigators. All other authors of the publication are credited 0 (Steinbrüchel, 2019, pp. 307–308).
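Several List 1 methods require per-author information beyond rank. As one sketch, the Shapley value approach can be implemented by averaging marginal contributions over all coauthor coalitions. The additive coalition value used below (the sum of individual impact scores) is a simplifying assumption for illustration only: with an additive value the Shapley shares reduce to proportional shares, whereas the cited approach derives coalition values from citations or other impact scores.

```python
from itertools import combinations
from math import factorial

def shapley_credits(impact):
    """Sketch of the Shapley value approach: each author's credit is the
    Shapley value of a coalition game among the coauthors, normalized so
    the publication's credits sum to 1. `impact` maps author -> impact
    score; the coalition value below is assumed additive for simplicity."""
    authors = list(impact)
    n = len(authors)

    def value(coalition):
        # Assumed coalition worth: sum of members' individual impact scores.
        return sum(impact[a] for a in coalition)

    shapley = {}
    for author in authors:
        others = [a for a in authors if a != author]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(coalition + (author,)) - value(coalition))
        shapley[author] = total
    grand_total = sum(shapley.values())
    return {a: v / grand_total for a, v in shapley.items()}

# Hypothetical impact scores for two coauthors.
credits = shapley_credits({"A": 4.0, "B": 1.0})  # A: 0.8, B: 0.2
```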
3.2.2. Rank-dependent and nonfractionalized counting methods
A much smaller category, with six counting methods, has the properties rank dependent and
nonfractionalized, meaning that the sum of credits for basic units of analysis in a publication
can sum to more than 1 credit and the basic units of analysis do not receive equal shares. The pre-1970 counting methods are not represented in this category.
In addition to the rank-dependent counting methods, the results include counting methods
where the credits for basic units of analysis are shared unevenly based on characteristics other
than rank, such as an author’s position as principal investigator, an author’s h-index, or an
author’s number of publications.
The six counting methods are defined with authors as basic units of analysis and objects of study.
The arguments for the introductions of four of the six counting methods are from Group 1 in Framework 2: “The indicator measures the (impact of ) contribution/participation/… of an object of study.” The two remaining counting methods (Ellwein, Khachab, & Waldman, 1989, p. 320) aim to measure productivity⁷, an approach that is not included in, but can be added to, Group 1 in Framework 2.
Figure 3 and List 2 present the six counting methods. Figure 3 shows five of the counting methods. An example with a 10-author publication provides a visual representation of the counting methods. For one of the counting methods, the credits are distributed based on characteristics other than rank, as discussed in relation to List 1. List 2, with only one item, describes this counting method.
6 CRediT is a taxonomy for describing the contributions made by authors to research publications: https://
casrai.org/credit/.
7 Ellwein et al. (1989) use the simple interpretation of the term productivity. See Footnote 5.
Figure 3. How authors of a publication with 10 authors share the credit. Rank-dependent and nonfractionalized counting methods.
List 2: Nonfractionalized counting method. Credits to the basic units of analysis are distributed based on characteristics other than rank.
• Contribution to an article’s visibility, second approach
The h-index (or another indicator) is calculated for each author of a publication and for
the union of the authors’ publications. An author receives a share of the credit for the
publication equal to her or his h-index divided by the h-index for the union. The sum of
credits to the authors of a publication may exceed 1 (Egghe et al., 2013, pp. 57–59).
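A minimal sketch of this calculation, assuming the per-author h-indices and the h-index of the union of the authors’ publications are already known (computing h-indices from raw publication data is outside the sketch):

```python
def visibility_credits(author_h_indices, union_h):
    """Sketch of 'contribution to an article's visibility, second approach':
    each author's credit is her/his h-index divided by the h-index of the
    union of the authors' publications. The credits are not normalized,
    so their sum may exceed 1 (nonfractionalized)."""
    return [h / union_h for h in author_h_indices]

# Hypothetical values: three authors with h-indices 6, 5, and 4,
# and an h-index of 8 for the union of their publications.
credits = visibility_credits([6, 5, 4], union_h=8)
```

Here 6/8 + 5/8 + 4/8 = 1.875, so the publication’s total credit exceeds 1, which is what places the method in the nonfractionalized category.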
3.2.3. Rank-independent and nonfractionalized counting methods
The next category includes three counting methods, which are rank independent and nonfractionalized. All basic units of analysis in a publication receive equally sized credits and the total credit for a publication can sum to more than 1. Among the pre-1970 counting methods, complete counting has these properties.
The three counting methods are defined with authors as basic units of analysis and objects
of study.
The first counting method aims to give a balanced representation of productivity across research disciplines (Kyvik, 1989, pp. 206–209). This type of argument is not yet included in Framework 2. See the Supplementary Material, Section 2.3, for a further analysis. For the two remaining counting methods (de Mesnard, 2017; Tscharntke, Hochberg, et al., 2007), the argument for the introduction of the counting method is assigned to Group 1 in Framework 2: “The indicator measures the (impact of ) contribution/participation/… of an object of study.”
Figure 4 provides a visual representation of the three counting methods. A 10-author publication is used as an example for the illustration. In Figure 4, the Equal Contribution method results in scores identical to scores obtained by complete-fractionalized counting. However, complete-fractionalized counting has no limit for how small a fraction of the credit, from a publication to a basic unit of analysis, can be. The Equal Contribution method gives a
Figure 4. How authors of a publication with 10 authors share the credit. Rank-independent and nonfractionalized counting methods.
minimum 5% of the credit from a publication to a basic unit of analysis; thus, unlike scores obtained by complete-fractionalized counting, the total credit for a publication can sum to more than 1⁸.
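The 5% floor can be sketched as follows; the handling of the floor (each share raised to the minimum, so totals can exceed 1) is our reading of the description above, not a definitive implementation of Tscharntke et al.’s method.

```python
def equal_contribution(n_authors, floor=0.05):
    """Sketch of the Equal Contribution method: every author receives an
    equal share of 1 credit, but never less than a 5% floor. With up to
    20 authors this matches complete-fractionalized counting; beyond that,
    the floor applies and the publication's total credit exceeds 1."""
    return [max(1 / n_authors, floor)] * n_authors

ten = equal_contribution(10)    # shares of 0.10 each, total 1
forty = equal_contribution(40)  # shares of 0.05 each, total 2
```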
3.2.4. Rank-independent and fractionalized counting methods
For completeness, the category of rank-independent and fractionalized counting methods is included. However, none of the 32 counting methods identified by RQ 1 belong to this category. Among the pre-1970 counting methods, complete-fractionalized counting has the properties of being rank independent and fractionalized. The basic units of analysis in a publication share 1 credit evenly.
3.2.5. Two counting methods do not comply with Framework 1
Two of the counting methods identified by RQ 1 do not comply with the selected properties
from Framework 1: rank dependent/rank independent and fractionalized/nonfractionalized.
These are the Online fractionation approach (Nederhof & Moed, 1993) and the Norwegian
Publication Indicator (NPI) (Sivertsen, 2016, p. 912). As documented below, there are different
reasons for why the two counting methods do not fit Framework 1.
The two counting methods are analyzed under RQs 3 and 4. These analyses are not affected by the counting methods not fitting Framework 1.
On-line fractionation approach
The description of Framework 1 in Section 2.2.1 mentions that whole counting and
whole-fractionalized counting do not comply with the property of being rank dependent or
rank independent. Whole-fractionalized counting is introduced in 1993, under the name
8 In the bibliometric research literature, the Danish Bibliometric Research Indicator is described with different calculations (Nielsen, 2017, p. 3; Schneider, 2009, p. 372; Wien, Dorch, & Larsen, 2017, pp. 905–907). Applying authors as basic units of analysis and objects of study, the calculation of the Danish Bibliometric Research Indicator overlaps with the Equal Contribution method with the modification that 10% of the credit is the minimum credit from a publication to a basic unit of analysis. However, the Danish Bibliometric Research Indicator has authors as basic units of analysis and institutions as objects of study. Thus, the calculation of the Danish Bibliometric Research Indicator differs from the Equal Contribution method (see a further discussion in Section 3.2.5).
On-line fractionation approach (Nederhof & Moed, 1993). As such, it is well documented
that the On-line fractionation approach does not comply with the Framework 1 property of
being either rank dependent or rank independent (Gauffriau et al., 2007, p. 188).
Nevertheless, the basic units of analysis and objects of study can be determined. The On-line fractionation approach is defined with countries as basic units of analysis and objects of study. The argument for the introduction of the counting method is assigned to Group 3 in Framework 2: “Pragmatic reasons.” The counting method is easier to use on larger publication sets compared to complete-fractionalized counting (Nederhof & Moed, 1993, p. 41).
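Since whole-fractionalized counting is the score function behind the On-line fractionation approach (see above), the calculation can be sketched under the usual definition of that score function: each distinct country on a publication receives an equal fraction 1/k, where k is the number of distinct countries. This definition is assumed from the framework literature rather than spelled out in this passage.

```python
def online_fractionation(author_countries):
    """Sketch of whole-fractionalized counting (the On-line fractionation
    approach): each distinct country receives 1/k credit, where k is the
    number of distinct countries, regardless of author counts per country."""
    distinct = sorted(set(author_countries))
    k = len(distinct)
    return {country: 1 / k for country in distinct}

# Three authors: two affiliated in DK, one in SE -> two distinct countries.
scores = online_fractionation(["DK", "DK", "SE"])
```

This is what makes the method pragmatic for large publication sets: only the set of countries, not the author-level affiliation counts, is needed.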
The counting method used in the Norwegian Publication Indicator (NPI)
The other counting method that does not comply with Framework 1 is the Norwegian Publication Indicator (NPI) (Sivertsen, 2016, p. 912). In the NPI, authors are the basic units of analysis and institutions are the objects of study. An institution’s score for a publication is calculated by first adding up the complete-fractionalized credits of the authors from the institution to a sum for the institution. Next, the square root of the sum is calculated. Applying the square root to a sum for basic units of analysis as done in the NPI does not comply with measure theory, which is the theoretical foundation for the mathematical properties of Framework 1 (see Section 2.2.1)⁹.
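The two-step calculation described above can be sketched as follows. The sketch covers only the fractionalization and square-root steps mentioned in this passage; the full NPI also weights publications by publication channel, which is outside the scope here.

```python
from math import sqrt
from collections import defaultdict

def npi_institution_scores(author_institutions):
    """Sketch of the NPI square-root step: sum the complete-fractionalized
    author credits (1/n each) per institution, then take the square root
    of each institution's sum."""
    n = len(author_institutions)
    sums = defaultdict(float)
    for institution in author_institutions:
        sums[institution] += 1 / n
    return {institution: sqrt(total) for institution, total in sums.items()}

# Four authors: three from institution X, one from institution Y.
scores = npi_institution_scores(["X", "X", "X", "Y"])  # X: sqrt(0.75), Y: sqrt(0.25)
```

Because the square root is applied to a sum of credits rather than to the credits of individual basic units, the resulting score is no longer additive over authors, which is the reason given above for the NPI falling outside Framework 1.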
Furthermore, the NPI does not comply with Framework 2 either. Similar to Kyvik’s counting method (Kyvik, 1989, pp. 206–209), the argument for the NPI is to give a balanced representation of productivity across research disciplines (Sivertsen, 2016, p. 912). This argument has not yet been covered by Framework 2.
The NPI does not fit with either of the Frameworks 1 and 2. Thus, the Supplementary Material, Section 2, is a case study that uses the two frameworks to analyze the NPI and, through this analysis, identify potential for developing the frameworks further.
3.2.6. Counting methods that do not currently comply with Framework 2
Two arguments for introducing counting methods do not currently comply with Framework 2.
Both arguments are reported in the sections above.
The first argument is to give a balanced representation of productivity across research disciplines, used in two studies (Kyvik, 1989, pp. 206–209; Sivertsen, 2016, p. 912). This argument has not yet been covered by Framework 2; however, in the Supplementary Material, Section 2.3, a case study discusses how Framework 2 can be developed to include the argument.
The second argument is to measure productivity, an approach that is not included but can be added to Group 1 in Framework 2, as mentioned in Sections 3.2.1 and 3.2.2. This argument is used by three studies (Assimakis & Adam, 2010, p. 422; Ellwein et al., 1989, p. 320; Howard et al., 1987, p. 976).
The counting methods are analyzed in relation to RQs 3 and 4. These analyses are not affected by the counting methods not currently fitting Framework 2.
9 The Danish Bibliometric Research Indicator follows similar steps in the calculation. An institution’s score from a publication is calculated by first adding up the complete-fractionalized credits of the authors from the institution to a sum for the institution. Next, institutions with less than 10% of the credit from a publication each have their credit raised to 10% of the credit from the publication (Agency for Science and Higher Education, 2019, sec. Fraktionering (in Danish)). This practice is discussed further for the NPI in the Supplementary Material, Section 2.
3.3. RQ 3: Methods and Elements to Assess Internal Validity of Counting Methods
RQ 3: Which methods and elements from the studies that introduce the counting methods identified by RQ 1 can be used to assess the internal validity of those counting methods?
RQ 3 applies three criteria for well-constructed bibliometric indicators: Adequacy, sensitivity,
and homogeneity. The adequacy criterion identifies methods that may be used to assess the
adequacy of counting methods in the studies that introduce counting methods. The sensitivity
and homogeneity criteria identify elements that indicate weak sensitivity and define heterogeneity in the equations of the counting methods and, as such, work against the two criteria.
RQ 3 does not evaluate how well these methods and elements work as assessments of internal validity of the counting methods in the studies that introduce counting methods. Thus, RQ 3 does not answer whether or not the 32 counting methods identified by RQ 1 are internally valid. Instead, RQ 3 evaluates how well each of the methods and elements corresponds to the relevant criteria for internal validity: adequacy, sensitivity, and homogeneity.
Table 5 presents the schematic overview from the Supplementary Material, Section 1, in relation to RQ 3. The table shows which counting methods apply the methods and elements to assess adequacy, sensitivity, and homogeneity. Sections 3.3.1–3.3.3 report the results for each of the three criteria.
3.3.1. Adequacy—five methods
According to the adequacy criterion, an indicator should be an adequate proxy for the object the
indicator is designed to measure. Adequacy is tested through an independent and accepted
measure for the object. The analysis identifies five methods in the studies that introduce the
counting methods identified by RQ 1 that may be used to assess the adequacy of the counting
méthodes:
• Compare to other counting methods or bibliometric indicators
• Principles to guide the definitions of the counting methods
• Quantitative models for distributions of scores
• Surveys or other empirical evidence
• Compare groups of objects of study
Below, the results report how well each of the methods assesses adequacy. There are examples of studies that explicitly use the methods to assess the adequacy of the counting methods. These include Sivertsen’s use of a quantitative model (Sivertsen, 2016, p. 911) and Shen and Barabási’s use of groups of objects of study comprised of Nobel laureates versus their coauthors (Shen & Barabasi, 2014, p. 12,326). However, all the studies include analyses of the counting methods. These analyses include methods that, in the review, are interpreted as assessments of the internal validity of the counting methods.
Compare to other counting methods or bibliometric indicators
When the adequacy of counting methods is assessed by comparisons to other counting
methods or bibliometric indicators, these other counting methods or bibliometric indicators
should constitute independent and accepted measures of the aims of the counting methods.
The aims of the counting methods are analyzed in relation to RQ 2 via Framework 2.
Tableau 5. Overview of the 32 counting methods identified by RQ 1 in relation to the three criteria adequacy, sensitivity, and homogeneity
Compare to
other counting
methods or
bibliometric
indicators
Non
Counting method
Harmonic counting
(Hodge & Greenberg,
1981; Hagen, 2008)
Methods to support adequacy
Principles to
guide the
definitions of
the counting
méthodes
Oui
Quantitative
models for
distributions
of scores
Non
Surveys
ou autre
empirical
evidence
Non
Elements that work
against sensitivity
Fixed
credits for
selected
basic units
of analysis
Non
Time-
specific
evidence
Non
Compare
groups of
objets
of study
Non
Elements that work
against homogeneity
Parameter
valeurs
selected by
bibliometrician
Non
External
elements
Non
Conditional
equations
Non
Proportional counting
Non
Oui
Non
Non
Non
Non
Non
Non
Non
Non
(Hodge & Greenberg,
1981; Van Hooydonk,
1997)
[Howard et al., 1987]
[Boxenbaum et al., 1987]
[Kyvik, 1989]
Exponential function
(Ellwein et al., 1989)
Second-and-last function
(Ellwein et al., 1989)
On-line fractionation
approche (Nederhof
& Moed, 1993)
Correct credit distribution
(Lukovits & Vinkler,
1995)
Pure geometric counting
(Egghe et al., 2000)
Noblesse Oblige
(Egghe et al., 2000;
Zuckermann, 1968)
Refined weights (Trueba
& Guerrero, 2004)
Sequence determines
credit (Tscharntke
et coll., 2007)
Oui
Non
Oui
Oui
Oui
Oui
Non
Oui
Oui
Oui
Oui
9
5
8
Non
Non
Non
Non
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Oui
Non
Oui
Oui
Oui
Oui
Non
Oui
Non
Non
Non
Non
Non
Non
NA
NA
NA
Non
Non
Non
Non
Oui
Oui
Non
Non
Oui
Non
Non
Non
Non
Non
Oui
Oui
Non
Non
Non
Oui
Non
Oui
Non
Oui
Non
Oui
Non
Oui
Non
Non
Oui
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Non
Non
Oui
N
NA
Non
Oui
Oui
Non
Non
Non
Non
Non
Non
Oui
Oui
Oui
C
o
toi
n
t
je
n
g
m
e
t
h
o
d
s
je
n
t
r
o
d
toi
c
e
d
je
n
t
o
t
h
e
b
je
b
je
je
o
m
e
t
r
je
c
r
e
s
e
un
r
c
h
je
je
t
e
r
un
t
toi
r
e
1
9
7
0
2
0
1
8
–
je
D
o
w
n
o
un
d
e
d
F
r
o
m
h
t
t
p
:
/
/
d
je
r
e
c
t
.
m
je
t
.
/
e
d
toi
q
s
s
/
un
r
t
je
c
e
–
p
d
je
F
/
/
/
/
2
3
9
3
2
1
9
7
0
6
9
7
q
s
s
_
un
_
0
0
1
4
1
p
d
.
/
F
b
oui
g
toi
e
s
t
t
o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3
Tableau 5.
(a continué )
Q
toi
un
n
t
je
t
un
je
t
je
v
e
S
c
e
n
c
e
S
toi
d
e
s
t
je
Compare to
other counting
methods or
bibliometric
indicators
Oui
Oui
Oui
Oui
Oui
Oui
Oui
Oui
Oui
Non
Counting method
Equal contribution
(Tscharntke et al., 2007)
Weight coefficients
(Zhang, 2009)
Golden productivity index
(Assimakis & Adam,
2010)
Tailor based allocations
(Galam, 2011)
Pareto weights (Tol, 2011)
Shapley value approach
(Papapetrou et al., 2011)
A-index (Stallings et al.,
2013)
Weighted fractional
output, intramural
or extramural
(Abramo et al., 2013)
Absolute weighing factor
(Aziz & Rozing, 2013)
Contribution to an article’s
visibility, first approach
(Egghe et al., 2013)
Contribution to an article’s
Non
visibility, second
approche (Egghe et al.,
2013)
[Shen & Barabasi, 2014]
Network-based model
(Kim & Diesner, 2014)
Non
Oui
9
5
9
Methods to support adequacy
Principles to
guide the
definitions of
the counting
méthodes
Non
Quantitative
models for
distributions
of scores
Non
Surveys
ou autre
empirical
evidence
Non
Elements that work
against sensitivity
Fixed
credits for
selected
basic units
of analysis
NA
Time-
specific
evidence
Non
Compare
groups of
objets
of study
Non
Elements that work
against homogeneity
Parameter
valeurs
selected by
bibliometrician
Non
External
elements
Non
Conditional
equations
Oui
Non
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Non
Oui
Oui
Oui
Oui
Oui
Non
Non
Non
Non
Non
Non
Non
Non
Non
NA
Oui
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Oui
Non
Non
Non
Non
Non
Non
Non
Non
Non
Oui
Oui
Non
Non
Non
Oui
Oui
Oui
Oui
Non
Non
Non
Oui
Oui
Non
Oui
Non
Oui
Non
NA
Non
Oui
Non
Non
Non
Non
Oui
Oui
Oui
Non
Oui
Non
Non
Non
Oui
Oui
Non
Non
Oui
C
o
toi
n
t
je
n
g
m
e
t
h
o
d
s
je
n
t
r
o
d
toi
c
e
d
je
n
t
o
t
h
e
b
je
b
je
je
o
m
e
t
r
je
c
r
e
s
e
un
r
c
h
je
je
t
e
r
un
t
toi
r
e
1
9
7
0
2
0
1
8
–
je
D
o
w
n
o
un
d
e
d
F
r
o
m
h
t
t
p
:
/
/
d
je
r
e
c
t
.
m
je
t
.
/
e
d
toi
q
s
s
/
un
r
t
je
c
e
–
p
d
je
F
/
/
/
/
2
3
9
3
2
1
9
7
0
6
9
7
q
s
s
_
un
_
0
0
1
4
1
p
d
.
/
F
b
oui
g
toi
e
s
t
t
o
n
0
7
S
e
p
e
m
b
e
r
2
0
2
3
Table 5. (continued) [Remaining rows, cell values lost in extraction: Norwegian Publication Indicator (Sivertsen, 2016); [Zou & Peterson, 2016]; Relative intellectual contribution (Rahman et al., 2017); Parallelization bonus (de Mesnard, 2017); [Steinbrüchel, 2019]; [Bihari & Tripathi, 2019].]
Quantitative Science Studies 960
In the studies that introduce the counting methods, 25 of the 32 counting methods are analyzed by making comparisons with other counting methods or bibliometric indicators. Examples are the studies that introduce those counting methods for which the aim is that first or last authors provide the largest contribution/participation/… to a publication (Group 1 in Framework 2). Complete-fractionalized counting does not reflect this aim because all coauthors are credited equally. Therefore, comparisons involving the complete-fractionalized counting method that result in weak correlations may be regarded as evidence of the adequacy of the counting methods emphasizing first- or last-author contributions (for examples, see Abramo et al., 2013, p. 207; Assimakis & Adam, 2010, pp. 424–425).
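For a concrete sense of why the two approaches correlate only weakly under a first-author-emphasizing aim, the credit shares can be sketched as follows. This is an illustrative sketch only: complete-fractionalized counting matches the description above (equal shares), while the harmonic scheme (rank r weighted by 1/r) stands in for a first-author-emphasizing method; the function names are not taken from the reviewed studies.

```python
def complete_fractionalized(n_authors):
    """Each of the n coauthors receives an equal share of one credit."""
    return [1 / n_authors] * n_authors

def harmonic_shares(n_authors):
    """Rank-dependent shares proportional to 1/rank: the first author
    receives the largest share, the last author the smallest."""
    weights = [1 / rank for rank in range(1, n_authors + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# A five-author publication: equal shares versus rank-dependent shares.
print(complete_fractionalized(5))   # [0.2, 0.2, 0.2, 0.2, 0.2]
print(harmonic_shares(5))           # first author receives about 0.44
```

Both schemes distribute exactly one credit per publication (both are fractionalized in the sense of Framework 1); they differ only in how that credit is divided among ranks, which is what the comparisons described above are designed to detect.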
To successfully use comparisons with other counting methods or bibliometric indicators to assess adequacy, the bibliometrician should first evaluate the relevance of the other counting methods or bibliometric indicators in relation to the aim of the counting method under assessment. Furthermore, according to the adequacy criterion, not only should the other counting methods or bibliometric indicators used in comparisons be accepted measures of the aims of the counting methods, they should also be independent of the counting method under assessment. However, it can be debated whether or not other counting methods or bibliometric indicators are independent from the counting methods under assessment, as both build on publications and/or citations. Biases in the other counting methods or bibliometric indicators may also very well be present in the counting methods under assessment. This potential bias should be taken into account if the adequacy of counting methods is assessed by comparisons to other counting methods or bibliometric indicators.
Principles to guide the definitions of the counting methods
When principles to guide the definitions of the counting methods are used to support the ade-
quacy of the counting methods, these principles constitute an ideal description of the counting
methods and, as such, represent independent and accepted measures of the aims of the counting
methods. The aims of the counting methods are analyzed in relation to RQ 2 via Framework 2. Six of the 32 counting methods have principles to guide their definitions. The six counting methods aim to measure (the impact of ) the contribution/participation/… of an object of study (Group 1 in Framework 2). The principles are used to design the counting methods but not necessarily to assess the counting methods. Five of the six studies emphasize the principles of rank-dependency and/or fractionalization (Hodge & Greenberg, 1981; Lukovits & Vinkler, 1995, pp. 92–93; Stallings et al., 2013, pp. 9681–9682; Trueba & Guerrero, 2004, pp. 182–183). The remaining study’s principles focus on division of tasks (de Mesnard, 2017).
Decisions about whether the principles to guide definitions of the counting methods are
independent and accepted measures of the aims of the counting methods are in some cases
based on thorough analyses (de Mesnard, 2017) and in other cases on personal experiences
(Hodge & Greenberg, 1981). In the latter case, it is difficult to assess if the principles are
appropriate for assessing the adequacy of the counting methods.
Quantitative models for distributions of scores
When quantitative models for distributions of scores are used to assess the adequacy of the
counting methods, the scores for the objects of study are tested against distributions, which
should constitute independent and accepted measures of the aims of the counting methods.
The aims of the counting methods are analyzed in relation to RQ 2 via Framework 2.
Four of the 32 counting methods have quantitative models for distributions of scores. These
distributions are the Gini-coefficients close to 0.5 (Kyvik, 1989, pp. 209–210), Lotka’s law (Egghe
et al., 2013, pp. 59–62; Kyvik, 1989, p. 211), and equal scores across objects of study (Kyvik, 1989, pp. 207–208; Sivertsen, 2016, p. 911). For two of the counting methods, the aim is to give a balanced representation of productivity across research disciplines. This aim is reflected in the quantitative model (Kyvik, 1989, pp. 207–208; Sivertsen, 2016, p. 911). Whether the other quantitative models can work as independent and accepted measures of the aims of the counting methods depends on the validity of the models. Many studies in the bibliometric literature analyze Lotka’s law, and the Gini-coefficient is investigated in economics, bibliometrics, and other research fields. As such, we have general knowledge about these quantitative models. This said, in the studies that use the models to assess the adequacy of counting methods, the relation between the aim of the counting methods and the model must be made clear.
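For reference, the Gini-coefficient mentioned above can be computed from a set of scores with the standard pairwise formula; the sketch below is generic and not tied to Kyvik's data.

```python
def gini(scores):
    """Gini coefficient of nonnegative scores: the mean absolute
    difference over all ordered pairs, divided by twice the mean score."""
    n = len(scores)
    mean = sum(scores) / n
    pair_diffs = sum(abs(x - y) for x in scores for y in scores)
    return pair_diffs / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))    # 0.0  (perfect equality across objects)
print(gini([0, 0, 0, 10]))   # 0.75 (scores concentrated on one object)
```

A value close to 0.5, the benchmark used by Kyvik (1989), thus sits midway between complete equality and complete concentration of the scores.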
Surveys or other empirical evidence
When surveys or other empirical evidence are used in the assessment of adequacy of counting methods, the credits to the basic units of analysis are evaluated against empirical data about
how coauthors in a publication share credits. The idea is to define target values for the credits
for the basic units of analysis. These target values should constitute independent and accepted
measures of the aims of the counting methods. The aims of the counting methods are analyzed
in relation to RQ 2 via Framework 2.
Four of the 32 counting methods use results from surveys (Kim & Diesner, 2014, pp. 593–
595; Lukovits & Vinkler, 1995, pp. 93–94; Zou & Peterson, 2016, pp. 904–906), and one study
uses a calculation of the ratio of senior researchers to nonsenior researchers to determine
credits for basic units of analysis (Boxenbaum et al., 1987, pp. 566–568). All four counting
methods aim to measure (the impact of ) the contribution/participation/… of an object of study
(Group 1 in Framework 2).
Surveys and other empirical evidence can be well suited to create independent and accepted
measures of the aims of counting methods. However, the study designs that inform the surveys and evidence are important to take into account. Certainly, surveys or other empirical
evidence are created at a specific point in time, and this can impact the sensitivity of the
counting methods (see Section 3.3.2).
Compare groups of objects of study
When comparisons between groups of objects of study are used to assess adequacy, scores for
groups of objects of study are compared, such as early career versus senior researchers. These
comparisons between groups of objects of study should rely on independent and accepted
measures of the aims of the counting methods. The aims of the counting methods are analyzed
in relation to RQ 2 via Framework 2.
Sixteen of the 32 counting methods use the compare groups of objects of study approach to
assess the adequacy of the counting methods. The comparisons are between institutions or
countries (e.g., Howard et al., 1987), research fields or disciplines (e.g., Sivertsen, 2016, p. 911), publication sets from different databases (e.g., Shen & Barabasi, 2014, p. 12,326), high-impact and other researchers (e.g., Abramo et al., 2013, pp. 204–206), principal investigator and student (e.g., Egghe et al., 2013, p. 64), or award-winners and other researchers (e.g., Aziz & Rozing, 2013, pp. 4–6).
The studies that make comparisons between groups of objects of study to assess adequacy
have different aims for introducing the counting methods. The studies represent all four groups from Framework 2. For some of the counting methods, the comparisons can be related to
evaluations of the adequacy of the counting methods, such as in comparisons of principal
investigator versus student, where the expectation would be that the principal investigator
would have the higher score (Egghe et al., 2013, p. 64). In other cases, the relation is less clear,
and assessment of validity of adequacy may not be the intention of the studies making the
comparisons.
3.3.2. Sensitivity—two elements
According to the sensitivity criterion, an indicator should reflect changes over time in the
object that the indicator is designed to measure. For counting methods, it is important that they
are able to adapt to the increasing average number of coauthors per publication. The analysis
identifies two elements in the counting methods identified by RQ 1 that make counting
methods less adaptable to increasing numbers of authors per publication:
• Time-specific evidence
• Fixed credits for selected basic units of analysis
Below, the results report for each of these two elements the effect resulting from an
increasing number of authors per publication. The studies that introduce the counting
methods do not analyze the issue of increasing numbers of authors per publication.
Time-specific evidence
This element overlaps with the method: surveys or other empirical evidence. As discussed in
Section 3.3.1, counting methods where the adequacy is tested against the results of surveys or
other empirical evidence about how coauthors of a publication share credits may eventually
become obsolete due to changes in coauthor practices not being reflected in the empirical
evidence. One study creates new evidence (Zou & Peterson, 2016, pp. 904–906).
However, some of the evidence from the four studies using surveys or other empirical data
dates back to the 1980s (Boxenbaum et al., 1987, p. 567; Kim & Diesner, 2014, p. 594;
Lukovits & Vinkler, 1995, p. 94; Vinkler, 1993, pp. 217–223). A further limitation of the evidence is that it relates to investigations of smaller numbers of authors per publication. For example, two studies include publications with up to five authors (Kim & Diesner, 2014,
p. 594; Lukovits & Vinkler, 1995, p. 94). The four counting methods using empirical evidence
aim to measure (the impact of ) the contribution/participation/… of an object of study (Group 1
in Framework 2). To support this aim, the empirical evidence must be updated regularly to
reflect the current average number of coauthors per publication.
Fixed credits for selected basic units of analysis
Three counting methods use fixed credits for selected authors only, independent of the number
of coauthors. As the average number of coauthors increases, the credits for each of the other
authors will decrease. The differences in credits assigned to the selected versus other authors may
become extreme and, therefore, may not comply with the sensitivity criterion. The three counting
methods that apply fixed credits for selected basic units of analysis (Abramo et al., 2013, p. 201;
Assimakis & Adam, 2010, pp. 422–423; Egghe et al., 2000, p. 146) all aim to measure (the impact of ) the contribution/participation/… of an object of study (Group 1 in Framework 2). The use of fixed credits may not reflect this aim if the average number of coauthors increases.
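The arithmetic behind this concern can be made concrete with a hypothetical fixed-credit scheme; the fixed share of 0.5 for the first author is an assumption chosen for illustration and is not taken from the three studies cited above.

```python
def fixed_first_author_share(n_authors, fixed=0.5):
    """Hypothetical scheme: the first author always receives `fixed`;
    the remaining credit is split equally among the other authors."""
    if n_authors == 1:
        return [1.0]
    rest = (1 - fixed) / (n_authors - 1)
    return [fixed] + [rest] * (n_authors - 1)

# As the author list grows, the fixed share stays at 0.5 while every
# other author's share shrinks toward zero.
for n in (2, 6, 51):
    shares = fixed_first_author_share(n)
    print(n, shares[0], round(shares[1], 4))
```

With 51 authors, each coauthor other than the first receives 0.01, a 50-fold gap that opens up without the scheme itself changing.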
3.3.3. Homogeneity—three elements
According to the homogeneity criterion, an indicator should measure only one dimension and
avoid heterogeneous indicators. The analysis investigates different elements that contribute to
the equations for counting methods. Where there are several different elements in the equa-
tion, it is not immediately clear how these different elements affect the scores obtained by the
counting methods. Such elements do not support homogeneity.
The analysis identifies three elements in the counting methods identified by RQ 1 that contribute to heterogeneity and, therefore, work against the homogeneity of the counting methods:
• Parameter values selected by bibliometrician
• External elements
• Conditional equations
Below are the results for each of these three elements. None of the studies that introduce the
counting methods analyze homogeneity.
Parameter values selected by bibliometrician
Seven of the 32 counting methods have one or more parameter values for the bibliometrician
to select individually in each analysis. A change of parameter values will change the distribu-
tion of credits among the basic units of analysis of a publication. This ensures that the counting
methods can be adapted to accommodate credit distribution traditions in various research
fields. For an example, see the illustration of “Network-based model (Kim & Diesner, 2014)” in Figure 2 in Section 3.2.1.
The most common situation, seen in five counting methods, is that the bibliometrician
selects the value of a parameter, and that this value can vary between 0 et 1 (Egghe et al.,
2000, p. 146; Ellwein et al., 1989, p. 321; Kim & Diesner, 2014, p. 591; Lukovits & Vinkler,
1995, pp. 92–95). Two counting methods include several parameter values to be selected by the
bibliometrician (Galam, 2011, p. 371; Trueba & Guerrero, 2004, pp. 184–185). The effect of
selecting a given parameter value as opposed to another parameter value is not immediately clear
in the score obtained by the counting method; therefore, the counting method is heterogeneous.
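As an illustration of how such a parameter reshapes scores, consider a generic geometric weighting in the spirit of the single-parameter methods cited above; it is a sketch and does not reproduce the exact equation of any one of them.

```python
def geometric_shares(n_authors, q):
    """Rank i (starting at 1) gets a weight proportional to q**(i - 1),
    normalized to sum to one; q in (0, 1] is the bibliometrician's choice."""
    weights = [q ** i for i in range(n_authors)]
    total = sum(weights)
    return [w / total for w in weights]

# The same four-author publication under two parameter choices:
print(geometric_shares(4, 0.5))   # first author gets 8/15, about 0.53
print(geometric_shares(4, 0.9))   # shares are much closer to equal
```

Two analyses using the "same" counting method but different parameter values are therefore not directly comparable, which is the heterogeneity concern raised above.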
External elements
The counting methods defined in Sections 3.2.1 and 3.2.2 as rank-dependent counting methods, in which the credits for basic units of analysis are shared unevenly based on
characteristics other than rank, use external elements, such as an author’s position as principal
investigator, an author’s h-index, or an author’s number of publications.
Eight of the 32 counting methods include external elements. In five counting methods,
these external elements are author-level bibliometric indicators, such as the h-index.
Sometimes the bibliometrician can choose between several indicators (Egghe et al., 2013,
pp. 58–59; Papapetrou et al., 2011, pp. 554–555) and in other cases, specific indicators are
used in the definitions of the counting methods (Shen & Barabasi, 2014, pp. 12,325–12,327;
Tol, 2011, pp. 292–293). Other external elements are whether or not an author is a principal
investigator (Boxenbaum et al., 1987; Steinbrüchel, 2019, pp. 307–308) and the type and ex-
tent of author contributions to a publication cf. the CRediT taxonomy10 (Rahman et al., 2017,
p. 278). At present, author contributions are not often an element made explicit by the publications included in an analysis. However, it should be noted that, increasingly, journal publications include author contribution statements. External elements require background
information about the author, and the effect of this background information on the score
10 See Footnote 6.
obtained by the counting methods is not immediately clear. Therefore, the counting methods
are heterogeneous.
Conditional equations
Most counting methods have one equation, which is applied to all basic units of analysis and
all publications. But some counting methods divide basic units of analysis or publications into
groups according to specific characteristics and then use conditional equations on each group.
Thirteen of the 32 counting methods apply conditional equations. Nine of the 13 counting
methods use author rank, such as first author, to divide the basic units of analysis into groups
and apply conditional equations to the groups (for examples, see Assimakis & Adam, 2010,
p. 422; Galam, 2011, p. 371; Trueba & Guerrero, 2004, pp. 184–185). Three counting
methods use the number of authors per publication to create groups (Aziz & Rozing, 2013,
p. 2; Kyvik, 1989, p. 206; Zhang, 2009, p. 416). In one counting method, one group has
publications with first and last authors from the same institution, et, in the other group, d'abord
and last authors are from different institutions (Abramo et al., 2013, p. 201). These groupings
of basic units of analysis or publications mean that it is not immediately clear how the counting
methods’ scores are obtained. Therefore, the counting methods are heterogeneous.
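A minimal sketch of a conditional equation, with a hypothetical threshold and hypothetical share values not taken from any of the cited methods, shows how the grouping obscures the scores:

```python
def conditional_shares(n_authors):
    """Hypothetical conditional scheme: publications are grouped by
    author count and a different equation applies to each group."""
    if n_authors <= 3:
        # Group 1 (small teams): credit is split equally.
        return [1 / n_authors] * n_authors
    # Group 2 (larger teams): the first author keeps a fixed 0.4 and
    # the remaining 0.6 is split equally among the other authors.
    rest = 0.6 / (n_authors - 1)
    return [0.4] + [rest] * (n_authors - 1)

# Crossing the threshold changes the first author's share abruptly.
print(conditional_shares(3))   # equal thirds
print(conditional_shares(4))   # first author 0.4, others about 0.2
```

The first author's score jumps from 1/3 to 0.4 when a fourth author is added, so the score alone does not reveal which equation produced it.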
3.4. RQ 4: Three Counting Methods Are Used in Four or More Research Evaluations
RQ 4: To what extent are the counting methods identified by RQ 1 used in research
evaluations and to what extent is this use compliant with the definitions in the studies
that introduce the counting methods?
RQ 4 employs a literature search to identify research evaluations that use the counting
methods identified by RQ 1. The focus is on research evaluations covering a minimum of
30 researchers. Some counting methods are used by the same author in several research evaluations. In such cases, only one of the research evaluations is counted. The Supplementary
Material, Section 1, provides a detailed schematic overview of the results related to RQ 4.
Fifteen of the counting methods are not used in research evaluations covering a minimum
of 30 researchers, and 14 counting methods are used in one to three research evaluations. Only
three counting methods are used in four or more research evaluations: harmonic counting, Howard et al.’s counting method, and Sequence determines credit (Hodge & Greenberg, 1981; Howard et al., 1987; Tscharntke et al., 2007). For each of these three counting methods, a random sample of five research evaluations using the counting methods is
selected for a further analysis of how the counting methods are used. The Supplementary
Material, Section 4, lists the research evaluations included in the analysis.
The research evaluations that use the three counting methods should draw on the same
characteristics for the counting methods as presented at the introduction of the counting
methods (see Section 3.2). If one or more of these characteristics change between the introduction and use of the counting method, then the research evaluation’s use of the counting method may be compromised.
In the samples of research evaluations using Harmonic counting and Sequence determines
credit, the counting methods are used with the same characteristics as in the studies that
introduce the counting methods. As in the introduction of the counting methods, the objects
of study and the basic units of analysis are authors and the arguments for the use of the counting
methods are from Group 1: “The indicator measures the (impact of ) contribution/participation/
… of an object of study.”
The research evaluations’ arguments for using Howard et al.’s counting method are also
from Group 1, a situation that is in agreement with the study introducing the counting
method. Cependant, in the introduction of the counting method, authors are the basic units
of analysis and institutions are the objects of study. In the research evaluations, the basic units
of analysis and the objects of study are authors, institutions, or countries. The research eval-
uations that have basic units of analysis and objects of study other than those present at the
introduction of the counting method do validate the counting methods by comparisons with
other counting methods (see Section 3.3.1 for more about this method for assessment of
adequacy). Cependant, one study that has countries as objects of study does not validate
the counting method at all (Tsai & Lydia Wen, 2005). In this research evaluation, the use
of Howard et al.’s counting method may be compromised, as the use is not validated by
either the study introducing the counting method or the research evaluation using the
counting method.
4. DISCUSSION
Section 4 has two parts. Section 4.1 discusses the methods used in the present review and the
limitations resulting from the methodological choices made. Section 4.2 gives interpretations
of the results presented in the review.
4.1. Discussion of the Methods—Limitations
Section 4.1 discusses the review’s methods and their limitations. Results related to an RQ
inform subsequent RQs (see Table 1, Section 2); therefore, limitations related to the methods used in relation to RQ 1 have consequences for all the following RQs, and the limitations for RQ 2 affect RQs 3 and 4.
4.1.1. RQ 1: Literature search covers counting methods in the bibliometric research literature
The literature search aimed at identifying counting methods and undertaken in relation to RQ 1
forms the basis for the review. The literature search covers peer-reviewed studies in English
from the period 1970–2018. Including more publication types and more languages could lead
to the identification of additional counting methods, such as counting methods used in local
university reports. To include the period 2019–2020 would most likely result in more counting methods; however, counting methods introduced after 2018 are excluded from the review because the use of these will be difficult to assess in relation to RQ 4 (i.e., less than 2 years after their introduction into the bibliometric research literature).
The literature search identifies 32 counting methods, which, in relation to RQ 2, are then
assigned to categories. Including the period 2019–2020 could add counting methods to the
analysis and, thus, more detail to the results. However, the proportion of counting methods in
each of the categories is consistent over time, in that none of the categories include counting
methods from one decade alone (see the Supplementary Material, Section 1). This suggests
that adding a few new counting methods would be unlikely to change the overall results.
Even though the result of the literature search may be supplemented with more counting methods, the review scrutinizes more counting methods than previous reviews. The present
review demonstrates that the majority of the counting methods were introduced in the most
recent decade. If this trend continues, future counting methods may change the proportion of
counting methods in each of the categories used to structure this review.
4.1.2. RQ 2: Two frameworks selected among other possible frameworks
RQ 2 uses two selected frameworks to categorize the counting methods identified by RQ 1
according to the characteristics of those counting methods. Framework 1 describes selected
mathematical properties of counting methods and Framework 2 describes arguments for
choosing a counting method for a bibliometric analysis.
The literature search did uncover other frameworks that may be suitable for the analysis of
many different counting methods. Indeed, drawing on theories, methods, and concepts from other research fields, the number of potentially relevant frameworks is very large. However, using only one or a few frameworks in an analysis serves to prevent overly complex results.
The present review uses two frameworks. To illustrate this, the potential of a framework not
used in the review is discussed below.
Xu et al.’s framework divides counting methods into linear, curve, and “other” counting
methods (Xu et al., 2016, pp. 1974–1977). A closer look at the counting methods in the present review with regard to what Xu et al. define as curved counting methods reveals that Zou
and Peterson’s counting method (Zou & Peterson, 2016, p. 906) is the nonfractionalized version of pure geometric counting (Egghe et al., 2000, p. 146). In both counting methods, the second author gets half the credit given to the first author, the third author gets half the credit
given to the second author, and so on. Furthermore, Howard et al.’s counting method (Howard et al., 1987, p. 976) has similar characteristics. The credits are reduced by one third going from the first author to the second author, and so on. A further analysis utilizing Xu et al.’s framework may reveal other similarities that do not emerge from applying Frameworks 1 and 2. However, Xu et al.’s framework does not fit with as many counting methods as Frameworks 1 and 2, as “citation-based credit assignment methods”—for example, Pareto weights—are not included in Xu et al.’s framework (Xu et al., 2016, p. 1974).
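The family resemblance described above can be captured in a single ratio-based sketch; the ratios 1/2 and 2/3 follow the verbal descriptions in this paragraph, and the normalization flag distinguishes fractionalized from nonfractionalized variants. This is an illustration, not the published equations of the cited methods.

```python
def ratio_credits(n_authors, ratio, fractionalized=True):
    """Each author receives `ratio` times the credit of the preceding
    author; normalizing the weights yields the fractionalized variant."""
    weights = [ratio ** i for i in range(n_authors)]
    if not fractionalized:
        return weights
    total = sum(weights)
    return [w / total for w in weights]

# Zou & Peterson-style halving, nonfractionalized: 1, 0.5, 0.25, ...
print(ratio_credits(4, 1 / 2, fractionalized=False))
# Howard et al.-style reduction by one third between successive ranks.
print(ratio_credits(4, 2 / 3))
```

Seen this way, the two methods differ only in the ratio and in whether the weights are normalized, a similarity that Frameworks 1 and 2 do not surface.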
The two frameworks selected for the review represent a further development of the well-
known dichotomy of full versus fractional counting (Framework 1) and focus on the argument
for the introduction of the counting methods (Framework 2). Thus, the frameworks illustrate
that different approaches can be used to describe counting methods. Framework 1, Xu et al.’s
framework, and other frameworks used in relation to counting methods draw on mathematical
properties, which are highly relevant for the analysis of counting methods. However, applying
Framework 2 in the review shows that approaches other than mathematical can add useful
nuance to our knowledge about counting methods.
4.1.3. RQ 3: Homogeneity criterion may be developed further
RQ 3 applies three validity criteria (adequacy, sensitivity, and homogeneity) that are devel-
oped for and tested on bibliometric indicators (Gingras, 2014, pp. 116–119; Wildgaard,
2015, sec. 6.3, 2019, sec. 14.4.1). Although guidance for how to apply the criteria exists at
the overall level, implementation in a specific case, such as the present review, requires
several choices.
The homogeneity criterion is difficult to use in relation to counting methods. The criterion
guidance explains that a mix of different elements with the same measure unit does not present
as heterogeneous but as composite (Gingras, 2014, p. 122). However, the difference between
heterogeneous and composite is described with a less than ideal example that makes accurate
interpretation of the guidance difficult.
The homogeneity criterion may be interpreted more strictly than is done in the review (see the example in Section 2.3), leading to the definition of fewer elements for indicating
heterogeneous counting methods. Or the difference between heterogeneous and composite
may be ignored, leading to the definition of more elements for indicating heterogeneous
counting methods.
4.1.4. RQ 4: Selective focus on peer-reviewed research evaluations
RQ 4 conducts a literature search to identify research evaluations that use the counting
methods identified by RQ 1. This means that research evaluations that do not cite studies iden-
tified by RQ 1 are not found in the RQ 4 literature search. For some of the well-known count-
ing methods, this could result in underrepresentation in the RQ 4 search results: Research
evaluations may mention the name of the counting method without a reference at all, or they
may cite later studies describing the counting method rather than the original study that intro-
duced the counting method.
Three counting methods identified by RQ 1 have several names and/or studies that introduce
the counting methods. This is sometimes seen in bibliometric studies and can lead to misinter-
pretations (Gauffriau et al., 2008, pp. 166–169). The three counting methods are: Harmonic
counting, Proportional counting (also known as Arithmetic counting), and Noblesse Oblige
(Egghe et al., 2000, p. 146; Hagen, 2008; Hodge & Greenberg, 1981; Van Hooydonk, 1997;
Zuckerman, 1968). A literature search of the alternative names and/or studies that introduce the
counting methods is conducted. The search leads to a few additional research evaluations that
use the counting methods identified by RQ 1. In the results related to RQ 4, Proportional
counting would change from the interval “zero research evaluations use the counting method”
to “one to three research evaluations use the counting method.” This change would have no
impact on the results of the review (see the Supplementary Material, Section 1).
Furthermore, the review excludes research evaluations in reports and gray literature and
only includes peer-reviewed studies in English. These limitations mean that results regarding
the use of the counting methods may be underreported. A broader literature search could be
conducted for selected languages or by introducing limitations other than the ones chosen in
the present review to manage the literature search. It is worth noting, however, that including all publication types and all languages would be impractical, resulting in a huge search. The result set of such a search would pose considerable challenges for analysis, especially for qualitative approaches such as those employed in relation to RQ 4.
A final point is that the focus of the review is on the applied use of counting methods in
larger research evaluations, and thereby, the scalability of the counting methods. This focus
can be adjusted to accommodate other types of use cases, such as investigations of how
existing counting methods inform the development of new counting methods or tests of the
mathematical properties of the counting methods. As with the choice of frameworks for cate-
gorizations of counting methods, there are many possibilities for analyses exploring the use of
counting methods.
4.2. Discussion of the Results
Section 4.1 discussed the review’s methods and their limitations. Section 4.2 discusses inter-
pretations of the results related to RQs 1–4.
4.2.1. RQ 1: New introductions of counting methods underline the relevance of analyses of
counting methods
The results related to RQ 1 show that counting methods in the bibliometric research literature
should neither be reduced to a simplified choice between two counting methods—full and
Quantitative Science Studies
968
Counting methods introduced into the bibliometric research literature 1970–2018
fractional counting—nor be implicit in bibliometric analyses. The literature search identifies
32 counting methods, and the majority (17 counting methods) have been introduced in the
most recent decade, a situation that underlines the relevance of the review.
4.2.2. RQ 2: Consistent analyses of counting methods reveal categories of counting methods
The results related to RQ 2 demonstrate that consistent analyses of counting methods provide
new knowledge and allow a more nuanced understanding of counting methods.
Below, three main observations based on the results of applying Frameworks 1 and 2 are
discussed. The observations do not relate to counting methods introduced in a specific decade;
rather, they are valid for counting methods from the 1980s as well as for counting methods
from the 2010s. Following the three observations, counting methods not fitting the frame-
works are discussed.
Observation 1
The first observation is that all counting methods are introduced with specific basic units of
analysis and objects of study, often authors. Recall that complete counting and whole counting
methods are identical at the microlevel but often result in different scores at the meso- or
macrolevels. In other words, this difference is not visible if two counting methods are only
defined at the microlevel. Thus, not all definitions of the counting methods necessarily hold
if the counting methods are applied at aggregation levels other than the aggregation levels for
which they are specifically defined (often the microlevel). The use of the counting methods
would be facilitated if they were to be introduced as score functions, which could be com-
bined with different basic units of analysis and objects of study.
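The micro/meso distinction can be made concrete with a small sketch. The field names (`author`, `inst`) and the data are hypothetical; the counting rules follow the usual definitions of whole counting (one credit per distinct object of study per publication) and complete counting (one credit per author occurrence):

```python
from collections import Counter

def whole_count(pubs, level):
    """Whole counting: each distinct object of study gets 1 credit per publication."""
    scores = Counter()
    for authors in pubs:
        for obj in set(a[level] for a in authors):
            scores[obj] += 1
    return scores

def complete_count(pubs, level):
    """Complete counting: each author occurrence contributes 1 to its object of study."""
    scores = Counter()
    for authors in pubs:
        for a in authors:
            scores[a[level]] += 1
    return scores

# One paper with two co-authors from the same (hypothetical) institution.
pubs = [[{"author": "A1", "inst": "X"}, {"author": "A2", "inst": "X"}]]
print(whole_count(pubs, "inst"))     # X: 1
print(complete_count(pubs, "inst"))  # X: 2 -- the scores diverge at the mesolevel
print(whole_count(pubs, "author") == complete_count(pubs, "author"))  # identical at the microlevel
```

Defining a counting method as a score function over an arbitrary `level`, as here, is what allows it to be combined with different basic units of analysis and objects of study.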
Observation 2
The second observation is about rank-dependent counting methods, excluding rank-
dependent counting methods based on characteristics other than rank. The majority (19 of
the 32 counting methods) are rank dependent. Yet, most of these counting methods are
defined at the microlevel.
Among the pre-1970 counting methods, straight counting is rank dependent. Older studies
have shown that the difference in scores obtained by straight counting and other pre-1970
counting methods levels out at the meso- and macrolevels. Obviously, when straight counting
is used at the microlevel, it is important to be the first author of a publication to receive credit
for that publication (Lindsey, 1980, pp. 146–150). However, at the meso- or macrolevel,
straight counting scores for institutions or countries are fair approximations of scores resulting
from whole or fractional counting (Braun, Glänzel, & Schubert, 1989, p. 168; Cole & Cole,
1973, pp. 32–33).
As discussed in Section 1.1, today there are more institutions and countries per publica-
tion. Therefore, in analyses with recent publications, scores resulting from whole versus
straight counting are more likely to differ (for examples, see Gauffriau et al., 2008, pp. 156–157;
Lin, Huang, & Chen, 2013). However, straight and complete-fractionalized counting still yield
similar scores at the meso- or macrolevels.
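A minimal sketch of the two counting methods at the macrolevel (the country codes and bylines are invented for illustration):

```python
from collections import Counter

def straight_count(pubs):
    """Straight counting: only the first author's country receives 1 credit per publication."""
    scores = Counter()
    for countries in pubs:  # each paper is a list of author countries in byline order
        scores[countries[0]] += 1
    return scores

def complete_fractionalized_count(pubs):
    """Complete-fractionalized counting: each of the N author occurrences receives 1/N credit."""
    scores = Counter()
    for countries in pubs:
        for c in countries:
            scores[c] += 1 / len(countries)
    return scores

# Toy byline data: four papers, author countries in rank order.
pubs = [["DK", "DK", "SE"], ["SE", "DK"], ["DK"], ["SE", "SE", "SE"]]
print(straight_count(pubs))                 # DK: 2, SE: 2
print(complete_fractionalized_count(pubs))  # DK: ~2.17, SE: ~1.83
```

On this toy set the two macrolevel score distributions are close, consistent with the approximation described above; real bibliometric data would be needed to test how far the similarity extends.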
Given the situation described above, it is likely that the results regarding straight counting
would hold for other rank-dependent and fractionalized counting methods. In theory, this
means that the 14 rank-dependent and fractionalized counting methods could be substituted
by complete-fractionalized counting for analyses at the meso- and macrolevels. Complete-
fractionalized counting is rank independent and, donc, easier to apply compared to
rank-dependent counting methods with their more complex equations. In future research, it
would be interesting to investigate comparisons between complete-fractionalized counting
and the 14 rank-dependent and fractionalized counting methods using empirical data at the
meso- and macrolevels.
Observation 3
The third observation is that almost all of the counting methods (28 of 32) are introduced with
an argument from Group 1 in Framework 2: “The indicator measures the (impact of)
contribution/participation/… of an object of study.” This result suggests that the common
understanding of counting methods is that they relate to the concept that the study aims to
measure, for example, the object of study’s participation in a research endeavor as measured
by whole counting. However, four of the counting methods in the review show that there are
alternative arguments for introducing counting methods (see the Supplementary Material,
Section 1). An interpretation that assumes all counting methods aim to measure the
contribution/participation/… of an object of study would be a mistake.
Counting methods that do not fit the study frameworks
In addition to the observations above, as shown in Section 3.2.5, two counting methods do not
fit the selected properties from Framework 1. Also, as Section 3.2.6 illustrates, two arguments
for introducing counting methods do not currently comply with Framework 2. In the context of
the present review, it is no surprise that not all counting methods fit Frameworks 1 and 2. As
discussed in Section 4.1.2, neither does Xu et al.’s framework cover all counting methods.
Indeed, the counting methods outside the frameworks offer a potential opportunity to inves-
tigate the further development of the frameworks. Also, an analysis of why the counting methods
do not fit the frameworks could give new perspectives on the counting methods. To this end, the
Supplementary Material, Section 2, presents an example case study using Frameworks 1 and 2 to
analyze the counting method used in the NPI, and through this analysis, to identify potential for
developing the frameworks further and achieving deeper understanding of the NPI.
4.2.3. RQ 3: Assessment of internal validity of counting methods can be developed further
The results related to RQ 3 present the application of three criteria to evaluate the internal
validity of counting methods. First, methods to assess the adequacy of counting methods,
and thus to support the internal validity of the counting methods, are presented. Next, the
analysis considers elements that define weak sensitivity of the counting methods. Finally,
elements in counting methods that make those counting methods heterogeneous are exam-
ined. Elements connected with weak sensitivity and heterogeneous elements do not support
the internal validity of counting methods.
The use of the adequacy criterion in relation to counting methods suggests that adequacy
can be analyzed in relation to the aims of the counting methods (i.e., the arguments for intro-
ducing the counting methods from Framework 2). The use of the sensitivity criterion on count-
ing methods shows that seven counting methods have elements that indicate weak sensitivity
(see the Supplementary Material, Section 1). On the other hand, the remaining counting
methods do not accommodate explicit measures to support sensitivity (i.e., reflecting the
increasing number of authors per publication over time). Similarly, the use of the homogeneity
criterion in relation to counting methods indicates that a large number (22 of the counting
methods, or 15 counting methods if a strict interpretation of the criterion is used; see
Section 4.1.3) have heterogeneous elements that work against homogeneity. At least for these
counting methods, no specific measures are taken to support homogeneity. Thus, the results
related to RQ 4 suggest potential for the consistent use of validity criteria in relation to
counting methods.
The bibliometrician can decide not to use counting methods with elements that work against
sensitivity and homogeneity (see the Supplementary Material, Section 1). However, there may
be more elements than those identified in the review, potentially leading to the exclusion of
more counting methods. Alternatively, the heterogeneous counting methods may have high
adequacy and thus prove useful in research evaluations anyway. As such, the present review’s
application of the three criteria for validity should be used as attention and guiding points for
selecting counting methods—but not as a selection key. Wildgaard reaches a similar conclusion
after her implementation of the three criteria (Wildgaard, 2015, p. 95).
4.2.4. RQ 4: The context in which the counting methods are used should be assessed
The results related to RQ 4 investigate to what extent the counting methods identified by RQ 1
are used in research evaluations, and whether research evaluations use the counting methods
in agreement with how the counting methods are described initially in the studies that intro-
duce them. The analysis finds that a large majority of the counting methods (29 of 32) are
either used in a maximum of three research evaluations or not used at all in research evalu-
ations. The paradox between this moderate use and the continuous introduction of new
counting methods into the bibliometric research literature remains unresolved.
Three counting methods are used in at least four research evaluations. In one instance, a
counting method is used in a research evaluation with other basic units of analysis and objects
of study than those defined in the introduction of the counting method. It is important to be
aware of the contexts in which the counting methods are used and whether these contexts
differ from the definition of the counting methods. In a previous study, the results show that
the use of pre-1970 score functions is not consistent across studies. Many of the score func-
tions are used with several arguments from Framework 2 in the bibliometric research literature
(Gauffriau, 2017).
5. CONCLUSION
The aims of the present review are to investigate counting methods in the bibliometric research
literature and to provide insights into their common characteristics, the assessment of their
internal validity, and how they are used.
The review shows that the topic of counting methods in bibliometrics is complex, but the
review also demonstrates that consistent analysis of counting methods is possible. The analysis
of counting methods leads to several new findings. Below are some of the main findings and
possible implications of the findings.
• One important finding is that 27 of the 32 counting methods covered by the review are
defined at the microlevel (authors). This makes it difficult to use these counting methods
at other aggregation levels in a consistent manner.
• Another important finding suggests that the common understanding of counting methods
is that they relate to the concept that the study using the counting methods aims to mea-
sure, for example, the concept “author contribution.” Often, however, these concepts
are not well defined in studies that introduce counting methods. Research on counting
methods can benefit from better integration with studies on the concepts to be measured
via the counting methods.
• Furthermore, the review applies three internal validity criteria for well-constructed bib-
liometric indicators (adequacy, sensitivity, and homogeneity) to counting methods for
the first time. The criteria help identify methods and elements useful for assessing the
internal validity of counting methods. Some of these methods and elements (for exam-
ple, comparisons of counting methods) are often used in analyses of counting methods.
However, as the results show, many other methods and elements can be used to assess
the internal validity of counting methods, such as to define concepts measured by the
counting methods (see item above).
• Finally, the review documents the paradox between the many counting methods intro-
duced into the bibliometric research literature and the finding that only a few of these
counting methods are used in research evaluations. This finding may indicate a gap
between theoretical and applied approaches to counting methods. For example, many
university rankings do not provide detailed and peer-reviewed documentation about the
applied counting method, with the Leiden Ranking as an exception.
The review provides practitioners in research evaluation and researchers in bibliometrics with a
detailed foundation for working with counting methods. At the same time, many of the findings
in the review provide bases for future investigations of counting methods. Well-defined frame-
works other than those used in the review could be applied to investigate counting methods. The
categories of counting methods identified in the review could also be analyzed further, such as
through a study of how to use the counting methods at different aggregation levels. A further
evaluation could be carried out of the methods and elements deemed useful for assessing the
internal validity of counting methods. And, finally, the use of counting methods in contexts
other than research evaluations could be examined. The schematic overview of the results of
the review presented in the Supplementary Material (see Section 1) may be a useful starting
point for inspiring further investigations of counting methods.
ACKNOWLEDGMENTS
The author would like to thank Dr Lorna Wildgaard for critically reading earlier versions of the
manuscript, Dr Dorte Drongstrup for fruitful discussions in relation to the manuscript, et
Dr Abigail McBirnie for proof-editing. In addition, the author wishes to thank the two
reviewers for their comments, which helped improve the review.
COMPETING INTERESTS
The author has no competing interests.
FUNDING INFORMATION
The Royal Danish Library has covered costs for proof-editing.
DATA AVAILABILITY
A list of arguments for introductions of counting methods analyzed under RQ 2, and references
to research evaluations analyzed under RQ 4 are available in the Supplementary Material.
REFERENCES
Abramo, G., Aksnes, D. W., & D’Angelo, C. A. (2020). Comparison
of research performance of Italian and Norwegian professors and
universities. Journal of Informetrics, 14(2), 101023. https://doi.org
/10.1016/j.joi.2020.101023
Abramo, G., & D’Angelo, C. A. (2014). How do you define and mea-
sure research productivity? Scientometrics, 101(2), 1129–1144.
https://doi.org/10.1007/s11192-014-1269-8
Abramo, G., D’Angelo, C. A., & Rosati, F. (2013). The importance
of accounting for the number of co-authors and their order when
assessing research performance at the individual level in the life
sciences. Journal of Informetrics, 7(1), 198–208. https://doi.org
/10.1016/j.joi.2012.11.003
Agency for Science and Higher Education. (2019). BFI rules and
regulations. https://ufm.dk/en/research-and-innovation/statistics
-and-analyses/bibliometric-research-indicator/bfi-rules-and
-regulations?set_language=en&cl=en
Assimakis, N., & Adam, M. (2010). A new author’s productivity
index: P-index. Scientometrics, 85(2), 415–427. https://doi.org
/10.1007/s11192-010-0255-z
Aziz, N. A., & Rozing, M. P. (2013). Profit (p)-index: The degree to
which authors profit from co-authors. PLOS ONE, 8(4), e59814.
https://doi.org/10.1371/journal.pone.0059814, PubMed:
23573211
Ball, R. (2018). An introduction to bibliometrics: New development
and trends. Chandos Publishing.
Bao, P., & Zhai, C. (2017). Dynamic credit allocation in scientific
literature. Scientometrics, 112(1), 595–606. https://doi.org/10
.1007/s11192-017-2335-9
Bihari, A., & Tripathi, S. (2019). Key researcher analysis in scientific
collaboration network using eigenvector centrality. In P. K. Sa, S.
Bakshi, I. K. Hatzilygeroudis, & M. N. Sahoo (Eds.), Recent
findings in intelligent computing techniques (pp. 501–508).
Springer. https://doi.org/10.1007/978-981-10-8639-7_52
Boxenbaum, H., Pivinski, F., & Ruberg, S. J.. (1987). Publication
rates of pharmaceutical scientists: Application of the Waring
distribution. Drug Metabolism Reviews, 18(4), 553–571. https://
doi.org/10.3109/03602538708994132, PubMed: 3371189
Braun, T., Glänzel, W., & Schubert, A. (1989). Assessing assess-
ments of British science. Some facts and figures to accept or
decline. Scientometrics, 15(3–4), 165–170. https://doi.org/10
.1007/BF02017195
Centre for Science and Technology Studies. (n.d.). Indicators.
CWTS Leiden Ranking.
Centre for Science and Technology Studies. (2019). Leiden
Ranking. https://www.leidenranking.com/ranking/2019/list
Cole, J. R., & Cole, S. (1973). Social stratification in science.
University of Chicago Press.
Cronin, B., & Sugimoto, C. R. (Eds.). (2014). Beyond bibliometrics:
Harnessing multidimensional indicators of scholarly impact. MIT
Press. https://doi.org/10.7551/mitpress/9445.001.0001
de Mesnard, L. (2017). Attributing credit to coauthors in academic
édition: The 1/n rule, parallelization, and team bonuses.
European Journal of Operational Research, 260(2), 778–788.
https://doi.org/10.1016/j.ejor.2017.01.009
Delgado López-Cózar, E., Orduña-Malea, E., & Martín-Martín, UN.
(2019). Google Scholar as a data source for research assessment.
In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall
(Eds.), Springer handbook of science and technology indicators
(pp. 95–127). Springer International Publishing. https://doi.org
/10.1007/978-3-030-02511-3_4
Egghe, L. (2008). Mathematical theory of the h- and g-index in case
of fractional counting of authorship. Journal of the American
Society for Information Science and Technology, 59(10),
1608–1616. https://doi.org/10.1002/asi.20845
Egghe, L., Guns, R., & Rousseau, R. (2013). Measuring co-authors’
contribution to an article’s visibility. Scientometrics, 95(1), 55–67.
https://doi.org/10.1007/s11192-012-0832-4
Egghe, L., Rousseau, R., & Van Hooydonk, G. (2000). Methods for
accrediting publications to authors or countries: Consequences
for evaluation studies. Journal of the American Society for
Information Science, 51(2), 145–157. https://doi.org/10.1002
/(SICI)1097-4571(2000)51:2<145::AID-ASI6>3.0.CO;2-9
Ellwein, L. B., Khachab, M., & Waldman, R. H. (1989). Assessing
research productivity: Evaluating journal publication across
academic departments. Academic Medicine, 64(6), 319–325.
https://doi.org/10.1097/00001888-198906000-00008, PubMed:
2719791
Fidler, F., & Wilcox, J. (2018). Reproducibility of scientific results.
In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy
(Winter 2018 Edition). https://plato.stanford.edu/entries
/scientific-reproducibility/
Galam, S. (2011). Tailor based allocations for multiple authorship:
A fractional gh-index. Scientometrics, 89(1), 365–379. https://doi
.org/10.1007/s11192-011-0447-1
Gauffriau, M. (2017). A categorization of arguments for counting
methods for publication and citation indicators. Journal of
Informetrics, 11(3), 672–684. https://doi.org/10.1016/j.joi.2017
.05.009
Gauffriau, M., & Larsen, P. O. (2005). Counting methods are deci-
sive for rankings based on publication and citation studies.
Scientometrics, 64(1), 85–93. https://doi.org/10.1007/s11192
-005-0239-6
Gauffriau, M., Larsen, P. O., Maye, I., Roulin-Perriard, A., & von
Ins, M. (2007). Publication, cooperation and productivity mea-
sures in scientific research. Scientometrics, 73(2), 175–214.
https://doi.org/10.1007/s11192-007-1800-2
Gauffriau, M., Larsen, P. O., Maye, I., Roulin-Perriard, A., & von
Ins, M. (2008). Comparisons of results of publication counting
using different methods. Scientometrics, 77(1), 147–176.
https://doi.org/10.1007/s11192-007-1934-2
Gingras, Y. (2014). Criteria for evaluating indicators. In Beyond bib-
liometrics: Harnessing multidimensional indicators of scholarly
impact (pp. 109–125). MIT Press.
Gingras, Y. (2016). Bibliometrics and research evaluation: Uses and
abuses. MIT Press. https://doi.org/10.7551/mitpress/10719.001.0001
Glänzel, W., Moed, H. F., Schmoch, U., & Thelwall, M. (Eds.).
(2019). Springer handbook of science and technology indicators.
Springer International Publishing. https://doi.org/10.1007/978-3
-030-02511-3
Hagen, N. T. (2008). Harmonic allocation of authorship credit:
Source-level correction of bibliometric bias assures accurate
publication and citation analysis. PLOS ONE, 3(12), e4021.
https://doi.org/10.1371/journal.pone.0004021, PubMed:
19107201
Halmos, P.. R.. (1950). Measure theory. D. van Nostrand Company,
Inc. https://doi.org/10.1007/978-1-4684-9440-2
Harter, S. P.. (1986). Online information retrieval: Concepts, princi-
ples, and techniques. Academic Press.
Henriksen, D. (2016). The rise in co-authorship in the social sci-
ences (1980–2013). Scientometrics, 107(2), 455–476. https://
doi.org/10.1007/s11192-016-1849-x
Hodge, S. E., & Greenberg, D. UN. (1981). Publication credit.
Science, 213(4511), 950.
Howard, G. S., Cole, D. A., & Maxwell, S. E. (1987). Research pro-
ductivity in psychology based on publication in the journals of
the American Psychological Association. American Psychologist,
42(11), 975–986. https://doi.org/10.1037/0003
-066X.42.11.975
Hu, X., Rousseau, R., & Chen, J. (2010). In those fields where mul-
tiple authorship is the rule, the h-index should be supplemented
by role-based h-indices. Journal of Information Science, 36(1),
73–85. https://doi.org/10.1177/0165551509348133
Kim, J., & Diesner, J. (2014). A network-based approach to coau-
thorship credit allocation. Scientometrics, 101(1), 587–602.
https://doi.org/10.1007/s11192-014-1253-3
Kyvik, S. (1989). Productivity differences, fields of learning, and
Lotka’s law. Scientometrics, 15(3–4), 205–214. https://doi.org
/10.1007/BF02017199
Lin, C.-S., Huang, M.-H., & Chen, D.-Z. (2013). The influences of
counting methods on university rankings based on paper count
and citation count. Journal of Informetrics, 7(3), 611–621. https://
doi.org/10.1016/j.joi.2013.03.007
Lindsey, D. (1980). Production and citation measures in the soci-
ology of science: The problem of multiple authorship. Social
Studies of Science, 10(2), 145–162. https://est ce que je.org/10.1177
/030631278001000202
Liu, X. Z., & Fang, H. (2012). Fairly sharing the credit of
multi-authored papers and its application in the modification of
h-index and g-index. Scientometrics, 91(1), 37–49. https://doi.org
/10.1007/s11192-011-0571-y
Lukovits, I., & Vinkler, P. (1995). Correct credit distribution: A
model for sharing credit among coauthors. Social Indicators
Research, 36(1), 91–98. https://doi.org/10.1007/BF01079398
Narin, F. (1976). Evaluative bibliometrics: The use of publication
and citation analysis in the evaluation of scientific activity.
Computer Horizons.
Nederhof, A. J., & Moed, H. F. (1993). Modeling multinational pub-
lication: Development of an on-line fractionation approach to
measure national scientific output. Scientometrics, 27(1), 39–52.
https://doi.org/10.1007/BF02017754
Nielsen, M.. W. (2017). Gender consequences of a national
performance-based funding model: New pieces in an old puzzle.
Studies in Higher Education, 42(6), 1033–1055. https://doi.org
/10.1080/03075079.2015.1075197
Papapetrou, P., Gionis, A., & Mannila, H. (2011). A Shapley value
approach for influence attribution. In D. Gunopulos, T. Hofmann,
D. Malerba, & M. Vazirgiannis (Eds.), Machine learning and
knowledge discovery in databases (pp. 549–564). Springer.
https://doi.org/10.1007/978-3-642-23783-6_35
Perianes-Rodriguez, A., Waltman, L., & Van Eck, N. J. (2016).
Constructing bibliometric networks: A comparison between full
and fractional counting. Journal of Informetrics, 10(4), 1178–1195.
https://doi.org/10.1016/j.joi.2016.10.006
Persson, R. A. X. (2017). Bibliometric author evaluation through
linear regression on the coauthor network. Journal of
Informetrics, 11(1), 299–306. https://doi.org/10.1016/j.joi.2017
.01.003
Price, D. J. de S. (1986). Little science, big science—And beyond.
Columbia University Press. http://derekdesollaprice.org/little
-science-big-science-full-text/
Rahman, M. T., Regenstein, J. M., Kassim, N. L. A., & Haque, N.
(2017). The need to quantify authors’ relative intellectual contri-
butions in a multi-author paper. Journal of Informetrics, 11(1),
275–281. https://doi.org/10.1016/j.joi.2017.01.002
Rousseau, R., Egghe, L., & Guns, R. (2018). Becoming metric-wise:
A bibliometric guide for researchers. Chandos Publishing.
Schneider, J. W. (2009). An outline of the bibliometric indicator
used for performance-based funding of research institutions in
Norway. European Political Science, 8(3), 364–378. https://doi
.org/10.1057/eps.2009.19
Shen, H.-W., & Barabási, A.-L. (2014). Collective credit allocation
in science. Proceedings of the National Academy of Sciences,
111(34), 12325–12330. https://doi.org/10.1073/pnas
.1401992111, PubMed: 25114238
Sivertsen, G. (2016). A bibliometric indicator with a balanced
representation of all fields. Proceedings of the 21st International
Conference on Science and Technology Indicators, 910–914.
Stallings, J., Vance, E., Yang, J., Vannier, M. W., Liang, J., … Wang, G.
(2013). Determining scientific impact using a collaboration
index. Proceedings of the National Academy of Sciences,
110(24), 9680–9685. https://doi.org/10.1073/pnas.1220184110,
PubMed: 23720314
Steinbrüchel, C. (2019). A citation index for principal investigators.
Scientometrics, 118(1), 305–320. https://doi.org/10.1007/s11192
-018-2971-8
Sugimoto, C. R., & Larivière, V. (2018). Measuring research: What
everyone needs to know. Oxford University Press. https://doi.org
/10.1093/wentk/9780190640118.001.0001
Todeschini, R., & Baccini, A. (2016). Handbook of bibliometric
indicators: Quantitative tools for studying and evaluating
research. Wiley-VCH Verlag GmbH & Co. KGaA. https://doi
.org/10.1002/9783527681969
Tol, R. S. J. (2011). Credit where credit’s due: Accounting for co-
authorship in citation counts. Scientometrics, 89(1), 291–299.
https://doi.org/10.1007/s11192-011-0451-5, PubMed:
21957320
Trueba, F. J., & Guerrero, H. (2004). A robust formula to credit
authors for their publications. Scientometrics, 60(2), 181–204.
https://doi.org/10.1023/B:SCIE.0000027792.09362.3F
Tsai, C., & Lydia Wen, M.. (2005). Research and trends in science
education from 1998 to 2002: A content analysis of publication
in selected journals. International Journal of Science Education,
27(1), 3–14. https://doi.org/10.1080/0950069042000243727
Tscharntke, T., Hochberg, M. E., Rand, T. A., Resh, V. H., & Krauss,
J. (2007). Author sequence and credit for contributions in multi-
authored publications. PLoS Biology, 5(1), e18. https://doi.org/10
.1371/journal.pbio.0050018, PubMed: 17227141
Van Hooydonk, G. (1997). Fractional counting of multiauthored
publications: Consequences for the impact of authors. Journal
of the American Society for Information Science, 48(10),
944–945. https://doi.org/10.1002/(SICI)1097-4571(199710)
48:10<944::AID-ASI8>3.0.CO;2-1
Vinkler, P. (1993). Research contribution, authorship and team co-
operativeness. Scientometrics, 26(1), 213–230. https://doi.org/10
.1007/BF02016801
Vinkler, P. (2010). The evaluation of research by scientometric
indicators. Chandos Publishing. https://doi.org/10.1533/9781780630250
Vitanov, N. K. (2016). Science dynamics and research production.
Springer International Publishing. https://doi.org/10.1007/978-3
-319-41631-1
Waltman, L. (2016). A review of the literature on citation impact
indicators. Journal of Informetrics, 10(2), 365–391. https://doi
.org/10.1016/j.joi.2016.02.007
Waltman, L., & Van Eck, N. J.. (2015). Field-normalized citation
impact indicators and the choice of an appropriate counting
method. Journal of Informetrics, 9(4), 872–894. https://doi.org
/10.1016/j.joi.2015.08.001
Wang, J.-P., Guo, Q., Yang, K., Han, J.-T., & Liu, J.-G. (2017).
Credit allocation for research institutes. EPL (Europhysics
Letters), 118(4), 48001. https://doi.org/10.1209/0295-5075/118
/48001
Weigang, L. (2017). First and Others credit-assignment schema for
evaluating the academic contribution of coauthors. Frontiers of
Information Technology & Electronic Engineering, 18(2), 180–194.
https://doi.org/10.1631/FITEE.1600991
Wien, C., Dorch, B. F., & Larsen, UN. V. (2017). Contradicting
incentives for research collaboration. Scientometrics, 112(2),
903–915. https://doi.org/10.1007/s11192-017-2412-0
Wildgaard, L. (2015). Measure up! The extent author-level bib-
liometric indicators are appropriate measures of individual-
researcher performance [University of Copenhagen]. https://curis
.ku.dk/portal/files/153371107/PhD_LornaWildgaard.pdf
Wildgaard, L. (2019). An overview of author-level indicators of
research performance. In W. Glänzel, H. F. Moed, U.
Schmoch, & M. Thelwall (Eds.), Springer handbook of science
and technology indicators (pp. 361–396). Springer International
Publishing. https://doi.org/10.1007/978-3-030-02511-3_14
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing domi-
nance of teams in production of knowledge. Science, 316(5827),
1036–1039. https://doi.org/10.1126/science.1136099, PubMed:
17431139
Xu, J., Ding, Y., Song, M., & Chambers, T. (2016). Author credit-
assignment schemas: A comparison and analysis. Journal of the
Association for Information Science and Technology, 67(8),
1973–1989. https://doi.org/10.1002/asi.23495
Zhang, C.-T. (2009). A proposal for calculating weighted citations
based on author rank. EMBO Reports, 10(5), 416–417. https://doi
.org/10.1038/embor.2009.74, PubMed: 19415071
Zou, C., & Peterson, J.. B. (2016). Quantifying the scientific output
of new researchers using the zp-index. Scientometrics, 106(3),
901–916. https://doi.org/10.1007/s11192-015-1807-z
Zuckerman, H. (1968). Patterns of name ordering among authors of
scientific papers: A study of social symbolism and its ambiguity.
American Journal of Sociology, 74(3), 276–291. https://doi.org/10
.1086/224641