RESEARCH ARTICLE
The spread of retracted research
into policy literature
Dmitry Malkov, Ohid Yaqub, and Josh Siepel
Science Policy Research Unit (SPRU), University of Sussex, Brighton, United Kingdom
an open access journal
Keywords: altmetrics, evidence-based policy, policy-based evidence, research impact, retractions
Citation: Malkov, D., Yaqub, O., & Siepel, J. (2023). The spread of retracted research into policy literature. Quantitative Science Studies, 4(1), 68–90. https://doi.org/10.1162/qss_a_00243
DOI:
https://doi.org/10.1162/qss_a_00243
Peer Review:
https://www.webofscience.com/api
/gateway/wos/peer-review/10.1162
/qss_a_00243
Received: 13 July 2022
Accepted: 29 December 2022
Corresponding Author:
Ohid Yaqub
o.yaqub@sussex.ac.uk
Handling Editor:
Ludo Waltman
ABSTRACT
Retractions warn users against relying on problematic evidence. Until recently, it has not been
possible to systematically examine the influence of retracted research on policy literature.
Here, we use three databases to measure the extent of the phenomenon and explore what it
might tell us about the users of such evidence. We identify policy-relevant documents that cite
retracted research, we review and categorize the nature of citations, and we interview policy
document authors. Overall, we find that 2.3% of retracted research is policy-cited. This seems
higher than one might have expected, similar even to some notable benchmarks for “normal”
nonretracted research that is policy-cited. The phenomenon is also multifaceted. First, certain
types of retracted research (those with errors, types 1 and 4) are more likely to be policy-cited
than other types (those without errors, types 2 and 3). Second, although some policy-relevant
documents cite retracted research negatively, positive citations are twice as common and
frequently occur after retraction. Third, certain types of policy organizations appear better at
identifying problematic research and are perhaps more discerning when selecting and
evaluating research.
1. INTRODUCTION
In 2020, amid the scramble for research to inform COVID-19 policy, one data set stood out.
The Surgisphere database purported to offer real-time patient data from thousands of hospitals
across five continents. It was the basis for a series of grand claims in an April 2020 preprint on
ivermectin, a May 2020 Lancet publication on hydroxychloroquine, and a May 2020 New
England Journal of Medicine publication on angiotensins. By June 2020, all three publications
were retracted after it was found that the data were falsified and patients were nonexistent.
Within a day of the Lancet publication, the WHO halted its trials of hydroxychloroquine in
response to the paper. A week later, but before the retraction, it reversed its decision, choosing
to disregard the evidence in the paper. In contrast, the Peruvian government had included
ivermectin in its treatment guidelines, citing the preprint heavily in its white paper. Ivermectin
remained on its therapeutic guidelines until 2021, long after the retraction.
Copyright: © 2023 Dmitry Malkov, Ohid
Yaqub, and Josh Siepel. Published
under a Creative Commons Attribution
4.0 International (CC BY 4.0) license.
The contrasting examples above illustrate our central interests in this study. To what extent
do retracted studies influence policy literature, before and after retraction? What might this say
about the users of such evidence?
Over the past decade, retraction, an esoteric feature of the scholarly publishing system, has
come increasingly into the spotlight. Retractions are intended to prevent the spread of
problematic research and help to ensure the integrity of the scientific record (Pfeifer & Snodgrass,
1990). As the number of retracted studies has grown (Brainard, 2018), so too has interest in
retractions as a mechanism of scientific self-correction (Steen, Casadevall, & Fang, 2013; Van
Noorden, 2011)1. Guidelines for retraction were proposed (COPE Council, 2009), and an
influential blog called Retraction Watch (RW) laid the groundwork for the first comprehensive
database of retracted articles (Marcus & Oransky, 2014).
The use of retracted research by other researchers, in the form of scholarly citations, has
been one of its most well-studied aspects. Retraction can take up to several years, during
which problematic publications remain unchallenged and can influence other scholars’
research direction, methodology, and results (Teixeira da Silva & Bornemann-Cimenti,
2017). Even after retraction, articles often continue to be cited as valid evidence, albeit at a
reduced rate (Bar-Ilan & Halevi, 2017, 2018; Dinh, Sarol et al., 2019; Furman, Jensen, &
Murray, 2012; Redman, Yarandi, & Merz, 2008).
This emphasis on scholarly citations of retracted articles, however, has neglected other
important ways in which retracted research may exert an influence. The use of research in
policy and practice is one such area, where retracted studies can produce similarly disruptive
effects, sometimes with life-threatening consequences (Marcus, 2018; Steen, 2011). One rea-
son for the neglect is the relative paucity of databases recording instances where research is
used, and cited, in policy literature.
The recent arrival of new databases has opened new possibilities for those seeking a broad
estimate of the cognitive influence of retracted research on policy (Tattersall & Carroll, 2018).
The emergence of altmetrics has prompted renewed study of research in the media, online
communities, and, most notably for our purposes, policy-relevant documents (Haunschild
& Bornmann, 2017). So far, there have been few attempts to apply them to the study of
retracted research.
We deploy a mixed method approach that makes it possible to analyze the spread of
retracted research in policy-relevant documents. By combining data from RW, Overton, and
Altmetric, alongside data gathered from interviews, we analyze how retracted articles make
their way into policy-relevant documents before and after the retraction, how exactly they are
cited, and why this might be happening. Our results suggest that retracted research does
indeed creep into policy literature frequently, perhaps even as much as nonretracted
research—but it does so unevenly.
2. LITERATURE REVIEW
2.1. Retracted Research and Its Afterlife
Retraction seems to be a ubiquitous feature of research. There have been numerous studies
on the spread of retracted research in individual fields2. Some studies have also focused on
individual authors whose retracted publications continue to influence academic literature,
clinical practice, or even public perception of science (Bornemann-Cimenti, Szilagyi, &
1 By 2022, there have already been over 200 retracted papers on COVID-19.
2 These include genetics (Dal-Ré & Ayuso, 2019, 2020), anesthesiology (Nair, Yean et al., 2020), obstetrics
and gynecology (Chambers, Michener, & Falcone, 2019), chemistry and material science (Coudert, 2019),
dentistry (Theis-Mahon & Bakker, 2020), engineering (Rubbo, Pilatti, & Picinin, 2019), library and informa-
tion science (Ajiferuke & Adekannbi, 2020), cancer research (Bozzo, Bali et al., 2017; Hamilton, 2019) and
orthopaedics (Rai & Sabharwal, 2017; Yan, MacDonald et al., 2016).
Sandner-Kiesling, 2016; Kochan & Budd, 1992; Korpela, 2010; McHugh & Yentis, 2019;
Suelzer, Deal et al., 2019). These studies have consistently shown that retracted research
garners citations even after retraction.
Empirical approaches tend to focus on pre- and postretraction citations. Studies have exam-
ined the spillover effect of retracted research on adjacent subject fields (Azoulay, Furman
et al., 2015), career productivity of authors and associated authors (Azoulay, Bonatti, &
Krieger, 2017; Jin, Jones et al., 2019; Mongeon & Larivière, 2016) or, in the case of retracted
medical research, the effect on patients’ wellbeing (Steen, 2011). These studies show that the
impact of continued citation of retracted studies is considerable.
To understand why retracted research continues to spread even after the retraction, one line
of inquiry has examined postretraction citation context, and specifically whether a citation
acknowledges the retraction (Decullier & Maisonneuve, 2018; Fulton, Coates et al., 2015;
Gray, Al-Ghareeb, & McKenna, 2019; Moylan & Kowalczuk, 2016; Vuong, La et al., 2020).
Early work by Budd, Sievert, and Schultz (1998) showed that only a fraction of citations
acknowledged the retraction, whereas the majority implicitly or explicitly used retracted
research as valid evidence. This has been confirmed across different fields, time periods,
and data sources (Bar-Ilan & Halevi, 2017). Together, these studies suggest that limited dissem-
ination of retraction information may play a role in why we see persistent citation of retracted
studies.
However, the retraction signal is at least partially effective. Studies employing careful
controls observed a 65% decline in citation rates after the retraction (Furman et al., 2012).
Other studies supported this conclusion and there seems to be some consensus on the fact
that citations decrease after the retraction (Dinh et al., 2019; Pfeifer & Snodgrass, 1990).
The implication still remains that if some citations continue to accumulate after the retraction,
more needs to be done to curb the spread. An analogous phenomenon could be lurking in the
policy arena.
2.2. Spread of Retracted Research Outside of Academia
Concern about retracted research spreading beyond academic circles has been growing
recently. A number of studies have been spurred by the availability of Altmetric data measur-
ing online attention. One reported a positive association between Altmetric Score and shorter
time to retraction, implying that problematic studies receiving more online attention tend to be
scrutinized more and retracted faster (Shema, Hahn et al., 2019).3 Jan and Zainab (2018)
examine postretraction online mentions and report the number of citations in different
Altmetric sources for a handful of retracted studies. Similarly, Bar-Ilan and Halevi (2018)
analyzed how 995 retracted publications were cited in scholarly literature, Twitter, news,
blogs and Wikipedia, and the number of Mendeley readers these publications had. Another
study used Altmetric Score to show the broader impacts of retracted research (Feng, Yuan, &
Cui, 2020). These studies demonstrate the potential of looking beyond scholarly citations,
but they remain small in scale.
The most recent and relevant study investigated Altmetric Attention Scores (AAS) of
retracted articles from the RW database and found that retracted articles had higher AAS than
control unretracted articles, although for popular articles, preretraction attention far exceeded
3 This line of thought was taken further by Copiello (2020) who used five altmetric sources in an attempt to
predict retracted publications.
postretraction attention (Serghiou, Marton, & Ioannidis, 2021). As seen in the above exam-
ples, although there has been work in combining retraction data with altmetric indicators,
most of it used AAS, which mixes attention from very different online sources from Twitter to
YouTube. The heterogeneity of these sources and different rates of data accumulation
(Sugimoto & Larivière, 2018) might undermine the linkage between the indicator and real-life
phenomena that the indicator attempts to measure (Small, 1978). In this respect, a focus on
one altmetric source could be more warranted. Policy-relevant documents represent one such
source that has been described among the most high-value altmetric indicators due to its
relationship with practice and arguably even societal impacts (Bornmann, Haunschild, &
Marx, 2016; Tahamtan & Bornmann, 2020). However, no work in this direction has been
undertaken yet.
2.3. Measuring Research Use by Tracing Back from Policy Literature
Various methodologies have been proposed to measure research use in policy, including inter-
views and documentary analysis (Hanney, Gonzalez-Block et al., 2003). Studies based on
interviews are common and often attempt to qualitatively assess perceptions of research use
among policymakers, identify barriers and facilitators of research use, and understand
research selection and appraisal procedures (Elliott & Popay, 2000; Hyde, Mackie et al.,
2016; Innvær, Vist et al., 2002). For example, interviews with health authorities and
researchers were used in combination with the analysis of project documentation in a study
of research utilization practices in the NHS (Elliott & Popay, 2000). In line with Weiss’s inter-
active model (Weiss, 1979), the authors found that research was often used in indirect ways,
rather than to provide answers (Elliott & Popay, 2000). The same conclusion was reached
by another interview-driven study on health policies for children in foster care (Hyde et al.,
2016). Notably, this study also found that policymakers were producing research evidence
themselves in addition to research use in the more standard interactive model (Hyde et al.,
2016).
Studies involving documentary analysis often aim to trace back from a starting point to find
prior research inputs as evidence of research utilization (Grant, Cottrell et al., 2000; Innvær,
2009; Kryl, Allen et al., 2012; Newson, Rychetnik et al., 2018; Zardo, Collie, & Livingstone,
2014). As one example, a backward tracing study of transport injury compensation policies in
Australia used quantitative content analysis of 128 policy documents (Zardo et al., 2014). By
analyzing references to research, the authors found that academic evidence was the least
used, and most references drew on clinical evidence and internal policies (Zardo et al.,
2014). Another example of documentary backward tracing comes from bibliometric analyses
of research publications featured in NICE guidelines (Grant et al., 2000; Kryl et al., 2012).
This type of analysis can be useful in identifying potentially interesting features of research
used in policy documents. For example, the studies pointed out that NICE guidelines fea-
tured U.K. and U.S. research more than other international studies (Grant et al., 2000; Kryl
et al., 2012).
Our search identified only one study that used more detailed citation analysis to eval-
uate how exactly research is cited in policy documents (Newson et al., 2018). This study
offers an important critique of attempts to use policy document analysis for the purpose of
linking research to actual policies and especially impacts. The authors note that using
policy citations of research, without examination of the nature of the citation context, does
not shed light on why individual publications are chosen (Newson et al., 2018). One way to
address this is mixed method triangulation (Williamson & Johanson, 2018) and combining
interviews with documentary analysis (Hutchinson, 2011; Nabyonga-Orem, Ssengooba
et al., 2014)4.
2.4. Measuring Research Use by Tracing Forward from Research
A more fundamental shortcoming of backward tracing approaches is that, by selecting on the
dependent variable (es decir., use in policy), the approach inherently limits the scope for systematic
analysis of research and its characteristics as potential explanatory variables. Over the past
decade, several tools have emerged that greatly facilitate forward tracing analysis of research
utilization using big data sets of scientific publications, most notably Altmetric and Overton.
This section addresses their coverage, along with their possibilities and limitations for the study
of research use in policy.
Much of the research involving Altmetric has centered on its relationship with traditional
scientometric indicators (Costas, Zahedi, & Wouters, 2015), potential for research impact
evaluation (Bornmann, 2014; Tahamtan & Bornmann, 2020) and various analyses of scholarly
use of social media (Sugimoto, Work et al., 2017). Little attention has been dedicated to the
phenomenon of research use in policy5. However, two studies that have investigated the cov-
erage of research within Altmetric policy data offer useful parallels for our inquiry. These stud-
ies reported that less than 0.5% of all Web of Science publications (Haunschild & Bornmann,
2017) and 1.2% of Web of Science climate change publications (Bornmann et al., 2016) were
cited in Altmetric-tracked policy-relevant documents.
Overton, another altmetrics company, was launched in 2018 by the founder of Altmetric
and since then has been providing services to universities, research funding organizations, and
NGOs who wish to understand their policy impact (Adie, 2020). Compared to Altmetric,
Overton specializes exclusively in policy-relevant documents6. The overall coverage was
found to be at 3.9% (Fang, Dudek et al., n.d.), which was higher than Altmetric’s in similar
studies (Bornmann et al., 2016; Haunschild & Bornmann, 2017) although Altmetric’s coverage
may have grown since those studies took place.
4 An elaborate version of such a mixed methodology called SAGE (Staff Assessment of Engagement with Evi-
dence) was proposed to evaluate how policymakers search for, appraise, and use research to inform policies
(Makkar, Brennan et al., 2016; Redman, Turner et al., 2015). What makes SAGE quite unique is that the
interviews do not just address research utilization broadly, but focus on the development of a specific policy
document that the interviewee was involved in (Redman et al., 2015). The SAGE framework was conse-
quently used in a large-scale study of research use assessment in Australian health policies (Williamson
& Johanson, 2018).
5 Some notable exceptions include the analysis of policy uptake of about 1,500 research publications by
authors from the University of Sheffield (Tattersall & Carroll, 2018) and a recent study done by the same
group on policy impact of NIHR trials (Carroll & Tattersall, 2020). Most other studies used policy documents
along with other altmetric sources and not as a standalone indicator.
6 Overton’s coverage is also qualitatively different from Altmetric and is said to include more policy docu-
ments related to economics (Adie, personal communication, October 30, 2020). An unpublished study by
Leiden University conducted an extensive analysis of Overton’s coverage of research citations and their
distribution across various subject fields (Fang et al., n.d.). The study reported that publications in social
sciences and humanities had the highest coverage in Overton, with life and earth sciences being the second
and biomedical and health sciences the third (Fang et al., n.d.). Another recent analysis used Overton to
track the coevolution of COVID-19 research and policy (Yin, Gao et al., 2021). Overton has also been used
in a study suggesting that cross-disciplinary research is more often utilized by policy documents (Pinheiro,
Vignola-Gagné, & Campbell, 2021).
Together, these studies offer a helpful set of benchmarks, ranging between 0.5% and
3.9%, for the share of normal nonretracted research publications that are cited in policy
relevant documents. A priori, we would expect retracted research to be cited in policy-
relevant documents much less than this, given what we know so far about the impact of
retraction on citations. Sin embargo, the phenomenon may be more multifaceted and varied
for policy citations, because policy users and research users are likely to have different cita-
tion practices.
3. DATA AND METHODS
3.1. Data Collection
We began by collecting retracted publications drawn from the RW database in May 2020
along with their publication title, date, journal, author names and affiliations, unique identi-
fiers (DOI and PubMed ID), and retraction-specific information, such as retraction date and
reason. We also collected citation counts via the Crossref API for each retracted publication.
We then matched these retracted publications, using their DOIs, with two databases of
policy-relevant documents: Altmetric and Overton. We retrieved policy-relevant documents
that cited RW articles in November 2020 using researcher API access. Combining data from
these two sources improved matching with RW publications, but also presented some
challenges7.
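To make the matching step concrete, the sketch below shows one way the Crossref lookup could be scripted in R, the language used for the analysis in this study. It is a minimal illustration under stated assumptions, not the authors' actual pipeline: the `rw` data frame, its `doi` column, and the placeholder DOI are hypothetical, and the Altmetric and Overton researcher APIs (which require access credentials) are not shown.

# Minimal sketch (not the authors' code): look up Crossref citation counts for
# retracted DOIs taken from a hypothetical Retraction Watch export `rw`.
library(httr)
library(jsonlite)

crossref_citations <- function(doi) {
  url  <- paste0("https://api.crossref.org/works/", URLencode(doi, reserved = TRUE))
  resp <- GET(url)
  if (status_code(resp) != 200) return(NA_integer_)       # DOI not found or API error
  msg  <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))$message
  cites <- msg$`is-referenced-by-count`                   # Crossref's citation count field
  if (is.null(cites)) return(NA_integer_)
  as.integer(cites)
}

rw <- data.frame(doi = c("10.1000/example.doi"), stringsAsFactors = FALSE)  # placeholder DOI
rw$crossref_citations <- vapply(rw$doi, crossref_citations, integer(1))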
Próximo, we downloaded all policy-relevant documents in PDF format following URL links
provided by the policy databases. When we could not download a document using the link,
we attempted to retrieve it from alternative sources. If the link led to a web page with multiple
PDF documents, all of them were checked to identify the correct one. When these procedures
failed, the document was labeled as “not found.” Last, we removed documents that were not
in English, or were clearly not policy documents.8
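Only the automated download step lends itself to a sketch; the manual fallbacks and language screening described above are not shown. The helper name and its use of base R's download.file are illustrative assumptions, not the authors' code.

# Minimal sketch: attempt to download one policy document PDF and label failures
# as "not found" (manual retrieval of missing documents is handled separately).
download_policy_pdf <- function(url, dest) {
  ok <- tryCatch({
    download.file(url, destfile = dest, mode = "wb", quiet = TRUE)
    TRUE
  }, warning = function(w) FALSE, error = function(e) FALSE)
  if (ok) "downloaded" else "not found"
}

download_policy_pdf("https://example.org/report.pdf", tempfile(fileext = ".pdf"))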
3.2. Data Analysis
We adopted citation as the main unit of analysis, with supplementary analyses at the level of
policy-relevant documents and retracted publications. This is because individual policy-
relevant documents could contain references to multiple retracted publications, and, con-
versely, individual retracted publications could be referenced in multiple policy-relevant
documents.
Following research in citation context analysis (Bornmann & Daniel, 2008; Tahamtan &
Bornmann, 2019), we assume that policy-relevant documents can also cite research either
positively or negatively. We categorized policy citations of retracted articles as either
7 Duplicate policy-relevant documents from different databases could not always be identified using auto-
mated approaches, as their titles, URLs, and even policy organizations could be different depending on
the database of origin. Therefore, additional cleaning, document disambiguation, and sorting out of docu-
ments in other languages had to be done manually.
8 Such nonpolicy documents were also identified by other researchers during a recent qualitative analysis of
Overton (Pinheiro et al., 2021). These documents could be considered as data artifacts of policy-tracking
tools and included conference programs, personal CVs and other documents where the reference had little
to do with the actual content of the document.
Table 1. Feng et al. (2020) typology of retraction reasons based on Retraction Watch data

                          Academic misconduct: Yes                    Academic misconduct: No
Scientific errors: Yes    Type 1: Error and academic misconduct       Type 4: Error and no academic misconduct
Scientific errors: No     Type 2: No error and academic misconduct    Type 3: No error and no academic misconduct
positive/neutral or negative/exclusionary, and as either acknowledging the cited publication as
retracted or not. To this end, each citing policy document was reviewed to locate the citation
within the text and bibliography and to assign the codes. The complete categorization manual
can be found in the Supplementary material.
Coding on the full data set was done by one author, and a random 10% sample was coded
by another author, which made it possible to calculate Cohen’s kappa intercoder reliability
score. The values of Cohen’s kappa were interpreted according to the following scale: 0—poor
agreement; 0.01 to 0.20—slight; 0.21 to 0.40—fair; 0.41 to 0.60—moderate; 0.61 to 0.80—
substantial; 0.81 to 1.00—almost perfect (Landis & Koch, 1977).
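As a hedged illustration of the agreement calculation, the base-R sketch below computes Cohen's kappa from two coders' labels on the same citations. The coder vectors are toy data, not the study's coding sheet.

# Minimal sketch: Cohen's kappa for two coders who labeled the same citations.
# Toy vectors stand in for the 10% double-coded sample.
cohens_kappa <- function(coder1, coder2) {
  labels <- sort(union(coder1, coder2))
  tab <- table(factor(coder1, levels = labels), factor(coder2, levels = labels))
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                          # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2      # agreement expected by chance
  (po - pe) / (1 - pe)
}

coder1 <- c("positive", "positive", "negative", "positive", "negative")
coder2 <- c("positive", "negative", "negative", "positive", "negative")
cohens_kappa(coder1, coder2)   # about 0.62, "substantial" on the Landis & Koch scale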
We calculated time from publication to retraction, from publication to policy, and from
retraction to policy in full years for each pair of documents. We used time from retraction
to policy to distinguish between pre- and postretraction citations. We analyzed the distribution
of citation types and retraction acknowledgment before and after the retraction.9
Although some studies of retracted literature used citation time windows to account for
publishing delays, no such measures were adopted in this study. The main reason for this is
that, contrary to scholarly literature, publication delays in policy literature are likely to exhibit
wider variation. Some policy documents (e.g., policy briefs) could be expected to be published
within days, but others could take years (e.g., guidelines or committee reviews). So, in this
study, citations received during the year of retraction were assumed to be in the “grey area”
neither before nor after the retraction.
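The base-R sketch below illustrates one reading of this rule: calendar-year differences between hypothetical date columns, with citations falling in the retraction year set aside as the grey area. The data frame, column names, and dates are illustrative assumptions.

# Minimal sketch: classify each citation as before, during, or after the retraction
# year, using calendar-year differences on a toy citation-level table.
citations <- data.frame(
  retraction_date = as.Date(c("2013-09-01", "2016-01-10", "2005-07-01")),
  policy_date     = as.Date(c("2012-05-20", "2016-11-03", "2011-02-14"))
)

year_of <- function(d) as.integer(format(d, "%Y"))

citations$retraction_to_policy <- year_of(citations$policy_date) - year_of(citations$retraction_date)

# Citations dated in the retraction year fall into the "grey area",
# counted as neither pre- nor postretraction.
citations$period <- cut(citations$retraction_to_policy,
                        breaks = c(-Inf, -1, 0, Inf),
                        labels = c("Before retraction", "Retraction year", "After retraction"))
citations
# retraction_to_policy: -1, 0, 6  ->  Before retraction, Retraction year, After retraction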
We analyzed retraction reasons using data from the RW database. However, because each
publication in RW could be assigned several reasons from a list of more than 80, the analysis
relied on the methodology from Feng et al. (2020). Their approach grouped retracted publi-
cations according to four retraction types (Table 1). Detailed explanation of the procedure is
available in the Supplementary material.
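The grouping itself reduces to a two-by-two rule, sketched below in R. Which of the 80+ RW reasons counts as a scientific error or as academic misconduct follows the Supplementary material; here those judgments are assumed to be precomputed as two logical flags.

# Minimal sketch: derive the Feng et al. (2020) type from two indicator flags.
# The mapping of individual RW retraction reasons onto these flags is assumed
# to have been done already (see the Supplementary material).
feng_type <- function(has_error, has_misconduct) {
  ifelse(has_error & has_misconduct,   "Type 1",   # error and academic misconduct
  ifelse(!has_error & has_misconduct,  "Type 2",   # misconduct without error
  ifelse(!has_error & !has_misconduct, "Type 3",   # neither error nor misconduct
                                       "Type 4"))) # error without misconduct
}

feng_type(has_error      = c(TRUE,  FALSE, FALSE, TRUE),
          has_misconduct = c(TRUE,  TRUE,  FALSE, FALSE))
# "Type 1" "Type 2" "Type 3" "Type 4"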
We used organization data obtained from policy databases for further analyses. Policy orga-
nization names were extracted and disambiguated separately. We categorized each organiza-
tion into one of four types: Government, IGO, NGO/Think Tank, or Aggregator. We adopted
this classification from Overton, where it was already applied to some organizations. The extra
coding was mainly aimed at classifying policy-relevant documents from Altmetric, where no
such classification existed in the data.
We reported data as counts and percentages when categorical, and as medians and inter-
quartile ranges when continuous. We conducted a Wilcoxon rank-sum test to compare the
9 We note that data from policy databases were not always reliable with respect to the publication date of
policy documents. Presumably, when parsing policy repositories, policy-tracking tools gather these meta-
data based on the date when the document was deposited in a policy repository. This date did not always
correspond to the exact date on the document, which affected the subsequent calculation of such param-
eters as time from retraction to policy. Therefore, when the retrieved policy date differed from the date on the
document, we made a correction in favor of the latter.
distribution of continuous variables between the entire RW data and the policy-cited sample, and
between groups of documents with negative and positive citations. We used a Kruskal-Wallis
test to compare continuous variables between the time intervals: before retraction, retraction
year, after retraction. We used a chi-square test for comparisons of categorical variables where
all categories had ≥5 observations, and Fisher’s exact test when at least some categories had
<5 observations. p-values were considered statistically significant when below 0.05. Statistical
analysis was performed in R version 3.6.2.
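For illustration only, the sketch below runs the named tests on toy data with made-up column names; it shows which base-R functions correspond to the tests reported, and does not reproduce the study's data or results.

# Minimal sketch of the reported tests on toy data; variable names are illustrative.
set.seed(1)
d <- data.frame(
  crossref_citations = rpois(60, 20),
  cited_in_policy    = sample(c("yes", "no"), 60, replace = TRUE),
  period             = sample(c("before", "retraction year", "after"), 60, replace = TRUE),
  citation_type      = sample(c("positive/neutral", "negative/exclusionary"), 60, replace = TRUE),
  org_type           = sample(c("Government", "IGO", "NGO/Think Tank", "Aggregator"), 60, replace = TRUE)
)

wilcox.test(crossref_citations ~ cited_in_policy, data = d)   # two-group continuous comparison
kruskal.test(crossref_citations ~ period, data = d)           # continuous variable across the three intervals
chisq.test(table(d$citation_type, d$org_type))                # categorical, expected counts >= 5
fisher.test(table(d$citation_type, d$period))                 # categorical with sparse categories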
3.3. Interviews
To explore specific types of citing behavior that might lead to positive postretraction citations,
we conducted 30-minute semistructured interviews with authors of policy reports who posi-
tively cited retracted publications after the retraction. We sent invitations to prospective par-
ticipants, conditional on the availability of author names in corresponding policy reports and
the availability of authors’ email addresses in the public domain. This resulted in 61 email
invitations and 10 interviews, with interviewees drawn from the United States, Canada,
Australia, New Zealand, France, Switzerland, Germany, and the United Kingdom, and repre-
senting a range of policy organizations including IGOs, NGOs, and government agencies.
Our interviews aimed to address aspects of research use in policy in relation to specific
documents, focusing on research engagement actions and types of research use (see Makkar
et al., 2016). We added additional questions regarding the interviewee’s familiarity with retrac-
tions and the context of the particular citation. We transcribed the interviews and coded them
using the NVivo software to identify recurrent topics.
4. RESULTS
4.1. How Frequently Is Retracted Research Cited in Policy Literature Before and After the Retraction?
Of the 21,424 unique publications in the RW database, we found 16,095 unique publications
with DOIs. Data retrieval from Altmetric produced 167 publication matches, which amounted
to 1% coverage of RW publications. These publications were cited 305 times in 266 docu-
ments. In turn, Overton provided a higher matching rate of 437 (2.7% coverage) publications,
which were cited 852 times in 731 policy documents. Additional curation of the merged
Altmetric and Overton data removed duplicates (n = 199), non-English documents (n =
219), documents that could not be downloaded or where correct citations were not found
(n = 71), and cases that were not actual policy documents (n = 24).
The clean data set amounted to 367 (2.3% coverage) publications cited 644 times in 563
policy documents. The flowchart outlining the entire procedure is shown in Figure 1.
We compared the 367 retracted publications cited in policy documents with the entire
population of publications with DOIs in the RW database. We found significant (p < 0.001)
differences between these two groups. Retracted publications cited in policy tended to have a
longer time to retraction, more Crossref citations, and fewer type 3 retractions (no error and no
misconduct). In fact, 98% of policy-cited publications had one or more scholarly citations, as
opposed to 66% in the complete RW data. Summary statistics comparing complete RW data
with the sample are presented in Table 2.
We categorized citations of retracted publications in policy documents as either positive/neutral
or negative/exclusionary, and as either acknowledging the retraction or not. Cohen’s kappa score
was 0.88 for the first variable and 0.78 for the second, indicating at least substantial agreement.
Figure 1. Key data collection and cleaning steps.
The analysis of 644 citations in 563 documents is summarized in Table 3 as a comparison
between intervals relative to retraction. We found significant (p < 0.001) differences between
the groups with respect to time variables, Crossref citations, and retraction acknowledgment.
We also compared positive/neutral and negative/exclusionary citations, presented in
Table 4. We found significant (p < 0.001) differences with respect to time variables, citation
period, retraction citation types, and policy organization types.
Most publications received 1–3 citations, and very few publications received more
(Figure 2). Two publications received the biggest share of citations, namely the Wakefield,
Murch et al. (1998) study on the alleged link between autism and MMR vaccine and the
Estruch, Ros et al. (2013) study on health benefits of the Mediterranean diet. Notably, most
citations of the Wakefield et al. study were negative (n = 54), whereas the opposite was the
case for Estruch et al., with 32 citations being positive.
4.2. What Shares of Pre- and Postretraction Citations in Policy Documents Are Negative and
Acknowledge the Retraction Itself?
Most citations (72%) occurred in positive/neutral context, and the remaining citations were
negative/exclusionary. More citations in general, and more positive citations in particular,
Table 2. Comparison of article-level characteristics between complete RW data and sample

Characteristic          Retraction Watch1 (N = 16,095)   Sample1 (N = 367)   p-value2
Time to retraction      1 [0, 3]                         3 [1, 6]            < 0.001
  (Missing)             33                               0
Crossref citations      2 [0, 11]                        31 [10, 76]         < 0.001
  (Missing)             593                              0
Retraction type                                                              < 0.001
  Type 1                3,546 (22%)                      104 (28%)
  Type 2                4,289 (27%)                      118 (32%)
  Type 3                4,123 (26%)                      22 (6.0%)
  Type 4                2,534 (16%)                      111 (30%)
  Unknown                1,603 (10.0%)                    12 (3.3%)
  (Missing)             0                                0

1 Statistics presented: Median [IQR]; n (%).
2 Statistical tests performed: Wilcoxon rank-sum test; chi-square test of independence.
were identified after the retraction than before, given that citations during the retraction year
were counted separately. The distribution of citation types across time from retraction to policy
document is visualized in Figure 3. The pattern indicates that positive citations declined after
the retraction, but negative citations increased. The presence of negative citations in years
preceding the retraction is notable, as they indicate that some documents questioned or crit-
icized problematic research before it became retracted. However, the persistence of positive
citations after the retraction is equally notable and points to potential problems with the dis-
semination of retraction information.
This observation is reinforced by the distribution of citations that acknowledged the retrac-
tion across the same time intervals (Figure 4). The data suggest that even when retracted pub-
lications were cited negatively, their publication status was correctly acknowledged only in
48% of cases. Notably, some policy documents began to acknowledge retractions during
the retraction year. This supports the idea that although publishing delays can also exist in
the policy arena, it is not uncommon to see documents published quickly—the same year
as the actual writing and citing were done.
4.3. Are Different Types of Retracted Research Cited Differently in Policy Documents?
Figure 5 shows whether publications with different retraction types tend to be cited positively
or negatively over time. The most severe Type 1 (error and misconduct) publications began to
be cited negatively as early as 10 years before the retraction and continued to accumulate
negative citations more than 10 years after.
In contrast, Type 3 (no error and no misconduct) publications were barely cited negatively
at all. Lastly, Type 4 (error and no misconduct) publications attracted the highest number of
positive citations prior to retraction and also continued to receive positive citations long after.
Table 3. Citation characteristics before, during, and after retraction year

Characteristic                    Overall1 (N = 644)   Before retraction1 (N = 261)   Retraction year1 (N = 91)   After retraction1 (N = 292)   p-value2
Time from retraction to policy    0.0 [−2.0, 3.0]      −3.0 [−5.0, −1.0]              0.0 [0.0, 0.0]              3.0 [2.0, 6.0]                <0.001
Time from publication to policy   4.0 [2.0, 7.2]       3.0 [1.0, 5.0]                 2.0 [1.0, 4.0]              7.0 [4.0, 11.0]               <0.001
Citation type                                                                                                                                   <0.001
  Negative/exclusionary           178 (28%)            37 (14%)                       16 (18%)                    125 (43%)
  Positive/neutral                466 (72%)            224 (86%)                      75 (82%)                    167 (57%)
Retraction acknowledgment                                                                                                                       <0.001
  Acknowledged                    86 (13%)             0 (0%)                         11 (12%)                    75 (26%)
  Not acknowledged                558 (87%)            261 (100%)                     80 (88%)                    217 (74%)
Retraction type
  Type 1                          212 (33%)            98 (38%)                       25 (27%)                    89 (30%)
  Type 2                          156 (24%)            57 (22%)                       14 (15%)                    85 (29%)
  Type 3                          31 (4.8%)            2 (0.8%)                       9 (9.9%)                    20 (6.8%)
  Type 4                          232 (36%)            102 (39%)                      43 (47%)                    87 (30%)
  Unknown                         13 (2.0%)            2 (0.8%)                       0 (0%)                      11 (3.8%)
Policy organization type                                                                                                                        0.4
  Aggregator                      76 (12%)             35 (13%)                       11 (12%)                    30 (10%)
  Government                      308 (48%)            113 (43%)                      41 (45%)                    154 (53%)
  IGO                             104 (16%)            43 (16%)                       17 (19%)                    44 (15%)
  NGO/Think Tank                  156 (24%)            70 (27%)                       22 (24%)                    64 (22%)

1 Statistics presented: Median [IQR]; n (%).
2 Statistical tests performed: Kruskal-Wallis test; chi-square test of independence.
4.4. Which Policy Organizations Demonstrate a Better Ability to Identify Retracted Research
(or Research That Will Be Eventually Retracted)?
There were 146 policy organizations in the combined Altmetric-Overton data, including orga-
nizations with parent-daughter relationships or identical organizations with different names.
After accounting for these differences, the count decreased to 98 unique organizations. These
organizations were classified into policy organization types. Summary statistics for the distri-
bution of organization types between citation intervals and citation types can be found in
Tables 3 and 4 respectively. The most frequent type was found to be Government organiza-
tions (46%) followed by NGO/Think Tanks (24%), IGO (16%) and Aggregators (12%), which
reflected the coverage of organization types in Overton data. We found no significant differ-
ences between organization types with respect to citation intervals from retraction. However,
with respect to citation types, government organizations were found to cite retracted research
more negatively, and the opposite was true for other types.
Table 4. Comparison summary of citation types

Characteristic                    Overall1 (N = 644)   Negative/exclusionary1 (N = 178)   Positive/neutral1 (N = 466)   p-value2
Time from retraction to policy    0.0 [−2.0, 3.0]      2.0 [0.0, 5.0]                     0.0 [−2.0, 2.0]               < 0.001
Time from publication to policy   4.0 [2.0, 7.2]       7.0 [3.0, 12.0]                    3.0 [2.0, 6.0]                < 0.001
Citation period                                                                                                         < 0.001
  Before retraction               261 (41%)            37 (21%)                           224 (48%)
  Retraction year                 91 (14%)             16 (9.0%)                          75 (16%)
  After retraction                292 (45%)            125 (70%)                          167 (36%)
Retraction acknowledgment                                                                                               < 0.001
  Acknowledged                    86 (13%)             86 (48%)                           0 (0%)
  Not acknowledged                558 (87%)            92 (52%)                           466 (100%)
Retraction type                                                                                                         < 0.001
  Type 1                          212 (33%)            84 (47%)                           128 (27%)
  Type 2                          156 (24%)            37 (21%)                           119 (26%)
  Type 3                          31 (4.8%)            3 (1.7%)                           28 (6.0%)
  Type 4                          232 (36%)            49 (28%)                           183 (39%)
  Unknown                         13 (2.0%)            5 (2.8%)                           8 (1.7%)
Policy organization type                                                                                                < 0.001
  Aggregator                      76 (12%)             9 (5.1%)                           67 (14%)
  Government                      308 (48%)            112 (63%)                          196 (42%)
  IGO                             104 (16%)            17 (9.6%)                          87 (19%)
  NGO/Think Tank                  156 (24%)            40 (22%)                           116 (25%)

1 Statistics presented: Median [IQR]; n (%).
2 Statistical tests performed: Wilcoxon rank-sum test; chi-square test of independence; Fisher’s exact test.
Figure 6 provides an overview of how documents authored by different organization types
cited retracted publications before and after the retraction. The visualization highlights the shift
towards negative citations after the retraction for government organizations. Other types
tended to cite retracted research positively both before and after the retraction, with IGOs
and Aggregators showing the most pronounced pattern.
4.5. If Retracted Studies Are Cited Positively After the Retraction, Which Aspects of Research
Selection and Appraisal by Authors of Policy Reports Can Explain This Citing Behavior?
From the interviews, we identified recurrent themes relating to research selection and
appraisal in policy and spread of retracted research. Many of the interviewees (n = 8) identified
as academics in either the present or the past. In several cases participants were reluctant to
describe their work as policy documents, referring to them instead as policy-relevant research.
Figure 2. Distribution of policy citations among retracted articles.
When asked about research utilization in policy, all interviewees (n = 10) acknowledged
looking for scientific research to inform their documents. However, most (n = 7) also men-
tioned that this process was intuitive and not steered by any guidelines for research selection
and appraisal. Only two participants mentioned such guidelines in their organizations.
The interviewees also pointed out that they used Google Scholar as their primary search
tool (n = 5), sometimes because they lacked access to subscription databases (n = 3). They
also reported using bibliographic managers (n = 3) to keep track of the literature.
Another part of the interview addressed retracted research and its inadvertent spread in policy
documents. Respondents acknowledged that retractions could be easy to overlook (n = 5) and
attempted to identify some reasons why positive citations might slip in. For example, some men-
tioned that checking references is too time consuming (n = 4) and that sometimes working with
unfamiliar topics could make it harder to spot retracted articles (n = 3). Several reasons revolved
around the inability to access up-to-date information about the retraction. Among such reasons
Figure 3. Citation types before and after the retraction.
Figure 4. Retraction acknowledgment before and after the retraction.
Figure 5. Citations of publications by retraction type over time. Retraction types include four categories (from top to bottom): Type 1
(scientific errors and academic misconduct), Type 2 (academic misconduct without scientific errors), Type 3 (no error and no misconduct),
Type 4 (error without academic misconduct).
Figure 6. Citation types before and after the retraction for different organization types.
were the habit of reusing articles from personal libraries, where the status of the publication could
not be updated (n = 2), picking references from offline sources (n = 2) and publisher paywalls (n = 1).
In explaining the context of citations of retracted articles in their documents, some authors
mentioned that the reference was central to the argument. Others, however, emphasized that it
was included in passing and had no bearing on the actual conclusions of the document. One
interviewee acknowledged becoming aware of the retraction during the proofreading stage,
but chose not to amend the reference, because the retraction reason did not entail any impli-
cations for the document’s message.
Finally, the interviewees were asked to think of possible solutions for preventing the spread
of retracted research. The most frequent answers called for a user-friendly tool to check bib-
liographic references (n = 6) or an accessible and searchable database of retracted publications
(n = 3). Other respondents emphasized the importance of proactive dissemination of retracted
information on the part of the publishers (n = 2), including through constant data sharing with
various repositories that might host copies of retracted publications without any identification
(n = 1). Overall, the proposed solutions revolved around technical means aimed at improving
the availability and accessibility of retraction information (see, for example, Zotero (2019) and
Scite (2020)), as well as measures to improve general citation customs among researchers and
policymakers alike. Some recent initiatives addressing the spread of retracted research singled
out policy and practice as a separate area of concern (RISRS, 2020)10.
10 The Reducing Inadvertent Spread of Retracted Science (RISRS, 2020) working group recommendations
include (a) ensuring that retraction information is easy to find and use, (b) producing a retraction taxonomy
and metadata standards that can be adopted by all stakeholders, (c) developing best practices for coordinat-
ing retractions, and (d) promoting research literacy on how to deal with retractions and postpublication
stewardship of the scientific record (Schneider, Woods et al., 2021).
5. DISCUSSION
5.1. Extent of the Spread of Retracted Research in Policy Literature
Our measure of the extent to which retracted research is cited in policy documents is based
on matching publications in the RW database with policy databases. We estimate that
2–3% of retracted research publications are cited in policy-relevant documents. This seems
higher than one might have expected for retracted research, similar even to the share of
nonretracted research that is normally cited in policy literature, in the region of 0.5–3.9%
(see Bornmann et al., 2016; Fang et al., n.d.; Haunschild & Bornmann, 2017). However,
the observation does not account for the distinction between negative and positive citations
or between pre- and postretraction citations. Nevertheless, it provides some information on
how retracted publications compare to normal publications when it comes to overall use in
policy documents.
We also find that retracted studies cited in policy documents seem to differ from retracted
studies not cited in policy documents. Retracted research cited by policy documents is older,
with publication dates no later than 2014. This is consistent with the assumption that policy
citations accumulate over longer periods of time (Fang et al., n.d.). This makes policy citations
more like conventional scholarly citations and unlike other altmetric indicators.
In addition, policy-cited articles have longer time to retraction. This creates a longer win-
dow of opportunity for preretraction citations, allowing it to more easily gather momentum
(“citation inertia”) that continues after the retraction (Hagberg, 2020). The same effect could
partially explain significantly higher Crossref citation counts for the policy-cited sample. An
additional explanation could also be that policy document authors tend to select research with
higher citation counts, as was shown to be the case, for example, with NICE clinical guidelines
(Kryl et al., 2012). Interviewees also emphasized that they relied on citation counts and
journal-based metrics when evaluating the quality of research to be cited.
Retraction type is another parameter where policy-cited publications differed from overall
RW publications. The higher relative incidence of Type 4 (error and no misconduct) and lower
incidence of Type 3 (no error and no misconduct) publications in the policy sample could be
underpinned by several explanations. Type 3 publications are retracted faster than other types
and accumulate few Crossref citations. It could be hypothesized that because these publica-
tions are retracted due to administrative reasons, they are more rapidly identified for retraction
and have little time to attract interest among scholars and policymakers. The higher relative
rate of Type 4 publications in policy-cited publications is notable, but not straightforward to
interpret based on available data.
5.2. Citation Context
Citation context and temporal analysis help to understand how exactly retracted publications
are cited once selected by authors of policy reports. A somewhat counterintuitive outcome of
this analysis is that the number of policy citations increases slightly after retraction. One inter-
pretation is that positive citations continue to accrue, but the retraction signal then draws in
negative citations as well. Citations during the retraction year itself make interpretation more
challenging. Nevertheless, as a general conclusion, the dynamic of pre- and postretraction
citation accumulation in policy literature seems to be different from scholarly literature, where
citations often decline after the retraction (Dinh et al., 2019).
A higher rate of postretraction citations does not necessarily in itself indicate that there is a
problem. Citation context analysis shows that negative citations increase sharply after the
retraction at the expense of positive citations. Perhaps more troubling is that positive citations
remain higher relative to negative ones even after the retraction and can keep accumulating for
more than 20 years. This points to potential issues with how retraction information is commu-
nicated by publishers. Authors of policy reports could also be more vulnerable to this weakness
than professional researchers. Although researchers often have access to subscription databases
and could be more knowledgeable about how specific journals and publishers display retraction
information, this is not necessarily the case with authors of policy reports. Even when authors
of policy reports identify as researchers, their work in policy context may push them into less
familiar topics, which can make it challenging to identify retraction information.
It is worth mentioning that the distribution of citations among authors and retracted publi-
cations is unsurprisingly skewed. It has long been demonstrated that few researchers are
responsible for a disproportionately high number of publications (Lotka, 1926; Price, 1964).
The same logic applies to citations with only a handful of publications drawing most citations.
It does not come as a surprise therefore that retracted research should behave in the same way.
With 56 policy citations, the Wakefield et al. (1998) study is an illustrative example. Most
policy documents, however, cited this study to deliberately highlight its disruptive influence
on public health and vaccination attitudes. This negative portrayal is consistent with how this
study is cited in scholarly literature. In particular, Suelzer et al. (2019) conducted an exhaus-
tive analysis of 1,153 articles citing the Wakefield study from its publication in 1998 to 2019,
and discovered that the share of negative citations increased substantially after the partial
retraction in 2004 and further after the full retraction in 2010. Overall, negative citations were
found to account for 72.7% of all citations, and the authors concluded that because the case is
so well known in the academic community, positive citations are extremely unlikely (Suelzer
et al., 2019). The same seems to be the case with policy citations of this study.
A contrasting example is the study that has the second highest number of policy citations
(Estruch et al., 2013). This also had the most scholarly citations of all publications in RW as of
December 2020 (Retraction Watch, 2020). This paper was published and subsequently
retracted from The New England Journal of Medicine (Estruch et al., 2018b) because of a meth-
odological flaw and its language implying a direct causal relationship between the Mediterra-
nean diet and various health benefits—a claim which turned out to be unsubstantiated
(McCook, 2018). Nevertheless, after accounting for methodological irregularities, the study
was republished with softer language, but similar conclusions (Estruch et al., 2018a). Our anal-
ysis shows that policy documents cited it mostly positively. However, because the study was
republished with similar conclusions, these positive citations are unlikely to represent serious
risks. Therefore, both Wakefield’s and Estruch’s publications illustrate the importance of cita-
tion skewness and citation context, because both positive and negative citations can have
widely different meaning depending on retraction circumstances.
Interviews particularly helped to shed light on this contextual information. For example,
one interviewee became aware of the partial retraction of a cited publication during the proof-
reading process but chose to go ahead with the publication as it was. After reading the retrac-
tion, the authors realized that the partial retraction actually helped support their statement. The
interviewee summarized the reasoning as follows:
It’s important actually to think about what has been retracted and what that means … if
the whole article has been retracted because all of the findings were found to be faulty or
nonverifiable, that is critical … If it is a study that has many different findings and one of
them has been changed because the confidence interval was found to have been
reported inaccurately … I think it is somewhat different. It is still important to pay atten-
tion to, but the implications of citing the original versus the revised are less significant.
(Interview 5)
Several other interviewees also expressed the idea that they may still cite a retracted paper if
it is retracted for administrative reasons or minor errors. However, they always acknowledged
that it might be better to avoid citing retracted research altogether for status reasons. One inter-
viewee emphasized reputational risks:
Since international organizations work in the political context we are sensitive to image
and credibility, and even if there is a good part of a retracted publication that was not the
focus of the retraction, we’ll probably not cite it. (Interview 4).
This points to a larger problem with the current retraction process. Rather than retracting
problematic publications entirely, some have proposed that publishers consider a wider
amendment framework so that some cases can still be cited with reservations (Barbour, Bloom
et al., 2017; Fanelli, Ioannidis, & Goodman, 2018). Such a framework could perhaps even accommodate some disagreement about retraction decisions11. Similarly, it remains an open question how policy organizations respond to the discovery that their literature partly rests on retracted research, as our interviewees have indicated the need for a nuanced view of how retraction affects the policy document as a whole.
5.3. Retraction Awareness and Research Appraisal by Policy Authors
Authors of policy reports are often simply not aware of the retraction. We found a continuing flow of positive citations and, even where citations were negative, almost half of the negative citations did not mention the retraction. Poor acknowledgment of retractions in both scholarly and policy literature could reflect noncompliance with retraction guidelines12. Compliance with these recommendations has improved in recent years, but they are still not followed consistently (Decullier & Maisonneuve, 2018). Another possibility is that authors may feel it is not necessary to mention retractions if they have already described the study in highly critical terms (fabricated, discredited, etc.).
A further problem associated with the identification of retracted publications is that of data
exchange between publishers and other searchable databases and repositories that can host
versions of the publication. It has been reported that Google Scholar, Scopus, Web of Science,
and Embase, among others, do not consistently warn their users about retractions (Bakker &
Riegelman, 2018; van der Vet & Nijveen, 2016; Wright & McDaid, 2011). The situation is
worse with informal platforms and repositories.
11 Those who disagree with a retraction decision may continue to cite the study, perhaps even without mentioning the (unjustified) retraction.
12 The COPE guidelines specify what information a retraction notice should include and how a retracted paper
is to be identified (COPE Council, 2009). These recommendations include clearly stating the retraction rea-
son in the notice, linking the notice to the retracted paper and including retraction information on the web
page of the retracted paper. It is also recommended that the publication PDF is identified with watermarks
that make the retraction status obvious. Yet, retraction notices often provide a vague and insufficient descrip-
tion of retraction reasons (Yan et al., 2016). Information presented in retraction notices is often arbitrary and
not standardized (Azoulay et al., 2015; Nair et al., 2020). Additional inconsistencies exist with respect to
how different publishers identify retracted research on their websites, either with a watermark or vignette or
plain text, with or without the date of retraction and link to the retraction notice (Yan et al., 2016).
Publication copies deposited on nonpublisher platforms, such as SciHub, almost never provide retraction information, while remaining easily accessible (Mine, 2019).
The accessibility of retraction information is further complicated by the use of personal libraries and reference managers. This software is extremely useful to researchers but can lead to situations in which retracted publications survive unamended in personal libraries and are then used by unsuspecting researchers (Teixeira da Silva & Dobránszki, 2017). Most of these factors were also mentioned by the interviewees as explanations for the incidence of positive postretraction citations in policy documents.
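One practical mitigation suggested by these factors is to screen the references of a draft policy document, or a reference-manager export, against a list of retracted DOIs before publication. The sketch below illustrates the idea in Python under stated assumptions: it presumes a locally saved CSV export of a retraction database with a column named "retracted_doi" and a plain-text file of cited DOIs; both the file names and the column name are hypothetical rather than actual Retraction Watch or Overton formats.

import csv

def load_retracted_dois(path):
    # Read a local CSV export of a retraction database; the column name
    # "retracted_doi" is a placeholder, not an actual Retraction Watch field.
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row["retracted_doi"].strip().lower()
            for row in csv.DictReader(f)
            if row.get("retracted_doi")
        }

def screen_references(cited_dois, retracted_dois):
    # Return the cited DOIs that appear in the retraction list.
    cleaned = (doi.strip().lower() for doi in cited_dois)
    return sorted(d for d in cleaned if d in retracted_dois)

if __name__ == "__main__":
    retracted = load_retracted_dois("retraction_list.csv")   # hypothetical file name
    with open("cited_dois.txt", encoding="utf-8") as f:      # one DOI per line
        cited = f.read().split()
    for doi in screen_references(cited, retracted):
        print("WARNING: cited work appears in the retraction list:", doi)

Such a check is only as good as the freshness of the local retraction list, which is precisely the data-exchange problem described above.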
A reassuring sign is that at least some citing authors question problematic articles even
before they become retracted. Most negative citations (both pre- and postretraction) originate
from government organizations. These organizations may have "in-house" research departments and, as our interviews suggested, those citing the research often identify as researchers themselves, with considerable expertise in the material that they cite. The bulk of negative
postretraction citations from these organizations are of the exclusionary kind and usually
appear in a special section of the bibliography along with other excluded studies and reasons
for exclusion. These sections rarely refer to retraction as the reason for exclusion, even for
postretraction citations. Rather, they often pinpoint actual problems with methodology or
data.
This is consistent with the notion that negative citations could be more likely to come from
users who are familiar with the research field of the retracted study in question. A fruitful line of
further inquiry, then, could be to explore the proximity of expertise between the citing policy
organization and the cited evidence, to the extent that these are observable and measurable
features of evidence-based policymaking.
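As a purely illustrative sketch (not part of the method used in this study), such proximity could be approximated by comparing the vocabulary of a policy organization's documents with that of the cited paper, for example via TF-IDF cosine similarity; the snippet below assumes scikit-learn is installed and uses invented placeholder texts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts: in practice, the first would be drawn from an organization's
# policy documents and the second from the abstract of the cited (retracted) study.
org_profile = "guidance on cardiovascular risk, diet and population health interventions"
cited_abstract = "primary prevention of cardiovascular disease with a Mediterranean diet"

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([org_profile, cited_abstract])

# Cosine similarity between the two term vectors: a crude proxy for the proximity
# of expertise between the citing organization and the cited evidence.
proximity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"proximity score: {proximity:.2f}")

More refined operationalizations (shared subject classifications, co-citation profiles) would likely be needed in practice, but even a coarse measure of this kind would make the proposed comparison tractable at scale.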
5.4. Conclusions
Concern about retractions has been growing, with dozens of studies exploring the causes and
consequences of the increasing retraction rate over the past decade. Persistent citation of
retracted research in scholarly literature has dominated the agenda. However, alternative
routes by which retracted research can exert influence have remained underexplored. Build-
ing on prior work in retraction studies and research use, this study has shown some of the
potential for analyzing the spread of retracted research into policy documents.
Studies on the spread of retracted research emphasize a variety of measures that need to be
put in place to mitigate the problem. It could be argued that most measures designed to prevent the spread of retracted evidence in scholarly literature would also be effective for policy literature. However, this work suggests that, owing to the nature of research selection and appraisal in
the policy context, authors of policy reports could be even more vulnerable to retracted
research.
ACKNOWLEDGMENTS
We thank Ivan Oransky, Euan Adie, and three anonymous reviewers.
AUTHOR CONTRIBUTIONS
Dmitry Malkov: Conceptualization, Investigation, Methodology, Writing—original draft. Ohid
Yaqub: Conceptualization, Investigation, Methodology, Writing—review & editing. Josh
Siepel: Writing—review & editing.
COMPETING INTERESTS
The authors have no competing interests.
FUNDING INFORMATION
We acknowledge support from European Research Council grant 759897.
DATA AVAILABILITY
Data from Retraction Watch was obtained under a data use agreement signed with the owner
of the database. Overton and Altmetric are commercial databases that in this case provided
researcher access to their data. The authors are not able to provide unaggregated Retraction
Watch, Overton, and Altmetric data under their license agreement. Enquiries about the raw
unaggregated data should be directed to the proprietary owners of Retraction Watch, Alt-
metric, and Overton.
REFERENCES
Adie, E. (2020). What is Overton? An overview of what Overton is
and does. https://help.overton.io/en/articles/3822563-what-is-overton
Ajiferuke, I., & Adekannbi, J. O. (2020). Correction and retraction
practices in library and information science journals. Journal of
Librarianship and Information Science, 52(1), 169–183. https://
doi.org/10.1177/0961000618785408
Azoulay, P., Bonatti, A., & Krieger, J. L. (2017). The career effects of
scandal: Evidence from scientific retractions. Research Policy,
46(9), 1552–1569. https://doi.org/10.1016/j.respol.2017.07.003
Azoulay, P., Furman, J. L., Krieger, J. L., & Murray, F. (2015). Retrac-
tions. Review of Economics and Statistics, 97(5), 1118–1136.
https://doi.org/10.1162/REST_a_00469
Bakker, C., & Riegelman, A. (2018). Retracted publications in men-
tal health literature: Discovery across bibliographic platforms.
Journal of Librarianship and Scholarly Communication, 6(1),
eP2199. https://doi.org/10.7710/2162-3309.2199
Barbour, V., Bloom, T., Lin, J., & Moylan, E. (2017). Amending published articles: Time to rethink retractions and corrections? F1000Research, 6, 1960. https://doi.org/10.12688/f1000research.13060.1
Bar-Ilan, J., & Halevi, G. (2017). Post retraction citations in context:
A case study. Scientometrics, 113(1), 547–565. https://doi.org/10
.1007/s11192-017-2242-0, PubMed: 29056790
Bar-Ilan, J., & Halevi, G. (2018). Temporal characteristics of
retracted articles. Scientometrics, 116(3), 1771–1783. https://doi
.org/10.1007/s11192-018-2802-y
Bornemann-Cimenti, H., Szilagyi, I. S., & Sandner-Kiesling, A.
(2016). Perpetuation of retracted publications using the example
of the Scott S. Reuben case: Incidences, reasons and possible
improvements. Science and Engineering Ethics, 22(4),
1063–1072. https://doi.org/10.1007/s11948-015-9680-y,
PubMed: 26150092
Bornmann, L. (2014). Do altmetrics point to the broader impact of
research? An overview of benefits and disadvantages of alt-
metrics. Journal of Informetrics, 8(4), 895–903. https://doi.org
/10.1016/j.joi.2014.09.005
Bornmann, L., & Daniel, H. D. (2008). What do citation counts
measure? A review of studies on citing behavior. Journal of
Documentation, 64(1), 45–80. https://doi.org/10.1108
/00220410810844150
Bornmann, L., Haunschild, R., & Marx, W. (2016). Policy docu-
ments as sources for measuring societal impact: How often is
climate change research mentioned in policy-related docu-
ments? Scientometrics, 109(3), 1477–1495. https://doi.org/10
.1007/s11192-016-2115-y, PubMed: 27942080
Bozzo, A., Bali, K., Evaniew, N., & Ghert, M. (2017). Retractions in
cancer research: A systematic survey. Research Integrity and Peer
Review, 2, 5. https://doi.org/10.1186/s41073-017-0031-1,
PubMed: 29451549
Brainard, J. (2018). Rethinking retractions. Science, 362(6413),
390–393. https://doi.org/10.1126/science.362.6413.390,
PubMed: 30361352
Budd, J. M., Sievert, M. E., & Schultz, T. R. (1998). Phenomena of
retraction: Reasons for retraction and citations to the publica-
tions. Journal of the American Medical Association, 280(3),
296–297. https://doi.org/10.1001/jama.280.3.296, PubMed:
9676689
Carroll, C., & Tattersall, A. (2020). Research and policy impact of
trials published by the UK National Institute of Health Research
(2006–2015). Value in Health, 23(6), 727–733. https://doi.org/10
.1016/j.jval.2020.01.012, PubMed: 32540230
Chambers, L. M., Michener, C. M., & Falcone, T. (2019). Plagiarism
and data falsification are the most common reasons for retracted
publications in obstetrics and gynaecology. BJOG: An Interna-
tional Journal of Obstetrics and Gynaecology, 126(9),
1134–1140. https://doi.org/10.1111/1471-0528.15689,
PubMed: 30903641
COPE Council. (2009). Retraction guidelines (Tech. Rep.). https://
doi.org/10.24318/cope.2019.1.4
Copiello, S. (2020). Other than detecting impact in advance, alter-
native metrics could act as early warning signs of retractions:
Tentative findings of a study into the papers retracted by PLoS
ONE. Scientometrics, 125(3), 2449–2469. https://doi.org/10
.1007/s11192-020-03698-w
Costas, R., Zahedi, Z., & Wouters, P. (2015). Do “altmetrics” corre-
late with citations? Extensive comparison of altmetric indicators
with citations from a multidisciplinary perspective. Journal of the
Association for Information Science and Technology, 66(10),
2003–2019. https://doi.org/10.1002/asi.23309
Coudert, F. X. (2019). Correcting the scientific record: Retraction
practices in chemistry and materials science. Chemistry of
Materials, 31(10), 3593–3598. https://doi.org/10.1021/acs
.chemmater.9b00897
Dal-Ré, R., & Ayuso, C. (2019). Reasons for and time to retraction of
genetics articles published between 1970 and 2018. Journal of
Medical Genetics, 56(11), 734–740. https://doi.org/10.1136
/jmedgenet-2019-106137, PubMed: 31300549
Dal-Ré, R., & Ayuso, C. (2020). For how long and with what rele-
vance do genetics articles retracted due to research misconduct
remain active in the scientific literature. Accountability in
Research, 28(5), 280–296. https://doi.org/10.1080/08989621
.2020.1835479, PubMed: 33124464
Decullier, E., & Maisonneuve, H. (2018). Correcting the literature:
Improvement trends seen in contents of retraction notices. BMC
Research Notes, 11(1), 490. https://doi.org/10.1186/s13104-018
-3576-2, PubMed: 30016985
Dinh, L., Sarol, J., Cheng, Y. Y., Hsiao, T. K., Parulian, N., & Schneider,
J. (2019). Systematic examination of pre- and post-retraction
citations. Proceedings of the Association for Information Science
and Technology, 56(1), 390–394. https://doi.org/10.1002/pra2.35
Elliott, H., & Popay, J. (2000). How are policy makers using
evidence? Models of research utilisation and local NHS policy
making. Journal of Epidemiology and Community Health,
54(6), 461–468. https://doi.org/10.1136/jech.54.6.461,
PubMed: 10818123
Estruch, R., Ros, E., Salas-Salvadó, J., Covas, M.-I., Corella, D., …
Martínez-González, M. A. (2018a). Primary prevention of cardio-
vascular disease with a Mediterranean diet supplemented with
extra-virgin olive oil or nuts. New England Journal of Medicine,
378(25), e34. https://doi.org/10.1056/ NEJMoa1800389,
PubMed: 29897866
Estruch, R., Ros, E., Salas-Salvadó, J., Covas, M.-I., Corella, D., …
Martínez-González, M. A. (2018b). Retraction and republication:
Primary prevention of cardiovascular disease with a Mediterra-
nean diet. N Engl J Med 2013;368:1279–90. New England Jour-
nal of Medicine, 378(25), 2441–2442. https://doi.org/10.1056
/NEJMc1806491, PubMed: 29897867
Estruch, R., Ros, E., Salas-Salvadó, J., Covas, M.-I., Corella, D., …
Martínez-González, M. A. (2013). RETRACTED: Primary preven-
tion of cardiovascular disease with a Mediterranean diet. New
England Journal of Medicine, 368(14), 1279–1290. https://doi
.org/10.1056/NEJMoa1200303, PubMed: 23432189
Fanelli, D., Ioannidis, J. P., & Goodman, S. (2018). Improving the
integrity of published science: An expanded taxonomy of retrac-
tions and corrections. European Journal of Clinical Investigation,
48(4), e12898. https://doi.org/10.1111/eci.12898, PubMed:
29369337
Fang, Z., Dudek, J., Noyons, E., & Costas, R. (n.d.). Science cited in
policy documents: Evidence from the Overton database.
Unpublished.
Feng, L., Yuan, J., & Yang, L. (2020). An observation framework for
retracted publications in multiple dimensions. Scientometrics,
125, 1445–1457. https://doi.org/10.1007/s11192-020-03702-3
Fulton, A., Coates, A., Williams, M., Howe, P., & Hill, A. (2015).
Persistent citation of the only published randomised controlled
trial of Omega-3 supplementation in chronic obstructive pulmo-
nary disease six years after its retraction. Publications, 3(1),
17–26. https://doi.org/10.3390/publications3010017
Furman, J. L., Jensen, K., & Murray, F. (2012). Governing knowl-
edge in the scientific community: Exploring the role of retractions
in biomedicine. Research Policy, 41(2), 276–290. https://doi.org
/10.1016/j.respol.2011.11.001
Grant, J., Cottrell, R., Cluzeau, F., & Fawcett, G. (2000). Evaluating
“payback” on biomedical research from papers cited in clinical
guidelines: Applied bibliometric study. British Medical Journal,
320(7242), 1107–1111. https://doi.org/10.1136/ bmj.320.7242
.1107, PubMed: 10775218
Gray, R., Al-Ghareeb, A., & McKenna, L. (2019). Why articles con-
tinue to be cited after they have been retracted: An audit of
retraction notices. International Journal of Nursing Studies, 90,
11–12. https://doi.org/10.1016/j.ijnurstu.2018.10.003, PubMed:
30476725
Hagberg, J. M. (2020). The unfortunately long life of some retracted
biomedical research publications. Journal of Applied Physiology,
128(5), 1381–1391. https://doi.org/10.1152/japplphysiol.00003
.2020, PubMed: 32240014
Hamilton, D. G. (2019). Continued citation of retracted radiation
oncology literature—Do we have a problem? International Jour-
nal of Radiation Oncology, Biology, Physics, 103(5), 1036–1042.
https://doi.org/10.1016/j.ijrobp.2018.11.014, PubMed:
30465848
Hanney, S., Gonzalez-Block, M., Buxton, M. J., & Kogan, M.
(2003). The utilisation of health research in policy-making:
Concepts, examples and methods of assessment. Health
Research Policy and Systems, 1(1), 2. https://doi.org/10.1186
/1478-4505-1-2, PubMed: 12646071
Haunschild, R., & Bornmann, L. (2017). How many scientific
papers are mentioned in policy-related documents? An empirical
investigation using Web of Science and Altmetric data. Sciento-
metrics, 110(3), 1209–1216. https://doi.org/10.1007/s11192-016
-2237-2, PubMed: 28255186
Hutchinson, E. (2011). The development of health policy in
Malawi: The influence of context, evidence and links in the cre-
ation of a national policy for cotrimoxazole prophylaxis. Malawi
Medical Journal, 23(4), 110–115.
Hyde, J. K., Mackie, T. I., Palinkas, L. A., Niemi, E., & Leslie, L. K.
(2016). Evidence use in mental health policy making for children
in foster care. Administration and Policy in Mental Health and
Mental Health Services Research, 43(1), 52–66. https://doi.org
/10.1007/s10488-015-0633-1, PubMed: 25711392
Innvær, S. (2009). The use of evidence in public governmental
reports on health policy: An analysis of 17 Norwegian official
reports (NOU). BMC Health Services Research, 9, 177. https://
doi.org/10.1186/1472-6963-9-177, PubMed: 19785760
Innvær, S., Vist, G., Trommald, M., & Oxman, A. (2002). Health
policy-makers’ perceptions of their use of evidence: A systematic
review. Journal of Health Services Research and Policy, 7(4),
239–244. https://doi.org/10.1258/135581902320432778,
PubMed: 12425783
Jan, R., & Zainab, T. (2018). The impact story of retracted articles:
Altmetric it! In IEEE 5th International Symposium on Emerging
Trends and Technologies in Libraries and Information Services,
ETTLIS 2018 (pp. 402–406). https://doi.org/10.1109/ ETTLIS
.2018.8485245
Jin, G. Z., Jones, B., Lu, S. F., & Uzzi, B. (2019). The reverse
Matthew effect: Consequences of retraction in scientific teams.
Review of Economics and Statistics, 101(3), 492–506. https://
doi.org/10.1162/rest_a_00780
Kochan, C. A., & Budd, J. M. (1992). The persistence of fraud in the
literature: The Darsee case. Journal of the American Society for
Information Science, 43(7), 488–493. https://doi.org/10.1002
/(SICI)1097-4571(199208)43:7<488::AID-ASI3>3.0.CO;2-7,
PubMed: 11653988
Korpela, K. M. (2010). How long does it take for the scientific literature
to purge itself of fraudulent material? The Breuning case revisited.
Current Medical Research and Opinion, 26(4), 843–847. https://
doi.org/10.1185/03007991003603804, PubMed: 20136577
Kryl, D., Allen, L., Dolby, K., Sherbon, B., & Viney, I. (2012). Tracking the impact of research on policy and practice: Investigating the feasibility of using citations in clinical guidelines for research evaluation. BMJ Open, 2(2), e000897. https://doi.org/10.1136/bmjopen-2012-000897, PubMed: 22466037
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310, PubMed: 843571
Lotka, A. (1926). The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences, 16(12), 317–323.
Makkar, S. R., Brennan, S., Turner, T., Williamson, A., Redman, S., & Green, S. (2016). The development of SAGE: A tool to evaluate how policymakers' engage with and use research in health policymaking. Research Evaluation, 25(3), 315–328. https://doi.org/10.1093/reseval/rvv044
Marcus, A. (2018). A scientist's fraudulent studies put patients at risk. Science, 362(6413), 394. https://doi.org/10.1126/science.362.6413.394-a, PubMed: 30361354
Marcus, A., & Oransky, I. (2014). What studies of retractions tell us. Journal of Microbiology & Biology Education, 15(2), 151–154. https://doi.org/10.1128/jmbe.v15i2.855, PubMed: 25574267
McCook, A. (2018). Errors trigger retraction of study on Mediterra-
nean diet’s heart benefits. https://www.npr.org/sections/health
-shots/2018/06/13/619619302/errors-trigger-retraction-of-study
-on-mediterranean-diets-heart-benefits?t=1610488492955
McHugh, U. M., & Yentis, S. M. (2019). An analysis of retractions of papers authored by Scott Reuben, Joachim Boldt and Yoshitaka Fujii. Anaesthesia, 74(1), 17–21. https://doi.org/10.1111/anae.14414, PubMed: 30144024
Mine, S. (2019). Toward responsible scholarly communication and
innovation: A survey of the prevalence of retracted articles on
scholarly communication platforms. Proceedings of the Associa-
tion for Information Science and Technology, 56(1), 738–739.
https://doi.org/10.1002/pra2.155
Mongeon, P., & Larivière, V. (2016). Costly collaborations: The impact of scientific fraud on co-authors' careers. Journal of the Association for Information Science and Technology, 67(3), 535–542. https://doi.org/10.1002/asi.23421
Moylan, E. C., & Kowalczuk, M. K. (2016). Why articles are retracted: A retrospective cross-sectional study of retraction notices at BioMed Central. BMJ Open, 6(11), e012047. https://doi.org/10.1136/bmjopen-2016-012047, PubMed: 27881524
Nabyonga-Orem, J., Ssengooba, F., Macq, J., & Criel, B. (2014).
Malaria treatment policy change in Uganda: What role did evi-
dence play? Malaria Journal, 13, 345. https://doi.org/10.1186
/1475-2875-13-345, PubMed: 25179532
Nair, S., Yean, C., Yoo, J., Leff, J., Delphin, E., & Adams, D. C. (2020). Reasons for article retraction in anesthesiology: A comprehensive analysis. Canadian Journal of Anesthesia, 67(1), 57–63. https://doi.org/10.1007/s12630-019-01508-3, PubMed: 31617069
Newson, R., Rychetnik, L., King, L., Milat, A., & Bauman, A. (2018). Does citation matter? Research citation in policy documents as an indicator of research impact—An Australian obesity policy case-study. Health Research Policy and Systems, 16, 55. https://doi.org/10.1186/s12961-018-0326-9, PubMed: 29950167
Pfeifer, M. P., & Snodgrass, G. L. (1990). The continued use of retracted, invalid scientific literature. Journal of the American Medical Association, 263(10), 1420–1423. https://doi.org/10.1001/jama.1990.03440100140020, PubMed: 2406475
Pinheiro, H., Vignola-Gagné, E., & Campbell, D. (2021). A large-scale validation of the relationship between cross-disciplinary research and its uptake in policy-related documents, using the novel Overton altmetrics database. Quantitative Science Studies, 2(2), 616–642. https://doi.org/10.1162/qss_a_00137
Price, D. J. de Solla. (1964). Networks of scientific papers. Science, 149(3683), 510–515. https://doi.org/10.1126/science.149.3683.510, PubMed: 14325149
Rai, R., & Sabharwal, S. (2017). Retracted publications in orthopaedics: Prevalence, characteristics, and trends. Journal of Bone and Joint Surgery, 99(9), e44. https://doi.org/10.2106/JBJS.16.01116, PubMed: 28463926
Redman, B. K., Yarandi, H. N., & Merz, J. F. (2008). Empirical developments in retraction. Journal of Medical Ethics, 34(11), 807–809. https://doi.org/10.1136/jme.2007.023069, PubMed: 18974415
Redman, S., Turner, T., Davies, H., Williamson, A., Haynes, A., … Green, S. (2015). The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Social Science and Medicine, 136–137, 147–155. https://doi.org/10.1016/j.socscimed.2015.05.009, PubMed: 26004208
Retraction Watch. (2020). Top 10 most highly cited retracted papers. Retrieved 2020 from https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/.
RISRS. (2020). RISRS website. Retrieved 2020 from https://
infoqualitylab.org/projects/risrs2020/.
Rubbo, P., Pilatti, L. A., & Picinin, C. T. (2019). Citation of retracted articles in engineering: A study of the Web of Science database. Ethics and Behavior, 29(8), 661–679. https://doi.org/10.1080/10508422.2018.1559064
Schneider, J., Woods, N. D., Proescholdt, R., Fu, Y., & The RISRS Team. (2021). Recommendations from the Reducing the inadvertent spread of retracted science: Shaping a research and implementation agenda project. https://doi.org/10.31222/osf.io/ms579
Scite. (2020). Reference Check: An easy way to check the reliability
of your references. https://scite.ai/blog/reference-check-an-easy
-way-to-check-the-reliability-of-your-references-b2afcd64abc6
Serghiou, S., Marton, R. M., & Ioannidis, J. P. A. (2021). Media and social media attention to retracted articles according to Altmetric. PLoS ONE, 16(5), e0248625. https://doi.org/10.1371/journal.pone.0248625, PubMed: 33979339
Shema, H., Hahn, O., Mazarakis, A., & Peters, I. (2019). Retractions from altmetric and bibliometric perspectives. Information-Wissenschaft und Praxis, 70(2–3), 98–110. https://doi.org/10.1515/iwp-2019-2006
Small, H. G. (1978). Cited documents as concept symbols. Social Studies of Science, 8(3), 327–340. https://doi.org/10.1177/030631277800800305
Steen, R. G. (2011). Retractions in the scientific literature: Do authors deliberately commit research fraud? Journal of Medical Ethics, 37(2), 113–117. https://doi.org/10.1136/jme.2010.038125, PubMed: 21081306
Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why has the number of scientific retractions increased? PLoS ONE, 8(7), e68397. https://doi.org/10.1371/journal.pone.0068397, PubMed: 23861902
Suelzer, E. M., Deal, J., Hanus, K. L., Ruggeri, B., Sieracki, R., & Witkowski, E. (2019). Assessment of citations of the retracted article by Wakefield et al with fraudulent claims of an association between vaccination and autism. JAMA Network Open, 2(11), e1915552. https://doi.org/10.1001/jamanetworkopen.2019.15552, PubMed: 31730183
Sugimoto, C. R., & Larivière, V. (2018). Measuring research: What everyone needs to know. Oxford: Oxford University Press. https://doi.org/10.1093/wentk/9780190640118.001.0001
Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology, 68(9), 2037–2062. https://doi.org/10.1002/asi.23833
Tahamtan, I., & Bornmann, L. (2019). What do citation counts measure? An updated review of studies on citations in scientific documents published between 2006 and 2018. Scientometrics, 121, 1635–1784. https://doi.org/10.1007/s11192-019-03243-4
Tahamtan, I., & Bornmann, L. (2020). Altmetrics and societal impact measurements: Match or mismatch? A literature review. El profesional de la información, 29(1), 1699–2407. https://doi.org/10.3145/epi.2020.ene.02
Tattersall, A., & Carroll, C. (2018). What can altmetric.com tell us
about policy citations of research? An analysis of altmetric.com
data for research articles from the University of Sheffield. Fron-
tiers in Research Metrics and Analytics, 2, 9. https://doi.org/10
.3389/frma.2017.00009
Teixeira da Silva, J. A., & Bornemann-Cimenti, H. (2017). Why do some retracted papers continue to be cited? Scientometrics, 110(1), 365–370. https://doi.org/10.1007/s11192-016-2178-9
Teixeira da Silva, J. A., & Dobránszki, J. (2017). Highly cited retracted papers. Scientometrics, 110(3), 1653–1661. https://doi.org/10.1007/s11192-016-2227-4
Theis-Mahon, N. R., & Bakker, C. J. (2020). The continued citation of retracted publications in dentistry. Journal of the Medical Library Association, 108(3), 389–397. https://doi.org/10.5195/jmla.2020.824, PubMed: 32843870
van der Vet, P. E., & Nijveen, H. (2016). Propagation of errors in citation networks: A study involving the entire citation network of a widely cited paper published in, and later retracted from, the journal Nature. Research Integrity and Peer Review, 1, 3. https://doi.org/10.1186/s41073-016-0008-5, PubMed: 29451542
Van Noorden, R. (2011). Science publishing: The trouble with retractions. Nature, 478(7367), 26–28. https://doi.org/10.1038/478026a, PubMed: 21979026
Vuong, Q.-H., La, V.-P., Ho, M.-T., Vuong, T.-T., & Ho, M.-T. (2020). Characteristics of retracted articles based on retraction data from online sources through February 2019. Science Editing, 7(1), 34–44. https://doi.org/10.6087/kcse.187
Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., … Walker-Smith, J. A. (1998). RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103), 637–641. https://doi.org/10.1016/S0140-6736(97)11096-0, PubMed: 9500320
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431. https://doi.org/10.2307/3109916
Williamson, K., & Johanson, G. (Eds.). (2018). Research methods: Information, systems, and contexts (2nd ed.). Chandos Publishing. https://doi.org/10.1016/C2016-0-03932-3
Wright, K., & McDaid, C. (2011). Reporting of article retractions in
bibliographic databases and online journals. Journal of the Med-
ical Library Association, 99(2), 164–167. https://doi.org/10.3163
/1536-5050.99.2.010, PubMed: 21464856
Yan, J., MacDonald, A., Baisi, L. P., Evaniew, N., Bhandari, M., & Ghert, M. (2016). Retractions in orthopaedic research: A systematic review. Bone and Joint Research, 5(6), 263–268. https://doi.org/10.1302/2046-3758.56.BJR-2016-0047, PubMed: 27354716
Yin, Y., Gao, J., Jones, B. F., & Wang, D. (2021). Coevolution of policy and science during the pandemic. Science, 371(6525), 128–130. https://doi.org/10.1126/science.abe3084, PubMed: 33414211
Zardo, P., Collie, A., & Livingstone, C. (2014). External factors affecting decision-making and use of evidence in an Australian public health policy environment. Social Science and Medicine, 108, 120–127. https://doi.org/10.1016/j.socscimed.2014.02.046, PubMed: 24632115
Zotero. (2019). Retracted item notifications with Retraction Watch integration. https://www.zotero.org/blog/retracted-item-notifications/